[bmwg] Genart last call review of draft-ietf-bmwg-vswitch-opnfv-03

Dan Romascanu <dromasca@gmail.com> Thu, 11 May 2017 11:05 UTC

From: Dan Romascanu <dromasca@gmail.com>
To: gen-art@ietf.org
Cc: draft-ietf-bmwg-vswitch-opnfv.all@ietf.org, ietf@ietf.org, bmwg@ietf.org, dromasca@gmail.com

Reviewer: Dan Romascanu
Review result: Almost Ready

I am the assigned Gen-ART reviewer for this draft. The General Area
Review Team (Gen-ART) reviews all IETF documents being processed
by the IESG for the IETF Chair.  Please treat these comments just
like any other last call comments.

For more information, please see the FAQ at

<https://trac.ietf.org/trac/gen/wiki/GenArtfaq>.

Document: draft-ietf-bmwg-vswitch-opnfv-03
Reviewer: Dan Romascanu
Review Date: 2017-05-11
IETF LC End Date: 2017-05-15
IESG Telechat date: Not scheduled for a telechat

Summary:

Almost Ready. 

This document describes the progress of the Open Platform
for NFV (OPNFV) project on virtual switch performance "VSPERF". That
project reuses the BMWG framework and specifications to benchmark
virtual switches implemented in general-purpose hardware. Some
differences with the benchmarking of specialized HW platforms are
identified, and they may become work items for BMWG in the future. It
is a well-written and clear document, but I have reservations about
publishing it as an RFC, and I cannot find coverage for it in the WG
charter. I also have concerns that parts of the methodology used by
OPNFV break the BMWG principles, especially repeatability and
'black-box' testing, and this is not articulated clearly enough in
the document.

Major issues:

1. It is not clear to me why this document needs to be published as an
RFC. The introduction says: 'This memo describes the progress of the
Open Platform for NFV (OPNFV) project on virtual switch performance
"VSPERF".  This project intends to build on the current and completed
work of the Benchmarking Methodology Working Group in IETF, by
referencing existing literature.' Why should the WG and the IESG
invest resources in publishing this? Why is an I-D or an Independent
Stream RFC not sufficient? The WG charter says:
'VNF and Related Infrastructure Benchmarking: Benchmarking
Methodologies have reliably characterized many physical devices. This
work item extends and enhances the methods to virtual network
functions (VNF) and their unique supporting infrastructure. A first
deliverable from this activity will be a document that considers the
new benchmarking space to ensure that common issues are recognized
from the start, using background materials from industry and SDOs
(e.g., IETF, ETSI NFV).' I do not believe that this document covers
the intent of the charter, as it focuses on only one organization.

2. In Section 3, 'repeatability' is mentioned, while acknowledging
that in a virtual environment there is no guarantee of, and actually
no way to know, what other applications are running. Measuring
parameters such as the ones listed in Section 3.3 provides only part
of the answer, and these are parameters internal to the SUT. Also,
the different deployment scenarios in Section 4 require different
configurations of the SUT, thus breaking the 'black-box' principle. I
believe a clearer explanation is needed of why the BMWG
specifications are appropriate and how comparisons can be made while
repeatability cannot be ensured and measurements depend on
parameters internal to the SUT.
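
As an illustration of what such an explanation could require: report
the run-to-run variation across repeated trials, so that readers can
judge how repeatable a given result actually was. A minimal sketch in
Python, assuming a hypothetical run_throughput_trial() hook that
drives the traffic generator against the vswitch SUT and returns a
throughput figure in Mbps:

    import statistics

    def run_throughput_trial() -> float:
        # Hypothetical hook: drive the traffic generator against the
        # vswitch SUT and return the measured throughput in Mbps.
        raise NotImplementedError

    def repeatability_report(trials: int = 10, max_cv: float = 0.05):
        # Run the same benchmark repeatedly and quantify the spread.
        results = [run_throughput_trial() for _ in range(trials)]
        mean = statistics.mean(results)
        cv = statistics.stdev(results) / mean  # coefficient of variation
        verdict = "repeatable" if cv <= max_cv else "NOT repeatable"
        print(f"mean={mean:.1f} Mbps, cv={cv:.2%} -> {verdict}")
        return results

Without this kind of reporting, two runs on nominally identical
general-purpose hardware cannot be compared meaningfully.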

Minor issues:

1. Some of the tests mentioned in Section 4 have no prior or
in-progress work in the IETF: Control Path and Datapath Coupling
Tests, Noisy Neighbour Tests, and characterization of acceleration
technologies. If new work is needed or proposed for the BMWG scope
and framework, it would be useful for BMWG to list these items
separately.
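
To illustrate why these tests are new territory for BMWG: a Noisy
Neighbour test deliberately violates the isolated, black-box setup
that existing BMWG methodologies assume. A minimal sketch in Python,
reusing the hypothetical run_throughput_trial() hook from above, with
busy-loop processes standing in for co-resident workloads:

    import multiprocessing

    def busy_loop():
        # Stand-in for a co-resident 'noisy neighbour' workload
        # competing for the SUT's cores.
        while True:
            pass

    def noisy_neighbour_degradation(neighbours: int = 4) -> float:
        baseline = run_throughput_trial()
        procs = [multiprocessing.Process(target=busy_loop, daemon=True)
                 for _ in range(neighbours)]
        for p in procs:
            p.start()
        try:
            noisy = run_throughput_trial()
        finally:
            for p in procs:
                p.terminate()
        return (baseline - noisy) / baseline  # fractional throughput loss

Nothing in the current BMWG framework specifies how the neighbour
workload should be defined, bounded, or reported, which is exactly
why such tests would need separate treatment.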


Nits/editorial comments: 

1. What are called 'Deployment scenarios' from the VS perspective in
Section 4 in fact describe different configurations of the SUT in
BMWG terms. It seems better to move this second part of Section 4
into a separate section. If it belongs in an existing section, it
belongs in Section 3 rather than in Section 4.