RE: Genart last call review of draft-ietf-bmwg-vswitch-opnfv-03

"MORTON, ALFRED C (AL)" <acmorton@att.com> Wed, 07 June 2017 01:48 UTC

From: "MORTON, ALFRED C (AL)" <acmorton@att.com>
To: Warren Kumari <warren@kumari.net>, "alissa@cooperw.in" <alissa@cooperw.in>
CC: "gen-art@ietf.org" <gen-art@ietf.org>, "draft-ietf-bmwg-vswitch-opnfv.all@ietf.org" <draft-ietf-bmwg-vswitch-opnfv.all@ietf.org>, "ietf@ietf.org" <ietf@ietf.org>, "bmwg@ietf.org" <bmwg@ietf.org>, "dromasca@gmail.com" <dromasca@gmail.com>
Subject: RE: Genart last call review of draft-ietf-bmwg-vswitch-opnfv-03
Date: Wed, 07 Jun 2017 01:47:41 +0000
Message-ID: <4D7F4AD313D3FC43A053B309F97543CF25FD64E2@njmtexg5.research.att.com>
References: <149450075072.16690.14546662616864459158@ietfa.amsl.com> <4D7F4AD313D3FC43A053B309F97543CF25F9B072@njmtexg4.research.att.com> <CAFgnS4UAfMe5L7k=zwCqfUjgz14ArBpTE_bz675dCf0_Jk21fg@mail.gmail.com> <CAHw9_iKZxmmYszcketrLwLQOq8wnVWZceLF=eXTudepODoBRHQ@mail.gmail.com>
In-Reply-To: <CAHw9_iKZxmmYszcketrLwLQOq8wnVWZceLF=eXTudepODoBRHQ@mail.gmail.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/ietf/E9NBTBt8cL2-rOIN1DQnsV7JEfo>

Alissa and Warren,

After Warren's message, a few more replies in-line, below,

Al (for the co-authors)
> -----Original Message-----
> From: Warren Kumari [mailto:warren@kumari.net]
> Sent: Tuesday, June 06, 2017 4:29 PM
> To: Dan Romascanu
> Cc: MORTON, ALFRED C (AL); gen-art@ietf.org; draft-ietf-bmwg-vswitch-
> opnfv.all@ietf.org; ietf@ietf.org; bmwg@ietf.org
> Subject: Re: Genart last call review of draft-ietf-bmwg-vswitch-opnfv-03
> 
> On Thu, May 11, 2017 at 2:52 PM, Dan Romascanu <dromasca@gmail.com>
> wrote:
> > Hi,
> >
> > Please see in-line.
> >
> > Regards,
> >
> > Dan
> >
> >
> > On Thu, May 11, 2017 at 8:00 PM, MORTON, ALFRED C (AL)
> <acmorton@att.com>
> > wrote:
> >>
> >> Hi Dan,
> >> please see replies, [ACM], below.
> >>
> >> > -----Original Message-----
> >> > From: Dan Romascanu [mailto:dromasca@gmail.com]
> >> > Sent: Thursday, May 11, 2017 7:06 AM
> >> > To: gen-art@ietf.org
> >> > Cc: draft-ietf-bmwg-vswitch-opnfv.all@ietf.org; ietf@ietf.org;
> >> > bmwg@ietf.org; dromasca@gmail.com
> >> > Subject: Genart last call review of draft-ietf-bmwg-vswitch-opnfv-03
> >> >
> >> > Reviewer: Dan Romascanu
> >> > Review result: Almost Ready
> >> >
> >> > I am the assigned Gen-ART reviewer for this draft. The General Area
> >> > Review Team (Gen-ART) reviews all IETF documents being processed
> >> > by the IESG for the IETF Chair.  Please treat these comments just
> >> > like any other last call comments.
> >> >
> >> > For more information, please see the FAQ at
> >> >
> >> > <https://trac.ietf.org/trac/gen/wiki/GenArtfaq>.
> >> >
> >> > Document: draft-ietf-bmwg-vswitch-opnfv-03
> >> > Reviewer: Dan Romascanu
> >> > Review Date: 2017-05-11
> >> > IETF LC End Date: 2017-05-15
> >> > IESG Telechat date: Not scheduled for a telechat
> >> >
> >> > Summary:
> >> >
> >> > Almost Ready.
> >> >
> >> > This document describes the progress of the Open Platform
> >> > for NFV (OPNFV) project on virtual switch performance "VSPERF". That
> >> > project reuses the BMWG framework and specifications to benchmark
> >> > virtual switches implemented in general-purpose hardware. Some
> >> > differences with the benchmarking of specialized HW platforms are
> >> > identified and they may become work items for BMWG in the future.
> >> > It's a well written and clear document, but I have reservations about it
> >> > being published as an RFC, and I cannot find coverage for it in the WG
> >> > charter. I also have concerns that parts of the methodology used by
> >> > OPNFV break the BMWG principles, especially repeatability and
> >> > 'black-box', and this is not clearly enough articulated in the document.
> >> [ACM]
> >> Ok, let's address your specific issues, and come back to your
> >> reservations.
> >>
> >> >
> >> >
> >> > Major issues:
> >> >
> >> > 1. It is not clear to me why this document needs to be published as an
> >> > RFC. The introduction says: 'This memo describes the progress of the
> >> > Open Platform for NFV (OPNFV) project on virtual switch performance
> >> > "VSPERF".  This project intends to build on the current and completed
> >> > work of the Benchmarking Methodology Working Group in IETF, by
> >> > referencing existing literature.' Why should the WG and the IESG
> >> > invest resources in publishing this, and why is an I-D or an Independent
> >> > Stream RFC not sufficient?
> >> [ACM]
> >> The WG considered and discussed this document over 3 revisions
> >> and a year of time before reaching consensus to develop it further
> >> as a chartered item, so this decision was not taken lightly.
> >> See more below.
> >>
> >> > The WG charter says something about:
> >> > 'VNF and Related Infrastructure Benchmarking: Benchmarking
> >> > Methodologies have reliably characterized many physical devices. This
> >> > work item extends and enhances the methods to virtual network
> >> > functions (VNF) and their unique supporting infrastructure. A first
> >> > deliverable from this activity will be a document that considers the
> >> > new benchmarking space to ensure that common issues are recognized
> >> > from the start, using background materials from industry and SDOs
> >> > (e.g., IETF, ETSI NFV).' I do not believe that this document covers
> >> > the intent of the charter, as it focused on one organization only.
> >> [ACM]
> >> I'm sorry, but here you are mistaken. The document that satisfied
> >> the "first deliverable ... document that considers the new
> benchmarking
> >> space"
> >> is: https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05
> >> titled: Considerations for Benchmarking Virtual Network Functions and
> >> Their Infrastructure
> >> which has been submitted to IESG and approved for publication.
> >> Further, the current draft (draft-ietf-bmwg-vswitch-opnfv-03)
> >> references the approved "Considerations" draft in Section 3
> >> (as does almost every related Industry spec I'm aware of).
> >>
> >> The BMWG Charter continues:
> >>   Benchmarks for platform capacity and performance characteristics of
> >>   virtual routers, switches, and related components will follow, including
> >>   comparisons between physical and virtual network functions. In many cases,
> >>   the traditional benchmarks should be applicable to VNFs, but the lab
> >>   set-ups, configurations, and measurement methods will likely need to
> >>   be revised or enhanced.
> >>
> >> This draft constitutes one of several follow-on efforts, approaching
> >> the problem exactly as we described in the last sentence above.
> >
> >[Dan wrote]
> > How? What last sentence?
> >
> > Is it:
> >
> > 'In many cases,
> >   the traditional benchmarks should be applicable to VNFs, but the lab
> >   set-ups, configurations, and measurement methods will likely need to
> >   be revised or enhanced.'
> >
[ACM] Yes

> > How does this document approach this problem? 
[ACM] 
Lab setups (mentioned in the sentence) are illustrated in Section 4.
https://tools.ietf.org/html/draft-ietf-bmwg-vswitch-opnfv-03#page-12

Configuration parameters (mentioned in the sentence) are listed in Section 3.3
https://tools.ietf.org/html/draft-ietf-bmwg-vswitch-opnfv-03#section-3.3
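
(Purely as an illustration of how those parameters support repeatability --
this is not text from the draft, and the field names and values below are
hypothetical -- a test harness could record the Section 3.3 items next to
each result so that a run can be reproduced later:)

    # Hypothetical sketch (Python): store the SUT software configuration
    # alongside each benchmark record, so a run can be repeated and compared.
    import json
    import subprocess
    from datetime import datetime, timezone

    def capture_sut_config():
        # Illustrative Section 3.3-style items; a real harness would read
        # these from the SUT rather than hard-code placeholder values.
        return {
            "kernel": subprocess.check_output(["uname", "-r"], text=True).strip(),
            "vswitch_version": "OVS 2.7.0",    # placeholder
            "dpdk_version": "17.02",           # placeholder
            "hugepage_size_kb": 1048576,       # placeholder
            "cpu_isolation": "isolcpus=2-7",   # placeholder kernel cmdline
            "deployment_scenario": "p2p",      # placeholder label for a Section 4 scenario
        }

    def record_run(result_file, throughput_mpps):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "sut_config": capture_sut_config(),
            "throughput_mpps": throughput_mpps,
        }
        with open(result_file, "a") as f:
            f.write(json.dumps(record) + "\n")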

> > If there is a need to revise
> > or enhance existing BMWG work, what is needed is specific revisions of
> > documents. This informational document only documents work in one external
> > organization. I have reservations that this is a WG task to advance this
> > document, and of the IESG to approve it. Why can't it stand as an I-D until
> > the WG decides whatever work needs to be undertaken (if any) to meet the
> > OPNFV needs? Or if they wish to have an RFC, why can't it be Independent
> > Stream?
> >
> > Will the WG write similar documents for all (or several) other organizations
> > that implement VNFs one way or another? Should it?
> >
> [Warren wrote]
> Apologies for the delay, which is almost entirely my fault.
> 
> From the Shepherds write-up:
> "There has been a fair amount of work done on this draft, and progress
> made on revisions, feedback, and comments. Several presentations have
> been made in the room during IETF meetings, and followup and
> discussion taken to the BMWG list. This draft is particularly useful,
> given the popularity of VNF's within the industry."
> 
> I believe that there is value in the IETF publishing this as an
> Informational document - the document provides useful information for
> the Internet community (especially those folk benchmarking VNFs :-)).
> The OPNFV virtual switch performance characterization work is very
> closely related and relevant to the BMWG work, and I think that having
> a collaboration-type document (written from the IETF viewpoint, and
> informing IETF participants) is useful.
> This *could* have been an Independent Stream doc, but it was discussed
> and worked on in the WG, and so having it be a WG document feels much
> more correct to me.
> 
> W
> 
> >>
> >> An aspect of Industry collaboration that we did not anticipate in the
> >> BMWG Charter is our current interactions with Open Source Communities.
> >> The current Charter was approved in June 2014, then OPNFV was founded
> >> on September 30, 2014 [0] and the VSPERF Project was created on
> >> Dec 16, 2014, so we did not anticipate extensive collaboration on
> >> this and other benchmarking topics.
> >>
> >>
> >>
> >> >[Dan wrote]
> >> > 2. In section 3 there 'repeatability' is mentioned, while
> >> > acknowledging that in a virtual environment there is no guarantee and
> >> > actually no way to know what other applications are being run.
> >> [ACM]
> >> See:
> >> https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-3.4
> >>
> >> There are certainly ways to assess the current set of processes
> >> at a particular time. The Software configuration parameters in
> >> Section 3.3 are intended to capture this aspect as part of set-up.
> >> At the same time, there will be challenges to assess the DUT
> >> performance when resources are fully shared, and new testing
> >> strategies will be needed:
> >> https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-3.3
> >>
> >> [Dan wrote]
> >> > Measuring parameters such as the ones listed in 3.3 provides just part of
> >> > the answer, and they are internal parameters to the SUT.
> >> [ACM]
> >> Yes, knowing the tested configuration is a critical pillar
> >> supporting repeatability (these items are not measured, but configured),
> >> and why we provided this section.
> >> [Dan wrote]
> >> > Also, the
> >> > different deployment scenarios in section 4 require different
> >> > configurations for the SUT, thus breaking the 'black-box' principle.
> >> [ACM]
> >> Specifying DUT configuration does not break any part of the
> >> black-box principle, which establishes that benchmark measurements
> >> will be based on externally observable phenomena. See:
> >> https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-4.2
> >>
> >> Previous BMWG RFCs have identified the critical configuration
> >> parameters of the DUT, such as the number and type of
> >> network interfaces, the arrangement of DUTs in a SUT, etc.
> >
> >[Dan wrote]
> > I may not have been clear enough. The document talks about repeatability and
> > about comparing benchmarks with the ones of specialized HW implementations.
> > Then it goes into a long but still partial list of factors that can
> > influence the benchmarking, the majority of which depend on the HW and SW
> > measurements and parameters of the internal systems. How can this be
> > compared with a number of small and indeed externally observable
> > configuration parameters like the number and type of network interfaces?
> > This is several orders of magnitude apart in complexity.
> >>
[ACM] These are predominantly configured parameters in Section 3.3.
Frankly, I don't see anything listed that is measured.

BMWG has often specified configuration parameters that
are not directly observable, such as the routing protocol 
configuration parameters in parts of 
https://tools.ietf.org/html/rfc6413#section-5
for example.

Everyone actually working on this problem appreciates the difficulty
and the increase of about three orders of magnitude in configuration
complexity.  Nevertheless, this is the reality of networking
with general-purpose computers, and we have to solve this problem
to benchmark the performance in a scientific way.

> >> [Dan wrote]
> >> > I believe that there is a need for a clearer explanation of why BMWG
> >> > specifications are appropriate and how comparison can be made while
> >> > repeatability cannot be ensured, and measurements are dependent upon
> >> > parameters internal to the SUT.
> >> [ACM]
> >> I believe that draft-ietf-bmwg-virtual-net-05 already
> >> indicates why the existing BMWG RFCs are a reasonable
> >> starting place for NFV benchmarks, in part because
> >> we want to measure the same benchmarks of physical
> >> network functions in many cases. See
> >> https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-4.1
> >>
> >> Repeatability is a goal of all experiments, and we understand
> >> that there is more work to do in this regard, but what
> >> we know now (documented in this draft) should
> >> be a valuable contribution to the Industry.
> >>
> >[Dan wrote]
> > Measuring the same benchmarks is a good goal. I believe that the claim of
> > repeatability needs to be better argued.
> >
[ACM] 
I disagree; the repeatability problem needs to be solved,
and many of us are working on it.

> >>
> >>
> >> >
> >> > Minor issues:
> >> >
> >> > 1. Some of the tests mentioned in Section 4 have no prior or in-progress
> >> > work in the IETF: Control Path and Datapath Coupling Tests, Noisy
> >> > Neighbour Tests, and characterization of acceleration technologies.
> >> [ACM]
> >> I'm sorry, but that's not an accurate portrayal of BMWG's literature.
> >>
> >> https://tools.ietf.org/html/rfc6413 examined Control Plane/Dataplane
> >> interactions, for example.
> >>
> >> https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-3.3
> >> item 2 specifically included Noisy Neighbour among the new
> >> testing strategies.
> >>
> >[Dan wrote]
> > Please provide references for each in text.
> >
[ACM] 
I'm sorry, but these references were for you, and 
only to remind you of BMWG's scope of work.
The IGP benchmarking reference is not appropriate here, and 
the other BMWG draft has been sufficiently referenced earlier.

> >>
> >> Every network interface with an ASIC is an example of acceleration,
> >> one that we've characterized in physical network devices for years.
> >>
> >
> > Yes, but we do not deal here with externally observable interfaces only, and
> > if the characterization of the acceleration technologies matters then you
> > need a way to express it (where can we find it in existing BMWG work? new
> > work?)
[ACM] 
I think you misunderstood my reply. The acceleration technologies
we're calling out perform specific functions, such as de/encapsulation
or encryption/decryption. An external/black-box test should measure
performance improvements in a way that can be compared to systems
under test that do not have the benefit of acceleration.
So, like I said above, when router vendors added ASICs on interfaces,
it didn’t change the fundamental benchmarks, but the performance 
improved.
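
(To make that concrete with a small, purely hypothetical black-box
comparison -- the numbers and names below are invented, not measurements --
the benchmark itself is unchanged; only the externally observed result
differs when acceleration is present:)

    # Hypothetical sketch (Python): compare two externally observed results
    # for the same benchmark and configuration, differing only in whether an
    # acceleration feature was enabled on the SUT.
    def compare_acceleration(baseline_fps, accelerated_fps):
        gain = accelerated_fps / baseline_fps
        print(f"baseline:    {baseline_fps:,.0f} fps")
        print(f"accelerated: {accelerated_fps:,.0f} fps")
        print(f"gain:        {gain:.2f}x (same black-box benchmark)")
        return gain

    # Invented example values for a 64-byte frame throughput test:
    compare_acceleration(baseline_fps=1_480_000, accelerated_fps=11_300_000)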

> >
> >>
> >> > If new work is needed / proposed to be added for the BMWG scope and
> >> > framework it would be useful for BMWG to list these separately.
> >> >
> >> >
> >> > Nits/editorial comments:
> >> >
> >> > 1. What is called 'Deployment scenarios' from VS perspective in
> >> > Section 4 describes in fact different configurations of the SUT in BMWG
> >> > terms. It seems better to separate this second part of section 4 into a
> >> > separate section. If it belongs to an existing section it rather
> >> > belongs in 3 than in 4.
> >> >
> >> [ACM]
> >> Section 3 is more about extending the configuration guidance
> >> from
> >> https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-05#section-3.2
> >>
> >> Section 4 summarizes the VSPERF Level Test Design document,
> >> of which these deployment scenarios are a key part.
> >
> >[Dan wrote]
> > Yes, but this seems to belong to configurations of SUTs, even if they are
> > called 'Deployment scenarios' in OPNFV-speak, and they impact repeatability.
[ACM] 
You can only compare the performance of different computer SUTs when they
implement the same vSwitch (and VMs) and deployment scenario.
Consistent configuration parameters and test setups are all important
components of achieving repeatable results.
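
(Again only as an illustrative sketch, reusing the hypothetical record
layout from the earlier example: a harness could refuse to compare two
results unless the configuration items that define the scenario match:)

    # Hypothetical sketch (Python): treat two benchmark records as comparable
    # only when the vSwitch, DPDK version, and deployment scenario all match.
    COMPARABLE_FIELDS = ("vswitch_version", "dpdk_version", "deployment_scenario")

    def comparable(record_a, record_b):
        mismatches = {
            field: (record_a["sut_config"][field], record_b["sut_config"][field])
            for field in COMPARABLE_FIELDS
            if record_a["sut_config"][field] != record_b["sut_config"][field]
        }
        if mismatches:
            print("not comparable, configurations differ:", mismatches)
            return False
        return True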

> >
> >>
> >>
> >> thanks for your comments; hopefully this detailed reply
> >> will reduce your reservations about publication.
> >>
> >> Al
> >> (for the co-authors)
> >>
> >> [0]
> >> https://www.opnfv.org/announcements/2014/09/30/telecom-industry-and-vendors-unite-to-build-common-open-platform-to-accelerate-network-functions-virtualization
> >>
> >
> 
> 
> 
> --
> I don't think the execution is relevant when it was obviously a bad
> idea in the first place.
> This is like putting rabid weasels in your pants, and later expressing
> regret at having chosen those particular rabid weasels and that pair
> of pants.
>    ---maf