Re: [bmwg] Applied review comments (from Luis M. Contreras) of draft-rosa-bmwg-vnfbench

Raphael Vicente Rosa <raphaelvrosa@gmail.com> Thu, 29 August 2019 13:36 UTC

To: bmwg@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/gHalZQbEngtNgWmMzY3ytkNdfMs>

Dear Luis Contreras,

We appreciate the comments provided for the draft, which are answered
below. We have already updated the draft based on your comments (the latest
version is available at https://github.com/raphaelvrosa/vnf-bench-meth).
However, we are still running tests and updating the draft (especially the
VNF-PP section) before committing it as version -05.
When it is available we will post it here, and we would appreciate any
further comments you could contribute.

.- it would be good to have some text in the draft indicating how this
benchmarking methodology relates to the activities of the ETSI NFV TST
working group
*Answer:* In the "Considerations" section we highlighted the ETSI TST work
as a complementary example of benchmarking and measurement models for a
VNF benchmarking methodology.
We are currently analyzing the ETSI TST specifications, looking for common
denominators with the draft. In general, ETSI TST does not focus on the
automation of VNF benchmarking methodologies; however, it presents methods
and metrics that can be referenced by VNF benchmarking methodologies.


.- a special case for benchmarking could be redundancy. There are
different redundancy schemes (e.g., VNFs with active/standby VNFCs,
M:1 redundant VNFCs, non-redundant VNFCs, etc.). Would redundancy be
part of the scope of the draft/methodology that you describe? And if so,
how could this be included as part of the descriptors/setup that you
describe?
*Answer:* In general, we consider VNFCs to be black boxes that are part of
a VNF. In that case, different VNF images (identified by version/release)
might be provided for benchmarking, each differing in the internal
redundancy expected from the Tester's point of view.
If needed, monitoring points must be specified for each VNFC in the VNF
benchmarking methodology.
We recognize the particular case of redundancy, defining special
conditions for it in a new document item, 6.4.2.

.- Can we in general terms assume that Agents represent active probing
while Monitors represent passive probing? If so, it would probably be good
to make it explicit
*Answer:* We clarified the Agent and Monitor definitions, referring
explicitly to active and passive probing.
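For illustration, a minimal sketch of the distinction; the class and
method names (Prober, Listener, probe, listen) are our hypothetical
illustration, not definitions from the draft:

import subprocess
import time

class Prober:
    """Active probing: injects stimulus traffic and measures its outcome."""

    def probe(self, target: str, packets: int = 10) -> str:
        # Send ICMP echo requests (the stimulus) and return the raw report.
        result = subprocess.run(
            ["ping", "-c", str(packets), target],
            capture_output=True, text=True, check=False,
        )
        return result.stdout

class Listener:
    """Passive probing: observes the SUT without injecting any traffic."""

    def listen(self, duration_s: float, interval_s: float = 1.0) -> list:
        # Periodically sample a metric source; here, the 1-minute load
        # average from /proc/loadavg (Linux-specific, purely illustrative).
        samples = []
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            with open("/proc/loadavg") as f:
                samples.append(float(f.read().split()[0]))
            time.sleep(interval_s)
        return samples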

/Specific comments/
.- Section 5, bullet on VNF (after Fig. 1): with relation to the VNFCs,
can those components be tested individually, or should they always be
tested as part of the comprehensive VNF?
*Answer:* We consider that VNFs are benchmarked as they are, i.e., VNFCs
are black boxes that make the minimum set of VNF functionalities available.
This statement is reinforced in the Scenario/Nodes subsection, item 6.1.5.1.

.- Figure 1: the Monitor box, as depicted, is not very clear. The
boundaries of the box overlap with the boundaries of the VNF component and
the Execution environment. This could be done on purpose, but the figure
becomes a bit confusing, at least to me. Additionally, I can see arrows
to/from the Agents, but no arrows to/from the Monitor. From/to where is
the information sent to the Monitor box?
*Answer:* We clarified Fig. 1 with well-defined monitoring interfaces for
the infrastructure/VNF (the SUT as a whole), explaining the interfaces in
detail.

.- Section 6.1: should information be included about the kind of VIM,
MANO, etc. to use for on-boarding, managing, and running the VNF? Should
the NMS of the VNF be part of the tests (maybe assisting in the
configuration/collection of information)? Should the usage of VMs,
containers, etc. in the setup also be declared? Where?
*Answer:* We added a clarifying statement at the end of item 6.1.4. In
that section, named "Environment", we propose that the Manager component
realizes specific interfaces to a MANO system so it can deploy the VNF-BD
scenario. The draft leaves that part open to implementation, not
constrained to any particular technology.
Reference details of the VNF-BD deployment (on-boarding, etc.) are
provided in the VNF-PP, as we intend to keep the VNF-BD technology
agnostic. As the VNF-PP contains the deployment settings, those must be
detailed there, describing the VIM/MANO templates, for instance.
However, we are still investigating whether an interface from the Manager
component to the VNF or the VNFM should be provided for VNF benchmarking
methodologies.
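To make the "open to implementation" point concrete, a minimal sketch of
what such a deployment interface could look like; the driver name and
methods below are our illustration, not part of the draft:

from abc import ABC, abstractmethod

class ManoDriver(ABC):
    """Technology-agnostic deployment interface used by the Manager."""

    @abstractmethod
    def deploy(self, scenario: dict) -> str:
        """On-board and instantiate a VNF-BD scenario; return a handle."""

    @abstractmethod
    def teardown(self, handle: str) -> None:
        """Release all resources of a previously deployed scenario."""

class PrintDriver(ManoDriver):
    # Stand-in implementation; a real driver would call a VIM/MANO API
    # (which one is exactly what the draft leaves open).
    def deploy(self, scenario: dict) -> str:
        print("deploying scenario with nodes:", list(scenario))
        return "scenario-1"

    def teardown(self, handle: str) -> None:
        print("tearing down", handle)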

.- Section 6.3.1: should the duration of the tests be included there?
*Answer:* As described in item "6.3.2.  Automated Execution", the
automated execution of benchmarking tests depends on the fixed time limit
specified in the VNF-BD or on any specified exit conditions (e.g.,
convergence of metric values).
Therefore, the duration of the tests will depend on the specification of
the VNF-BD Proceedings, i.e., the prober/listener parameters that will
execute the tests. For instance, a simple ping prober could have its
execution time limited by the number of packets to be sent as stimulus.
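As a sketch of those two exit conditions; the function name and the
convergence criterion (relative change below a threshold) are our
assumptions, not text from the draft:

import statistics
import time

def run_until_done(run_trial, time_limit_s=600.0, epsilon=0.01, min_trials=3):
    """run_trial() performs one test and returns a scalar metric value."""
    start = time.monotonic()
    values = []
    while time.monotonic() - start < time_limit_s:  # fixed time limit
        values.append(run_trial())
        if len(values) >= min_trials:
            mean = statistics.mean(values)
            # Exit condition: the metric has converged around its mean.
            if mean and abs(values[-1] - mean) / abs(mean) < epsilon:
                break
    return values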

.- Section 6.2.2: the VNF processing / Active metrics: are those
equivalent to the kind of metrics that can be obtained from a PNF? Any
difference? If not, it could maybe be convenient to reflect that fact,
since the same metric description could be reused by operators for
performing the tests (and for comparison as well)
*Answer:* In general, metrics are defined by the benchmarking methodology
of a network function, whether it is implemented virtually or physically.
Metrics are specific to the VNF/PNF. VNF metrics can be compared to PNF
metrics; however, a VNF benchmark can have more metrics, which depend on
the tools (probers/listeners) used to benchmark it. Standardization of a
VNF benchmarking methodology should make clear how the metrics are defined
and what their source of measurement is.
The draft, focused on benchmarking automation, keeps the VNF-PP technology
agnostic, i.e., any VNF benchmarking methodology can have its metrics
specified in a VNF-PP. Comparison factors can be defined between a VNF and
a PNF if their benchmarking methodologies share the same definition of
metrics.
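A sketch of what a technology-agnostic metric record in a VNF-PP could
look like; the field names are hypothetical, and the draft's actual VNF-PP
layout is the normative reference:

# Hypothetical metric record; field names are illustrative only.
vnf_pp_metric = {
    "name": "throughput",
    "unit": "Gbps",
    "source": "prober/iperf3",     # the tool that produced the measurement
    "series": [9.41, 9.38, 9.40],  # one value per trial
}

def comparable(m1: dict, m2: dict) -> bool:
    # VNF vs. PNF comparison is only meaningful when both reports use the
    # same metric definition (name, unit, and measurement source).
    return (m1["name"], m1["unit"], m1["source"]) == \
           (m2["name"], m2["unit"], m2["source"])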

.- Section 6.3.2: if the Manager collects all measurements, then it has to
support some kind of interface for information retrieval, hasn't it? If
so, it could maybe be convenient to reflect it in Figure 1 and in the text.
*Answer:* The Manager description was clarified, for instance by stating
in Sec. 5: "Manager executes the main configuration, operation, and
management actions to deliver the VNF benchmarking report. Hence, it
detains interfaces open for users interact with the whole benchmarking
framework, realizing, for instance,...".
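As a purely illustrative sketch of that user-facing retrieval interface
(the names are ours, not from the draft):

class Manager:
    """Collects measurements and hands back the benchmarking report."""

    def __init__(self):
        self._measurements = []

    def collect(self, measurement: dict) -> None:
        # Called for each Agent/Monitor result obtained during a test.
        self._measurements.append(measurement)

    def report(self) -> dict:
        # The retrieval interface the comment asks about: users pull the
        # consolidated VNF benchmarking report from here.
        return {"trials": len(self._measurements),
                "measurements": self._measurements}

manager = Manager()
manager.collect({"metric": "latency", "unit": "ms", "value": 0.42})
print(manager.report())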

.- Section 6.4.3: can failure handling be considered active or passive
testing?
.- Section 8: during the VNF benchmarking, could the running of security
tests be considered? For instance, DDoS, etc. If so, it could be
mentioned.
*Answer:* The questions above address specific realizations of VNF
benchmarking, which we do not consider, as the draft focuses on VNF
benchmarking automation. For instance, a DDoS could be caused by, or
defined as a particular case of, a VNF benchmarking methodology
specification (e.g., an algorithm running a throughput evaluation at rates
configured as a DDoS case).
In the case of failure handling, both conditions could be valid (active
and passive). For instance, while stimulus traffic is directed to the VNF,
it is simultaneously monitored for malfunctioning.

/Editorial comments/
.- The Agent/Prober and Monitor/Listener bullets should be better aligned,
maybe using different levels of bullets and skipping existing space lines
.- section 6.3.2, bullet 1: s/ … compose the all the permutations … / …
compose all the permutations …
.- section 6.4.1, 1st paragraph:  s/ … to mitigateside effects … / … to
mitigate side effects …
*Answer:* The draft is undergoing an extensive and detailed
grammar/spell-check review. We will apply those fixes in version -05.

Please let us know if these answers do not cover the clarification
questions in your comments.

Sincerely,
The authors

On Thu, Aug 1, 2019 at 10:22 AM Raphael Vicente Rosa <raphaelvrosa@gmail.com> wrote:

> Luis, thanks a lot for the great review!
> We appreciate the engagement; it will surely help us clarify and improve
> the draft content for the next version, on the path to BMWG adoption.
>
> Best regards,
> (The authors)
>
> On Wed, Jul 31, 2019 at 4:01 PM <bmwg-request@ietf.org> wrote:
>
>> From: "Luis M. Contreras" <contreras.ietf@gmail.com>
>> Date: Wed, 31 Jul 2019 17:33:01 +0200
>> To: bmwg@ietf.org
>> Cc: LUIS MIGUEL CONTRERAS MURILLO <luismiguel.contrerasmurillo@telefonica.com>
>> Subject: [bmwg] Review comments of draft-rosa-bmwg-vnfbench
>>
>> Hi all,
>>
>> as committed last week during the BMWG session, I have performed a
>> review of draft-rosa-bmwg-vnfbench.
>>
>> These are the comments coming from my review.
>>
>> [...]
>>
>> I would like to thank the authors for the very good document produced
>> so far.
>>
>>
>> Best regards
>>
>>
>> Luis
>>
>>
>> --
>> ___________________________________________
>> Luis M. Contreras
>> contreras.ietf@gmail.com
>> luismiguel.contrerasmurillo@telefonica.com
>> Global CTIO unit / Telefonica
>