[bmwg] Spencer Dawkins' No Objection on draft-ietf-bmwg-sdn-controller-benchmark-meth-08: (with COMMENT)

Spencer Dawkins <spencerdawkins.ietf@gmail.com> Mon, 16 April 2018 19:46 UTC

From: Spencer Dawkins <spencerdawkins.ietf@gmail.com>
To: The IESG <iesg@ietf.org>
Cc: draft-ietf-bmwg-sdn-controller-benchmark-meth@ietf.org, Al Morton <acmorton@att.com>, bmwg-chairs@ietf.org, bmwg@ietf.org
Date: Mon, 16 Apr 2018 12:46:04 -0700
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/kPSro6kRo7GYd0kDBvHnWHtNM_U>
Subject: [bmwg] Spencer Dawkins' No Objection on draft-ietf-bmwg-sdn-controller-benchmark-meth-08: (with COMMENT)

Spencer Dawkins has entered the following ballot position for
draft-ietf-bmwg-sdn-controller-benchmark-meth-08: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)


Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.


The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-sdn-controller-benchmark-meth/



----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

I have a few questions, at the No Objection level ... do the right thing, of
course.

I apologize for attempting to play amateur statistician, but it seems to me
that this text

4.7. Test Repeatability

   To increase the confidence in measured result, it is recommended
   that each test SHOULD be repeated a minimum of 10 times.

is recommending a heuristic, when I'd think that you'd want to repeat a test
until the results converge on some measure of central tendency, within some
acceptable margin of error.
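
A minimal sketch of what I mean, where run_trial is a stand-in for whatever the
benchmark actually measures, and the 95% confidence level, the 5% relative
margin, and the trial counts are illustrative assumptions rather than anything
the draft specifies:

   import statistics

   def run_trial():
       # Placeholder: run one benchmark trial and return the measured value.
       raise NotImplementedError

   def repeat_until_stable(min_trials=5, max_trials=50, rel_margin=0.05):
       # Repeat the trial until the ~95% confidence half-width of the mean
       # is within rel_margin of the mean, or until max_trials is reached.
       results = []
       while len(results) < max_trials:
           results.append(run_trial())
           if len(results) < min_trials:
               continue
           mean = statistics.mean(results)
           sem = statistics.stdev(results) / len(results) ** 0.5
           if 1.96 * sem <= rel_margin * abs(mean):
               break
       return statistics.mean(results), len(results)

And this text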

Procedure:

   1. Establish the network connections between controller and network
     nodes.
   2. Query the controller for the discovered network topology
     information and compare it with the deployed network topology
     information.
   3. If the comparison is successful, increase the number of nodes by 1
     and repeat the trial.
     If the comparison is unsuccessful, decrease the number of nodes by
     1 and repeat the trial.
   4. Continue the trial until the comparison of step 3 is successful.
   5. Record the number of nodes for the last trial (Ns) where the
     topology comparison was successful.

seems to beg for a binary search, especially if you're testing whether a
controller can support a large number of nodes ...
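
For instance, a minimal sketch, assuming the comparison outcome is monotone in
the node count; find_max_supported_nodes and topology_matches are hypothetical
names, not anything defined in the draft:

   def find_max_supported_nodes(max_nodes, topology_matches):
       # Largest node count in [0, max_nodes] whose discovered topology
       # matches the deployed topology, assuming success is monotone.
       lo, hi = 0, max_nodes
       while lo < hi:
           mid = (lo + hi + 1) // 2   # round up so the loop terminates
           if topology_matches(mid):  # deploy mid nodes, query, compare
               lo = mid               # success: try a larger topology
           else:
               hi = mid - 1           # failure: the answer is below mid
       return lo

That takes O(log(max_nodes)) trials instead of walking the node count up or
down one at a time.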

This text

Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

or some variation of it is repeated about 16 times. I'm not understanding why
this is using BCP 14 language, and, if BCP 14 language is the right thing to
do, why it's always SHOULD.

I get that this will help compare results if two researchers are running the
same tests. Is there more to the requirement than that?

In this text,

Procedure:

   1. Perform the listed tests and launch a DoS attack towards
     controller while the trial is running.

   Note:

    DoS attacks can be launched on one of the following interfaces.

     a. Northbound (e.g., Query for flow entries continuously on
       northbound interface)
     b. Management (e.g., Ping requests to controller's management
       interface)
     c. Southbound (e.g., TCP SYN messages on southbound interface)

is there a canonical description of "DoS attack" that researchers should be
using, in order to compare results? These are just examples, right?
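
For what it's worth, the northbound example (a) could be as simple as a tight
polling loop against the controller's flow-entry API. A minimal sketch, where
the URL, credentials, and duration are made-up placeholders rather than
anything the draft defines:

   import time
   import requests

   def northbound_query_load(duration_s=60,
                             url="https://controller.example:8443/flows",
                             auth=("admin", "admin")):
       # Query the (hypothetical) flow-entry endpoint as fast as possible
       # for duration_s seconds while the benchmark trial runs elsewhere.
       deadline = time.monotonic() + duration_s
       while time.monotonic() < deadline:
           try:
               requests.get(url, auth=auth, verify=False, timeout=1)
           except requests.RequestException:
               pass  # keep generating load even if the controller stops responding

But without at least a nominal rate or volume for the attack traffic, two labs
running "the same" attack could still see very different results.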

Is the choice of

  [OpenFlow Switch Specification]  ONF,"OpenFlow Switch Specification"
              Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.

intentional? Googling suggests that the current version of OpenFlow is 1.5.1,
from 2015.