[bmwg] Benjamin Kaduk's No Objection on draft-ietf-bmwg-sdn-controller-benchmark-meth-08: (with COMMENT)

Benjamin Kaduk <kaduk@mit.edu> Thu, 19 April 2018 12:22 UTC

From: Benjamin Kaduk <kaduk@mit.edu>
To: The IESG <iesg@ietf.org>
Cc: draft-ietf-bmwg-sdn-controller-benchmark-meth@ietf.org, Al Morton <acmorton@att.com>, bmwg-chairs@ietf.org, acmorton@att.com, bmwg@ietf.org
Message-ID: <152414053109.28837.9925559089834201998.idtracker@ietfa.amsl.com>
Date: Thu, 19 Apr 2018 05:22:11 -0700
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/Wgp3UzXjpklbSvn9Lrhh2GILGmE>

Benjamin Kaduk has entered the following ballot position for
draft-ietf-bmwg-sdn-controller-benchmark-meth-08: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)


Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.


The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-sdn-controller-benchmark-meth/



----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

In the Abstract:

   This document defines the methodologies for benchmarking control
   plane performance of SDN controllers.

Why "the" methodologies?   That seems more authoritative than is
appropriate in an Informational document.


Why do we need the test setup diagrams in both the terminology draft
and this one?  Including them in both seems redundant.


In Section 4.1, how can we even have a topology with just one
network device?  This "at least 1" seems too low.  Similarly, how
would TP1 and TP2 *not* be connected to the same node if there is
only one device?

Thank you for adding consideration to key distribution in Section
4.4, as noted by the secdir review.  But insisting on having key
distribution done prior to testing gives the impression that keys
are distributed once and updated never, which has questionable
security properties.  Perhaps there is value in doing some testing
while rekeying is in progress?

I agree with others that the statistical methodology is not clearly
justified, such as the sample size of 10 in Section 4.7 (with no
consideration for sample relative variance), use of sample vs.
population variance, etc.
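For illustration, the difference between the two variance estimators
at this fixed sample size is easy to see in code (the trial values
below are hypothetical, not taken from the draft):

```python
import statistics

# Hypothetical trial results (e.g., topology discovery times in ms);
# illustrative values only, not from the draft.
trials = [102.0, 98.5, 101.2, 99.8, 103.1, 97.6, 100.4, 102.7, 98.9, 101.9]

n = len(trials)  # the draft's fixed sample size of 10

pop_var = statistics.pvariance(trials)   # divides by n
samp_var = statistics.variance(trials)   # divides by n - 1 (Bessel's correction)

print(f"population variance: {pop_var:.3f}")
print(f"sample variance:     {samp_var:.3f}")
# The two differ by a factor of n/(n-1), i.e. about 11% at n = 10 --
# not negligible, which is why the choice of estimator should be stated.
print(f"ratio: {samp_var / pop_var:.3f}")
```

At larger sample sizes the two estimators converge, but at n = 10 the
choice visibly affects any reported variance or confidence interval.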

It seems like the measurements being described sometimes start the
timer at an event at a network element and other times start the
timer when a message enters the SDN controller itself (similarly for
outgoing messages), which implies a different treatment of network
propagation delays for different tests.  Assuming
these differences were made by conscious choice, it might be nice to
describe why the network propagation is/is not included for any
given measurement.

It looks like the term "Nrxn" is introduced implicitly and the
reader is supposed to infer that the 'n' represents a counter, with
Nrx1 corresponding to the first measurement, Nrx2 the second, etc.
It's probably worth mentioning this explicitly, for all fields that
are measured on a per-trial/counter basis.

I'm not sure that the end condition for the test in Section 5.2.2
makes sense.

It seems like the test in Section 5.2.3 should not allow flexibility
in "unique source and/or destination address" and rather should
specify exactly what happens.

In Section 5.3.1, only considering 2% of asynchronous messages as
invalid implies a preconception about what might be the reason for
such invalid messages, but that assumption might not hold in the
case of an active attack, which may be somewhat different from the
pure DoS scenario considered in the following section.

Section 5.4.1 says "with incremental sequence number and source
address" -- are both the sequence number and source address
incrementing for each packet sent?  This could be clearer.
It also is a little jarring to refer to "test traffic generator TP2"
when TP2 is just receiving traffic and not generating it.
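To make the ambiguity concrete, here is a sketch (in Python, with
illustrative addresses and counts, not values from the draft) of the
reading in which both fields increment with every packet; the text as
written also admits readings where only one of them does:

```python
import ipaddress

def gen_packets(base_src="10.0.0.1", count=5):
    """One possible reading of Section 5.4.1: both the sequence number
    and the source address increment per packet sent.
    base_src and count are illustrative, not specified by the draft."""
    src = ipaddress.IPv4Address(base_src)
    for seq in range(1, count + 1):
        yield {"seq": seq, "src": str(src)}
        src += 1  # increment the source address along with the sequence number

for pkt in gen_packets():
    print(pkt)
```

Stating explicitly which of these readings is intended would remove
the "and/or"-style ambiguity from the test procedure.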

Appendix B.3 indicates that plain TCP or TLS can be used for
communications between switch and controller.  It seems like this
would be a highly relevant test parameter to report with the results
for the tests described in this document, since TLS would introduce
additional overhead to be quantified!
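As a sketch of what reporting that parameter alongside a result might
look like (the field names and timing values here are hypothetical,
not defined by the draft):

```python
from dataclasses import dataclass, asdict

# Sketch of recording the switch-to-controller transport as a test
# parameter next to a benchmark result.  Field names and values are
# illustrative, not from the draft.
@dataclass
class BenchmarkResult:
    test_name: str
    value_ms: float
    transport: str  # "TCP" or "TLS" -- record which was in use

r_tcp = BenchmarkResult("Network Topology Discovery Time", 98.2, "TCP")
r_tls = BenchmarkResult("Network Topology Discovery Time", 113.7, "TLS")

# With paired runs, the TLS overhead can be quantified directly:
overhead_ms = r_tls.value_ms - r_tcp.value_ms
print(asdict(r_tls), f"TLS overhead: {overhead_ms:.1f} ms")
```

Without the transport recorded, results from TCP and TLS deployments
would not be comparable.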

The figure in Section B.4.5 leaves me a little confused as to what
is being measured, if the SDN Application is depicted as just
spontaneously installing a flow at some time vaguely related to
traffic generation but not dependent on or triggered by the traffic
generation.