Re: [bmwg] Benjamin Kaduk's No Objection on draft-ietf-bmwg-sdn-controller-benchmark-meth-08: (with COMMENT)

Sarah B <> Thu, 19 April 2018 16:32 UTC

From: Sarah B <>
Date: Thu, 19 Apr 2018 09:32:38 -0700
Cc: The IESG <>,,, ALFRED MORTON <>,, "Bhuvan (Veryx Technologies)" <>
To: Benjamin Kaduk <kaduk@MIT.EDU>

Hi Benjamin,
	Thanks for your review. A few comments are inline below, marked with SB//.

> ----------------------------------------------------------------------
> ----------------------------------------------------------------------
> In the Abstract:
>   This document defines the methodologies for benchmarking control
>   plane performance of SDN controllers.
> Why "the" methodologies?   That seems more authoritative than is
> appropriate in an Informational document.

SB// Sure, we can remove this.

> Thank you for adding consideration to key distribution in Section
> 4.4, as noted by the secdir review.  But insisting on having key
> distribution done prior to testing gives the impression that keys
> are distributed once and updated never, which has questionable
> security properties.  Perhaps there is value in doing some testing
> while rekeying is in progress?

SB// We'll discuss internally. The goal here wasn't to test rekeying, which is why the test calls for nothing beyond doing the key distribution prior to the test. Remember, too, that this is lab testing, not what happens on a live network. But it's an interesting point, and we'll discuss it.

> I agree with others that the statistical methodology is not clearly
> justified, such as the sample size of 10 in Section 4.7 (with no
> consideration for sample relative variance), use of sample vs.
> population variance, etc.

SB// As a draft, we aren't saying you have to do only 10. What number to put has been a point of discussion in BMWG for some time, and we haven't always found a happy medium. We suggest performing the test 10 times, but nothing stops a tester from performing it <n> times, for example 100. What's important is to report how many times the test was performed. We can clarify the text to reflect that, if it helps. Note, however, that BMWG is about repeatability, and that was the goal here.
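To illustrate the point about trial count and reporting, here is a minimal, hypothetical sketch (the function name, the trial values, and the use of sample standard deviation are all illustrative, not from the draft): the tester runs <n> trials, and the summary always carries <n> alongside the statistics so results stay comparable.

```python
# Hypothetical sketch: aggregating n benchmark trials (n=10 here, but a
# tester may choose any n, e.g. 100) and reporting the trial count along
# with the mean and sample standard deviation.
import statistics

def summarize_trials(latencies_ms):
    """Return (n, mean, sample stdev) for a list of per-trial results."""
    n = len(latencies_ms)
    mean = statistics.mean(latencies_ms)
    # statistics.stdev uses the sample (n-1) variance; statistics.pstdev
    # would use the population form the review also mentions.
    stdev = statistics.stdev(latencies_ms) if n > 1 else 0.0
    return n, mean, stdev

# Illustrative per-trial latencies in milliseconds.
trials = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.5, 12.0]
n, mean, stdev = summarize_trials(trials)
print(f"trials={n} mean={mean:.2f} ms stdev={stdev:.2f} ms")
```

Whatever <n> the tester picks, reporting it explicitly is what makes a repeat of the benchmark meaningful.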

> It seems like the measurements being described sometimes start the
> timer at an event at a network element and other times start the
> timer when a message enters the SDN controller itself (similarly for
> outgoing messages), which seems to include a different treatment of
> propagation delays in the network, for different tests.  Assuming
> these differences were made by conscious choice, it might be nice to
> describe why the network propagation is/is not included for any
> given measurement.
> It looks like the term "Nrxn" is introduced implicitly and the
> reader is supposed to infer that the 'n' represents a counter, with
> Nrx1 corresponding to the first measurement, Nrx2 the second, etc.
> It's probably worth mentioning this explicitly, for all fields that
> are measured on a per-trial/counter basis.
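The per-trial naming convention the review infers could be sketched as follows (the helper and the counter values are purely illustrative):

```python
# Hypothetical sketch of the inferred convention: Nrxn denotes the value
# of counter Nrx in trial n, so Nrx1 corresponds to the first trial,
# Nrx2 to the second, and so on. Values here are made up.
def trial_counters(name, values):
    """Map per-trial values to explicit '<name><n>' labels, 1-indexed."""
    return {f"{name}{i}": v for i, v in enumerate(values, start=1)}

print(trial_counters("Nrx", [100, 98, 101]))
# e.g. {'Nrx1': 100, 'Nrx2': 98, 'Nrx3': 101}
```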

> Appendix B.3 indicates that plain TCP or TLS can be used for
> communications between switch and controller.  It seems like this
> would be a highly relevant test parameter to report with the results
> for the tests described in this document, since TLS would introduce
> additional overhead to be quantified!

SB// Agreed. The intent is for the tester to specifically document the scenario so the report reflects the choice made; we'll clarify the text.
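As a rough illustration of documenting that choice, a tester's reported parameters might look like the following sketch (the field names and values are hypothetical, not defined by the draft):

```python
# Hypothetical sketch: recording the switch-to-controller transport
# choice (plain TCP vs TLS) among the reported test parameters, so any
# TLS overhead is attributable when comparing results.
test_parameters = {
    "controller": "example-controller-1.0",  # illustrative name
    "transport": "TLS",                      # or "TCP"
    "tls_version": "1.2",                    # meaningful only for TLS
    "trials": 10,
}

def transport_report(params):
    """Render the transport choice as a one-line report fragment."""
    line = f"transport={params['transport']}"
    if params["transport"] == "TLS":
        line += f" (TLS {params['tls_version']})"
    return line

print(transport_report(test_parameters))
```

The point is simply that two result sets differing only in this parameter should never be compared as if they were the same scenario.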