[bmwg] Eric Rescorla's No Objection on draft-ietf-bmwg-sdn-controller-benchmark-meth-08: (with COMMENT)

Eric Rescorla <ekr@rtfm.com> Tue, 17 April 2018 21:14 UTC

From: Eric Rescorla <ekr@rtfm.com>
To: The IESG <iesg@ietf.org>
Cc: draft-ietf-bmwg-sdn-controller-benchmark-meth@ietf.org, Al Morton <acmorton@att.com>, bmwg-chairs@ietf.org, acmorton@att.com, bmwg@ietf.org
Message-ID: <152399965258.11535.11874306299818806488.idtracker@ietfa.amsl.com>
Date: Tue, 17 Apr 2018 14:14:12 -0700
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/jcp0Nr4y2NxMKdiob3ElfYpIq5E>
Subject: [bmwg] Eric Rescorla's No Objection on draft-ietf-bmwg-sdn-controller-benchmark-meth-08: (with COMMENT)

Eric Rescorla has entered the following ballot position for
draft-ietf-bmwg-sdn-controller-benchmark-meth-08: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)


Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.


The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-sdn-controller-benchmark-meth/



----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

Rich version of this review at:
https://mozphab-ietf.devsvcdev.mozaws.net/D3948

COMMENTS
>      reported.
>   
>   4.7. Test Repeatability
>   
>      To increase the confidence in measured result, it is recommended
>      that each test SHOULD be repeated a minimum of 10 times.

Nit: you might be happier with "RECOMMENDED that each test be repeated
..."

Also, where does 10 come from? Generally, the number of trials you
need depends on the variance of each trial.
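
As a rough illustration (not something the document has to adopt), the
usual way to choose a trial count is to work backwards from a target
confidence-interval width using the per-trial standard deviation seen in
a pilot run. A minimal sketch, assuming roughly normal trial results and
a 95% interval; names and numbers here are made up:

    # Illustrative sketch only: pick the number of trials from a target
    # half-width of a 95% confidence interval on the mean, given the
    # per-trial standard deviation observed in a pilot run.
    import math

    def trials_needed(per_trial_std_dev, target_half_width, z=1.96):
        # Smallest n with z * s / sqrt(n) <= target_half_width
        return math.ceil((z * per_trial_std_dev / target_half_width) ** 2)

    print(trials_needed(per_trial_std_dev=5.0, target_half_width=2.0))  # 25
    print(trials_needed(per_trial_std_dev=1.0, target_half_width=2.0))  # 1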



>      Test Reporting
>   
>      Each test has a reporting format that contains some global and
>      identical reporting components, and some individual components that
>      are specific to individual tests. The following test configuration
>      parameters and controller settings parameters MUST be reflected in

This is an odd MUST, as it's not required for interop.


>      5. Stop the trial when the discovered topology information matches
>        the deployed network topology, or when the discovered topology
>        information return the same details for 3 consecutive queries.
>      6. Record the time last discovery message (Tmn) sent to controller
>        from the forwarding plane test emulator interface (I1) when the
>        trial completed successfully. (e.g., the topology matches).

How large is the topology discovery time (TD) usually? How does 3
seconds compare to that?
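
For concreteness, here is a minimal sketch of the stop condition in step
5, assuming a hypothetical query_discovered_topology() helper that
returns the controller's current view of the topology (e.g., a set of
links) and an assumed 3-second query interval; Tmn itself would still be
taken from the forwarding-plane emulator capture, not from this loop:

    # Hypothetical sketch of the step-5 stop condition; helper names and
    # the 3-second query interval are assumptions, not taken from the draft.
    import time

    def discovery_trial_succeeded(query_discovered_topology, deployed_topology,
                                  query_interval=3.0, max_queries=100):
        previous, identical_run = None, 0
        for _ in range(max_queries):
            discovered = query_discovered_topology()
            if discovered == deployed_topology:
                return True                        # topology matches: success
            identical_run = identical_run + 1 if discovered == previous else 1
            if identical_run >= 3:                 # same details 3 queries in a row
                return False                       # stop the trial without a match
            previous = discovered
            time.sleep(query_interval)
        return False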


>                                                   Total Trials
>   
>                                                 SUM[SQUAREOF(Tri-TDm)]
>      Topology Discovery Time Variance (TDv) =  ----------------------
>                                                    Total Trials - 1
>   

You probably don't need to specify individual formulas for mean and
variance. However, you probably do want to explain why you are using
the n-1 sample variance formula.
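
For reference, the distinction in question, sketched with made-up
numbers (the n-1 form is the unbiased estimator when TDm is itself
computed from the same trials, which is presumably the rationale worth
stating):

    # Sketch of the two estimators; values are illustrative, not from the draft.
    def sample_variance(values):          # divides by n-1, as in the draft's formula
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    def population_variance(values):      # divides by n
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    trials = [10.2, 11.0, 9.8, 10.5, 10.1]     # hypothetical TDi values in seconds
    print(sample_variance(trials), population_variance(trials))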


>   
>   Measurement:
>   
>                                                  (R1-T1) + (R2-T2) + .. + (Rn-Tn)
>      Asynchronous Message Processing Time Tr1 = --------------------------------
>                                                                Nrx

Incidentally, this formula is the same as (\sum_i{R_i} - \sum_i{T_i}) / Nrx,
so the per-message pairing of R and T isn't actually needed.
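
Spelled out, assuming every one of the Nrx transmitted messages gets a
response, so both sums run over the same n = Nrx messages:

    \[
      Tr_1 = \frac{(R_1 - T_1) + (R_2 - T_2) + \cdots + (R_n - T_n)}{N_{rx}}
           = \frac{\sum_{i=1}^{n} R_i - \sum_{i=1}^{n} T_i}{N_{rx}}
    \]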


>      messages transmitted to the controller.
>   
>      If this test is repeated with varying number of nodes with same
>      topology, the results SHOULD be reported in the form of a graph. The
>      X coordinate SHOULD be the Number of nodes (N), the Y coordinate
>      SHOULD be the average Asynchronous Message Processing Time.

This is an odd metric because an implementation which handled overload
by dropping every other message would look better than one which
handled overload by queuing.
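
A toy example of the concern, with made-up numbers (and assuming the
average is taken over the messages that actually receive responses): a
controller that sheds half its load reports a better average than one
that queues everything, because dropped messages never add to the
numerator.

    # Made-up numbers illustrating the comment above; not a real measurement.
    def avg_processing_time(response_delays_ms):
        return sum(response_delays_ms) / len(response_delays_ms)

    # Queuing controller: answers all 6 messages, later ones wait behind earlier ones.
    queuing = [1, 2, 3, 4, 5, 6]
    # Dropping controller: silently drops 3 of the 6, so survivors never queue.
    dropping = [1, 1, 1]

    print(avg_processing_time(queuing))    # 3.5 ms, all messages answered
    print(avg_processing_time(dropping))   # 1.0 ms, half the messages lost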