[bmwg] comments on draft-bhuvan-bmwg-of-controller-benchmarking-01

"MORTON, ALFRED C (AL)" <acm@research.att.com> Wed, 22 October 2014 20:42 UTC

From: "MORTON, ALFRED C (AL)" <acm@research.att.com>
To: "bmwg@ietf.org" <bmwg@ietf.org>, "draft-bhuvan-bmwg-of-controller-benchmarking@tools.ietf.org" <draft-bhuvan-bmwg-of-controller-benchmarking@tools.ietf.org>
Date: Wed, 22 Oct 2014 16:29:51 -0400
Archived-At: http://mailarchive.ietf.org/arch/msg/bmwg/UjlO1OylSFVIrglj9UALWMlbvNU
Subject: [bmwg] comments on draft-bhuvan-bmwg-of-controller-benchmarking-01

Hi Bhuvan, Anton, Vishwas, and Mark,

Thanks for preparing a very complete and interesting draft!

My comments on the draft are dispersed throughout the
text below, all prefaced by "ACM:"

regards,
Al
(as participant)


     Benchmarking Methodology for SDN Controller Performance
        draft-bhuvan-bmwg-of-controller-benchmarking-01

...
1. Introduction

   This document provides generic metrics and methodologies for
   benchmarking SDN controller performance. An SDN controller may
   support many northbound and southbound protocols, implement a wide
   range of applications, and work standalone or as a group to
   achieve the desired functionality. This document considers an SDN
   controller as a black box, regardless of design and implementation.
   The tests defined in the document can be used to benchmark various
   controller designs for performance, scalability, reliability and
   security, independent of northbound and southbound protocols. These
   tests can be performed on an SDN controller running as a virtual
   machine (VM) instance or on a bare metal server. This document is
   intended for those who want to measure SDN controller performance
   as well as compare the performance of various SDN controllers.

   Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

2. Terminology

ACM:
Let's try to be consistent with other efforts that have
also defined terms.  For example, the IRTF SDNRG has an
approved draft now, with many related terms:
http://tools.ietf.org/html/draft-irtf-sdnrg-layer-terminology-04
They define terms like SDN and Interface in the SDN context.
The draft also provides an SDN Architecture in Figure 1,
showing 2 different types of Southbound Interfaces
(Control and Management).

   SDN Node:
      An SDN node is a physical or virtual entity that forwards
      data in a software defined environment.

   Flow:
      A flow is a traffic stream having the same source and destination
      addresses. The addresses could be MAC or IP addresses, or a
      combination of both.
ACM:
This could be closer to the definition of a microflow,
see
http://tools.ietf.org/html/rfc4689#section-3.1.5


   Learning Rate:
      The rate at which the controller learns the new source addresses
      from the received traffic without dropping.
ACM:
I suggest leaving out "without dropping", to give a more general
metric.  Using this definition, we could define the "lossless" or
"reliable" Learning Rate, where the additional condition of no
messages dropped applies.


   Controller Forwarding Table:
      A controller forwarding table contains flow records for the flows
      configured in the data path.




   Northbound Interface:
      Northbound interface is the application programming interface
      provided by the SDN controller for communication with SDN
      services and applications.
ACM:
http://tools.ietf.org/html/draft-irtf-sdnrg-layer-terminology-04#page-7
Figure 1 doesn't show the Northbound interface (but it doesn't
specifically show the boundaries of the controller, either...)

   Southbound Interface:
      Southbound interface is the application programming interface
      provided by the SDN controller for communication with the SDN
      nodes.

   Proactive Flow Provisioning:
      Proactive flow provisioning is the pre-provisioning of flow
      entries into the controller's forwarding table through the
      controller's northbound interface or management interface.

   Reactive Flow Provisioning:
      Reactive flow provisioning is the dynamic provisioning of flow
      entries into the controller's forwarding table based on traffic
      forwarded by the SDN nodes through the controller's southbound
      interface.

   Path:
      A path is the route taken by a flow while traversing from a source
      node to destination node.
ACM:
"route" seems unclear, we want to say something about the nodes traversed.
There are lots of definitions of path; we could adapt the one
from RFC 2330 (below), or another source if you want:
   path A sequence of the form < h0, l1, h1, ..., ln, hn >, where n >=
        0, each hi is a host, each li is a link between hi-1 and hi,
        each h1...hn-1 is a router.  A pair <li, hi> is termed a 'hop'.
        In an appropriate operational configuration, the links and
        routers in the path facilitate network-layer communication of
        packets from h0 to hn.  Note that path is a unidirectional
        concept.


   Standalone Mode:
      Single controller handling all control plane functionalities.

   Cluster/Redundancy Mode:
      Group of controllers handling all control plane functionalities.

ACM: for the Mode definitions above:
The Group case should indicate possibilities for how the group
shares the control responsibilities: shared load, separate loads,
active/standby, etc.  The name Cluster/Redundancy could be any of
these types - maybe define each separately.  For the group case,
how are the Management Plane functions divided?  This aspect
should be added.

   Synchronous Message:
      Any message from the SDN node that triggers a response message
      from the controller, e.g., keepalive request and response
      messages, flow setup request and response messages, etc.

ACM:
"synchronous" seems like the wrong adjective here.  Did this
term come from one of the implementations?  Seems like
"request-response" message or "response required" message is
more exact, but that's just the idealist commenting...

3. Scope

   This document defines a number of tests to measure the networking
   aspects of SDN controllers. These tests are recommended for
   execution in lab environments rather than in real time deployments.

ACM:
s/measure the/measure the performance of/
suggest
s/networking/control and management/ (or just control)









4. Test Setup

   The tests defined in this document enable measurement of SDN
   controller's performance in Standalone mode and Cluster mode. This
   section defines common reference topologies that are later referred
   to in individual tests.

ACM:
In the network cases below (4.1-4.4), we should probably show the
network path between the nodes more explicitly (since we went to
the trouble to define it above).  So here the path would be
Node1, link, Node2, link, . . . Noden

                  ----------      ----------        ----------
                 |   SDN    |____|   SDN    |__..__|   SDN    |
                 |  Node 1  |    |  Node 2  |      |  Node n  |
                  ----------      ----------        ----------


4.1 SDN Network - Controller working in Standalone Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
                         -----------------------
                        |     SDN Controller    |
                        |          (DUT)        |
                         -----------------------
                                   | (Southbound interface)
                                   |
                       ---------------------------
                      |            |              |
                  ----------    ----------    ----------
                 |   SDN    |  |   SDN    |..|   SDN    |
                 |  Node 1  |  |  Node 2  |  |  Node n  |
                  ----------    ----------    ----------

                                  Figure 1

4.2 SDN Network - Controller working in Cluster Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
        ---------------------------------------------------------
       |  ------------------             ------------------      |
       | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
       |  ------------------             ------------------      |
        ---------------------------------------------------------
                                   | (Southbound interface)
                                   |
                       ---------------------------
                      |            |              |
                  ----------    ----------    ----------
                 |   SDN    |  |   SDN    |..|   SDN    |
                 |  Node 1  |  |  Node 2  |  |  Node n  |
                  ----------    ----------    ----------

                                  Figure 2
ACM:
Does this apply to shared control and active/standby?




4.3 SDN Network with Traffic Endpoints (TE) - Controller working in
    Standalone Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
                         -----------------------
                        |  SDN Controller (DUT) |
                         -----------------------
                                   | (Southbound interface)
                                   |
                       ---------------------------
                      |            |              |
                  ----------    ----------    ----------
                 |   SDN    |  |   SDN    |..|   SDN    |
                 |  Node 1  |  |  Node 2  |  |  Node n  |
                  ----------    ----------    ----------
                      |                           |
                --------------             --------------
               |   Traffic    |           |   Traffic    |
               | Endpoint TP1 |           | Endpoint TP2 |
                --------------             --------------

                                  Figure 3

4.4 SDN Network with Traffic Endpoints (TE) - Controller working in
    Cluster Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
        ---------------------------------------------------------
       |  ------------------             ------------------      |
       | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
       |  ------------------             ------------------      |
        ---------------------------------------------------------
                                   | (Southbound interface)
                                   |
                       ---------------------------
                      |            |              |
                  ----------    ----------    ----------
                 |   SDN    |  |   SDN    |..|   SDN    |
                 |  Node 1  |  |  Node 2  |  |  Node n  |
                  ----------    ----------    ----------
                      |                           |
                --------------             --------------
               |   Traffic    |           |   Traffic    |
               | Endpoint TP1 |           | Endpoint TP2 |
                --------------             --------------

                                  Figure 4


4.5 SDN Node with Traffic Endpoints (TE) - Controller working in
    Standalone Mode
                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
                         -----------------------
                        |     SDN Controller    |
                        |          (DUT)        |
                         -----------------------
                                   | (Southbound interface)
                                   |
                               ----------
                       -------|   SDN    |---------
                      |       |  Node 1  |         |
                      |        ----------          |
                  ----------                  ----------
                 | Traffic  |                | Traffic  |
                 | Endpoint |                | Endpoint |
                 |   TP1    |                |   TP2    |
                  ----------                  ----------

                                  Figure 5











4.6 SDN Node with Traffic Endpoints (TE) - Controller working in Cluster
    Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
        ---------------------------------------------------------
       |  ------------------             ------------------      |
       | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
       |  ------------------             ------------------      |
        ---------------------------------------------------------
                                   | (Southbound interface)
                                   |
                               ----------
                       -------|   SDN    |---------
                      |       |  Node 1  |         |
                      |        ----------          |
                  ----------                  ----------
                 | Traffic  |                | Traffic  |
                 | Endpoint |                | Endpoint |
                 |   TP1    |                |   TP2    |
                  ----------                  ----------

                                  Figure 6

5. Test Considerations

5.1 Network Topology

   The network SHOULD be deployed with SDN nodes interconnected in
   either a fully meshed, tree, or linear topology. Care should be
   taken to make sure that the loop prevention mechanism is enabled
   either in the SDN controller or in the network. To get a complete
   performance characterization of the SDN controller, it is
   recommended that the controller be benchmarked for many network
   topologies. These network topologies can be deployed using real
   hardware or emulated in hardware platforms.
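
For illustration, a minimal sketch of deploying one such emulated
topology (assuming Mininet as the emulation tool and a controller
listening at 127.0.0.1:6633 - neither is prescribed by the draft):

    #!/usr/bin/env python
    # Hypothetical example: emulate a 5-switch linear topology and
    # attach it to an external SDN controller over its southbound
    # interface.  The controller address/port are assumptions, not
    # defaults mandated by the draft.
    from mininet.net import Mininet
    from mininet.topo import LinearTopo
    from mininet.node import RemoteController

    net = Mininet(topo=LinearTopo(k=5), controller=None)
    net.addController('c0', controller=RemoteController,
                      ip='127.0.0.1', port=6633)
    net.start()   # SDN nodes connect to the controller
    # ... run the benchmark procedure of interest here ...
    net.stop()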

5.2 Test Traffic

   Test traffic can be used to notify the controller about the arrival
   of new flows or generate notifications/events towards controller.
   In either case, it is recommended that at least five different frame
   sizes and traffic types be used, depending on the intended network
   deployment.

ACM:
Single size tests?  (should be "yes")
We should recommend the default sizes here or reference another set.



5.3 Connection Setup

   There may be controller implementations that support
   unencrypted and encrypted network connections with SDN nodes.
   Further, the controller may have backward compatibility with SDN
   nodes running older versions of southbound protocols. It is
   recommended that the controller performance be measured with the
   applicable connection setup methods.

   1. Unencrypted connection with SDN nodes, running same protocol
      version.
   2. Unencrypted connection with SDN nodes, running
      different (previous) protocol versions.
   3. Encrypted connection with SDN nodes, running same protocol
      version.
   4. Encrypted connection with SDN nodes, running
      different (previous) protocol versions.

ACM:
suggest
s/previous/current and older/

5.4 Measurement Accuracy

   The measurement accuracy depends on the point of observation where
   the indications are captured. For example, the notification can be
   observed at the ingress or egress point of the SDN node. If it is
   observed at the egress point of the SDN node, the measurement also
   includes the latency within the SDN node. It is recommended to make
   the observation at the ingress point of the SDN node unless it is
   explicitly mentioned otherwise in the individual test.

ACM:
This is really about specificity of measurement points.
The accuracy of results-reporting depends on the measurement
point specifications, but there are lots of other factors
affecting accuracy.
I suggest calling this section
"Measurement Point Specification and Recommendation"


5.5 Real World Scenario

   Benchmarking tests discussed in the document are
   to be performed on a "black-box" basis, relying solely on
   measurements observable external to the controller. The network
   deployed and the test parameters should be identical to the
   deployment scenario to obtain value added measures.

ACM:
suggest:
... to obtain measurements with the greatest value.

6. Test Reporting

   Each test has a reporting format which is specific to the
   individual test. In addition, the following configuration
   parameters SHOULD be reflected in the test report.
   1. Controller name and version
   2. Northbound protocols and version
   3. Southbound protocols and version
   4. Controller redundancy mode (Standalone or Cluster Mode)
   5. Connection setup (Unencrypted or Encrypted)
   6. Network Topology (Mesh or Tree or Linear)
   7. SDN Node Type (Physical or Virtual or Emulated)
   8. Number of Nodes
   9. Number of Links
   10. Test Traffic Type

ACM:
I think we may need some more HW specifications here.
check out:
https://tools.ietf.org/html/draft-morton-bmwg-virtual-net-01#section-3
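
For illustration, a hypothetical sketch of how a tester might record
these configuration parameters alongside the per-test results (field
names and values below are made-up examples, not recommendations):

    # Hypothetical report stub; fields mirror the list in Section 6,
    # values are illustrative only.
    test_report = {
        "controller": {"name": "ExampleController", "version": "x.y"},
        "northbound_protocols": ["REST x.y"],
        "southbound_protocols": ["OpenFlow 1.3"],
        "redundancy_mode": "Standalone",    # or "Cluster"
        "connection_setup": "Unencrypted",  # or "Encrypted"
        "network_topology": "Linear",       # Mesh / Tree / Linear
        "sdn_node_type": "Emulated",        # Physical / Virtual / Emulated
        "number_of_nodes": 5,
        "number_of_links": 4,
        "test_traffic_type": "ARP",         # made-up example
        "results": {},                      # per-test tables go here
    }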




7. Benchmarking Tests

7.1 Performance

7.1.1 Network Topology Discovery Time

ACM:
This is a good Benchmark.  One small detail is that we usually
present the Benchmark Definitions separately from the test
procedures - it makes it easier to understand what will be
quantified in a section with all the Benchmark definitions
side by side.  More comments below.

   Objective:
      To measure the time taken by a controller to discover the
      network topology (nodes and their connectivity), expressed in
      milliseconds.

   Setup Parameters:
      The following parameters MUST be defined:

      Network setup parameters:
      Number of nodes (N) - Defines the number of nodes present in the
      defined network topology
ACM:
suggest:
      Topology: clear specification (e.g., full mesh) or diagram.
------

ACM:
Latency on the links between nodes will affect the result, right?
Perhaps this should be measured and reported, too.
------

      Test setup parameters:
      Test Iterations (Tr) - Defines the number of times the test needs
      to be repeated. The recommended value is 3.
      Test Interval (To) - Defines the maximum time for the test to
      complete, expressed in milliseconds.
ACM:
For unsuccessful discovery iterations, how are the results reported?

      Test Setup:
      The test can use one of the test setups described in sections
      4.1 and 4.2 of this document.

   Prerequisite:
      1.  The controller should support network discovery.
ACM:
. . . MUST support network discovery???

      2.  Tester should be able to retrieve the discovered topology
          information either through controller's management interface
          or northbound interface.
ACM:
s/should/SHOULD/
ACM: add
. . . to determine if the discovery was successful and complete.

   Procedure:
      1.  Initialize the controller - network applications, northbound
          and southbound interfaces.
      2.  Deploy the network with the given number of nodes using mesh
          or linear topology.
      3.  Initialize the network connections between controller and
          network nodes.
ACM:
So, the controller starts out knowing all the nodes it controls,
that makes sense with Topology discovery.

      4.  Record the time for the first discovery message exchange
          between the controller and the network node (Tm1).
      5.  Query the controller continuously for the discovered network
          topology information and compare it with the deployed network
          topology information.
      6.  Stop the test when the discovered topology information
          matches the deployed network topology or upon expiry of the
          test interval (To).




      7.  Record the time of the last discovery message exchange
          between the controller and the network node (Tmn) when the
          test completed successfully.
ACM:
. . . successfully (e.g., the topology matches).

   Note: While recording the Tmn value, it is recommended that the
         messages that are used for aliveness check or session
         management be ignored.

   Measurement:
      Topology Discovery Time Tr1 = Tmn-Tm1.

                                        Tr1 + Tr2 + Tr3 .. Trn
      Average Topology Discovery Time = -----------------------
                                        Total Test Iterations
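
As an illustration, a minimal sketch of the measurement above (the
poll-and-compare loop plus the averaging; poll() is a placeholder for
a query over the controller's northbound or management interface, not
a real API, and averaging only successful iterations is an assumption):

    import time

    def discovery_time(deployed_topology, poll, timeout_s):
        # One iteration: Topology Discovery Time = Tmn - Tm1 (ms).
        # poll() returns (discovered_topology, time_of_last_discovery_msg).
        tm1 = time.time()                    # ~ first discovery exchange (Tm1)
        deadline = tm1 + timeout_s           # test interval To
        while time.time() < deadline:
            discovered, tmn = poll()
            if discovered == deployed_topology:
                return (tmn - tm1) * 1000.0  # Tr = Tmn - Tm1
        return None                          # unsuccessful iteration

    def average_discovery_time(trials):
        # Average Topology Discovery Time over successful iterations only.
        ok = [t for t in trials if t is not None]
        return sum(ok) / len(ok) if ok else None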

   Note:
      1. To increase the certainty of measured result, it is
ACM:
s/certainty of/confidence in/

         recommended that this test be performed several times with
         same number of nodes using same topology.
      2. To get the full characterization of a controller's topology
         discovery functionality
         a. Perform the test with varying number of nodes using same
            topology
         b. Perform the test with same number of nodes using different
            topologies.

   Reporting Format:
      The Topology Discovery Time results SHOULD be reported in the
      format of a table, with a row for each iteration. The last row of
      the table indicates the average Topology Discovery Time.

      If this test is repeated with varying number of nodes over the
      same topology, the results SHOULD be reported in the form of a
      graph. The X coordinate SHOULD be the Number of nodes (N), the
      Y coordinate SHOULD be the average Topology Discovery Time.
ACM:
nicely done, and very traditional.

      If this test is repeated with same number of nodes over different
      topologies, the results SHOULD be reported in the form of a graph.
      The X coordinate SHOULD be the Topology Type, the Y coordinate
      SHOULD be the average Topology Discovery Time.

ACM:
many of the comments above apply to the sections below,
I won't repeat them.










7.1.2 Synchronous Message Processing Time

   Objective:
      To measure the time taken by the controller to process a
      synchronous message, expressed in milliseconds.

   Setup Parameters:
      The following parameters MUST be defined:

      Network setup parameters:
      Number of nodes (N) - Defines the number of nodes present in the
      defined network topology

      Test setup parameters:
      Test Iterations (Tr) - Defines the number of times the test needs
      to be repeated. The recommended value is 3.
      Test Duration (Td) - Defines the duration of test iteration,
      expressed in seconds. The recommended value is 5 seconds.

      Test Setup:
      The test can use one of the test setups described in sections
      4.1 and 4.2 of this document.

   Prerequisite:
      1. The controller should have completed the network topology
         discovery for the connected nodes.

   Procedure:
      1. Generate a synchronous message from every connected node, one
         at a time, and wait for the response before generating the
         next message.
ACM:
So this is serial message processing time.
We may want to distinguish this in the name of the Benchmark.
-------

ACM:
I don't see how the loss of a request message would be handled.
This needs to be mentioned somewhere.  For example I suppose
a request sender will time-out and re-send the request, but then
we need to know that time-out.  Time-outs have a large impact
on the average for that iteration - perhaps there should be a way
to count the re-transmitted requests. There's definitely an
issue here.
-------


      2. Record total number of messages sent to the controller by all
         nodes (Ntx) and the responses received from the
         controller (Nrx) within the test duration (Td).

ACM:
So this is a fixed duration test, and the number of successful responses
completed determines the average request response time.

   Measurement:
                                                  Td
      Synchronous Message Processing Time Tr1 = ------
                                                  Nrx

                                                   Tr1 + Tr2 + Tr3..Trn
      Average Synchronous Message Processing Time= --------------------
                                                  Total Test Iterations
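
A rough sketch of this fixed-duration, serial measurement
(send_request() and wait_for_response() are placeholder callables;
the time-out/retransmission handling raised above is deliberately
not modeled):

    import time

    def sync_msg_processing_time(nodes, td_s, send_request, wait_for_response):
        # One iteration: serial request/response for the test duration Td,
        # then Tr = Td / Nrx, expressed in milliseconds.
        ntx = nrx = 0
        deadline = time.time() + td_s
        while time.time() < deadline:
            for node in nodes:              # one outstanding request at a time
                send_request(node)
                ntx += 1
                if wait_for_response(node):
                    nrx += 1
        return (td_s * 1000.0) / nrx if nrx else None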









   Note:
      1. The above test measures the controller's message processing
         time at lower traffic rate. To measure the controller's
         message processing time at full connection rate, apply the
         same measurement equation with the Td and Nrx values obtained
         from Synchronous Message Processing Rate test
         (defined in Section 7.1.3).
      2. To increase the certainty of measured result, it is
         recommended that this test be performed several times with
         same number of nodes using same topology.
      3. To get the full characterization of a controller's synchronous
         message processing time
         a. Perform the test with varying number of nodes using same
            topology
         b. Perform the test with same number of nodes using different
            topologies.

   Reporting Format:
      The Synchronous Message Processing Time results SHOULD be
      reported in the format of a table with a row for each iteration.
      The last row of the table indicates the average Synchronous
      Message Processing Time.

      The report should capture the following information in addition
      to the configuration parameters captured in section 6.
      - Offered rate (Ntx)

      If this test is repeated with varying number of nodes with same
      topology, the results SHOULD be reported in the form of a graph.
      The X coordinate SHOULD be the Number of nodes (N), the
      Y coordinate SHOULD be the average Synchronous Message Processing
      Time.

      If this test is repeated with same number of nodes using
      different topologies, the results SHOULD be reported in the form
      of a graph. The X coordinate SHOULD be the Topology Type, the
      Y coordinate SHOULD be the average Synchronous Message Processing
      Time.

7.1.3 Synchronous Message Processing Rate

   Objective:
      To measure the maximum number of synchronous messages (session
      aliveness check message, new flow arrival notification
      message etc.) a controller can process within the test duration,
      expressed in messages processed per second.

ACM:
even when the controller is dropping messages?
(this is kind of the Benchmark definition above,
so I'm asking for clarification)






   Setup Parameters:
      The following parameters MUST be defined:

      Network setup parameters:
      Number of nodes (N) - Defines the number of nodes present in the
      defined network topology.

      Test setup parameters:
      Test Iterations (Tr) - Defines the number of times the test needs
      to be repeated. The recommended value is 3.
      Test Duration (Td) - Defines the duration of test iteration,
      expressed in seconds. The recommended value is 5 seconds.

      Test Setup:
      The test can use one of the test setups described in sections
      4.1 and 4.2 of this document.

   Prerequisite:
      1. The controller should have completed the network topology
         discovery for the connected nodes.

   Procedure:
      1. Generate synchronous messages from all the connected nodes
         at the full connection capacity for the Test Duration (Td).
ACM:
I think we need to add detail on the connection capacity from
each node to the controller.  Is it a shared link with an aggregation
point?  Or, do these control connections use traffic management,
and are we talking about the capacity of a virtual pipe, not the PHY?

      2. Record total number of messages sent to the controller by all
         nodes (Ntx) and the responses received from the
         controller (Nrx) within the test duration (Td).

ACM:
We have to distinguish the lossy case, where Ntx is not equal to Nrx.
----------

   Measurement:
                                                 Nrx
      Synchronous Message Processing Rate Tr1 = -----
                                                 Td
                                                   Tr1 + Tr2 + Tr3..Trn
      Average Synchronous Message Processing Rate= --------------------
                                                  Total Test Iterations

ACM:
and I think we want a version of this for lossless operation.
Perhaps another case with the loss ratio measured would also be
useful.  ALSO, I think this comment about recognizing loss
conditions may apply to the procedures that follow.
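
For example, a sketch of the per-iteration rate with the loss-aware
variants suggested above (the loss-ratio formula is one reading of
the comment, not text from the draft):

    def sync_msg_processing_rate(ntx, nrx, td_s):
        # Per-iteration rate plus a loss-aware view of the same counters.
        rate = nrx / td_s                                # Tr = Nrx / Td
        loss_ratio = (ntx - nrx) / ntx if ntx else None
        lossless = (ntx == nrx)                          # candidate lossless condition
        return rate, loss_ratio, lossless

    def average_rate(iterations):
        # Average Synchronous Message Processing Rate over all iterations.
        return sum(r for (r, _, _) in iterations) / len(iterations)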

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

ACM:
A lot of good Benchmark tests follow, but I will stop here
since I've already made quite a few comments.

I think the 3x3 matrix is helpful in the draft, because there
are so many benchmarks described.

From the user perspective, the Scalability metrics affect the
reliability of the system as they would perceive it. For example,
a system operating at max capacity would block further requests
until space is available, so I can make a case that scale influences
reliability (but so does capacity engineering).

EOT.