Re: [bmwg] WGLC: draft-ietf-bmwg-sdn-controller-benchmark-term-02 and meth-02

"Bhuvan (Veryx Technologies)" <bhuvaneswaran.vengainathan@veryxtech.com> Fri, 06 January 2017 22:30 UTC

Date: Sat, 07 Jan 2017 04:00:19 +0530
From: "Bhuvan (Veryx Technologies)" <bhuvaneswaran.vengainathan@veryxtech.com>
To: "MORTON, ALFRED C (AL)" <acmorton@att.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/7qsW8gzBBIMMhIuNyRegWJTQA9Y>
Cc: Vishwas Manral <vishwas.manral@gmail.com>, bmwg@ietf.org, "Tassinari, Mark" <mark.tassinari@hpe.com>
Subject: Re: [bmwg] WGLC: draft-ietf-bmwg-sdn-controller-benchmark-term-02 and meth-02

Hi Al,

Thank you for your comments/suggestions on the methodology draft.
All of your comments are fine with us; we will address them in the next revision, which will be submitted in a couple of days.

You had also raised a few questions on the methodology draft. Attached is our response, inline with the tag [Authors].
Please let us know if you have any further questions.

Best Regards,
Bhuvan

----- Original Message -----
From: "MORTON, ALFRED C (AL)" <acmorton@att.com>
To: bmwg@ietf.org
Sent: Sunday, 23 October, 2016 12:51:07
Subject: Re: [bmwg] WGLC: draft-ietf-bmwg-sdn-controller-benchmark-term-02 and meth-02

Hi Bhuvan and co-authors,

I have the following comments/suggestions on the methodology draft.

thanks for considering them,
Al
(as a participant)



Section 2., paragraph 1:
OLD:

    This document defines methodology to measure the networking metrics
    of SDN controllers. For the purpose of this memo, the SDN controller
    is a function that manages and controls Network Devices. Any SDN
    controller without a control capability is out of scope for this
    memo. The tests defined in this document enable benchmarking of SDN
    Controllers in two ways; as a standalone controller and as a cluster
    of homogeneous controllers. These tests are recommended for
    execution in lab environments rather than in live network
    deployments. Performance benchmarking of a federation of controllers
    is beyond the scope of this document.

NEW:

    This document defines methodology to measure the networking metrics
    of SDN controllers. For the purpose of this memo, the SDN controller
    is a function that manages and controls Network Devices. Any SDN
    controller without a control capability is out of scope for this
    memo. The tests defined in this document enable benchmarking of SDN
    Controllers in two ways; as a standalone controller and as a cluster
    of homogeneous controllers. These tests are recommended for
    execution in lab environments rather than in live network
    deployments. Performance benchmarking of a federation of controllers
    is beyond the scope of this document.
 [ACM]
 How do you distinguish a homogeneous cluster from a federation ??
 It would be good to clarify this here.


Section 3., paragraph 1:
OLD:

    The tests defined in this document enable measurement of an SDN
    controllers performance in standalone mode and cluster mode. This
    section defines common reference topologies that are later referred
    to in individual tests.

NEW:

    The tests defined in this document enable measurement of an SDN
    controllers performance in standalone mode and cluster mode. This
    section defines common reference topologies that are later referred
 |   to in individual tests (Additional Forwarding Plane topologies are
 |   provided in Appendix A).


Section 3.2., paragraph 1:
OLD:

           +-----------------------------------------------------------+
           |               Application Plane Test Emulator             |
           |                                                           |
           |        +-----------------+      +-------------+           |
           |        |   Application   |      |   Service   |           |
           |        +-----------------+      +-------------+           |
           |                                                           |
           +-----------------------------+(I2)-------------------------+
                                         |
                                         |
                                         | (Northbound interface)
            +---------------------------------------------------------+
            |                                                         |
            |  ------------------             ------------------      |
            | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
            |  ------------------             ------------------      |
            |                                                         |
            |                    Device Under Test (DUT)              |
            +---------------------------------------------------------+
                                         | (Southbound interface)
                                         |
                                         |
           +-----------------------------+(I1)-------------------------+
           |                                                           |
           |          +-----------+              +-----------+         |
           |          |  Network  |l1        ln-1|  Network  |         |
           |          |  Device 1 |---- .... ----|  Device n |         |
           |          +-----------+              +-----------+         |
           |               |l0                        |ln              |
           |               |                          |                |
           |               |                          |                |
           |       +---------------+          +---------------+        |
           |       | Test Traffic  |          | Test Traffic  |        |
           |       |  Generator    |          |  Generator    |        |
           |       |    (TP1)      |          |    (TP2)      |        |
           |       +---------------+          +---------------+        |
           |                                                           |
           |              Forwarding Plane Test Emulator               |
           +-----------------------------------------------------------+

NEW:

           +-----------------------------------------------------------+
           |               Application Plane Test Emulator             |
           |                                                           |
           |        +-----------------+      +-------------+           |
           |        |   Application   |      |   Service   |           |
           |        +-----------------+      +-------------+           |
           |                                                           |
           +-----------------------------+(I2)-------------------------+
                                         |
                                         |
 |                                       | (Northbound interfaces)
            +---------------------------------------------------------+
            |                                                         |
            |  ------------------             ------------------      |
            | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
            |  ------------------             ------------------      |
            |                                                         |
            |                    Device Under Test (DUT)              |
            +---------------------------------------------------------+
 |                                       | (Southbound interfaces)
                                         |
                                         |
           +-----------------------------+(I1)-------------------------+
           |                                                           |
           |          +-----------+              +-----------+         |
           |          |  Network  |l1        ln-1|  Network  |         |
           |          |  Device 1 |---- .... ----|  Device n |         |
           |          +-----------+              +-----------+         |
           |               |l0                        |ln              |
           |               |                          |                |
           |               |                          |                |
           |       +---------------+          +---------------+        |
           |       | Test Traffic  |          | Test Traffic  |        |
           |       |  Generator    |          |  Generator    |        |
           |       |    (TP1)      |          |    (TP2)      |        |
           |       +---------------+          +---------------+        |
           |                                                           |
           |              Forwarding Plane Test Emulator               |
           +-----------------------------------------------------------+


Section 4.1., paragraph 1:
OLD:

    The test cases SHOULD use Leaf-Spine topology with at least 1
    Network Device in the topology for benchmarking. The test traffic
    generators TP1 and TP2 SHOULD be connected to the first and the last
    leaf Network Device. If a test case uses test topology with 1
    Network Device, the test traffic generators TP1 and TP2 SHOULD be
    connected to the same node. However to achieve a complete
    performance characterization of the SDN controller, it is
    recommended that the controller be benchmarked for many network
    topologies and a varying number of Network Devices. This document
    includes a few sample test topologies, defined in Section 10 -
    Appendix A for reference. Further, care should be taken to make sure
    that a loop prevention mechanism is enabled either in the SDN
    controller, or in the network when the topology contains redundant
    network paths.

NEW:

    The test cases SHOULD use Leaf-Spine topology with at least 1
    Network Device in the topology for benchmarking. The test traffic
    generators TP1 and TP2 SHOULD be connected to the first and the last
    leaf Network Device. If a test case uses test topology with 1
    Network Device, the test traffic generators TP1 and TP2 SHOULD be
    connected to the same node. However to achieve a complete
    performance characterization of the SDN controller, it is
    recommended that the controller be benchmarked for many network
    topologies and a varying number of Network Devices. This document
 |  includes two sample test topologies, defined in Section 10 -
    Appendix A for reference. Further, care should be taken to make sure
    that a loop prevention mechanism is enabled either in the SDN
    controller, or in the network when the topology contains redundant
    network paths.


Section 4.2., paragraph 1:
OLD:

    Test traffic is used to notify the controller about the arrival of
    new flows. The test cases SHOULD use multiple frame sizes as
    recommended in RFC2544 for benchmarking.

NEW:

 |  Test traffic is used to notify the controller about the asynchronous arrival of
    new flows. The test cases SHOULD use multiple frame sizes as
    recommended in RFC2544 for benchmarking.
 [ACM]
 DISCUSS: what is the value of using all the 2544/different packet sizes,
 when a single packet of any size is sufficient to trigger the
 notification?  Maybe only use two sizes, max MTU and min ???
 Sometimes the packet is forwarded with the notification, and
 this could make a difference...


Section 4.4., paragraph 1:
OLD:

    There may be controller implementations that support unencrypted and
    encrypted network connections with Network Devices. Further, the
    controller may have backward compatibility with Network Devices
    running older versions of southbound protocols. It is recommended
    that the controller performance be measured with one or more
    applicable connection setup methods defined below.

NEW:

    There may be controller implementations that support unencrypted and
    encrypted network connections with Network Devices. Further, the
    controller may have backward compatibility with Network Devices
 |  running older versions of southbound protocols. It may be useful
 |  to measure the controller performance with one or more
 |  applicable connection setup methods defined below.


Section 4.5., paragraph 1:
OLD:

    The measurement accuracy depends on several factors including the
    point of observation where the indications are captured. For
    example, the notification can be observed at the controller or test
    emulator. The test operator SHOULD make the observations/
    measurements at the interfaces of test emulator unless it is
    explicitly mentioned otherwise in the individual test.

NEW:

    The measurement accuracy depends on several factors including the
    point of observation where the indications are captured. For
    example, the notification can be observed at the controller or test
    emulator. The test operator SHOULD make the observations/
    measurements at the interfaces of test emulator unless it is
    explicitly mentioned otherwise in the individual test.
 |  In any case, the locations of measurement points MUST be reported.


Section 4.6., paragraph 1:
OLD:

    The SDN controller in the test setup SHOULD be connected directly
    with the forwarding and the management plane test emulators to avoid
    any delays or failure introduced by the intermediate devices during
    benchmarking tests.

NEW:

    The SDN controller in the test setup SHOULD be connected directly
    with the forwarding and the management plane test emulators to avoid
    any delays or failure introduced by the intermediate devices during
 |  benchmarking tests. When the controller is implemented as a Virtual
 |  Machine, details of the physical and logical connectivity MUST
 |  be reported.


Section 4.7., paragraph 5:
OLD:

       1.Controller name and version
       2.Northbound protocols and versions
       3.Southbound protocols and versions
       4.Controller redundancy mode (Standalone or Cluster Mode)
       5.Connection setup (Unencrypted or Encrypted)
       6.Network Topology (Mesh or Tree or Linear)
       7.Network Device Type (Physical or Virtual or Emulated)
       8.Number of Nodes
       9.Number of Links
       10.Test Traffic Type
       11.Controller System Configuration (e.g., CPU, Memory, Operating
         System, Interface Speed etc.,)
       12.Reference Test Setup (e.g., Section 3.1 etc.,)

NEW:

       1.Controller name and version
       2.Northbound protocols and versions
       3.Southbound protocols and versions
       4.Controller redundancy mode (Standalone or Cluster Mode)
       5.Connection setup (Unencrypted or Encrypted)
       6.Network Topology (Mesh or Tree or Linear)
       7.Network Device Type (Physical or Virtual or Emulated)
       8.Number of Nodes
       9.Number of Links
 |     10.Test Traffic Type (dataplane ??)
       11.Controller System Configuration (e.g., CPU, Memory, Operating
 |       System, Interface Speed etc., see Section 3.2 of [draft-ietf-bmwg-virtual-net-04])
       12.Reference Test Setup (e.g., Section 3.1 etc.,)


Section 4.7., paragraph 6:
OLD:

    Controller Settings Parameters:
       1.Topology re-discovery timeout
       2.Controller redundancy mode (e.g., active-standby etc.,)

NEW:

    Controller Settings Parameters:
       1.Topology re-discovery timeout
       2.Controller redundancy mode (e.g., active-standby etc.,)
 |     3.Controller state persistence enabled/disabled


Section 5.1.1., paragraph 4:
OLD:

    The test SHOULD use one of the test setups described in section 3.1
    or section 3.2 of this document.

NEW:

    The test SHOULD use one of the test setups described in section 3.1
 |  or section 3.2 of this document in combination with Appendix A.
 [ACM] there are more places to add this point...


Section 5.1.2., paragraph 5:
OLD:

 Prerequisite:

    1.The controller MUST have completed the network topology discovery
       for the connected Network Devices.

NEW:

 Prerequisite:
 [ACM] add for all prereq's: successfully (like below)
 |  1.The controller MUST have successfully completed the network topology discovery
       for the connected Network Devices.


Section 5.1.2., paragraph 10:
OLD:

     Where Nrx is the total number of successful messages exchanged
                                                  Tr1 + Tr2 + Tr3..Trn
    Average Asynchronous Message Processing Time= --------------------
                                                  Total Test Iterations

NEW:

     Where Nrx is the total number of successful messages exchanged
                                                  Tr1 + Tr2 + Tr3..Trn
    Average Asynchronous Message Processing Time= --------------------
 |                                                Total Test Repetitions
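For concreteness, the averaging above can be sketched in a few lines of Python; the Tr values below are hypothetical per-repetition measurements, purely for illustration:

```python
# Hypothetical per-repetition measurements Tr1..Trn (milliseconds); the
# values are illustrative, not drawn from any actual benchmark run.
def average_processing_time(tr_values):
    """Average Asynchronous Message Processing Time = (Tr1+..+Trn)/n,
    where n is the total number of test repetitions."""
    if not tr_values:
        raise ValueError("at least one test repetition is required")
    return sum(tr_values) / len(tr_values)

print(average_processing_time([12.0, 11.5, 13.25, 12.25]))  # -> 12.25
```

The same computation applies to the other per-repetition averages in the draft (path provisioning time/rate, topology change detection time).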


Section 2., paragraph 3:
OLD:

                                                  Tr1 + Tr2 + Tr3..Trn
    Average Asynchronous Message Processing Rate= --------------------
                                                  Total Test Iterations

NEW:

                                                  Tr1 + Tr2 + Tr3..Trn
    Average Asynchronous Message Processing Rate= --------------------
 |                                                Total Test Repetitions


Section 5.1.4., paragraph 2:
OLD:

    The time taken by the controller to setup a path reactively between
    source and destination node, defined as the interval starting with
    the first flow provisioning request message received by the
    controller(s), ending with the last flow provisioning response
    message sent from the controller(s) at it Southbound interface.

NEW:

    The time taken by the controller to setup a path reactively between
    source and destination node, defined as the interval starting with
    the first flow provisioning request message received by the
 |  controller(s) at its Southbound? interface, ending with the last flow provisioning response
 |  message sent from the controller(s) at its Southbound interface.
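The interval defined above could be computed from a timestamped southbound capture roughly as follows. This is a Python sketch; the record layout and the message-type names (`flow_request`, `flow_response`) are assumptions for illustration, not identifiers from the draft:

```python
# Hypothetical southbound capture: (timestamp_ms, direction, msg_type),
# where "in" = received at the controller's Southbound interface and
# "out" = sent from it. Field and message names are illustrative only.
def reactive_path_provisioning_time(records):
    """Interval from the first flow provisioning request received to the
    last flow provisioning response sent (per the definition above)."""
    t_req = [t for t, d, m in records if d == "in" and m == "flow_request"]
    t_rsp = [t for t, d, m in records if d == "out" and m == "flow_response"]
    if not t_req or not t_rsp:
        raise ValueError("capture has no complete request/response pair")
    return max(t_rsp) - min(t_req)

capture = [(2, "in", "flow_request"), (4, "in", "flow_request"),
           (9, "out", "flow_response"), (13, "out", "flow_response")]
print(reactive_path_provisioning_time(capture))  # -> 11
```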


Section 5.1.4., paragraph 4:
OLD:

    The test SHOULD use one of the test setups described in section 3.1
    or section 3.2 of this document.

NEW:

    The test SHOULD use one of the test setups described in section 3.1
 |  or section 3.2 of this document. The number of Network Devices in the
 |  path is a parameter of the test that may be varied from 2 to ?? in
 |  repetitions of this test.


Section 5., paragraph 3:
OLD:

                                                Tr1 + Tr2 + Tr3 .. Trn
     Average Proactive Path Provisioning Time = -----------------------
                                                 Total Test Iterations

NEW:

                                                Tr1 + Tr2 + Tr3 .. Trn
     Average Proactive Path Provisioning Time = -----------------------
 |                                               Total Test Repetitions


Section 2., paragraph 3:
OLD:

                                                Tr1 + Tr2 + Tr3 .. Trn
     Average Reactive Path Provisioning Rate = ------------------------
                                                Total Test Iterations

NEW:

                                                Tr1 + Tr2 + Tr3 .. Trn
     Average Reactive Path Provisioning Rate = ------------------------
 |                                              Total Test Repetitions


Section 5.1.7., paragraph 2:
OLD:

    Measure the maximum number of independent paths a controller can
    concurrently establish between source and destination nodes
    proactively, defined as the number of paths provisioned by the
    controller(s) at its Southbound interface for the paths provisioned
    in its Northbound interface between the start of the test and the
    expiry of given test duration .

NEW:

 |  Measure the maximum rate of independent paths a controller can
    concurrently establish between source and destination nodes
    proactively, defined as the number of paths provisioned by the
 |  controller(s) at its Southbound interface for the paths requested
    in its Northbound interface between the start of the test and the
 |  expiry of given test duration. The measurement is based on dataplane
 |  observations of successful path activation (but this is dependent
 |  on the emulated switches and the controller ??)


Section 3., paragraph 3:
OLD:

                                                Tr1 + Tr2 + Tr3 .. Trn
     Average Proactive Path Provisioning Rate = -----------------------
                                                Total Test Iterations

NEW:

                                                Tr1 + Tr2 + Tr3 .. Trn
     Average Proactive Path Provisioning Rate = -----------------------
 |                                              Total Test Repetitions


Section 5.1.8., paragraph 2:
OLD:

    The amount of time required for the controller to detect any changes
    in the network topology, defined as the interval starting with the
    notification message received by the controller(s) at its Southbound
    interface, ending with the first topology rediscovery messages sent
    from the controller(s) at its Southbound interface.

NEW:

 |  The amount of time required for the controller to react to changes
    in the network topology, defined as the interval starting with the
    notification message received by the controller(s) at its Southbound
    interface, ending with the first topology rediscovery messages sent
    from the controller(s) at its Southbound interface.


Section 4., paragraph 3:
OLD:

                                                  Tr1 + Tr2 + Tr3 .. Trn
     Average Network Topology Change Detection Time = ------------------
                                                   Total Test Iterations

NEW:

                                                  Tr1 + Tr2 + Tr3 .. Trn
     Average Network Topology Change Detection Time = ------------------
 |                                                 Total Test Repetitions


Section 4., paragraph 6:
OLD:

 5.2. 6.2 Scalability

NEW:

 |5.2.  Scalability


Section 3., paragraph 0:
OLD:

    1. Establish control connection with controller from every Network
       Device emulated in the forwarding plane test emulator.
    2. Stop the test when the controller starts dropping the control
       connection.
    3. Record the number of successful connections established with the
       controller (CCn) at the forwarding plane test emulator.

NEW:

    1. Establish control connection with controller from every Network
       Device emulated in the forwarding plane test emulator.
    2. Stop the test when the controller starts dropping the control
 |     connections, or refuses further connections.
    3. Record the number of successful connections established with the
       controller (CCn) at the forwarding plane test emulator.
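The connect-until-refused loop in steps 1-3 might look as follows. This is a Python sketch: `tcp_connect`, the documentation address 192.0.2.1, and the stub controller are assumptions for illustration; only the counting logic mirrors the procedure:

```python
import socket

def control_session_capacity(connect, max_attempts=100000):
    """Open control connections one at a time until the controller drops
    or refuses one (step 2); return CCn, the count of connections
    successfully established (step 3)."""
    ccn = 0
    for _ in range(max_attempts):
        try:
            connect()
        except OSError:
            break
        ccn += 1
    return ccn

def tcp_connect(host="192.0.2.1", port=6653):
    # Real variant: one emulated Network Device opening its southbound
    # control channel; raises OSError when the controller refuses.
    return socket.create_connection((host, port), timeout=5)

# Stub controller that accepts only 5 sessions, for illustration:
def make_limited_connect(limit):
    state = {"left": limit}
    def connect():
        if state["left"] == 0:
            raise OSError("connection refused")
        state["left"] -= 1
    return connect

print(control_session_capacity(make_limited_connect(5)))  # -> 5
```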


Section 6., paragraph 5:
OLD:

 5.2.3. 6.2.3 Forwarding Table Capacity

NEW:

 |5.2.3.  Forwarding Table Capacity


Section 2., paragraph 12:
OLD:

 5.3. 6.3 Security

NEW:

 |5.3. Security


Section 2., paragraph 13:
OLD:

 5.3.1. 6.3.1 Exception Handling

NEW:

 |5.3.1.  Exception Handling


Section 5., paragraph 6:
OLD:

     - Number of cluster nodes
     - Redundancy mode

NEW:

 |   - Number of cluster nodes and data overlap among cluster members
 |   - Redundancy mode (examples ??)


Section 5., paragraph 7:
OLD:

     - Controller Failover

NEW:

 |   - Controller Failover ( ?? )


Section 5., paragraph 8:
OLD:

     - Time Packet Loss

NEW:

 |   - Time Packet Loss ( ?? )


Section 4., paragraph 0:
OLD:

    1. Send bi-directional traffic continuously with unique sequence
       number from TP1 and TP2.
    2. Bring down a link or switch in the traffic path.
    3. Stop the test after receiving first frame after network re-
       convergence.
    4. Record the time of last received frame prior to the frame loss at
       TP2 (TP2-Tlfr) and the time of first frame received after the
       frame loss at TP2 (TP2-Tffr).

NEW:

    1. Send bi-directional traffic continuously with unique sequence
       number from TP1 and TP2.
    2. Bring down a link or switch in the traffic path.
    3. Stop the test after receiving first frame after network re-
       convergence.
    4. Record the time of last received frame prior to the frame loss at
       TP2 (TP2-Tlfr) and the time of first frame received after the
 |     frame loss at TP2 (TP2-Tffr). There must be a gap in sequence numbers
 |     of these frames.
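The bookkeeping in step 4, including the sequence-number gap check, can be sketched as follows (Python; the frame log below is a hypothetical TP2 capture, not measured data):

```python
# Hypothetical received-frame log at TP2: (rx_time_ms, sequence_number).
# The failover interval is bounded by the last frame before the sequence
# gap (TP2-Tlfr) and the first frame received after it (TP2-Tffr).
def failover_interval(frames):
    """Return (Tlfr, Tffr) around the first sequence-number gap,
    or None when no frames were lost."""
    frames = sorted(frames)  # order by receive time
    for (t_prev, seq_prev), (t_next, seq_next) in zip(frames, frames[1:]):
        if seq_next != seq_prev + 1:  # gap -> frames lost during failover
            return t_prev, t_next
    return None

log = [(0, 1), (10, 2), (20, 3), (250, 7), (260, 8)]
tlfr, tffr = failover_interval(log)
print(tffr - tlfr)  # -> 230
```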


Section 8., paragraph 1:
OLD:

    Benchmarking tests described in this document are limited to the
    performance characterization of controller in lab environment with
    isolated network.

NEW:

    Benchmarking tests described in this document are limited to the
    performance characterization of controller in lab environment with
    isolated network.
 [ACM]  more would be good here, see BMWG examples.


Appendix B., paragraph 0:
OLD:

 Appendix B. Benchmarking Methodology using OpenFlow Controllers

NEW:

 [ACM] stopped here...

 Appendix B. Benchmarking Methodology using OpenFlow Controllers



> -----Original Message-----
> From: bmwg [mailto:bmwg-bounces@ietf.org] On Behalf Of MORTON, ALFRED C
> (AL)
> Sent: Sunday, October 23, 2016 1:45 PM
> To: bmwg@ietf.org
> Subject: [bmwg] WGLC: draft-ietf-bmwg-sdn-controller-benchmark-term-02
> and meth-02
>
>
> BMWG:
>
> A WG Last Call period for the Internet-Drafts on
> SDN Controller Benchmarking Terminology and Methodology:
>
> https://tools.ietf.org/html/draft-ietf-bmwg-sdn-controller-benchmark-
> term-02
> https://tools.ietf.org/html/draft-ietf-bmwg-sdn-controller-benchmark-
> meth-02
>
> will be open from 24 October 2016 through 15 November 2016.
>
>
> These drafts are beginning the BMWG Last Call Process. See
> http://www1.ietf.org/mail-archive/web/bmwg/current/msg00846.html
>
> Please read and express your opinion on whether or not these
> Internet-Drafts should be forwarded to the Area Directors for
> publication as Informational RFCs.  Send your comments
> to this list or to co-chair acmorton@att.com
>
> for the co-chairs,
> Al
>
> _______________________________________________
> bmwg mailing list
> bmwg@ietf.org
> https://www.ietf.org/mailman/listinfo/bmwg
