Re: [bmwg] WGLC: draft-ietf-bmwg-sdn-controller-benchmark-term-02 and meth-02
"MORTON, ALFRED C (AL)" <acmorton@att.com> Sun, 23 October 2016 17:51 UTC
From: "MORTON, ALFRED C (AL)" <acmorton@att.com>
To: "bmwg@ietf.org" <bmwg@ietf.org>
Date: Sun, 23 Oct 2016 13:51:07 -0400
Message-ID: <4AF73AA205019A4C8A1DDD32C034631D45A1F2EAC9@NJFPSRVEXG0.research.att.com>
References: <4AF73AA205019A4C8A1DDD32C034631D45A1F2EAC8@NJFPSRVEXG0.research.att.com>
In-Reply-To: <4AF73AA205019A4C8A1DDD32C034631D45A1F2EAC8@NJFPSRVEXG0.research.att.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/bmwg/yehBUZwt8TM63p6gsAfNkjww8fA>
Subject: Re: [bmwg] WGLC: draft-ietf-bmwg-sdn-controller-benchmark-term-02 and meth-02
Hi Bhuvan and co-authors,

I have the following comments/suggestions on the methodology draft.
Thanks for considering them,
Al
(as a participant)


Section 2., paragraph 1:

OLD:

    This document defines methodology to measure the networking metrics
    of SDN controllers. For the purpose of this memo, the SDN controller
    is a function that manages and controls Network Devices. Any SDN
    controller without a control capability is out of scope for this
    memo. The tests defined in this document enable benchmarking of SDN
    Controllers in two ways; as a standalone controller and as a cluster
    of homogeneous controllers. These tests are recommended for
    execution in lab environments rather than in live network
    deployments. Performance benchmarking of a federation of
    controllers is beyond the scope of this document.

NEW:

    This document defines methodology to measure the networking metrics
    of SDN controllers. For the purpose of this memo, the SDN controller
    is a function that manages and controls Network Devices. Any SDN
    controller without a control capability is out of scope for this
    memo. The tests defined in this document enable benchmarking of SDN
    Controllers in two ways; as a standalone controller and as a cluster
    of homogeneous controllers. These tests are recommended for
    execution in lab environments rather than in live network
    deployments. Performance benchmarking of a federation of
    controllers is beyond the scope of this document.

[ACM] How do you distinguish a homogeneous cluster from a federation ??
It would be good to clarify this here.


Section 3., paragraph 1:

OLD:

    The tests defined in this document enable measurement of an SDN
    controllers performance in standalone mode and cluster mode. This
    section defines common reference topologies that are later referred
    to in individual tests.

NEW:

    The tests defined in this document enable measurement of an SDN
    controllers performance in standalone mode and cluster mode.
    This section defines common reference topologies that are later
  | referred to in individual tests (Additional forwarding Plane
  | topologies are provided in Appendix A).


Section 3.2., paragraph 1:

OLD:

    +-----------------------------------------------------------+
    |             Application Plane Test Emulator               |
    |                                                           |
    |      +-----------------+       +-------------+            |
    |      |   Application   |       |   Service   |            |
    |      +-----------------+       +-------------+            |
    |                                                           |
    +-----------------------------+(I2)-------------------------+
                                  |
                                  |
                                  | (Northbound interface)
    +---------------------------------------------------------+
    |                                                         |
    |  ------------------            ------------------       |
    | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
    |  ------------------            ------------------       |
    |                                                         |
    |                Device Under Test (DUT)                  |
    +---------------------------------------------------------+
                                  | (Southbound interface)
                                  |
    +-----------------------------+(I1)-------------------------+
    |                             |                             |
    |      +-----------+            +-----------+               |
    |      |  Network  |l1      ln-1|  Network  |               |
    |      |  Device 1 |---- .... ----| Device n |              |
    |      +-----------+            +-----------+               |
    |            |l0                      |ln                   |
    |            |                        |                     |
    |            |                        |                     |
    |    +---------------+        +---------------+             |
    |    | Test Traffic  |        | Test Traffic  |             |
    |    |  Generator    |        |  Generator    |             |
    |    |    (TP1)      |        |    (TP2)      |             |
    |    +---------------+        +---------------+             |
    |                                                           |
    |              Forwarding Plane Test Emulator               |
    +-----------------------------------------------------------+

NEW:

    +-----------------------------------------------------------+
    |             Application Plane Test Emulator               |
    |                                                           |
    |      +-----------------+       +-------------+            |
    |      |   Application   |       |   Service   |            |
    |      +-----------------+       +-------------+            |
    |                                                           |
    +-----------------------------+(I2)-------------------------+
                                  |
                                  |
  |                               | (Northbound interfaces)
    +---------------------------------------------------------+
    |                                                         |
    |  ------------------            ------------------       |
    | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
    |  ------------------            ------------------       |
    |                                                         |
    |                Device Under Test (DUT)                  |
    +---------------------------------------------------------+
  |                               | (Southbound interfaces)
                                  |
    +-----------------------------+(I1)-------------------------+
    |                             |                             |
    |      +-----------+            +-----------+               |
    |      |  Network  |l1      ln-1|  Network  |               |
    |      |  Device 1 |---- .... ----| Device n |              |
    |      +-----------+            +-----------+               |
    |            |l0                      |ln                   |
    |            |                        |                     |
    |            |                        |                     |
    |    +---------------+        +---------------+             |
    |    | Test Traffic  |        | Test Traffic  |             |
    |    |  Generator    |        |  Generator    |             |
    |    |    (TP1)      |        |    (TP2)      |             |
    |    +---------------+        +---------------+             |
    |                                                           |
    |              Forwarding Plane Test Emulator               |
    +-----------------------------------------------------------+


Section 4.1., paragraph 1:

OLD:

    The test cases SHOULD use Leaf-Spine topology with at least 1
    Network Device in the topology for benchmarking. The test traffic
    generators TP1 and TP2 SHOULD be connected to the first and the last
    leaf Network Device. If a test case uses test topology with 1
    Network Device, the test traffic generators TP1 and TP2 SHOULD be
    connected to the same node. However to achieve a complete
    performance characterization of the SDN controller, it is
    recommended that the controller be benchmarked for many network
    topologies and a varying number of Network Devices. This document
    includes a few sample test topologies, defined in Section 10 -
    Appendix A for reference. Further, care should be taken to make sure
    that a loop prevention mechanism is enabled either in the SDN
    controller, or in the network when the topology contains redundant
    network paths.

NEW:

    The test cases SHOULD use Leaf-Spine topology with at least 1
    Network Device in the topology for benchmarking. The test traffic
    generators TP1 and TP2 SHOULD be connected to the first and the last
    leaf Network Device. If a test case uses test topology with 1
    Network Device, the test traffic generators TP1 and TP2 SHOULD be
    connected to the same node. However to achieve a complete
    performance characterization of the SDN controller, it is
    recommended that the controller be benchmarked for many network
    topologies and a varying number of Network Devices.
    This document
  | includes two sample test topologies, defined in Section 10 -
    Appendix A for reference. Further, care should be taken to make sure
    that a loop prevention mechanism is enabled either in the SDN
    controller, or in the network when the topology contains redundant
    network paths.


Section 4.2., paragraph 1:

OLD:

    Test traffic is used to notify the controller about the arrival of
    new flows. The test cases SHOULD use multiple frame sizes as
    recommended in RFC2544 for benchmarking.

NEW:

  | Test traffic is used to notify the controller about the asynchronous
    arrival of new flows. The test cases SHOULD use multiple frame sizes
    as recommended in RFC2544 for benchmarking.

[ACM] DISCUSS: what is the value of using all the 2544/different packet
sizes, when a single packet of any size is sufficient to trigger the
notification? Maybe only use two sizes, max MTU and min ???
Sometimes the packet is forwarded with the notification, and this could
make a difference...


Section 4.4., paragraph 1:

OLD:

    There may be controller implementations that support unencrypted and
    encrypted network connections with Network Devices. Further, the
    controller may have backward compatibility with Network Devices
    running older versions of southbound protocols. It is recommended
    that the controller performance be measured with one or more
    applicable connection setup methods defined below.

NEW:

    There may be controller implementations that support unencrypted and
    encrypted network connections with Network Devices. Further, the
    controller may have backward compatibility with Network Devices
  | running older versions of southbound protocols. It may be useful
  | to measure the controller performance with one or more
  | applicable connection setup methods defined below.


Section 4.5., paragraph 1:

OLD:

    The measurement accuracy depends on several factors including the
    point of observation where the indications are captured. For
    example, the notification can be observed at the controller or test
    emulator.
    The test operator SHOULD make the observations/
    measurements at the interfaces of test emulator unless it is
    explicitly mentioned otherwise in the individual test.

NEW:

    The measurement accuracy depends on several factors including the
    point of observation where the indications are captured. For
    example, the notification can be observed at the controller or test
    emulator. The test operator SHOULD make the observations/
    measurements at the interfaces of test emulator unless it is
    explicitly mentioned otherwise in the individual test.
  | In any case, the locations of measurement points MUST be reported.


Section 4.6., paragraph 1:

OLD:

    The SDN controller in the test setup SHOULD be connected directly
    with the forwarding and the management plane test emulators to avoid
    any delays or failure introduced by the intermediate devices during
    benchmarking tests.

NEW:

    The SDN controller in the test setup SHOULD be connected directly
    with the forwarding and the management plane test emulators to avoid
    any delays or failure introduced by the intermediate devices during
  | benchmarking tests. When the controller is implemented as a Virtual
  | machine, details of the physical and logical connectivity MUST
  | be reported.
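As a non-normative aside on the measurement-point discussion: a minimal
Python sketch of how the per-repetition averages defined later in the draft
(e.g., Average Asynchronous Message Processing Time) could be computed from
timestamps captured at the test emulator interfaces. All function and
variable names here are illustrative assumptions of mine, not from the
draft.

```python
# Sketch only: per-repetition measurements are assumed to have been
# captured at the test emulator interfaces, as the methodology requires.

def average_processing_time(per_repetition_times):
    """Average Asynchronous Message Processing Time:
    (Tr1 + Tr2 + ... + Trn) / Total Test Repetitions."""
    if not per_repetition_times:
        raise ValueError("at least one test repetition is required")
    return sum(per_repetition_times) / len(per_repetition_times)

def processing_rate(messages_responded, interval_seconds):
    """Asynchronous Message Processing Rate for one repetition:
    responses sent on the Southbound interface divided by the
    measurement interval."""
    return messages_responded / interval_seconds

# Example: three repetitions, times in milliseconds
times_ms = [12.0, 15.0, 9.0]
print(average_processing_time(times_ms))  # -> 12.0
print(processing_rate(500, 5.0))          # -> 100.0 messages/second
```

The same averaging helper applies unchanged to the path-provisioning and
topology-change metrics, since they share the Tr1..Trn / repetitions form.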
Section 4.7., paragraph 5:

OLD:

    1.Controller name and version
    2.Northbound protocols and versions
    3.Southbound protocols and versions
    4.Controller redundancy mode (Standalone or Cluster Mode)
    5.Connection setup (Unencrypted or Encrypted)
    6.Network Topology (Mesh or Tree or Linear)
    7.Network Device Type (Physical or Virtual or Emulated)
    8.Number of Nodes
    9.Number of Links
    10.Test Traffic Type
    11.Controller System Configuration (e.g., CPU, Memory, Operating
       System, Interface Speed etc.,)
    12.Reference Test Setup (e.g., Section 3.1 etc.,)

NEW:

    1.Controller name and version
    2.Northbound protocols and versions
    3.Southbound protocols and versions
    4.Controller redundancy mode (Standalone or Cluster Mode)
    5.Connection setup (Unencrypted or Encrypted)
    6.Network Topology (Mesh or Tree or Linear)
    7.Network Device Type (Physical or Virtual or Emulated)
    8.Number of Nodes
    9.Number of Links
  | 10.Test Traffic Type (dataplane ??)
    11.Controller System Configuration (e.g., CPU, Memory, Operating
  |    System, Interface Speed etc., see Section 3.2 of
       [draft-ietf-bmwg-virtual-net-04])
    12.Reference Test Setup (e.g., Section 3.1 etc.,)


Section 4.7., paragraph 6:

OLD:

    Controller Settings Parameters:
    1.Topology re-discovery timeout
    2.Controller redundancy mode (e.g., active-standby etc.,)

NEW:

    Controller Settings Parameters:
    1.Topology re-discovery timeout
    2.Controller redundancy mode (e.g., active-standby etc.,)
  | 3.Controller state persistence enabled/disabled


Section 5.1.1., paragraph 4:

OLD:

    The test SHOULD use one of the test setups described in section 3.1
    or section 3.2 of this document.

NEW:

    The test SHOULD use one of the test setups described in section 3.1
  | or section 3.2 of this document in combination with Appendix A.

[ACM] there are more places to add this point...


Section 5.1.2., paragraph 5:

OLD:

    Prerequisite:
    1.The controller MUST have completed the network topology discovery
      for the connected Network Devices.
NEW:

    Prerequisite:
    [ACM] add for all prereq's: successfully (like below)
  | 1.The controller MUST have successfully completed the network
      topology discovery for the connected Network Devices.


Section 5.1.2., paragraph 10:

OLD:

    Where Nrx is the total number of successful messages exchanged

                                                  Tr1 + Tr2 + Tr3..Trn
    Average Asynchronous Message Processing Time= --------------------
                                                  Total Test Iterations

NEW:

    Where Nrx is the total number of successful messages exchanged

                                                  Tr1 + Tr2 + Tr3..Trn
    Average Asynchronous Message Processing Time= --------------------
  |                                               Total Test repetitions


Section 2., paragraph 3:

OLD:

                                                  Tr1 + Tr2 + Tr3..Trn
    Average Asynchronous Message Processing Rate= --------------------
                                                  Total Test Iterations

NEW:

                                                  Tr1 + Tr2 + Tr3..Trn
    Average Asynchronous Message Processing Rate= --------------------
  |                                               Total Test Repetitions


Section 5.1.4., paragraph 2:

OLD:

    The time taken by the controller to setup a path reactively between
    source and destination node, defined as the interval starting with
    the first flow provisioning request message received by the
    controller(s), ending with the last flow provisioning response
    message sent from the controller(s) at it Southbound interface.

NEW:

    The time taken by the controller to setup a path reactively between
    source and destination node, defined as the interval starting with
    the first flow provisioning request message received by the
  | controller(s) at its Southbound? interface, ending with the last
  | flow provisioning response message sent from the controller(s) at
    its Southbound interface.


Section 5.1.4., paragraph 4:

OLD:

    The test SHOULD use one of the test setups described in section 3.1
    or section 3.2 of this document.

NEW:

    The test SHOULD use one of the test setups described in section 3.1
  | or section 3.2 of this document. The number of Network Devices in the
  | path is a parameter of the test that may be varied from 2 to ?? in
  | repetitions of this test.


Section 5., paragraph 3:

OLD:

                                               Tr1 + Tr2 + Tr3 .. Trn
    Average Proactive Path Provisioning Time = -----------------------
                                               Total Test Iterations

NEW:

                                               Tr1 + Tr2 + Tr3 .. Trn
    Average Proactive Path Provisioning Time = -----------------------
  |                                            Total Test Repetitions


Section 2., paragraph 3:

OLD:

                                              Tr1 + Tr2 + Tr3 .. Trn
    Average Reactive Path Provisioning Rate = ------------------------
                                              Total Test Iterations

NEW:

                                              Tr1 + Tr2 + Tr3 .. Trn
    Average Reactive Path Provisioning Rate = ------------------------
  |                                           Total Test Repetitions


Section 5.1.7., paragraph 2:

OLD:

    Measure the maximum number of independent paths a controller can
    concurrently establish between source and destination nodes
    proactively, defined as the number of paths provisioned by the
    controller(s) at its Southbound interface for the paths provisioned
    in its Northbound interface between the start of the test and the
    expiry of given test duration .

NEW:

  | Measure the maximum rate of independent paths a controller can
    concurrently establish between source and destination nodes
    proactively, defined as the number of paths provisioned by the
  | controller(s) at its Southbound interface for the paths requested
    in its Northbound interface between the start of the test and the
  | expiry of given test duration. The measurement is based on dataplane
  | observations of successful path activation (but this is dependent
  | on the emulated switches and the controller ??)


Section 3., paragraph 3:

OLD:

                                               Tr1 + Tr2 + Tr3 .. Trn
    Average Proactive Path Provisioning Rate = -----------------------
                                               Total Test Iterations

NEW:

                                               Tr1 + Tr2 + Tr3 .. Trn
    Average Proactive Path Provisioning Rate = -----------------------
  |                                            Total Test Repetitions


Section 5.1.8., paragraph 2:

OLD:

    The amount of time required for the controller to detect any changes
    in the network topology, defined as the interval starting with the
    notification message received by the controller(s) at its Southbound
    interface, ending with the first topology rediscovery messages sent
    from the controller(s) at its Southbound interface.
NEW:

  | The amount of time required for the controller to react to changes
    in the network topology, defined as the interval starting with the
    notification message received by the controller(s) at its Southbound
    interface, ending with the first topology rediscovery messages sent
    from the controller(s) at its Southbound interface.


Section 4., paragraph 3:

OLD:

                                                     Tr1 + Tr2 + Tr3 .. Trn
    Average Network Topology Change Detection Time = ----------------------
                                                     Total Test Iterations

NEW:

                                                     Tr1 + Tr2 + Tr3 .. Trn
    Average Network Topology Change Detection Time = ----------------------
  |                                                  Total Test Repetitions


Section 4., paragraph 6:

OLD:

    5.2. 6.2 Scalability

NEW:

   |5.2. Scalability


Section 3., paragraph 0:

OLD:

    1. Establish control connection with controller from every Network
       Device emulated in the forwarding plane test emulator.
    2. Stop the test when the controller starts dropping the control
       connection.
    3. Record the number of successful connections established with the
       controller (CCn) at the forwarding plane test emulator.

NEW:

    1. Establish control connection with controller from every Network
       Device emulated in the forwarding plane test emulator.
    2. Stop the test when the controller starts dropping the control
  |    connections, or refuses further connections.
    3. Record the number of successful connections established with the
       controller (CCn) at the forwarding plane test emulator.


Section 6., paragraph 5:

OLD:

    5.2.3. 6.2.3 Forwarding Table Capacity

NEW:

   |5.2.3. Forwarding Table Capacity


Section 2., paragraph 12:

OLD:

    5.3. 6.3 Security

NEW:

   |5.3. Security


Section 2., paragraph 13:

OLD:

    5.3.1. 6.3.1 Exception Handling

NEW:

   |5.3.1. Exception Handling


Section 5., paragraph 6:

OLD:

    - Number of cluster nodes
    - Redundancy mode

NEW:

  | - Number of cluster nodes and data overlap among cluster members
  | - Redundancy mode (examples ??)


Section 5., paragraph 7:

OLD:

    - Controller Failover

NEW:

  | - Controller Failover ( ?? )


Section 5., paragraph 8:

OLD:

    - Time Packet Loss

NEW:

  | - Time Packet Loss ( ?? )


Section 4., paragraph 0:

OLD:

    1. Send bi-directional traffic continuously with unique sequence
       number from TP1 and TP2.
    2. Bring down a link or switch in the traffic path.
    3. Stop the test after receiving first frame after network re-
       convergence.
    4. Record the time of last received frame prior to the frame loss at
       TP2 (TP2-Tlfr) and the time of first frame received after the
       frame loss at TP2 (TP2-Tffr).

NEW:

    1. Send bi-directional traffic continuously with unique sequence
       number from TP1 and TP2.
    2. Bring down a link or switch in the traffic path.
    3. Stop the test after receiving first frame after network re-
       convergence.
    4. Record the time of last received frame prior to the frame loss at
       TP2 (TP2-Tlfr) and the time of first frame received after the
  |    frame loss at TP2 (TP2-Tffr). There must be a gap in sequence
  |    numbers of these frames.


Section 8., paragraph 1:

OLD:

    Benchmarking tests described in this document are limited to the
    performance characterization of controller in lab environment with
    isolated network.

NEW:

    Benchmarking tests described in this document are limited to the
    performance characterization of controller in lab environment with
    isolated network.

[ACM] more would be good here, see BMWG examples.


Appendix B., paragraph 0:

OLD:

    Appendix B. Benchmarking Methodology using OpenFlow Controllers

NEW:

    [ACM] stopped here...

    Appendix B.
    Benchmarking Methodology using OpenFlow Controllers


> -----Original Message-----
> From: bmwg [mailto:bmwg-bounces@ietf.org] On Behalf Of MORTON, ALFRED C
> (AL)
> Sent: Sunday, October 23, 2016 1:45 PM
> To: bmwg@ietf.org
> Subject: [bmwg] WGLC: draft-ietf-bmwg-sdn-controller-benchmark-term-02
> and meth-02
>
>
> BMWG:
>
> A WG Last Call period for the Internet-Drafts on
> SDN Controller Benchmarking Terminology and Methodology:
>
> https://tools.ietf.org/html/draft-ietf-bmwg-sdn-controller-benchmark-
> term-02
> https://tools.ietf.org/html/draft-ietf-bmwg-sdn-controller-benchmark-
> meth-02
>
> will be open from 24 October 2016 through 15 November 2016.
>
>
> These drafts are beginning the BMWG Last Call Process. See
> http://www1.ietf.org/mail-archive/web/bmwg/current/msg00846.html
>
> Please read and express your opinion on whether or not these
> Internet-Drafts should be forwarded to the Area Directors for
> publication as Informational RFCs. Send your comments
> to this list or to co-chair acmorton@att.com
>
> for the co-chairs,
> Al
>
> _______________________________________________
> bmwg mailing list
> bmwg@ietf.org
> https://www.ietf.org/mailman/listinfo/bmwg
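P.S. For the failover test comment earlier ("There must be a gap in
sequence numbers of these frames"): a minimal Python sketch of the check at
TP2. The frame representation (timestamp, sequence-number pairs) and
function name are assumptions of mine, not from the draft.

```python
# Sketch only: locate the re-convergence window at TP2 from a sequence
# gap in the received frames, per the failover test procedure.

def find_loss_window(frames):
    """frames: list of (timestamp, seq) pairs in arrival order at TP2.
    Returns (TP2-Tlfr, TP2-Tffr, lost_frame_count) for the first
    sequence gap found, or None if no gap (no loss) was observed."""
    for (t_prev, seq_prev), (t_next, seq_next) in zip(frames, frames[1:]):
        gap = seq_next - seq_prev - 1
        if gap > 0:  # the required gap in sequence numbers
            return t_prev, t_next, gap
    return None

# Example: frames 3..6 were lost while the network re-converged
frames = [(0.000, 1), (0.001, 2), (0.350, 7), (0.351, 8)]
tlfr, tffr, lost = find_loss_window(frames)
print(tffr - tlfr, lost)  # re-convergence interval (s) and frames lost
```

A run with no gap returns None, which the test procedure would treat as an
invalid measurement (no loss event was actually captured).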