Re: [bmwg] Feedback for Benchmarking Methodology for OpenFlow SDN Controller Performance

"Castelli, Brian" <Brian.Castelli@spirent.com> Wed, 04 June 2014 15:28 UTC

From: "Castelli, Brian" <Brian.Castelli@spirent.com>
To: "Castelli, Brian" <Brian.Castelli@spirent.com>, "Bhuvan (Veryx Technologies)" <bhuvaneswaran.vengainathan@veryxtech.com>, "bmwg@ietf.org" <bmwg@ietf.org>
Thread-Topic: [bmwg] Feedback for Benchmarking Methodology for OpenFlow SDN Controller Performance
Thread-Index: AQHPgAmqkw5oL53PiESsXt8X0YocVA==
Date: Wed, 04 Jun 2014 15:28:42 +0000
Message-ID: <CFB4B28A.541E%brian.castelli@spirent.com>
Accept-Language: en-US
Content-Language: en-US
Archived-At: http://mailarchive.ietf.org/arch/msg/bmwg/gRTTvlagolrE5d-8ak-DVnaGa68
Cc: 'Anton Basil' <anton.basil@veryxtech.com>, 'Vishwas Manral' <vishwas.manral@gmail.com>
Subject: Re: [bmwg] Feedback for Benchmarking Methodology for OpenFlow SDN Controller Performance

I forgot one comment…

Section 6.1.1.1.5 reporting format:

The last sentence of the first paragraph says, “The test report MUST indicate whether learning was enabled or disabled.” What type of learning does this requirement refer to? And if it is important, shouldn’t it be a Setup Parameter listed in Section 6.1.1.1.2?

From: Brian Castelli <brian.castelli@spirent.com>
Date: Wednesday, June 4, 2014 at 11:15 AM
To: "Bhuvan (Veryx Technologies)" <bhuvaneswaran.vengainathan@veryxtech.com>, "bmwg@ietf.org" <bmwg@ietf.org>
Cc: 'Anton Basil' <anton.basil@veryxtech.com>, 'Vishwas Manral' <vishwas.manral@gmail.com>
Subject: Re: [bmwg] Feedback for Benchmarking Methodology for OpenFlow SDN Controller Performance

Bhuvan,

Thank you for the response and the advice on how to best participate in the discussion. I have a few follow-up comments/questions:

Section 6.1.1.1.3 Procedure:

How will we assist the person running the test in creating the proper packet-in messages? What characteristics of the packet-ins will make this test successful?

[Bhuvan] To make the test successful, we need to simulate a packet-in message for each flow. Typically, flows are identified by dst.mac and dst.ip.

[Brian] Is this well known to test authors? Should the document provide guidance on how to create packet-in messages that correspond to the desired flows?
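As a hypothetical illustration of the guidance Brian is asking for, a test tool could derive one distinct (dst.mac, dst.ip) pair per flow, so that each simulated packet-in maps to exactly one flow. The function name and base values below are assumptions for the sketch, not from the draft:

```python
def flow_identifiers(n_flows, mac_base=0x020000000000, ip_base=(10, 0, 0, 0)):
    """Yield (dst_mac, dst_ip) strings for n_flows distinct flows.
    Each simulated packet-in carries one pair, so flows are unambiguous."""
    ip_start = (ip_base[0] << 24) | (ip_base[1] << 16) | (ip_base[2] << 8) | ip_base[3]
    for i in range(n_flows):
        mac = mac_base + i
        # Render the 48-bit MAC big-endian, byte by byte.
        dst_mac = ":".join(f"{(mac >> (8 * b)) & 0xFF:02x}" for b in reversed(range(6)))
        ip = ip_start + i
        dst_ip = ".".join(str((ip >> (8 * b)) & 0xFF) for b in reversed(range(4)))
        yield dst_mac, dst_ip
```

Iterating this generator gives the tool a deterministic, collision-free set of flow keys to embed in the simulated packet-in payloads.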



  *   In the reactive-mode setup, there is no mention of the rate at which packet-ins are sent, but the test result can be skewed by that rate. If I send only one packet-in per second, for example, the FSR cannot exceed one flow per second. I think we should specify the rate at which packet-ins are sent to the controller, and iterate over both packet-in rates and numbers of switches.

[Bhuvan] We agree with your comment. We plan to define the minimum number of packet-ins that must be sent on each OpenFlow connection (typically greater than 1). The assumption, however, is that we need to send as many packet-in messages as possible on an established OpenFlow connection to measure the FSR.

[Brian] I think it would be interesting to iterate over different packet-in rates. I would expect controller response time to degrade with increased packet-in load. An iteration would let us determine how well the controller scales and how gracefully response time degrades.
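The iteration Brian proposes could be sketched as a rate sweep. Here `send_at_rate` is an assumed test-tool hook (hypothetical, not from the draft) that offers packet-ins at a given rate for a fixed duration and returns what was sent and answered; degradation shows up as a falling response ratio at higher offered rates:

```python
def sweep_packet_in_rates(send_at_rate, rates_pps, duration_s=10):
    """For each offered packet-in rate, record how many packet-ins were sent
    and how many drew an OpenFlow response. A response ratio that falls as
    the rate rises indicates the controller is degrading under load."""
    report = {}
    for rate in rates_pps:
        sent, received = send_at_rate(rate, duration_s)
        ratio = received / sent if sent else 0.0
        report[rate] = {"sent": sent, "received": received, "response_ratio": ratio}
    return report
```

The same harness could wrap an outer loop over switch counts, giving the two-dimensional iteration (rates x switches) suggested above.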

  *   Without the iteration I suggested above, I don’t understand why the test should be repeated multiple times. Do we not expect the test results to be consistent between test runs?

[Bhuvan] We expect slight variations across test runs. Hence we repeat the test multiple times to provide the tester with a reliable metric.

[Brian] Personal experience says that testers do not care about slight variations. They want to run the test and get an answer, not run the test 10 times only to find that the variation is less than 1% across all runs. It is far more interesting for a tester when a variable changes between iterations, as I have suggested above. Running the exact same test multiple times is usually a waste of test resources.
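Brian’s point about slight variations can be made quantitative. A sketch of the run-to-run spread check (names and the 1% threshold are illustrative, not from the draft):

```python
import statistics

def run_to_run_variation(fsr_samples):
    """Relative spread (sample stdev / mean) of FSR across repeated identical
    runs. If this stays well under 0.01 (1%), additional repeats of the same
    test add little information -- Brian's argument against blind repetition."""
    mean = statistics.mean(fsr_samples)
    if mean == 0:
        return float("inf")
    return statistics.stdev(fsr_samples) / mean
```

A methodology could use this to stop repeating once the spread falls below an agreed threshold, rather than mandating a fixed repeat count.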

Section 6.1.1.1.4 Measurement:

  *   FSRi equation is incorrect. FSR = number of flows/time, not OFR/time.

[Bhuvan] I would like to provide some more clarification on this equation. Here, the number of flows = the number of packet-in messages sent to the controller. We have conducted this test with various controllers and observed that the number of OF responses received from the controller is always less than the number of packet-in messages sent, due to TCP window restrictions. Hence we recommend that FSR be measured based on the OF responses received from the controller. The FSR equation is therefore OF responses/time.

[Brian] This is my mistake. I jumped to the conclusion that the ‘R’ in OFR was rate. Looking back at Section 2, Terminology, OFR = OpenFlow Responses. I thought the equation was rate/time. My bad. I withdraw the comment.
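Bhuvan’s clarified metric (FSR = OpenFlow responses received divided by time, rather than packet-ins sent divided by time) reduces to a one-liner. This is only a restatement of the equation from the discussion above, not code from the draft:

```python
def flow_setup_rate(ofr_count, elapsed_s):
    """FSR per Bhuvan's clarification: OpenFlow responses (OFR) actually
    received from the controller, divided by the measurement interval in
    seconds -- not the number of packet-ins offered."""
    if elapsed_s <= 0:
        raise ValueError("elapsed_s must be positive")
    return ofr_count / elapsed_s
```

Using OFR in the numerator means TCP-window-limited responses lower the measured FSR, which is exactly the behavior Bhuvan reports observing.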

