[bmwg] FW: WGLC: draft-ietf-bmwg-igp-dataplane drafts

"Kris Michielsen" <kmichiel@cisco.com> Fri, 29 January 2010 17:08 UTC

Return-Path: <kmichiel@cisco.com>
X-Original-To: bmwg@core3.amsl.com
Delivered-To: bmwg@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 5680D3A6960 for <bmwg@core3.amsl.com>; Fri, 29 Jan 2010 09:08:18 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.231
X-Spam-Level:
X-Spam-Status: No, score=0.231 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, FM_ASCII_ART_SPACINGc=0.833, HTML_MESSAGE=0.001, J_CHICKENPOX_22=0.6, MIME_QP_LONG_LINE=1.396]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Ce767c96OvmX for <bmwg@core3.amsl.com>; Fri, 29 Jan 2010 09:07:15 -0800 (PST)
Received: from av-tac-bru.cisco.com (weird-brew.cisco.com [144.254.15.118]) by core3.amsl.com (Postfix) with ESMTP id 777173A693D for <bmwg@ietf.org>; Fri, 29 Jan 2010 09:07:14 -0800 (PST)
X-TACSUNS: Virus Scanned
Received: from strange-brew.cisco.com (localhost.cisco.com [127.0.0.1]) by av-tac-bru.cisco.com (8.13.8+Sun/8.13.8) with ESMTP id o0TH7ZOU010973 for <bmwg@ietf.org>; Fri, 29 Jan 2010 18:07:35 +0100 (CET)
Received: from kmichielwxp (rtp-vpn3-1085.cisco.com [10.82.220.66]) by strange-brew.cisco.com (8.13.8+Sun/8.13.8) with ESMTP id o0TH7QBI025207; Fri, 29 Jan 2010 18:07:27 +0100 (CET)
From: Kris Michielsen <kmichiel@cisco.com>
To: bmwg@ietf.org
Date: Fri, 29 Jan 2010 18:07:25 +0100
Message-ID: <001701caa105$8969ef50$42dc520a@emea.cisco.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="----=_NextPart_000_0018_01CAA10D.EB2E5750"
X-Mailer: Microsoft Office Outlook 11
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3350
Thread-Index: Acpb8kR+HpYbPrm4RAuNTsFtTLScJQQaLybgCf9HmWABYuoosAADmsYQAJ7FElABJbp5cA==
X-Mailman-Approved-At: Fri, 29 Jan 2010 09:12:06 -0800
Subject: [bmwg] FW: WGLC: draft-ietf-bmwg-igp-dataplane drafts
X-BeenThere: bmwg@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Benchmarking Methodology Working Group <bmwg.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/bmwg>, <mailto:bmwg-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/bmwg>
List-Post: <mailto:bmwg@ietf.org>
List-Help: <mailto:bmwg-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/bmwg>, <mailto:bmwg-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 29 Jan 2010 17:08:18 -0000

Forwarding this discussion to the list.
Reactions and/or more comments welcome!
 
Regards,
Kris

  _____  

From: Dewangan, Anuj [mailto:Anuj.Dewangan@spirent.com] 
Sent: 24 January 2010 02:20
To: Kris Michielsen; Al Morton; sporetsky@allot.com; bimhoff@planetspork.com
Subject: RE: [bmwg] WGLC: draft-ietf-bmwg-igp-dataplane drafts



Hi Kris,


 

I have commented inline in red and marked my comments as [Anuj1:]. I have also added comments/additions to the draft and have attached it; please look for "[Anuj:]" to find them. However, the draft still does not address some problems/issues seen when such a test
is performed in practice:

 

1. Due to inherent jitter in the traffic forwarded by the DUT, the graph is never as smooth as in theory. Even without a
convergence event, the measured traffic rate fluctuates due to a combination of jitter in the forwarded traffic and the resolution
of the sampling interval, which is supposed to be as small as possible (subject to the requirement of at least one packet per route)
and should usually be in milliseconds for any useful/accurate measurement. As an example, if there are only a few routes in the
test, then even a couple of extra packets in a sampling interval (due to forwarding jitter) will cause a major fluctuation in
the convergence graph. In such a case, the convergence instants are very difficult (or impossible) to calculate. This is a problem
even with a "normal" number of routes but a very small sampling interval - which is possible if the offered rate (= DUT throughput)
is high. This is not addressed anywhere in the draft.
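
As a rough illustration of this point (a sketch in Python; all numbers are hypothetical, not taken from the draft), even modest forwarding jitter dominates the measured rate once only a handful of packets fall into each sampling interval:

    import random

    OFFERED_LOAD = 1000        # packets/second offered to the DUT
    SAMPLING_INTERVAL = 0.005  # 5 ms sampling interval
    JITTER = 2                 # +/- packets of jitter per interval (assumed)

    expected = OFFERED_LOAD * SAMPLING_INTERVAL  # 5 packets per interval
    for _ in range(5):
        observed = expected + random.randint(-JITTER, JITTER)
        rate = observed / SAMPLING_INTERVAL
        # A 2-packet swing on a 5-packet interval moves the measured rate
        # by 40%, which is easily mistaken for a convergence event.
        print(f"measured rate: {rate:.0f} pps ({rate / OFFERED_LOAD:.0%} of offered)")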

 

2. A sampling interval that is just a function of the number of routes and the offered rate is not sufficient, as seen above. For the
ECMP tests, the Sampling Interval is set on each egress port and is calculated as the time for sending one packet per route, while
each ECMP egress port receives only part of the traffic (corresponding to the subset of routes resolving to that egress port in the
DUT FIB), so the convergence graph on each of the ports fluctuates even more. Hence, when adding up the rates from these ports, it
becomes even more difficult to determine the convergence instants, especially the recovery instant. Again, this has not been
addressed in the draft.

 

3. Please give special attention to my comments on Offered Load and measurement accuracy in the attached document. I have
additional comments on things that I find missing in the draft and will raise them when required.

 

The answers to these may need changes to the meth and term drafts. Are you willing to work with me on this?

 

Thanks,

Anuj

 

  _____  

From: Kris Michielsen [mailto:kmichiel@cisco.com] 
Sent: Thursday, January 21, 2010 6:29 AM
To: Dewangan, Anuj; 'Al Morton'; sporetsky@allot.com; bimhoff@planetspork.com
Subject: RE: [bmwg] WGLC: draft-ietf-bmwg-igp-dataplane drafts

Anuj,

 

Thank you for taking the time to review these drafts.

Replies below in green.

 


  _____  


From: Dewangan, Anuj [mailto:Anuj.Dewangan@spirent.com] 
Sent: 20 January 2010 17:26
To: Kris Michielsen; Al Morton; sporetsky@allot.com; bimhoff@planetspork.com
Subject: RE: [bmwg] WGLC: draft-ietf-bmwg-igp-dataplane drafts

Answers inline in red.

 


  _____  


From: Kris Michielsen [mailto:kmichiel@cisco.com] 
Sent: Thursday, January 14, 2010 8:52 AM
To: Dewangan, Anuj; 'Al Morton'; sporetsky@allot.com; bimhoff@planetspork.com
Subject: RE: [bmwg] WGLC: draft-ietf-bmwg-igp-dataplane drafts

 

Hi Anuj,

 

Many thanks for your very valuable comments and suggestions. See comments and questions below.


  _____  


From: Dewangan, Anuj [mailto:Anuj.Dewangan@spirent.com] 
Sent: 23 November 2009 17:54
To: Al Morton; sporetsky@allot.com; bimhoff@planetspork.com; kmichiel@cisco.com
Subject: RE: [bmwg] WGLC: draft-ietf-bmwg-igp-dataplane drafts

Hi All,

 

Some comments on the dataplane drafts (http://tools.ietf.org/html/draft-ietf-bmwg-igp-dataplane-conv-meth-19 and
http://tools.ietf.org/html/draft-ietf-bmwg-igp-dataplane-conv-term-19) that I sent to the authors last year:

 

1. Section 3.1 and Section 3.2: 

 

For the first topology, the Tester emulates two routers and the routes for the traffic destinations. If the Tester is assumed to be able to
do that, why is it assumed that R2 cannot be emulated by the Tester? By doing that, the convergence time on R1 can be calculated. The
whole point is that instead of making assumptions about a Tester's capabilities, the standard should describe how topologies
should look in order to measure convergence on a particular device. How that is done should be left to the Tester.

There are differences between R2 being emulated by the Tester and R2 being a real device: a real device needs time to detect the
failure and to schedule, generate and transmit the LSP/LSA. These may sum up to a significant part of the total convergence time equation,
which is lacking, or does not match reality, when emulating that device.

 

The Tester emulation can run an implementation of the same routing protocols in question here and should be capable of performing
routing functions like a "real" router. Assumptions about Tester capabilities/incapabilities should be avoided.

Obviously a Tester can perfectly emulate routing protocols. But the timing of a real device R2 is crucial in this test case. If R2 is
not a real router of the same type as R1, then you are measuring only a part of the convergence time equation, and you get a test case
equivalent to the IGP metric change in 8.3.2.

 

[Anuj1:] Could you elaborate on why R2 should be the same device model as R1 and how the "timing" of R2 influences the convergence
times? The role of the control plane in a "dataplane"-only convergence test is restricted to signaling changes in the topology or
simulating some fail-overs (like stopping hellos to simulate router down, etc.). This can be done equally well by a Tester as by any
other device, because it would be running the same IGP protocol. The additional hop between R2 and the Tester can be simulated by the
Tester too. The only case where this may be important is the presence of a large number of routes, where protocol parameters like
LSA/LSP update intervals may determine the resulting traffic patterns. But again, a condition like "Protocol timings on the Tester
SHOULD be made equal to the timings on R1" will ensure that a convergence test is performed using a single router (DUT) like R1.

 

 

Also, the topologies are fundamentally very restrictive. There is a possibility of a topology where multiple egress interfaces are
present. Each interface except the Preferred Egress Interface advertises the same route cost, so effectively there can be N
Next-Best Egress Interfaces. When a Convergence Event takes place, the traffic should move from the Preferred Egress Interface to a
load distribution across the Next-Best Interfaces until total convergence is achieved in the network. If such a topology is not
acceptable, then this should be clearly mentioned and the reason for it stated, and the same reasoning should be applied to the N
interfaces in Section 3.3. If such cases are not the focus of this standard, then they should be mentioned as out of scope.

Same applies to the topology in Section 3.4 for remote events. Tester capability is assumed and not documented. 

 

The likelihood of N to N-1 convergence is much higher than that of 1 to N convergence. But I have no objections against such a
topology.

 

Does this not mean that such a topology should either be addressed in the benchmark or a reason given as to why it is not
addressed?

I added it for now, but I'm not yet fully convinced that these cases are needed.

 

[Anuj1:] Figure 1 is now just a specific case of Figure 4, i.e. Figure 4 with the number of members in the next-best ECMP set = 1 is
equivalent to Figure 1. This is what I originally meant with my comment: that Figure 1 is too specific and should be generalized
as you have done in Figure 4. Either the topology in Figure 1 should be removed, or it should be highlighted as a specific case of
the topology in Figure 4.

 

A similar argument applies to Figure 5 and Figure 2.

 

 

 

2. The measurement accuracy for the loss-derived method (6.1.3) should specify which metric it is referring to. E.g., if it is the
metric calculated as "Connectivity Packet Loss/Offered Load", then the convergence time may be up to 1 inter-packet arrival period
more. This is because the first packet in the sequence of dropped packets could already have been dropped had it arrived anywhere in
the interval between the last delivered packet to this route and itself; the inter-packet arrival time is calculated as "1/Offered
Load". Also, the convergence time could be just greater than the inter-packet arrival time times "Connectivity Packet Loss - 1"
packets = "(Connectivity Packet Loss - 1)/Offered Load". Again, this is possible if a packet followed the last dropped packet in the
sequence within one inter-packet arrival time. Hence in this case the range of the metric is "Connectivity Packet Loss/Offered Load
+- 1/Offered Load". Ranges should be specified for each of the reported metrics.

I agree, the accuracy needs to be corrected.
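
For illustration, the range argued in point 2 works out as follows (a sketch; the numbers are made up):

    connectivity_packet_loss = 120  # dropped packets (hypothetical)
    offered_load = 1000.0           # packets/second

    inter_packet = 1.0 / offered_load                  # 1/Offered Load
    nominal = connectivity_packet_loss / offered_load  # reported metric
    # Range argued above: nominal +- one inter-packet arrival period.
    print(f"{nominal:.3f} s, range [{nominal - inter_packet:.3f} s, "
          f"{nominal + inter_packet:.3f} s]")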

 

3. Section 6.2.1 recommends a Sampling Interval. There is no discussion of the influence of the offered traffic rate on the
sampling interval. E.g., if the offered traffic rate is 10 packets/second and the Sampling Interval is 10 ms, then 1 packet is
received every 10 Sampling Intervals. This means that 9 out of 10 sampling intervals show a traffic rate of 0, because no packets
were received during those Sampling Intervals. This will have a profound impact on the convergence graph. The argument that the
offered rate equals the DUT throughput and hence is not a small value would be a generic assumption about all DUTs and should not be
resorted to in a standard, because there is no prior knowledge of the DUT throughput and anything else would be an assumption.
Instead of recommending a fixed sampling interval, the sampling interval should be recommended to be a function of the following:

 

i. Offered Traffic rate: 

 

This would mean that the Sampling Interval is calculated based on the offered traffic rate or the received traffic rate (as
argued below) at the egress ports. This can be done by benchmarking a minimum number of packets per sampling interval. Hence, if x
packets per sampling interval are benchmarked, then the Sampling Interval becomes a function of the offered traffic rate - which
is benchmarked as the DUT throughput. The Sampling Interval may then differ from test to test, but at the same time this ensures
that the convergence graph is "smoother" and the problem stated at the head of this section is solved.

- Note that this may not apply to the ECMP test cases, as traffic is distributed across the egress interfaces and the smoothness of
the graph will be lost because of the traffic distribution and the consequently smaller number of packets per sampling interval. So
this standard MAY be based on the RECEIVED traffic rate on the egress ports and not the offered traffic rate.

To make sure I understand what you're saying here: "for the ECMP test cases we should base the sampling rate on the traffic rate
received per egress port, since the total offered load is distributed over multiple egress interfaces". Correct?

 

 

This is true for all test cases, not only the ECMP test cases. The sampling rate can then only be calculated per port. The
inaccuracy of the entire test is then a function of the sampling rates on each port.

 

Don't we only care about the total received rate? Even if traffic is received over more than one port, we should add all port stats
together and sample that total. Or sample all port stats and sum up the sampled stats.

 

Because the sampling rate and the sampling are per port, only summing up would work.

Only the aggregate load on the ECMP members is of importance; otherwise one has to make assumptions/requirements about how the
router distributes the load over the ECMP members.

 

[Anuj1:] Yes. Only the aggregate is important, but sampling still remains per port and this has practical implications. This is
discussed in point 2 of my email from 23rd Jan.
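
A minimal sketch of the summing approach agreed on above (the per-port counts are hypothetical): sample packet counts per egress port on aligned intervals, sum the aligned samples, and only then convert to a rate:

    SAMPLING_INTERVAL = 0.01  # seconds

    # Per-port packet counts per sampling interval (4 ECMP members);
    # the zeros mark the convergence event.
    port_samples = [
        [26, 24, 25, 0, 12, 25],
        [25, 25, 24, 0, 13, 25],
        [24, 26, 25, 0, 12, 24],
        [25, 25, 26, 0, 13, 26],
    ]

    # Sum aligned samples first; the individual per-port rates fluctuate
    # too much to locate the convergence instants reliably.
    aggregate_rates = [sum(interval) / SAMPLING_INTERVAL
                       for interval in zip(*port_samples)]
    print(aggregate_rates)  # aggregate packets/second per interval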

 

 

ii. Number of routes: 

 

There should be at least one packet per route in the sampling interval. This has been addressed by the standard. However, if the
number of routes in the test is very large, then the Sampling Interval again becomes a function of the offered traffic rate. E.g., if
the number of routes is 10000 and the offered rate (= DUT throughput) is 10000 fps, then the Sampling Interval becomes 1 second. This
example is based on the present specification. In this case, the Sampling Interval cannot be set to 10 ms, because then it does not
make sense in two ways:

- There are far too many fluctuations in the convergence graph, because there are only 100 packets per Sampling Interval.

- Setting it to 10 ms does not increase the accuracy of the test, because one packet is no longer being sent to each route within the
interval. The gating factor for the test accuracy then becomes the interval between consecutive packets to the same route, not the
Sampling Interval.

 

Because the number of routes is already considered a parameter of the sampling interval and the recommended value is 10 ms, this is
calling for trouble at scale. Suppose there are 10000 routes (not an unreasonable assumption); then 10000 packets need to be
offered to the DUT every 10 ms. This is 1000000 packets/second, which is greater than most DUT throughputs on the market now. Hence,
with the current specification, measuring convergence times in a scaled environment is an issue.

 

You have a point. 10 ms seemed to be a fair accuracy goal, but low-end devices, where a 10 ms sampling interval is a stretch given
their limited throughput, were not taken into account. An equation such as "sampling interval >= #routes/offered load"
(but still as small as possible) would be better.

 

The equation above needs to be factored for the minimum number of packets per sampling interval as stated in i) above. 

The requirement to have >= 1 packet to each route per sampling interval is absolute.
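
As a sketch of the resulting rule (the min_packets_per_route factor from point i is an assumption, not draft text):

    def sampling_interval(num_routes, offered_load, min_packets_per_route=1):
        """Smallest interval (seconds) that still carries at least
        min_packets_per_route packets to every route at the given
        offered load (packets/second)."""
        return (num_routes * min_packets_per_route) / offered_load

    # The scale example from the discussion: 10000 routes at 10000 pps
    # forces a 1 second interval; a 10 ms interval would need 1 Mpps.
    print(sampling_interval(10000, 10000))    # 1.0
    print(sampling_interval(10000, 1000000))  # 0.01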

 

4. Section 6.2.3 talks about measurement accuracy. The measurement accuracy stated as the sum of the Sampling Interval and the
time between consecutive packets to the same route may be a generalization. It does not hold for a case where the offered traffic
has packets generated to each route in a round-robin fashion and the DUT uses FCFS queue processing for forwarding. In that case the
inaccuracy would be the MAX of the Sampling Interval and the time between consecutive packets offered to the same route. Note that
these values may be the same if the Sampling Interval is set as a function of the number of routes, as described in the previous
section. Also the

I agree the accuracy statements may be a generalization. The accuracy for the different instants can be better specified separately:

 

If sampling interval is calculated as per the arguments in 3., it will be the only factor influencing the accuracy of the test. 

If sampling interval == time between consecutive packets to the same route, then the highest accuracy is achieved, but it's not
a requirement; it can be >=.

 

1) convergence event instant:

This is instantaneous for all routes by definition (otherwise a timestamp needs to be collected).

accuracy interval: -(sampling interval), +0

 

This should have been: -(sampling interval + 1/offered load), +0. But if 1/offered load << sampling interval, then the 1/offered load
term can be ignored.

 

2) first route convergence instant and convergence recovery instant

 

The accuracy interval for these two also needs to be specified, as was done for the convergence event instant, and is pretty trivial.

I did, but for these instants one can distinguish situations a) and b) below. 

 

a) convergence recovery transition is non-instantaneous for all routes

accuracy interval: -(time between consecutive packets to the same route + sampling interval), +0

The "time between consecutive packets to the same route" term is the uncertainty when traffic is sent to a destination.

 

"time between consecutive packets to the same route" can be a certainty if the traffic packet scheduling algorithm is round-robin
and DUT is FCFS processing (discussed below). This value will then be equal to the sampling interval.

"Uncertainty" in the sense that one doesn't know when a packet is sent to the 1st, 2nd, ... last route to complete convergence,
since that also depends on the order of convergence which is unknown before the test.

 

[Anuj1:] Discussed in my comments in the attached draft

 

 

b) convergence recovery transition is instantaneous for all routes; the two instants are equal, so measuring only the first route
convergence instant is enough

I don't think this is a realistic case for IGP convergence.

accuracy interval: -(sampling interval), +0

 

This should have been: -(sampling interval + 1/offered load), +0. But if 1/offered load << sampling interval, then the 1/offered load
term can be ignored.

 

The above equations will not be true if "sampling interval > #routes/offered load". They will only be true if the traffic packet
scheduling algorithm sends data packets to the routes in round-robin order (or uses an algorithm that ensures that one packet is sent
to each route before a second packet is sent to any route) and the DUT strictly follows FCFS queue processing. These conditions MUST
be met in the test.
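
A minimal sketch of this scheduling pattern (the route prefixes are hypothetical): one packet goes to every route before any route receives a second packet, so with FCFS forwarding the time between consecutive packets to the same route is exactly #routes/offered load:

    from itertools import cycle

    routes = ["10.0.%d.0/24" % i for i in range(4)]  # hypothetical route set
    schedule = cycle(routes)
    for _ in range(8):
        # One packet per route per cycle; each route sees a packet every
        # len(routes)/offered_load seconds.
        print(next(schedule))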

 

These traffic/forwarding assumptions are implied.

Can you show why "the above equations will not be true if sampling interval > #routes/offered load"? I think they are correct as
they are.

 

[Anuj1:] Discussed in my comments in the attached draft

 

 

Specifying this forces a change of:

"   When using the Rate-Derived Method, the Convergence Recovery Instant
   falls within the Packet Sampling Interval preceding the first
   interval where the observed Forwarding Rate on the Next-Best Egress
   Interface equals the Offered Load."

Since under the assumption quoted here the accuracy would be -(time between consecutive packets to the same route), +(sampling
interval).

 

Measurement accuracy should be a range and is per metric. These metrics even include the Convergence Event Instant, Convergence
Recovery Instant, and First Route Convergence Instant. The metrics derived from these, like the rate-derived convergence time, first
route convergence time, convergence recovery transition, and convergence event transition, have a different range because they are
derived from a range themselves. These, I feel, should be part of the specification.

The accuracy intervals I reported previously (below) were incorrect. These are the correct ones:

 

The accuracy interval of the metrics Rate-Derived Convergence Time and First Route Convergence Time is: -(Packet Sampling Interval +
time between two consecutive packets to the same destination), +(Packet Sampling Interval + 1/Offered Load).

 

If the Convergence Recovery Transition is instantaneous for all routes then the accuracy interval of the metrics Rate-Derived
Convergence Time and First Route Convergence Time is: -(Packet Sampling Interval + 1/Offered Load), +(Packet Sampling Interval +
1/Offered Load).

 

If 1/Offered Load is much smaller than Packet Sampling Interval the term "1/Offered Load" can be ignored in the accuracy intervals
above.
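
As a sketch of these corrected intervals (function and variable names are mine; round-robin scheduling is assumed, so the time between two consecutive packets to the same destination is #routes/offered load):

    def accuracy_interval(sampling_interval, offered_load, num_routes,
                          instantaneous_recovery=False):
        """Return the (negative, positive) accuracy bounds in seconds for
        Rate-Derived Convergence Time / First Route Convergence Time."""
        inter_packet = 1.0 / offered_load      # 1/Offered Load
        per_route = num_routes / offered_load  # time between consecutive
                                               # packets to the same route
        if instantaneous_recovery:
            return (-(sampling_interval + inter_packet),
                    sampling_interval + inter_packet)
        return (-(sampling_interval + per_route),
                sampling_interval + inter_packet)

    print(accuracy_interval(0.01, 100000, 1000))  # approx (-0.02, +0.01001)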

 

[Anuj1:] Discussed in my comments in the attached draft

 

 Are your accuracy algorithms different from the following:

a) convergence recovery transition is non-instantaneous for all routes

rate-derived convergence time and first route convergence time accuracy:

-(sampling interval), +(time between consecutive packets to the same route)

 

These are by definition functions of the instants (convergence event instant, convergence recovery instant, etc.). As the instants
themselves are intervals, the intervals for these derived values should "engulf" the intervals of which they are a function. This
again is very trivial once we know the intervals of the instants, as discussed above.

 

convergence recovery transition duration accuracy:

-(time between consecutive packets to the same route), +(time between consecutive packets to the same route)

 

b) convergence recovery transition is instantaneous for all routes

-(sampling interval), +(sampling interval)

 

Discussed above.

 

5. The above three sections of this email discuss how some things in the specification conflict and do not address the convergence
test requirements of many devices on the market now. One solution approach for the Sampling Interval, offered rate, number of
routes, and measurement accuracy could be to make the Sampling Interval a function of just the received rate on the egress port,
validate the minimum offered rate, and account for the problem of having one packet to each route in the measurement accuracy of the
metrics.

 

6. Sustained Convergence Validation Time: what is the rationale behind setting it to a constant value of 5 seconds? This value may
again spell trouble if there is a test where the ratio of the number of routes to the offered traffic rate is greater than 5 seconds,
leading to not even a single packet being sent to each route during the validation time. An approach where n consecutive packets are
sent to each route while the forwarded traffic rate is constant and on the Next-Best Egress Interface seems more logical.

It probably needs to be a combination of a number of packet transmission cycles and a 5-second interval; otherwise there is a
similar issue at the lower end of packet cycle intervals.

 

If the sampling interval is calculated as we discussed above (hence ensuring one packet per route is sent in the interval), then
this value could just be a multiple of the sampling interval. The multiplier, though, needs to be benchmarked.

I chose the Sustained Convergence Validation Time to be max(5 s, 5*(time between consecutive packets to the same route)). If one just
takes n*(time between consecutive packets to the same route) or n*(sampling interval), it may end up being a very small duration.
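
A minimal sketch of this rule (the names are illustrative):

    def sustained_validation_time(num_routes, offered_load):
        # time between consecutive packets to the same route (round-robin)
        per_route = num_routes / offered_load
        return max(5.0, 5 * per_route)

    print(sustained_validation_time(1000, 100000))  # 5.0 (the 5 s floor)
    print(sustained_validation_time(10000, 1000))   # 50.0 (scale case)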

 

[Anuj1:] Sounds good, as long as it is a function of the time between consecutive packets to the same route.

 

 

7. It has not been mentioned in the standard that traffic is just a means of measuring convergence times, and hence the traffic rate
is a factor in the accuracy of the test. This should be highlighted at the beginning of the draft to give the reader a better
understanding.

I'll see how it can be emphasized more.

 

Many thanks again,

 

Kris

 

As stated earlier, I believe I can add value to the benchmarking drafts and would love to be a contributing author. Please give it a
thought and let me know.

 

I don't think it is needed at this point.

 

I attached new versions of the drafts addressing the comments so far. Can you review?

[Anuj1:] Reviewed and attached.

 

Thanks,

Kris

 

Thanks,

Anuj

 

 

Please write back to me with responses/discussions/questions.

 

I will have limited email access in the next few weeks and will not be able to reply to responses immediately.

 

Thanks,

Anuj Dewangan

Spirent Communications,

Raleigh, NC 27560

 


  _____  


From: bmwg-bounces@ietf.org [mailto:bmwg-bounces@ietf.org] On Behalf Of Al Morton
Sent: Monday, November 02, 2009 9:52 AM
To: bmwg@ietf.org
Subject: [bmwg] WGLC: draft-ietf-bmwg-igp-dataplane drafts

 

BMWG,

This message begins a WG Last Call on the IGP-Dataplane Convergence
Time Benchmarking drafts.

http://tools.ietf.org/html/draft-ietf-bmwg-igp-dataplane-conv-term-19 

http://tools.ietf.org/html/draft-ietf-bmwg-igp-dataplane-conv-meth-19

The Last Call will end on November 16, 2009, at 5 PM US EST, 2300 GMT.

This is a topic we've been discussing in BMWG for as long as I have
been chairman. The state of the art advanced while we were developing
these drafts, and hopefully now they are fully in sync and relevant.
The term and meth drafts have been substantially revised in the -19
versions.

We also need to decide whether we need this expired draft:
http://tools.ietf.org/html/draft-ietf-bmwg-igp-dataplane-conv-app-17
It may be that the revisions to bring this in sync with the terms
and meth drafts are fairly trivial.  Comments on this are welcome.

Please weigh in on whether or not these Internet-Drafts
should be given to the Area Directors and IESG for consideration and
publication as Informational RFCs. Send your comments
to this list or to acmorton@att.com.

Al
bmwg chair

 


