[bmwg] RE: RE: Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion

sporetsky@reefpoint.com Thu, 25 May 2006 12:11 UTC

From: sporetsky@reefpoint.com
To: riwatts@cisco.com
Date: Wed, 24 May 2006 17:30:32 -0400
Cc: bmwg@ietf.org
Subject: [bmwg] RE: RE: Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion

Hi Richard,
 
What do you mean when you use the term 'artificial congestion'?
 
Scott
 

-----Original Message-----
From: Richard Watts (riwatts) [mailto:riwatts@cisco.com]
Sent: Tuesday, May 23, 2006 11:19 AM
To: sporetsky@reefpoint.com
Cc: bmwg@ietf.org
Subject: FW: RE: Benchmarking Network-layer Traffic Control Mechanisms
extension for artificial congestion


Hi Scott,
 
As per the previous mail below, please see the proposed updates for discussion.
 

I have deliberately tried to avoid introducing any new terminology here, but
we may have to add 'artificial forwarding congestion' to the terminology
draft as well.
Hopefully the text in the updates below explains the two test topology
diagrams that follow.
 
Discussion:
The purpose is to provide an 'extension' to the existing 'Benchmarking
Network-Layer Traffic Control Mechanisms' draft for artificial congestion.
The decision to be made is whether a separate draft is required (e.g.
'Benchmarking Network-layer Traffic Control Mechanisms extension for
artificial congestion') or whether we can accommodate this extension within
the existing draft. Since the majority of the concepts, terminology and
methodology are the same, I would like to propose that we accommodate it
within the existing draft.
 
'Artificial' congestion can be created on virtual/logical interfaces by
Traffic Control mechanisms, such that the Forwarding Capacity is limited and
typically less than the Forwarding Capacity of the actual interface.
Essentially the same test methodologies and the same terminology can be
applied (only the mechanism for creating congestion differs), with the
consideration that the Output Vector will be based on the 'limited'
Forwarding Capacity imposed by the applied Traffic Control mechanisms rather
than the full Forwarding Capacity of the interface.
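
To make the distinction concrete, here is a minimal Python sketch (purely
illustrative, not proposed draft text; the rates are made up) of how a
Traffic Control mechanism on a logical interface caps the Output Vector
below the Forwarding Capacity of the physical interface:

    # Illustrative sketch only. A Traffic Control mechanism (e.g. a shaper)
    # on a logical interface caps the Output Vector below the Forwarding
    # Capacity of the physical interface. All rates are hypothetical.

    PHYSICAL_CAPACITY_PPS = 1_000_000  # Forwarding Capacity of the actual interface
    LIMITED_CAPACITY_PPS = 250_000     # 'limited' capacity applied by the shaper

    def output_vector(offered_load_pps):
        # The Output Vector is bounded by the 'limited' capacity,
        # not by the physical interface capacity.
        return min(offered_load_pps, LIMITED_CAPACITY_PPS, PHYSICAL_CAPACITY_PPS)

    # Offered Vector below the limited capacity: no artificial congestion (Figure 3).
    print(output_vector(200_000))   # 200000, forwarded in full

    # Offered Vector above the limited capacity: artificial congestion (Figure 4),
    # even though the physical link itself is far from saturated.
    print(output_vector(400_000))   # 250000, the excess is queued or dropped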
 
Updates to Existing Draft:
 
re: Section 3.1. Test Topologies
 
<To be added..>
 
Figure 3 shows the test topology for benchmarking performance when
'artificial' Forwarding Congestion does not exist on the egress link. This
topology is to be used when benchmarking the Undifferentiated Response and
Traffic Control without 'artificial' Forwarding Congestion.
 
'Artificial' Forwarding Congestion does not exist because the Offered Vector
(Offered Load) does not exceed the 'limited' Forwarding Capacity of the
Traffic Control mechanisms.
 
Figure 4 shows the test topology for benchmarking performance when
'artificial' Forwarding Congestion does exist on the egress link. This
topology is to be used when benchmarking Traffic Control with 'artificial'
Forwarding Congestion.
 
'Artificial' Forwarding Congestion is produced by an Offered Vector (Offered
Load) to an ingress interface on the DUT, destined for a single egress
interface configured with Traffic Control mechanisms that limit the Output
Vector to a value less than the full-interface Output Vector.
 
 


 
        Expected                                 
        Vector                                       
           |                                            
           |                                         
           \/                                        
        ---------        Offered Vector (Limited) ---------
        |       |<--------------------------------|       |
        |       |                                 |       |
        |       |                                 |       |
        |  DUT  |                                 | Tester|
        |       |                                 |       |
        |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        |       |        Output Vector (Limited)  |       |
        ---------                                 ---------
                      
           Figure 3. Test Topology for Benchmarking 
                     Without 'artificial' Forwarding Congestion
 
 
        Expected                                  
        Vector                                        
           |                                            
           |                                         
           \/                                        
        ---------       Offered Vector (Unlimited)---------
        |       |<--------------------------------|       |
        |       |                                 |       |
        |       |                                 |       |
        |  DUT  |                                 | Tester|
        |       |                                 |       |
        |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        |       |         Output Vector (Limited) |       |
        ---------                                 ---------
                                             
               Figure 4. Test Topology for Benchmarking
                         With 'artificial' Forwarding Congestion

The existing Test Cases are re-used with slight modifications; the updated
text to be added is below.


4. Test Cases 
 
  4.4 Undifferentiated Response  
 
     Purpose:  
     To establish the baseline performance of the DUT.
 
     Procedure:
     1. Configure DUT with Expected Vector.
     2. Configure the Tester for the Offered Vector.  
        Number of DSCPs MUST equal 1 and the 
        RECOMMENDED DSCP value is 0 (Best Effort). 
        Use 1000 Flows identified by IP SA/DA.  All flows
        have the same DSCP value.
     3. Using the Test Topology in Figure 3, source the 
        Offered Load from the Tester to the DUT.
     4. Measure and record the Output Vector. 
     5. Maintain offered load for 10 minutes minimum 
        to observe possible variations in measurements.
     6. Repeat steps 2 through 5 with 10000 and 100000
        Flows.
 
     Expected Results:
     Forwarding Vector equals the Offered Load.  There 
     is no packet loss and no out-of-order packets.
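
     Purely as an illustration (not proposed draft text), the pass/fail
     check implied by these Expected Results could be sketched in Python as
     below; the counter names are hypothetical ones a Tester might report:

         # Hypothetical post-processing sketch for steps 4-5 of 4.4.
         # The counter names are assumed, not taken from the draft or
         # from any particular Tester.

         def baseline_passes(offered_load_pps, run):
             no_loss = run["rx_packets"] == run["tx_packets"]
             in_order = run["out_of_order_packets"] == 0
             # The Forwarding (Output) Vector should equal the Offered Load.
             rate_matches = run["output_vector_pps"] == offered_load_pps
             return no_loss and in_order and rate_matches

         # Example: 1000 pps offered for the 10-minute minimum duration.
         run = {"tx_packets": 600_000, "rx_packets": 600_000,
                "out_of_order_packets": 0, "output_vector_pps": 1000}
         print(baseline_passes(1000, run))   # True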
 
 
  4.5 Traffic Control Baseline Performance
     Purpose:  
     To benchmark the Output Vectors for a Codepoint Set
     without 'artificial' Forwarding Congestion.
 
     Procedure:
     1. Configure DUT with Expected Vector for each DSCP in 
        the Codepoint Set.
     2. Configure the Tester for the Offered Vector.  
        Number of DSCPs MUST be 2 or more. Any DSCP values can
        be used.  Use 1000 Flows identified by IP SA/DA
        and DSCP value.
     3. Using the Test Topology in Figure 3, source the 
        Offered Load from the Tester to the DUT.
     4. Measure and record the Output Vector for each DSCP
        in the Codepoint Set. 
     5. Maintain offered load for 10 minutes minimum 
        to observe possible variations in measurements.
     6. Repeat steps 2 through 5 with 10000 and 100000
        Flows.
     7. Increment number of DSCPs used and repeat steps 
        1 through 6.
 
     Expected Results:
     Forwarding Vector equals the Offered Load.  There is  
     no packet loss and no out-of-order packets.  Output 
     vectors match the Expected Vectors for each DSCP in 
     the Codepoint Set.
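
     Again as an illustration only (field names and rates are assumed), the
     per-DSCP comparison in these Expected Results might be checked like this:

         # Hypothetical sketch: compare the measured Output Vector against
         # the Expected Vector for each DSCP in the Codepoint Set.
         # The rates are made up.

         expected_vectors = {0: 400.0, 46: 600.0}  # pps per DSCP (Expected Vector)
         output_vectors   = {0: 400.0, 46: 600.0}  # pps per DSCP (measured Output Vector)

         mismatches = {dscp: (output_vectors.get(dscp), rate)
                       for dscp, rate in expected_vectors.items()
                       if output_vectors.get(dscp) != rate}
         print("pass" if not mismatches else "fail: %s" % mismatches)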
 
  4.6 Traffic Control Performance with Forwarding Congestion
     Purpose:  
     To benchmark the Output Vectors for a Codepoint Set
     with 'artificial' Forwarding Congestion.
 
     Procedure:
     1. Configure DUT with Expected Vector for each DSCP in 
        the Codepoint Set.
     2. Configure the Tester for the Offered Vector.  
        Number of DSCPs MUST be 2 or more. Any DSCP values can
        be used.  Use 1000 Flows identified by IP SA/DA
        and DSCP value.  The Offered Load MUST exceed the
        'limited' Forwarding Capacity of a single egress link by 25%.
     3. Using the Test Topology in Figure 4, source the 
        Offered Load from the Tester to the DUT.  The 
        ingress offered load MUST exceed 
        the 'limited' Forwarding Capacity of the egress link to 
        produce Forwarding Congestion.
     4. Measure and record the Output Vector for each DSCP
        in the Codepoint Set. 
     5. Maintain offered load for 10 minutes minimum 
        to observe possible variations in measurements.
     6. Repeat steps 2 through 5 with 10000 and 100000
        Flows.
     7. Increment the offered load in 25% steps, to a maximum of 200%.
     8. Increment number of DSCPs used and repeat steps 
        1 through 6.
 
     Expected Results:
     Forwarding Vector equals the Offered Load.  There is  
     no packet loss and no out-of-order packets.  Output 
     vectors match the Expected Vectors for each DSCP in 
     the Codepoint Set.
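
     For steps 2 and 7, the offered-load series could be derived as in the
     sketch below (illustrative only; the 'limited' capacity value is assumed):

         # Hypothetical helper for steps 2 and 7 of 4.6: the Offered Load
         # starts 25% above the 'limited' Forwarding Capacity and is
         # incremented in 25% steps up to the 200% maximum.

         LIMITED_CAPACITY_PPS = 250_000  # shaped capacity of the egress logical link

         def offered_load_series(limited_pps):
             return [limited_pps * pct / 100.0 for pct in range(125, 201, 25)]

         print(offered_load_series(LIMITED_CAPACITY_PPS))
         # [312500.0, 375000.0, 437500.0, 500000.0] i.e. 125%, 150%, 175%, 200%
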
Scott, I am keen to hear your thoughts. Please could you comment on this
proposal or make alternative suggestions, so that following discussion we can
make the appropriate changes to the existing draft?
 
Kind Regards
 
Richard

  _____  

From: Richard Watts (riwatts) 
Sent: 05 May 2006 16:01
To: 'sporetsky@reefpoint.com'
Cc: bmwg@ietf.org
Subject: RE: Benchmarking Network-layer Traffic Control Mechanisms extension
for artificial congestion





Hi Scott

As we discussed and agreed, I am keen to provide input and support to the
above-mentioned draft with respect to making artificial congestion an
extension to the existing benchmarking draft.

Please see my initial comments about the existing draft that I forwarded a
little while ago. I will also soon send out what I think the wording might be
for blending the artificial congestion aspects into the benchmarking, so that
we can get some dialogue going on this topic. The next IETF in Montreal is
not that long away now, and I look forward to meeting both you and the rest
of the group then.

I look forward to your feedback.

 

Regards

 

Richard

 


Hi Scott

Apologies for the slight delay in getting back to you; time has been a bit
of a challenge, as always. However, please see below some 'cosmetic' comments
and queries regarding the existing benchmarking methodology draft, which I
hope are useful.

Re: Section 3.1 Test Topologies

There seems to be a slight typo in the text, i.e. 'Figure 2 shows the Test
Topology for benchmarking performance when Forwarding Congestion does not
exist on the egress link'. The 'not' needs to be removed to align with the
Figure 2 heading.

Re: Section 3.2.3 c) under Offered Vector

'Packet size must be equal to or less than the interface MTU so that there
is no fragmentation'. The 'must' needs to be in upper case.

Re: Section 3.2.5 Expected Vector

The last sentence says 'Test cases may be repeated with variation to the
expected vector to produce a more benchmark results'. I take this to mean
varying the SLA requirements such as packet loss, jitter, forwarding delay,
etc. If so, is this actually required? I understand that the draft uses the
word 'may', which implies it is optional. But if the DUT is tested against
the tightest SLAs and they are achieved, is there any mileage in testing
against 'less tight' SLAs?

Re: Step 2 in the procedure for both 4.2 and 4.3 should have 'be' inserted
between 'MUST' and '2 or more'.

Re: 'Expected Results' under section 4.3 states 'Forwarding Vector equals
the Offered Load. There is no packet loss and no out of order packets.
Output vectors match the Expected Vectors for each DSCP in the Codepoint
set'.

Should we not ensure consistency in terminology and change Forwarding Vector
to Output Vector, as at the bottom of Page 5, or change Output Vector at the
bottom of Page 5 to be consistent with Forwarding Vector in this section?
Additionally, it states 'Forwarding Vector equals the Offered Load'. Should
Offered Load be Expected Vector, since this is the procedure for benchmarking
'with' Forwarding Congestion?

I am not sure what your thoughts are, but I would not be inclined to state
anything about what the Expected Vector should be in the Expected Results
section, as this will vary depending upon what the target is for the
benchmark and how it is configured on the DUT. So comments about no packet
loss may not be valid.

I am personally of the opinion that we can adapt this draft to take the
artificial congestion into account. I think we just need to weave in the
appropriate wording so that the audience is aware that this methodology also
applies to artificial congestion. The concepts and approach do not change
just because the mechanism for creating congestion might differ.

If you are in agreement, I will go ahead and try to make the appropriate
changes for your review, comment and input.

I will also review the Terminology draft very shortly and feed back any
comments to you and the co-authors.

Kind Regards

Richard

-----Original Message-----

From: Richard Watts (riwatts) 

Sent: 24 March 2006 10:57

To: sporetsky@reefpoint.com; Gunter Van de Velde (gvandeve)

Cc: acmorton@att.com; gunter@vandevelde.cc; Richard Watts (riwatts)

Subject: RE: Benchmarking Network-layer Traffic Control Mechanisms extension
for artificial congestion

 

Hi Scott

Many thanks for your invitation to co-author the current methodology draft,
which I gladly accept.

I also agree with your approach to how we might move forward with the
methodology document(s). It would be easier, I guess, if we could leverage
the existing methodology document rather than having to create a new,
separate one.

My initial thoughts are that we should be able to use the existing
methodology, as we are still creating congestion (just artificially) through
the use of shapers and the like on virtual links. That said, with HQF
architectures we have tiered levels of congestion management without the
need to create artificial congestion through shaping.

I will start looking at the terminology and methodology documents to see how
best we might address this.

Once again, many thanks for your cordial invitation and your acceptance to
co-author, should we need to generate a separate methodology document.

Kind Regards

Richard

-----Original Message-----

From: sporetsky@reefpoint.com [mailto:sporetsky@reefpoint.com]

Sent: 22 March 2006 17:15

To: Gunter Van de Velde (gvandeve)

Cc: Richard Watts (riwatts); acmorton@att.com; gunter@vandevelde.cc

Subject: RE: Benchmarking Network-layer Traffic Control Mechanisms extension
for artificial congestion

Gunter,

Hello. It was a pleasure to meet you yesterday. Great work on IPv6! I am
looking forward to the author team's further work on it.

The current Network-Layer Traffic Control methodology addresses benchmarking
of egress QoS mechanisms, without naming specific mechanisms or
implementations. Yesterday's BMWG meeting raised the need for the
Network-Layer Traffic Control work item to have methodologies that address
classification/shaping and the application of DiffServ to virtual links.
First we will need to look at how classification/shaping and the application
of DiffServ to virtual links can be addressed in the current methodology
document. If we determine that these require separate methodology documents,
then it was agreed that these methodologies can be addressed as separate
documents as part of the current Network-Layer Traffic Control work item,
using the existing Terminology document. If you agree with this approach,
then I would be happy to participate as co-author for either of these
methodology drafts, if we determine the documents are needed. Likewise, I
would like to invite you or your colleague to join as co-author on the
current methodology draft.

Thanks!

Scott

-----Original Message-----

From: Gunter Van de Velde (gvandeve) [mailto:gvandeve@cisco.com]

Sent: Wednesday, March 22, 2006 11:55 AM

To: sporetsky@reefpoint.com

Cc: riwatts@cisco.com; acmorton@att.com; gunter@vandevelde.cc

Subject: Benchmarking Network-layer Traffic Control Mechanisms extension
for artificial congestion

 

Hi Scott,

Many thanks for your presentation yesterday and your insights into the
benchmarking test methodology for Network-Layer Traffic Control Mechanisms.

As mentioned during the BMWG meeting, a congestion scenario often seen is
that of artificial congestion caused by a DiffServ traffic shaping function.
This is, as you know, commonly seen at the boundary of a network to condition
the traffic to the right parameters (whatever those parameters actually are).

I would like to pick up the task of being involved with this work, and would
like to invite you to be one of the co-authors, to advise and share your
experience in the benchmarking area. Please let me know if you are interested
in this contributing role. I would like to introduce Richard Watts, who is
based in the UK and is an expert in QoS deployment (he leads a QoS expert
team in Europe). Richard has offered to take the lead editor role for this
piece of work. This would mean that, if you accept co-authorship, the three
of us would start the work.

Would you or Al have any recommended next steps in mind so that we can
present first draft results at IETF 66?

My belief is that this work should use draft-ietf-bmwg-dsmterm-12.txt and
draft-ietf-bmwg-dsmmeth-01.txt as its foundation and complement these two
documents with two new drafts. The consequence is that the existing drafts
will have to be included as normative references, which sounds logical and
acceptable to me.

The first question now is how to proceed. Should we initially prepare only a
new terminology document for IETF 66, or should we also start the methodology
draft immediately, in parallel?

Any suggestions and advice are welcome,

Kind Regards,

G/

_______________________________________________
bmwg mailing list
bmwg@ietf.org
https://www1.ietf.org/mailman/listinfo/bmwg