Re: [aqm] WGLC on draft-ietf-aqm-eval-guidelines

Nicolas Kuhn <nicolas.kuhn.guivarch@gmail.com> Tue, 22 September 2015 06:38 UTC

To: aqm@ietf.org

 Hi all,

Sorry for the delay in our answer.
We have just posted a new version of the draft that, we hope,
addresses the comments of Wolfram, Roland and Polina.

Following Wolfram's comments, we:
- changed the units in the FCT equation ( FCT [s] = Fs [Byte] / ( G [Bit/s] / 8 [Bit/Byte] ) );
- mentioned only one reference for the latency/goodput trade-off graphs;
- updated the "long-term UDP" terminology.

Following the comments of Polina and Roland, we:
- moved the "Methodology" section to the beginning of the document;
- included discussions on ECN and scheduling in the methodology section (I
think it is clearer that way);
- fixed the table at the end with updated section numbers.

For the details of the changes, please see below.

####################################################

"Unfortunately", we (Polina and I) did a thorough review, which is
attached. TL;DR: from our point-of-view the I-D needs a  major revision.
Regards, Roland I completed my review for
draft-ietf-aqm-eval-guidelines-07 and discussed it also with Polina, who
did her own review which we eventually aggregated here. We both think that
this document needs a major revision due to the amount of issues we
identified.

Major issues:
-------------
1) Structure, overview, rationale and requirements
   The structure should/could be improved.
   The goal and methodology should be put first. Some
   motivation given in Section 14 should be moved to the
   beginning, e.g., the goal of this document is stated in
   Section 14.3.

>> We have moved the section on the Methodology
before describing the tests. Indeed, this section presents the goal of
the document and some important methodology aspect. For the sake of
clarity, we have introduced the sections on ECN
 and scheduling in this "methodology section".

2) It is unclear whether the tests from Sections 4-9 should be
   carried out without or with ECN. Section 12 discusses this
   much too late.

>> As mentioned earlier, we have moved the section on ECN earlier in the
document and the guidelines do not stipulate on whether ECN must be enabled
for the tests or not (then, as an example, the tester might use the
proposed tests to evaluate the benefits of using ECN).

3) the overall number of tests and parameter combinations is really high

>> We acknowledge that the number of tests is large. This is the reason why
we have listed all the tests in a table at the end of the document: is there
any scenario that you believe should be moved from a "MUST" to a "MAY"
requirement?

If the problem is not only the number of scenarios, but the different
cases in each of them, do you have any suggestion on where we should
remove some parameter combinations?

4) from the discussed end-to-end metrics only latency/goodput metrics
   are used in the scenarios and for some of the scenarios these metrics
   are not suitable to show the desired behavior

>> The main idea was to present the available and useful metrics early
in the document, so that the tester can choose among them those that
are of interest for the considered scenario. The guidelines may propose
some metrics for specific scenarios, but they do not impose any specific
requirement.

If there is anywhere in the document where the proposed metric is not
suitable for showing the desired behavior, please let us know.

5) some sections in this document (e.g., 7.3, 10, 13) specify requirements
   for an AQM standard(/draft) and not requirements for a performance
   evaluation, so these sections should be moved to
[draft-ietf-aqm-recommendation]

>> These guidelines are not for performance evaluation only:
"
   The guidelines help to quantify performance of AQM schemes in terms
   of latency reduction, goodput maximization and the trade-off between
   these two.  The guidelines also help to discuss safe deployment of
   AQM, including self-adaptation, stability analysis, fairness, design
   and implementation complexity and robustness to different operating
   conditions.
"

We believe that
"
   AQM schemes need to be compared against both performance and
   deployment categories.
"

Therefore, these points (sections 7.3, 10, 13) could have been discussed
 in the "recommendation document", but it is important to consider them
as well in the characterization of an AQM.


6) Related Work: There are several works that deal with
       evaluation of TCP or congestion control performance:

>> Thanks for these pointers. However, I am not quite convinced that they
are all needed for the document.

       - RFC 5166 <https://tools.ietf.org/html/rfc5166>
       (Metrics for the Evaluation of Congestion Control Mechanisms)
       is IMHO highly relevant but neither referenced nor discussed

>> This document can be of interest for TCP evaluation and we could have
used it to justify the choice of the metrics, but we are not quite sure
that this is needed, now that we have quite a long list of metrics.

      - Yee-Ting Li, Douglas Leith, and Robert
        N. Shorten. 2007. Experimental evaluation of TCP protocols for
        high-speed networks. IEEE/ACM Trans. Netw. 15, 5 (October 2007),
        1109-1122. DOI=10.1109/TNET.2007.896240
        http://dx.doi.org/10.1109/TNET.2007.896240

>> This paper compares the performance of various TCP variants. Do you
want us to extract metrics or scenarios from this article?

      - Andrew et al.: Towards a Common TCP Evaluation Suite,
        Proceedings of the International Workshop on Protocols for Fast
        Long-Distance Networks (PFLDnet), Manchester, United Kingdom,
        March 2008

>> Our draft somehow extends this paper by providing more content in the
"AQM section". We refer to this document for the discussion on the
topology. I am not sure on how we could further use this reference.


Detailed comments per section:
==============================
(the %%%%%% just separates different issues within the section comments)

{Section 1}
-----------
    AQM schemes aim at reducing mean buffer occupancy, and
    therefore both end-to-end delay and jitter.
==> is this true for every AQM ?

>> replaced by
" AQM schemes aim at reducing buffer occupancy, and therefore the end-
   to-end delay. "

%%%%%%
   In real implementations of switches, a global
   memory is shared between the available devices:
This may be a common architecture nowadays, but not necessarily
always be the case...
=> In real implementations of switches, a global
   memory is _often_ shared between the available devices:

>> Thanks - text updated.

%%%%%%
   the size of the buffer for a given communication does not
   make sense ...
and then...
   The rest of this memo therefore refers to the
   maximum queue depth as the size of the buffer for a given
   communication.
=> I don't understand what you mean here. First you say
   it doesn't make sense, then you define maximum queue depth
   as exactly the size of the buffer for a given
   communication.
   - Do you mean buffer occupancy?
   - Is "communication" here an end-to-end data flow or an aggregated flow?
   - the term "maximum queue depth" is never used in the document again...
     but "maximum queue size", "maximum buffer size"
I think it is essential to understand the difference between the
buffer size and the buffer occupancy that the AQM tries to control.
Due to shared memory architectures the buffer size may not be fixed
and thus vary for a given interface.
Is the buffer (size) here meant in both directions for bidirectional
traffic?

>> Thanks, some clarity was indeed needed on that point.
We have replaced:
"
A buffer is a physical volume of memory in which a queue or set of queues
are stored. In real implementations of switches, a global memory is often
shared between the available devices: the size of the buffer for a given
communication does not make sense, as its dedicated memory may vary over
the time and real-world buffering architectures are complex. For the sake
of simplicity, when speaking of a specific queue in this document, "buffer
size" refers to the maximum amount of data the buffer may store, which can
be measured in bytes or packets. The rest of this memo therefore refers to
the maximum queue depth as the size of the buffer for a given communication.
"
by
"
A buffer is a physical volume of memory in which a queue or set of queues
are stored. When speaking of a specific queue in this document, "buffer
occupancy" refers to the amount of data (measured in bytes or packets) that
are in the queue, and the "buffer size" refers to the maximum buffer
occupancy. In real implementations of switches, a global memory is often
shared between the available devices, and thus, the buffer size may vary
over the time.
"
We hope that this is clearer. On top of this change, we have updated the
glossary section with "queues, buffer, buffer occupancy and buffer sizes".

%%%%%%
   Bufferbloat [BB2011] is the consequence of deploying large unmanaged
   buffers on the Internet, which has lead to an increase in end-to-end
   delay: the buffering has often been measured to be ten times or
   hundred times larger than needed.
   Large buffers per se are not a real problem unless combined with TCP
bandwidth
   probing or unresponsive flows that fill buffers.

>> Addressed in text; “…is the consequence of deploying large unmanaged
buffers on the Internet -- the buffering has often been measured to be ten
times or a hundred times larger than needed. Large buffer sizes in
combination with TCP and/or unresponsive flows increase end-to-end delay. "


%%%%%%
   The Active Queue Management and Packet Scheduling Working Group (AQM
   WG) was recently formed within the TSV area to address the problems
   with large unmanaged buffers in the Internet.  Specifically, the AQM
   IMHO this and the following paragraphs should be rephrased so that the
statement is also true in some years
   after the WG has concluded...

>> Addressed in text; "The Active Queue Management and Packet Scheduling
Working Group (AQM WG) was chartered to address the problems with large
unmanaged buffers in the Internet."

%%%%%%
    Missing: The use of ECN is also an incentive to use/deploy AQMs

>> We do not clearly understand your point. Do you want us to speak about
incentives to deploy AQM in the introduction of the document? We do not
see how this would fit in the current introduction, which focuses on the
need for characterization guidelines.

{Section 1.1}
-------------
   The trade-off between reducing the latency and maximizing the goodput
=> Goodput isn't defined at its first use, probably do a forward reference
   to section 2.5 and/or put it in the Glossary (sec. 1.4)

>> Updated text; Added to Glossary.

  This document provides guidelines that enable
  the reader to quantify (1) reduction of latency, (2) maximization of
  goodput and (3) the trade-off between the two.
=> This should be moved into Section 1.2, but seems to be redundant with its
   first sentence anyway:
   The guidelines help to quantify performance of AQM schemes in terms
   of latency reduction, goodput maximization and the trade-off between
   these two.

>> Removed  text.

%%%%%%
   These guidelines provide the tools to understand the deployment costs
   ...
=> I doubt that anything is said about deployment _costs_ in the draft.
       14.3.2 discusses some aspects w.r.t. handling the AQM in practice,
       but not really deployment costs...

>> Updated text; "These guidelines discuss methods to understand ease of
development, deployment and operational aspects of the AQM scheme versus
the potential gain in performance from the introduction of the proposed
scheme."

{Section 1.2}
-------------
   The guidelines also help to discuss safe deployment of
   AQM, including self-adaptation, stability analysis, fairness, design
   and implementation complexity and robustness to different operating
   conditions.
=> These terms should be explained before they are actually used.

>> Updated text; "The guidelines also discuss methods to understand the
various aspects associated with safely deploying and operating the AQM
scheme. “

%%%%%%
   This memo details generic characterization scenarios against which
   any AQM proposal needs to be evaluated
=> *needs* sounds a bit strange

>> Updated text; “…against which any AQM proposal should be evaluated.."

%%%%%%
   This document details how an AQM designer can rate the feasibility of
   their proposal in different types of network devices (switches,
   routers, firewalls, hosts, drivers, etc) where an AQM may be
   implemented
=> There is nothing specific about firewalls, hosts, and drivers in the
       rest of the document. The proposed test topology considers routers
only.

>> Proposing guidelines on how to characterize AQM for different network
devices was a primary objective of these guidelines. However, it appears
that we, indeed, mainly focus on routers. The text has thus been updated
and re-organized as follows:
"
These guidelines do not cover every possible aspect of a particular
algorithm. In addition, it is worth noting that the proposed criteria are
not bound to a particular evaluation toolset. These guidelines do not
present context-dependent scenarios (such as 802.11 WLANs, data-centers or
rural broadband networks).
"

{Section 1.3}
-------------
AQM: Should be expanded at least once here

>> done

{Section 1.4}
-------------
Strictly speaking, queue should be defined here, too

>> We have updated the glossary section.
About the definition of the queue, we would have preferred to refer to other
RFCs, but neither RFC 7567 nor RFC 2309 actually defines it. We believe that it
is clear enough as it is:
"
1.4.  Glossary

   o  AQM: [RFC7567] separately describes the Active Queue Management
      (AQM) algorithm implemented in a router from the scheduling of
      packets sent by the router.  The rest of this memo refers to the
      AQM as a dropping/marking policy as a separate feature to any
      interface scheduling scheme.

   o  buffer: a physical volume of memory in which a queue or set of
      queues are stored.

   o  buffer occupancy: amount of data that are stored in a buffer,
      measured in bytes or packets.

   o  buffer size: maximum buffer occupancy, that is the maximum amount
      of data that may be stored in a buffer, measured in bytes or
      packets.

   o  goodput: goodput is defined as the number of bits per unit of time
      forwarded to the correct destination minus any bits lost or
      retransmitted [RFC2647].
"

{Section 2.1}
-------------
   FCT [s] = Fs [B] / ( G [Mbps] / 8 )
   please use unambiguous units instead of B and bps:
   FCT [s] = Fs [Byte] / ( G [Bit/s] / 8 [Bit/Byte] )

>> done
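
As a quick sanity check of the corrected units (the numbers below are made up
for illustration, they are not from the draft):

    Fs_bytes = 1_000_000                   # flow size Fs [Byte]
    G_bps    = 10_000_000                  # goodput G [Bit/s]
    FCT_s    = Fs_bytes / (G_bps / 8.0)    # FCT [s] = Fs [Byte] / (G [Bit/s] / 8 [Bit/Byte])
    print(FCT_s)                           # 0.8 s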

=> Goodput of a flow is defined in 2.5 but referenced here.

>> Goodput now added to glossary

=> Can one really speak of Goodput for a flow that is 10-100 packets long?
   It probably makes more sense to measure FCT directly


>> We acknowledge that sometimes the metrics may not suit every class of
traffic. This is the reason why, in the "e2e metrics" section, we mention that
the chosen metrics may not be relevant to the scenario:
"
Some metrics listed in this section are not suited to every type of traffic
detailed in the rest of this document. It is therefore not necessary to
measure all of the following metrics: the chosen metric may not be relevant
to the context of the evaluation scenario (e.g., latency vs. goodput
trade-off in application-limited traffic scenarios). Guidance is provided
for each metric.
"

%%%%%
If this metric is used to evaluate the performance of web transfers,
     *we propose*
=> Avoid "we", e.g., replace with "it is suggested"

>> done

=> (Considering section 6.2 too) It might be a good idea to standardize
     how to generate web traffic and what metric to measure. Consider, for
     example, how web traffic is generated in "Experimental evaluation of
TCP
     protocols for high-speed networks"

>> I agree that this would be a good idea; however, one objective of these
guidelines is to not be platform dependent: they should also work for
simulations or experiments. This is the main reason why we do not go much
into the details of how to generate specific traffic.
It is not the role of the present guidelines to standardize how to generate
web traffic. That would induce another wide range of discussions
(using HTTP/2.0, QUIC, etc.) and does not fit in this document, IMO.


{2.2.  Flow start up time}
This metric is not used in later tests...

>> The metrics listed are just "informal" and are of interest for AQM
evaluations. As said in the text:
"
This section provides normative requirements for metrics that can be used
to assess the performance of an AQM scheme.
"
As a metric may not make sense for a specific traffic type or context, we do
not specify when to use which metrics. We would keep this metric in the list,
even if it is not directly used later in the document.

{2.3 Packet loss}
    Packet loss can occur within a network device, this can impact the
   end-to-end performance measured at receiver.
 => It can also occur at the sender and receiver...

>> Updated text; "Packet loss can occur en-route, …”

%%%%%%
    This metric is not used in later tests...(except indirectly in goodput)
    Measuring packet loss is probably essential since
    retransmissions can also be triggered by reordering.
    Furthermore, packet loss caused by the AQM through
    packet drops should be measured separately (in order
    to find out whether other drops happened elsewhere).

>> Same as for the flow start-up time, we do not specify when to use these
metrics - identifying the source of drops can be done in simulations, but it
can hardly be done in a real-life testbed. As the way a metric is measured
should not depend on the test platform, we do not explicitly require that
drops induced by the AQM be measured separately.

%%%%%%
   The tester SHOULD evaluate loss experienced at the receiver using one
   This may be misleading if the cause of the loss isn't clear...see above.

>> See above comment.

{Section 2.5}
-------------
    number of bits per unit of time forwarded to the correct destination
       interface of the Device Under Test or the System Under Test, minus
    => are Device Under Test and System Under Test universally known terms?
     they are defined in RFC 2544

>> Updated text; “… to the correct destination interface, minus any bits
lost or retransmitted. "


{Section 2.6}
-------------

One-way delay as discussed in RFC 2679 is a little bit more
precise since it also specifies at which layer the delay is
measured (Type-P-One-way-Delay). I guess that we want to
consider IP packet delay?!

>> Since we have made the decision to talk only about routers in this
document, we believe that it is OK to consider sending/receiving host IP
packet delay.

%%%%%%
Typo:
    -  There is a consensus on a adequate metric for the jitter, that
    ----
    +  There is a consensus on an adequate metric for the jitter, that

>> Updated text

%%%%%%
       The end-to-end latency differs from the queuing delay: it is linked
       to the network topology and the path characteristics.
    => this reads a bit strange to me: queuing delay is part of the
       end-to-end latency (together with signal propagation delay,
       transmission delay, processing delay).
    => what is exactly meant by path characteristics here? is that
       the fixed delay portion, i.e., signal propagation delay,
       transmission delay?

>> Updated text; "The end-to-end latency includes components other than
just the queuing delay, such as the signal propagation delay, transmission
delay and the processing delay.”

%%%%%%
       Moreover, the
       jitter also strongly depends on the traffic pattern and the topology.

    => I'm not sure how jitter depends on the topology. Jitter is usually
       caused by variations in queuing and processing delay (e.g.,
       scheduling effects and so on).

>> Updated text; "Moreover, the jitter is caused by variations in queuing
and processing delay (e.g., scheduling effects). “


%%%%%%
   The introduction of an AQM scheme would impact these metrics and
=> these metrics are: one-way delay and one-way delay variations?


>> Updated text

{Section 2.7}
-------------

       With regards to the goodput, and in addition to the long-term
       stationary goodput value, it is RECOMMENDED to take measurements
       every multiple of RTTs.  We suggest a minimum value of 10 x RTT (to

    => "every multiple of RTTs" is probably a bad recommendation since RTT
       is variable due to queuing delay. minRTT would be probably ok.

>> Updated text; “… every multiple of the minimum RTT between A and B…."

%%%%%%
   smooth out the fluctuations) but higher values are encouraged
    => what does "higher" mean here? more frequently? (if so, please
       rephrase)

>> Updated text:
"
We suggest to take measurements at least every K x minRTT (to smooth out
the fluctuations), with K=10. Higher values for K are encouraged whenever
it is more appropriate for the presentation of the results. The value for K
may depend on the network's path characteristics.
"

%%%%%%
       From each of these sets of measurements, the CDF of the considered
    => Please expand CDF at least once.

>> Updated text “cumulative density function (CDF)…"

%%%%%%
       This graph provides part of a better understanding of (1) the delay/
       goodput trade-off for a given *congestion control mechanism*,
    + AQM scheme
                                      and (2)
       how the goodput and *average queue size* vary as a function of the
       traffic load.
    => in order to see how something varies as a function of the traffic
load
       one should perform measurements for different traffic loads, which
is
       not done in every scenario.

>> Updated text to add reference to relevance tests.

=> average queue size should probably be replaced with delay.


>> done

%%%%%%
   the goodput and ellipses are computed such as detailed in [WINS2014].
=> since nearly every of the following tests recommends plots according
   to this graph, please write it up here. Maybe Keith's thesis is
   accessible for some years, but it would be good to document such a
   central element within the Draft/RFC itself.



>> Updated text:
"
From each of these sets of measurements, the cumulative density
   function (CDF) of the considered metrics SHOULD be computed.  For
   each scenario, the following graph may be generated: the x-axis shows
   queuing delay (that is the average per-packet delay in excess of
   minimum RTT), the y-axis the goodput.  Ellipses are computed such as
   detailed in [WINS2014]: "We take each individual [...] run [...] as
   one point, and then compute the 1-epsilon elliptic contour of the
   maximum-likelihood 2D Gaussian distribution that explains the points.
   [...] we plot the median per-sender throughput and queueing delay as
   a circle. [...] The orientation of an ellipse represents the
   covariance between the throughput and delay measured for the
   protocol."  This graph provides part of a better understanding of (1)
   the delay/goodput trade-off for a given congestion control mechanism
   Section 5, and (2) how the goodput and average queue delay vary as a
   function of the traffic load Section 8.2.
“
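
For what it is worth, one way to compute such an ellipse from the per-run
(queuing delay, goodput) points, along the lines of the [WINS2014] description
quoted above (a sketch only; the helper name and the epsilon value are ours):

    import numpy as np
    from scipy.stats import chi2

    def delay_goodput_ellipse(delay, goodput, epsilon=0.05):
        pts = np.column_stack([delay, goodput])
        centre = np.median(pts, axis=0)              # median point, plotted as a circle
        mean = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False, bias=True)   # maximum-likelihood 2D Gaussian fit
        evals, evecs = np.linalg.eigh(cov)
        scale = np.sqrt(chi2.ppf(1.0 - epsilon, df=2))
        semi_axes = scale * np.sqrt(evals)           # (1 - epsilon) elliptic contour
        angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis orientation
        return centre, mean, semi_axes, angle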

{Section 3.1}
-------------
in the figure:
        +            +-+---+---+     +--+--+---+            +
        |            |Router L |     |Router R |            |
        |            |---------|     |---------|            |
        |            | AQM     |     |         |            |
        |            | BuffSize|     |         |            |
        |            | (Bsize) +-----+         |            |
        |            +-----+--++     ++-+------+            |
        +                  |  |       | |                   +
it's unclear to me what these lines here between the traffic class boxes
mean:
        +
        |
        |
        |
        |
        |
        |
        +
may be replace by an ellipsis
.
.
.
=> moreover, what about the buffers in Router R? They are assumed
to be empty, I guess ...

>> We have updated the figure to have both AQM and Bsize on the reverse
direction (@ L), as we discuss AQM on the reverse path for some tests.

%%%%%%
   o  various classes of traffic can be introduced;
=> I would avoid traffic class since this can be confused
   with diffserv classes easily. Later in the document
   they are called "traffic profiles", which I find a
   more suitable term (then use it consistently throughout
   the document).

>> Updated as discussed next

%%%%%%
       o  various classes of traffic can be introduced;
    => better rephrase to:
       o  sender with different traffic characteristics (i.e.,
          traffic profiles) can be introduced;

>> Updated text as suggested.

%%%%%%
       o  each link is characterized by a couple (RTT,capacity);
    => better one-way delay instead of RTT? Probably the links
       are symmetric or asymmetric...

>> Updated text; “…a couple (one-way delay, capacity)"

%%%%%%
       o  flows are generated between A and B, sharing a bottleneck (Routers
          L and R);
    => "generated between A and B" is weird and the bottleneck is the _link_
       between L and R, so:

       o  flows are generated at A and sent to B, sharing a bottleneck (the
          link between routers L and R);

>> Updated as suggested

%%%%%%
          AQM mechanism whereas the asymmetric link scenario evaluates an
          AQM mechanism in a more realistic setup;
    => sounds like only DSL scenarios are a realistic setup...
       please consider the usefulness of AQM also in other networks, e.g.
       even in data centers...

>> Based on the chairs' recommendation, we decided to focus just on generic
scenarios in this document.

%%%%%%
-   an AQM scheme when comparing this scheme with a new proposed AQM
----
+   an AQM scheme when comparing this scheme with a newly proposed AQM

>> done

{Section 3.2}
-------------
   The size of the buffers should be carefully chosen, and is to be set
    to the bandwidth-delay product
    => bandwidth-delay product between which points exactly? A and B or L
and R?
    => buffer and buffer size are defined as a whole buffer size available
for a
       device. Is it enough for bidirectional traffic?

>> These are addressed in the following sentence "the bandwidth being the
bottleneck capacity and the delay the largest RTT in the considered
network.”
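
As an aside, a tiny worked example of that sizing (the capacity and RTT values
are purely illustrative):

    capacity_bps = 10e6                      # bottleneck capacity: 10 Mbit/s
    largest_rtt  = 0.1                       # largest RTT in the considered network: 100 ms
    bdp_bytes    = capacity_bps * largest_rtt / 8
    print(bdp_bytes)                         # 125000 bytes, i.e. ~83 packets of 1500 bytes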

%%%%%%
       capacity and the delay the larger RTT in the considered network.  The
    => the largest RTT?

>> done

%%%%%%
-   size of the buffer can impact on the AQM performance and is a
----
+   size of the buffer can impact the AQM performance and is a

>> done

{Section 3.3}
-------------
   This memo features three kind of congestion controls:

=> sounds a bit strange. Maybe something like: This document
   considers running three different congestion control algorithms
   between

>> Updated text; "This document considers running three different
congestion control algorithms between A and B"

      this category is TCP Cubic.
=> a reference would be good here...

>> Indeed.
We have added a reference to:
"
a base-line congestion control for
      this category is TCP Cubic [I-D.ietf-tcpm-cubic].

[...]

  [I-D.ietf-tcpm-cubic]
              Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and
              R. Scheffenegger, "CUBIC for Fast Long-Distance Networks",
              draft-ietf-tcpm-cubic-00 (work in progress), June 2015.
"

{Section 4.}
 -------------
 This section reads more like a congestion control evaluation...
 %%%%%%
        Network and end-devices need to be configured with a reasonable
        amount of buffer space to absorb transient bursts.  In some
        situations, network providers tend to configure devices with large
        buffers to avoid packet drops triggered by a full buffer and to
        maximize the link utilization for standard loss-based TCP traffic.
     => This whole paragraph belongs more to section 3.2
        Moreover, one needs to evaluate several operation points (parameter
        settings) to see the AQM behavior in a Goodput/Delay graph. One must
        change variables that really change the behavior (there are enough
        papers that vary the buffer size, which would be pretty useless for
        AQMs like PIE or Codel).

>> The focus in this section is to understand how effective the AQM scheme
is with different kinds of transport senders, as discussed in the AQM
recommendation draft. Varying the AQM's operation points doesn't help in
understanding that. In fact, this document prescribes treating the AQM as a
black box for all tests.

     => be configured with a reasonable
        amount of buffer space to absorb transient bursts.

     => What is "reasonable" now? Usually BDP is recommended, but this may
        be highly variable, too...

>> Yes, this is highly variable. We would like to not get into that
discussion here. Therefore, we say that the buffer "is to be set to the
bandwidth-delay product".

%%%%
        TCP is a widely deployed transport.  It fills up
       *unmanaged* buffers until a sender transfering a bulk flow with TCP
       receives a signal (packet drop) that reduces the sending rate.

    => suggestion: replace "unmanaged" buffers by "available" buffers
       because TCP will fill managed buffers too, until the sender receives
       a congestion signal.

>> Updated text.

{Section 4.1.1}
---------------
It would be good to describe the objectives of the test, i.e.,
the rationale and expected AQM/TCP behavior.

%%%%%%
       friendly transport sender.  A single long-lived, non application-
       limited, TCP NewReno flow, with an Initial congestion Window (IW) set
    => explicitly defining what "non application-limited" means exactly
       wouldn't hurt. For instance, an application could be bandwidth or
       rate limited or also (sending) window limited.

>> Updated text; “… non application-limited (unlimited data available to
the transport sender from application layer)”

%%%%%%
   For each TCP-friendly transport considered, the graph described in
   Section 2.7 could be generated.
I guess the latency vs. goodput graph is meant here...


>> Yes


{Section 4.1.2}
---------------
   For this scenario, two types of flows MUST be generated between
=> Yes, so two types doesn't mean necessarily only two flows
   (cf. SEN.Flow1.1 ... SEN.Flow1.X).
   o  A single long-lived application-limited TCP NewReno flow, with an
      IW set to 3 or 10 packets.  The size of the data transferred must
      be strictly higher than 10 packets and should be lower than 100
      packets.
=> what does "long-lived" mean?
=> I doubt that 100 packets is really long-lived!
=> Are these 1500 bytes packets?


>> Agreed, maybe this was a copy-paste typo. Updated text; “A single
application-limited TCP…”

   For each of these scenarios, the graph described in Section 2.7 could
   be generated for each class of traffic (application-limited and non
   application-limited).

=> what exactly is the goal of this metric? Does delay/throughput
   graph make sense for 10-100 packet flow? According to the section
   title, the goal is likely to assert how fast the two flows converge
   to a fair share depending on the IW of the second flow. In this
   case, both the scenario and the metric are not very
   significant. According to the scenario itself the goal could be to
   assert flow completion time of short flows under presence of
   background flows. In this case these metrics should be reflected on
   a result graph. Probably it's useful to add a metric without
   background flows as a reference point


>> We recommend flow completion time as the metric for application-limited
flow and delay/throughput for the non application-limited one.

{Section 4.3}
-------------
-   to keep responsive fraction under control.  This scenario considers a
-----
+   to keep the responsive fraction under control.  This scenario considers a


>> Updated text


%%%%%%
   sender A and receiver B.  As opposed to the first scenario, the rate
   of the UDP traffic should not be greater than the bottleneck
   capacity, and should not be higher than half of the bottleneck
   capacity.  For each type of traffic, the graph described in
=> Not clear why the UDP flow shouldn't be larger than half of the
   bottleneck capacity. If it had 75% of the bottleneck capacity,
   one could see whether the AQM is able to squeeze it down to
   50% while allowing the TCP flow to get the other half.
=> Again, what is the goal of this scenario? It looks like that the scenario
   aims at showing what share of the bandwidth does the TCP flow receive.
In this
   case the results are better illustrated by a fairness index, or two
   throughput bars and not by the delay/throughput tradeoff.


>> I think that it should be
"
As opposed to the first scenario, the rate
   of the UDP traffic should not be greater than the bottleneck
   capacity, and should be higher than half of the bottleneck
   capacity.
"
The goal of this scenario is indeed to see how TCP and UDP flows coexist in
the presence of an AQM. The delay/throughput trade-off graph can be enough to
see the difference in goodput between the TCP and UDP flows. Also, the section
that presents the "delay-goodput" trade-off graph advises plotting the CDF of
the goodput.
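
If a single number is wanted on top of the CDFs, the fairness index the
reviewer mentions could be computed over the per-flow goodputs, e.g. Jain's
index (a sketch, not something the draft requires):

    def jain_fairness(goodputs):
        # 1.0 means a perfectly even share; 1/n means one flow takes everything
        n = len(goodputs)
        return sum(goodputs) ** 2 / (n * sum(g * g for g in goodputs))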

{Section 4.4}
------------
-   Single long-lived non application-limited TCP NewReno flows transfer
------
+   A single long-lived non application-limited TCP NewReno flow transfers

>> Updated text

%%%%%%
   sender A and receiver B.  We recommend to set the target delay and
       gain values of LEDBAT respectively to 5 ms and 10 [TRAN2014].  Other
    => 10ms? That would however be RTT dependent, i.e., if the topology has
       much lower RTTs then these values must be adapted accordingly...
    => again the choice of metrics is questionable.

>> As far as we know, LEDBAT's target is not related to the RTT (just as
the queuing delay allowed by an AQM should not be related to the RTT). There
is no version of LEDBAT that adapts its target value to the network
characteristics.

As for the metrics, we advise plotting the figures mentioned in the section
that discusses the "delay-goodput" trade-off. In this section, we mention
the CDFs of the queuing delay and the goodput, AND the queuing delay-goodput
graph.

{Section 5}
-----------
see also Section 2.3 of RFC 5166

>> See comments below for intra-protocol fairness issue.

{Section 5.1}
-------------
   The ability of AQM schemes to control the queuing delay highly
   depends on the way end-to-end protocols react to congestion signals.
    => I don't think that this is true in every case. Some AQMs also control
       queuing delay even for completely unresponsive flows. Therefore,
       "highly depends" is a bit overstated...

>> Indeed. The text has been updated.

%%%%%%
    for a set of RTTs (e.g., from 5 ms to 200 ms).
    => RTTs between A and B or between R and L?

>> Updated text; "An AQM scheme's congestion signals (via drops or ECN
marks) must reach the transport sender so that a responsive sender can
initiate its congestion control mechanism and adjust the sending rate. This
procedure is thus dependent on the end-to-end path RTT. When the RTT
varies, the onset of congestion control is impacted, and in turn impacts
the ability of an AQM scheme to control the queue. It is therefore
important to assess the AQM schemes for a set of RTTs between A and B
(e.g., from 5 ms to 200 ms)”

%%%%%%
   Introducing an AQM scheme may cause the unfairness between the flows,
   even if the RTTs are identical.  This potential unfairness SHOULD be
   investigated as well.
=> if it should, it could be defined as an Intra-Protocol Fairness
   in Section 4 (IMHO between 4.2 and 4.3).


>> Good point. However, we would like to keep section 4’s focus on
different transport senders and section 5’s on RTT fairness.

{Section 5.2}
-------------
   o  To evaluate the impact of the RTT value on the AQM performance and
      the intra-protocol fairness (the fairness for the flows using the
      same paths/congestion control), for each run, two flows (Flow1 and
      Flow2) should be introduced.  For each experiment, the set of RTT
      SHOULD be the same for the two flows and in [5ms;560ms].
=> this is evaluating not RTT fairness since both flows use the same RTT,
   but this probably evaluates sensitivity to different RTTs

=> (forward referencing 5.3) the metric of choice for this scenario
(cumulative
   average goodput of two flows) definitely doesn't show whether the flows
   are fair to each other.


{Section 5.3}
-------------
see also RFC 5166, sec. 2.3.3., Fairness and round-trip times.

>> Good reference. But we will keep the fairness metric as goodput since it
captures the RTT fairness quite well (an easier argument if we remove the 2nd
test).
We have removed the second test. The text for this section is now:
"
6.  Round Trip Time Fairness

6.1.  Motivation

   An AQM scheme's congestion signals (via drops or ECN marks) must
   reach the transport sender so that a responsive sender can initiate
   its congestion control mechanism and adjust the sending rate.  This
   procedure is thus dependent on the end-to-end path RTT.  When the RTT
   varies, the onset of congestion control is impacted, and in turn
   impacts the ability of an AQM scheme to control the queue.  It is
   therefore important to assess the AQM schemes for a set of RTTs
   between A and B (e.g., from 5 ms to 200 ms).

   The asymmetry in terms of difference in intrinsic RTT between various
   paths sharing the same bottleneck SHOULD be considered so that the
   fairness between the flows can be discussed since in this scenario, a
   flow traversing on shorter RTT path may react faster to congestion
   and recover faster from it compared to another flow on a longer RTT
   path.  The introduction of AQM schemes may potentially improve this
   type of fairness.

   Introducing an AQM scheme may cause the unfairness between the flows,
   even if the RTTs are identical.  This potential unfairness SHOULD be
   investigated as well.

6.2.  Recommended tests

   The RECOMMENDED topology is detailed in Figure 1.

   To evaluate the RTT fairness, for each run, two flows divided into
   two categories.  Category I which RTT between sender A and receiver B
   SHOULD be 100ms.  Category II which RTT between sender A and receiver
   B should be in [5ms;560ms].  The maximum value for the RTT represents
   the RTT of a satellite link that, according to section 2 of [RFC2488]
   should be at least 558ms.

   A set of evaluated flows MUST use the same congestion control
   algorithm: all the generated flows could be single long-lived non
   application-limited TCP NewReno flows.

6.3.  Metrics to evaluate the RTT fairness

   The outputs that MUST be measured are: (1) the cumulative average
   goodput of the flow from Category I, goodput_Cat_I (Section 2.5); (2)
   the cumulative average goodput of the flow from Category II,
   goodput_Cat_II (Section 2.5); (3) the ratio goodput_Cat_II/
   goodput_Cat_I; (4) the average packet drop rate for each category
   (Section 2.3).
"

{Section 6.1}
-------------
   An AQM scheme can result in bursts of packet arrivals due to various
   reasons.  Dropping one or more packets from a burst can result in

=> I don't get this. TCP or applications usually send/generate bursts, but
   AQM schemes?

>>  I think that this is a typo. The text has been updated as follows:
"
An AQM scheme can face bursts of packet arrivals due to
        various reasons.
"

%%%%%%
   An AQM scheme that maintains short queues allows some remaining space
   in the queue for bursts of arriving packets.
=> should be (?): some remaining space in the buffer for bursts of ...

>> Text updated.

%%%%%%
-   directly linked to the AQM algorithm.  Moreover, one AQM scheme may
----
+   directly linked to the AQM algorithm.  Moreover, an AQM scheme may

>> Text updated.

{Section 6.2}
-------------
   o  Bursty video frames;
How? What? Congestion Controlled? App limited/rate limited streaming?

>> There are so many ways of generating bursty video frames that
we did not want to focus on any specific one. The way of generating such
traffic is highly dependent on the test platform, therefore we would rather
let the tester choose.

%%%%%%
-   o  Constant bit rate UDP traffic.
----
+   o  Constant bit rate (CBR) UDP traffic.

>> Thanks.

=> at which rate BTW?

>> The type of traffic that could be modeled by this CBR UDP traffic is
highly dependent on the context in which the AQM will be deployed. In this
section, we preferred presenting general types of applications that could be
generated, but did not want to focus on specific use cases.

%%%%%%
     o  A single bulk TCP flow as background traffic.
  => non-application-limited?

>> Text updated.

%%%%%%
Figure 2:
-   |    |Video|Webs (IW 10)| CBR| Bulk TCP Traffic   |
----
+   |    |Video|Web  (IW 10)| CBR| Bulk TCP Traffic   |

>> updated.

%%%%%%
Probably it would make sense to join it with section 8
(which also needs a more precise workload description).

>> I am not quite sure I understand the comment.

%%%%%%
   For each of these scenarios, the graph described in Section 2.7 could
   be generated.  Metrics such as end-to-end latency, jitter, flow
=> For each of these scenarios, ... the graph for every flow could be
   generated?

>> Text updated.

=> it is not obvious why these scenarios evaluate burst absorption,
   so an explanation of what should/could be expected would be appreciated.

>> We think that this is addressed in the "motivation" subsection:
"
The ability to accommodate bursts translates to larger queue length
and hence more queuing delay. On the one hand, it is important that an
AQM scheme quickly brings bursty traffic under control. On the other
hand, a peak in the packet drop rates to bring a packet burst quickly
under control could result in multiple drops per flow and severely
impact transport and application performance. Therefore, an AQM scheme
ought to bring bursts under control by balancing both aspects -- (1)
queuing delay spikes are minimized and (2) performance penalties for
ongoing flows in terms of packet drops are minimized.
"

{Section 7.2}
-------------
   application-limited TCP flows.  For each of the below scenarios, the
   results described in Section 2.7 SHOULD be generated.  For
=> replace "results" with "graphs"?

>> text updated.

=> for the throughput of many flows, is it cumulative or average throughput?

>> I would say that it should be cumulative. The text has been updated.

=> One problem with the suggested output is that these metrics are
   not showing time-dependencies, which are important to see for
   analyzing transient behavior.

>> Indeed. In the description of the metrics, we have added:
"
If the considered scenario introduces dynamically varying parameters,
temporal evolution of the metrics could also be generated.
"

{Section 7.2.5}
---------------
Why not also consider I,II,III,II,I,...?

>> The current order is just a suggestion - other orders could be considered:
"
The following phases may be considered:
"

{Section 7.2.6}
---------------
-   reflect the exact conditions of Wi-Fi environments since its hard to
----
+   reflect the exact conditions of Wi-Fi environments since it is hard to

>> Thanks.

%%%%%%
   o  Experiment 1: the capacity varies between two values within a
      large time-scale.  As an example, the following phases may be
      considered: phase I - 100Mbps during 0-20s; phase II - 10Mbps
      during 20-40s; phase I again, and so on.
=> Are 20s really large enough? Sometimes TCP needs several seconds
   until it finds the available bandwidth.

>> This is just a suggestion. The adequate value for the "20s" is related
to the RTT, the test environment, etc., which is why we cannot be strict
on that. What value would you suggest?

%%%%%%
-   The scenario consist of TCP NewReno flows between sender A and
-----
+   The scenario consists of TCP NewReno flows between sender A and

>> Thanks.


%%%%%%
        behavior, the tester MUST compare its performance with those of
        drop-tail and SHOULD provide a reference document for their proposal

     => Isn't a comparison to drop-tail (and a buffer of size BDP) also
        relevant for the earlier described tests?

>> In fact, the guidelines advise comparing all the results with drop-tail:
"
Testers therefore need to provide a reference document for their proposal
discussing performance and deployment compared to those of drop-tail.
"
In this section, we just further explain why this is particularly important
for this scenario.

=> irrespective of wi-fi: for traffic load there are first a set of
   tests with different stable conditions and then a test with a
   transient condition.  For RTT there is a test that evaluates AQM's
   behavior for different RTTs in Section 5.2. Is there any reason why
   several different bottleneck capacities are not considered?

>> We could have added a scenario on the impact of the bottleneck capacity
on the performance of an AQM scheme. However, this would be very
context-dependent and we decided not to focus on particular use cases.
Do you suggest that we add such a scenario?


{Section 7.3}
-------------
     describes more general remarks that probably belong into an earlier
     section...

     theoretical analysis belongs to the AQM specification and thus
     this whole section should be probably better moved to
     [draft-ietf-aqm-recommendation]
     This document can include tests whether the theoretical analysis is
     "valid" in practice.

>> On the one hand, having such a section in the recommendation document
could have been a good thing; however, we believe that it is also important
to have such content when "characterizing" an AQM.

{Section 8.1}
-------------
Traffic mix = a mix of streams with different traffic profiles?

>> Yes.

%%%%%%
-   Webs pages download (such as detailed in Section 6.2); 1 CBR; 1
----
+   Webs pages download (such as detailed in Section 6.2); 1 CBR; 1

>> Text updated. ( I guess you meant "Web pages")

{Section 9.2}
-------------
We recommend (2 times) => We should be avoided

>> We have removed one occurrence of the "recommend", and also some
occurrences of "we".

{Section 10.1}
--------------
   scheme on a particular hardware or software device.  This also helps
   the WG understand which kind of devices can easily support the AQM
   and which cannot.
=> as already earlier commented: this document is hopefully useful beyond
 the WG...

>> Updated text; "This also facilitates discussions around which kind of
devices can easily support the AQM and which cannot.”

=> this belongs more to the requirement for an AQM proposal and thus
       this section should be moved to [draft-ietf-aqm-recommendation],
       Too

>> OK, we agree but same as above, we believe that it is also important to
have such content when "characterizing" an AQM.

{Section 11.1}
--------------
   Additionally, the safety of an AQM scheme is directly related to its
   stability under varying operating conditions such as varying traffic
   profiles and fluctuating network conditions, as described in
   Section 7.  Operating conditions vary often and hence the AQM needs
   to remain stable under these conditions without the need for
   additional external tuning.  If AQM parameters require tuning under
=> this could also be mentioned in/moved to section 7...

>> Added text to beginning of section 7.1 “Motivation”; "The safety of an
AQM scheme is directly related to its stability under varying operating
conditions such as varying traffic profiles and fluctuating network
conditions. Since operating conditions can vary often the AQM needs to
remain stable under these conditions without the need for additional
external tuning. “

A minimal number of
       control parameters minimizes the number of ways a *possibly naive*
       user can break a system where an AQM scheme is deployed at.
    => this sounds a little bit strange, so better remove *possibly naive*

>> Updated text; "A minimal number of control parameters minimizes the
number of ways a user can break a system …”

{Section 12}
------------
All previous tests could be performed with or without ECN...

>> Sure, but we don't follow what we should change here. We believe that the
updated ToC solves this issue.


{Section 14.1}
--------------
This should be discussed earlier in the document....

%%%%%%
   ascertain whether a specific AQM is not only better than drop-tail
   but also safe to deploy.  Testers therefore need to provide a
=> better than drop-tail with a BDP-sized buffer. The buffer size alone
   is a parameter that affects performance of a CC scheme.

>> Updated text; “… AQM is not only better than drop-tail (with BDP-sized
buffer) but also safe to deploy. “


{Section 14.3.1}
----------------
           [bullet 1] For example, to compare how well a
           queue-length based AQM scheme controls queueing delay vs. a
           queueing-delay based AQM scheme, a tester can identify the
           parameters of the schemes that control queue delay and ensure
           that their input values are comparable.
    => It would be preferable if AQM designers described these
       parameters. Ideally, an AQM proposal could describe the parameters
       as a function of network characteristics such as capacity and
       average RTT, similar to how it is done in Sally Floyd's Adaptive
       RED paper.

>> Updated text; “Additionally, it would be preferable if an AQM proposal
listed such parameters and discussed how each relates to network
characteristics such as capacity, average RTT etc”.


          [bullet 2] In such situations, these schemes need to be
           compared over a range of input configurations.

    => From this text it can be inferred, that the goal is to run some/all
       scenarios in this document with different settings of an AQM
       parameter that affects delay/throughput tradeoff. This is probably
       very valuable, because the network administrator can choose the
       desired delay and then see what AQM provides better throughput for
       this value of the delay from the graphs described in Section 2.7.
       For this reason this paragraph is probably very important and
       should be moved to the beginning of the document together with the
       requirement to compare AQM against drop-tail. It would also be good
       if AQM document explicitly specified what parameters to tune
       (similar to how target in Codel affects power metric: see
       http://www.ietf.org/proceedings/84/slides/slides-84-tsvarea-4.pdf
       slides 17-19)


>> The intent here was to ensure testers considered a range of parameters
while comparing AQM schemes, since parameter values may not semantically
match between schemes. However, w.r.t. evaluating a single AQM, the draft
recommends maintaining the AQM as a black box and not changing any
parameters.

{Section 20.2}
--------------
   [HAYE2013]
              Hayes, D., Ros, D., Andrew, L., and S. Floyd, "Common TCP
              Evaluation Suite", IRTF (Work-in-Progress) , 2013.
is this referring to https://tools.ietf.org/html/draft-irtf-iccrg-tcpeval-01 ?
This should be as precise as possible.

>> Updated by:
  [I-D.irtf-iccrg-tcpeval]
              Hayes, D., Ros, D., Andrew, L., and S. Floyd, "Common TCP
              Evaluation Suite", draft-irtf-iccrg-tcpeval-01 (work in
              progress), July 2014.

draft-ietf-aqm-eval-guidelines-08.xml

<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<?rfc strict="yes" ?>
<?rfc toc="yes"?>
<?rfc tocdepth="4"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes" ?>
<?rfc compact="yes" ?>
<?rfc subcompact="no" ?>
<rfc
	category="info"
        docName="draft-ietf-aqm-eval-guidelines-08"
	ipr="trust200902">
  <!-- category values: std, bcp, info, exp, and historic
     ipr values: full3667, noModification3667, noDerivatives3667
     you can add the attributes updates="NNNN" and obsoletes="NNNN"
     they will automatically be output with "(if approved)" -->

  <!-- ***** FRONT MATTER ***** -->

  <front>
    <!-- The abbreviated title is used in the page header - it is only
necessary if the
         full title is longer than 39 characters -->

    <title abbrev="AQM Characterization Guidelines">AQM
Characterization Guidelines</title>

    <author fullname="Nicolas Kuhn" initials="N." role="editor" surname="Kuhn">
      <organization>Telecom Bretagne</organization>
      <address>
        <postal>
          <street>2 rue de la Chataigneraie</street>
          <city>Cesson-Sevigne</city>
          <region></region>
          <code>35510</code>
          <country>France</country>
        </postal>
        <phone>+33 2 99 12 70 46</phone>
        <email>nicolas.kuhn@telecom-bretagne.eu</email>
      </address>
    </author>

    <author fullname="Preethi Natarajan" initials="P." role="editor"
surname="Natarajan">
      <organization>Cisco Systems</organization>
      <address>
        <postal>
          <street>510 McCarthy Blvd</street>
          <city>Milpitas</city>
          <region>California</region>
          <code></code>
          <country>United States</country>
        </postal>
        <phone></phone>
        <email>prenatar@cisco.com</email>
      </address>
    </author>

<author fullname="Naeem Khademi" initials="N." role="editor" surname="Khademi">
    <organization>University of Oslo</organization>
    <address>
        <postal>
            <street>Department of Informatics, PO Box 1080 Blindern</street>
            <city>Oslo</city>
            <region></region>
            <code>N-0316</code>
            <country>Norway</country>
        </postal>
        <phone>+47 2285 24 93</phone>
        <email>naeemk@ifi.uio.no</email>
    </address>
</author>

    <author fullname="David Ros" initials="D." surname="Ros">
      <organization>Simula Research Laboratory AS</organization>
      <address>
        <postal>
          <street>P.O. Box 134</street>
          <city>Lysaker</city>
          <region></region>
          <code>1325</code>
          <country>Norway</country>
        </postal>
        <phone>+33 299 25 21 21</phone>
        <email>dros@simula.no</email>
      </address>
    </author>

    <date month="September" year="2015" />

    <!-- If the month and year are both specified and are the current
ones, xml2rfc will fill
         in the current day for you. If only the current year is
specified, xml2rfc will fill
	 in the current day and month for you. If the year is not the current
one, it is
	 necessary to specify at least a month (xml2rfc assumes day="1" if
not specified for the
	 purpose of calculating the expiry date).  With drafts it is normally
sufficient to
	 specify just the year. -->

    <!-- Meta-data Declarations -->

    <area>Transport</area>

    <workgroup>Internet Engineering Task Force</workgroup>

    <!-- WG name at the upperleft corner of the doc,
         IETF is fine for individual submissions.
	 If this element is not present, the default is "Network Working Group",
         which is used by the RFC Editor as a nod to the history of
the IETF. -->

    <keyword>AQM</keyword>

    <!-- Keywords will be incorporated into HTML output
         files in a meta tag but they have no effect on text or nroff
         output. If you submit your draft to the RFC Editor, the
         keywords will be used for the search engine. -->

        <!-- ######################################################-->
        <!-- ######################################################-->
        <!-- Head of the document -->
        <!-- ######################################################-->
        <!-- ######################################################-->

    <abstract>
    <t>Unmanaged large buffers in today's networks have given rise to
a slew of performance issues. These performance issues can be
addressed by some form of Active Queue Management (AQM) mechanism,
optionally in combination with a packet scheduling scheme such as fair
queuing. The IETF Active Queue Management and Packet Scheduling
working group was formed to standardize AQM schemes that are robust,
easily implementable, and successfully deployable in today's networks.
This document describes various criteria for performing precautionary
characterizations of AQM proposals. This document also helps in
ascertaining whether any given AQM proposal should be taken up for
standardization by the AQM WG.</t>
    </abstract>
  </front>

  <middle>

	<section anchor="sec:introduction" title="Introduction">
        <t>Active Queue Management (AQM) <xref
target="RFC7567"></xref> addresses the concerns arising from using
unnecessarily large and unmanaged buffers to improve network and
application performance. Several AQM algorithms have been proposed in
the past years, most notably Random Early Detection (RED), BLUE, and
Proportional Integral controller (PI), and more recently CoDel <xref
target="NICH2012"></xref> and PIE <xref target="PAN2013"></xref>. In
general, these algorithms actively interact with the Transmission
Control Protocol (TCP) and any other transport protocol that deploys a
congestion control scheme to manage the amount of data they keep in
the network. The available buffer space in the routers and switches
should be large enough to accommodate the short-term buffering
requirements. AQM schemes aim at reducing buffer occupancy, and
therefore the end-to-end delay. Some of these algorithms, notably RED,
have also been widely implemented in some network devices. However, the
potential benefits of the RED scheme have not been realized, since RED
is reported to be usually turned off. The main reason for this
reluctance to use RED in today's deployments is its sensitivity to the
operating conditions in the network and the difficulty of tuning its
parameters.</t>
	<t>A buffer is a physical volume of memory in which a queue or set of
queues are stored. When speaking of a specific queue in this document,
"buffer occupancy" refers to the amount of data (measured in bytes or
packets) that are in the queue, and the "maximum buffer size" refers
to the maximum buffer occupancy. In real implementations of switches,
a global memory is often shared between the available devices, and
thus, the maximum buffer size may vary over the time.</t>
	<!-- <t>A buffer is a physical volume of memory in which a queue or
set of queues are stored. In real implementations of switches, a
global memory is often shared between the available devices: the size
of the buffer for a given communication does not make sense, as its
dedicated memory may vary over the time and real-world buffering
architectures are complex. For the sake of simplicity, when speaking
of a specific queue in this document, "buffer size" refers to the
maximum amount of data the buffer may store, which can be measured in
bytes or packets. The rest of this memo therefore refers to the
maximum queue depth as the size of the buffer for a given
communication.</t> -->
	<!-- <t>In order to meet mostly throughput-based Service-Level
Agreement (SLA) requirements and to avoid packet drops, many home
gateway manufacturers resort to increasing the available memory beyond
"reasonable values". This increase is also referred to as Bufferbloat
<xref target="BB2011"></xref>. Deploying large unmanaged buffers on
the Internet has lead to an increase in end-to-end delay, resulting in
poor performance for latency-sensitive applications such as real-time
multimedia (e.g., voice, video, gaming, etc). The degree to which this
affects modern networking equipment, especially consumer-grade
equipment's, produces problems even with commonly used web services.
Active queue management is thus essential to control queuing delay and
decrease network latency.</t> -->
	<t>Bufferbloat <xref target="BB2011"></xref> is the consequence of
deploying large unmanaged buffers on the Internet -- the buffering has
often been measured to be ten or a hundred times larger than
needed. Large buffer sizes in combination with TCP and/or unresponsive
flows increase end-to-end delay. This results in poor performance for
latency-sensitive applications such as real-time multimedia (e.g.,
voice, video, gaming, etc). This affects modern networking equipment,
especially consumer-grade equipment, and produces
problems even with commonly used web services. Active queue management
is thus essential to control queuing delay and decrease network
latency.</t>
	<t>The Active Queue Management and Packet Scheduling Working Group
(AQM WG) was chartered to address the problems with large unmanaged
buffers in the Internet. Specifically, the AQM WG is tasked with
standardizing AQM schemes that not only address concerns with such
buffers, but also are robust under a wide variety of operating
conditions.</t>
	<t>In order to ascertain whether the WG should undertake
standardizing an AQM proposal, the WG requires guidelines for
assessing AQM proposals. This document provides the necessary
characterization guidelines. <xref target="RFC7567"></xref> separately
describes the AQM algorithm implemented in a router from the
scheduling of packets sent by the router. The rest of this memo refers
to the AQM as a dropping/marking policy as a separate feature to any
interface scheduling scheme. This document may be complemented with
another one on guidelines for assessing the combination of packet
scheduling and AQM. We note that such a document will inherit all the
guidelines from this document, plus any additional scenarios relevant
for packet scheduling, such as flow starvation evaluation or the impact
of the number of hash buckets.</t>
	
	<section anchor="subsec:intro_tradeoff" title="Reducing the latency
and maximizing the goodput">
	<t>The trade-off between reducing the latency and maximizing the
goodput is intrinsically linked to each AQM scheme and is key to
evaluating its performance. This trade-off MUST be considered in a
variety of scenarios to ensure the safety of an AQM deployment.
Whenever possible, solutions ought to aim at both maximizing goodput
and minimizing latency.</t>
	<!-- <t>Testers SHOULD discuss in a reference document the
performance of their proposal in terms of performance and deployment
compared to those of drop-tail: basically, -->
	</section>
	
	<section anchor="subsec:intro_guidelines" title="Guidelines for AQM
evaluation">
        <t>The guidelines help to quantify performance of AQM schemes
in terms of latency reduction, goodput maximization and the trade-off
between these two. The guidelines also discuss methods to understand
the various aspects associated with safely deploying and operating the
AQM scheme. These guidelines discuss methods to understand the ease of
development, deployment and operational aspects of the AQM scheme
versus the potential gain in performance from the introduction of the
proposed scheme.</t>
        <t>This memo details generic characterization scenarios
against which any AQM proposal should be evaluated, irrespective of
whether or not an AQM is standardized by the IETF. This document
recommends the relevant scenarios and metrics to be considered. The
document presents central aspects of an AQM algorithm that must be
considered whatever the context, such as burst absorption capacity,
RTT fairness or resilience to fluctuating network conditions.</t>
	<t>These guidelines do not cover every possible aspect of a
particular algorithm. In addition, it is worth noting that the
proposed criteria are not bound to a particular evaluation toolset.
These guidelines do not present context-dependent scenarios (such as
802.11 WLANs, data-centers or rural broadband networks).</t>
          <!-- Therefore, this document considers two different
categories of evaluation scenarios: (1) generic scenarios that any AQM
proposal SHOULD be evaluated against, and (2) evaluation scenarios
specific to a network environment. Irrespective of whether or not an
AQM is standardized by the WG, we recommend the relevant scenarios and
metrics discussed in this document to be considered. Since a specific
AQM scheme MAY NOT be applicable to all network environments, the
specific evaluation scenarios enable to establish the environments
where the AQM is applicable. These guidelines do not present every
possible scenario and cannot cover every possible aspect of a
particular algorithm.  In addition, it is worth noting that the
proposed criteria are not bound to a particular evaluation toolset.
</t> -->
	</section>

	<section anchor="subsec:intro_requi" title="Requirements Language">
	<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
this document are to be interpreted as described in <xref
target="RFC2119">RFC 2119</xref>.</t>
	</section>

	<section anchor="subsec:intro_glossary" title="Glossary">
	<t><list style="symbols">
	<t>AQM: <xref target="RFC7567"></xref> separately describes the
Active Queue Management (AQM) algorithm implemented in a router from
the scheduling of packets sent by the router. The rest of this memo
refers to the AQM as a dropping/marking policy as a separate feature
to any interface scheduling scheme.</t>
	<t>buffer: a physical volume of memory in which a queue or set of
queues are stored.</t>
	<t>buffer occupancy: amount of data that is stored in a buffer,
measured in bytes or packets.</t>
	<t>buffer size: maximum buffer occupancy, that is the maximum amount
of data that may be stored in a buffer, measured in bytes or
packets.</t>
  	<t>goodput: goodput is defined as the number of bits per unit of
time forwarded to the correct destination minus any bits lost or
retransmitted <xref target="RFC2647"> </xref>. </t>

	</list></t>	
	</section>
	
	</section>

        <!-- ######################################################-->
        <!-- ######################################################-->
        <!-- Body of the document -->
        <!-- ######################################################-->
        <!-- ######################################################-->

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

        <section anchor="sec:e2e_metrics" title="End-to-end metrics">
          <t>End-to-end delay is the result of propagation delay,
serialization delay, service delay in a switch, medium-access delay
and queuing delay, summed over the network elements along the path.
AQM schemes may reduce the queuing delay by providing signals to the
sender on the emergence of congestion, but any impact on the goodput
must be carefully considered. This section presents the metrics that
could be used to better quantify (1) the reduction of latency, (2)
maximization of goodput and (3) the trade-off between these two. This
section provides normative requirements for metrics that can be used
to assess the performance of an AQM scheme.</t>
          <t>Some metrics listed in this section are not suited to
every type of traffic detailed in the rest of this document. It is
therefore not necessary to measure all of the following metrics: the
chosen metric may not be relevant to the context of the evaluation
scenario (e.g., latency vs. goodput trade-off in application-limited
traffic scenarios). Guidance is provided for each metric.</t>

	<section anchor="subsec:e2e_metrics_complet_time" title="Flow completion time">
          <t>The flow completion time is an important performance
metric for the end-user when the flow size is finite. Considering the
fact that an AQM scheme may drop/mark packets, the flow completion
time is directly linked to the dropping/marking policy of the AQM
scheme. This metric helps to better assess the performance of an AQM
depending on the flow size. The Flow Completion Time (FCT) is related
to the flow size (Fs) and the goodput for the flow (G) as follows:</t>
          <t> FCT [s] = Fs [Byte] / ( G [Bit/s] / 8 [Bit/Byte] ) </t>
	  <t>If this metric is used to evaluate the performance of web
transfers, it is suggested to consider instead the time needed to
download all the objects that compose the web page, as this makes more
sense in terms of user experience than assessing the time needed to
download each object.</t>
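	  <t>As a non-normative illustration of the FCT formula above, the
following Python sketch computes the FCT from a flow size and a
measured goodput; the function and variable names are illustrative only
and are not part of these guidelines.</t>
	  <figure><artwork><![CDATA[
def flow_completion_time(flow_size_bytes, goodput_bps):
    """FCT [s] = Fs [Byte] / (G [Bit/s] / 8 [Bit/Byte])."""
    return flow_size_bytes / (goodput_bps / 8.0)

# Example: a 1 MByte flow at 8 Mbit/s of goodput completes in 1 s.
print(flow_completion_time(1000000, 8000000))  # -> 1.0
]]></artwork></figure>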
          <!-- <t>To illustrate this metric: the x-axis show the size
of the flow and the y-axis the flow completion time.</t> -->
	</section>

	<section anchor="subsec:e2e_metrics_flow_start" title="Flow start up time">
          <t>The flow start up time is the time between when the request
is sent by the client and when the server starts to transmit data.
The number of packets dropped by an AQM may seriously affect the
waiting period during which the data transfer has not started. This
metric would specifically focus on operations such as DNS lookups,
TCP opens or SSL handshakes.</t>
	</section>

	<section anchor="subsec:e2e_metrics_loss" title="Packet loss">
          <t>Packet loss can occur en route; this can impact the
end-to-end performance measured at the receiver.</t>
	<t>The tester SHOULD evaluate loss experienced at the receiver using
one of the two metrics:</t>
	<t><list style="symbols">
	<t>the packet loss ratio: this metric is to be frequently measured
during the experiment. The long-term loss ratio is of interest for
steady-state scenarios only;</t>
        <t>the interval between consecutive losses: the time between
two losses is to be measured.</t>
        <!-- <t>the packet loss pattern.</t> -->
	</list></t>
        <!--<t>The guidelines advice that the tester SHOULD determine
the minimum, average and maximum measurements of these metrics and the
coefficient of variation for the average value as well.</t>-->
        <t>The packet loss ratio can be assessed by simply evaluating
the loss ratio as a function of the number of lost packets and the
total number of packets sent. This might not be easily done in
laboratory testing, for which these guidelines advise the tester:</t>
	<t><list style="symbols">
            <t>to check that for every packet, a corresponding packet
was received within a reasonable time, as explained in <xref
target="RFC2680"> </xref>.</t>
            <t>to keep a count of all packets sent, and a count of the
non-duplicate packets received, as explained in the section 10 of
<xref target="RFC2544"> </xref>.</t>
        </list></t>
        <t>The interval between consecutive losses, which is also
called a gap, is a metric of interest for VoIP traffic and, as a
result, has been further specified in <xref target="RFC3611">
</xref>.</t>
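        <t>As a non-normative illustration, the following Python sketch
derives the two metrics above from a count of sent and non-duplicate
received packets and from a list of loss-event timestamps; the names
and data layout are assumptions made for the example only.</t>
        <figure><artwork><![CDATA[
def packet_loss_ratio(packets_sent, packets_received):
    """Loss ratio from the number of packets sent and the number of
    non-duplicate packets received."""
    return (packets_sent - packets_received) / float(packets_sent)

def loss_gaps(loss_times_s):
    """Intervals (gaps) between consecutive loss events, in seconds."""
    return [t2 - t1 for t1, t2 in zip(loss_times_s, loss_times_s[1:])]

print(packet_loss_ratio(10000, 9950))    # -> 0.005
print(loss_gaps([0.2, 1.4, 1.9, 4.0]))   # -> approx. [1.2, 0.5, 2.1]
]]></artwork></figure>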
        </section>
	
        <section anchor="subsec:e2e_metrics_synch_loss" title="Packet
loss synchronization">
        <t>One goal of an AQM algorithm is to help to avoid global
synchronization of flows sharing a bottleneck buffer on which the AQM
operates (<xref target="RFC2309"> </xref>,<xref
target="RFC7567"></xref>). The "degree" of packet-loss synchronization
between flows SHOULD be assessed, with and without the AQM under
consideration.</t>
        <t>As discussed e.g., in <xref target="HASS2008"></xref>, loss
synchronization among flows may be quantified by several slightly
different metrics that capture different aspects of the same issue.
However, in real-world measurements the choice of metric could be
imposed by practical considerations -- e.g., whether fine-grained
information on packet losses in the bottleneck is available or not. For
the purpose of AQM characterization, a good candidate metric is the
global synchronization ratio, measuring the proportion of flows losing
packets during a loss event. <xref target="JAY2006"></xref> used this
metric in real-world experiments to characterize synchronization along
arbitrary Internet paths; the full methodology is described in <xref
target="JAY2006"></xref>.</t>
        <t>If an AQM scheme is evaluated using real-life network
environments, it is worth pointing out that some network events, such
as failed link restoration, may cause synchronized losses between
active flows and thus confuse the meaning of this metric.</t>
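        <t>As a non-normative illustration, the global synchronization
ratio can be computed as sketched below in Python, where each loss
event is represented as the set of flows that lost at least one packet
during that event; this data representation is an assumption made for
the example only.</t>
        <figure><artwork><![CDATA[
def global_sync_ratios(loss_events, active_flows):
    """For each loss event, return the proportion of active flows
    losing packets during that event."""
    return [len(event & active_flows) / float(len(active_flows))
            for event in loss_events]

flows = {"f1", "f2", "f3", "f4"}
events = [{"f1", "f2"}, {"f1", "f2", "f3", "f4"}, {"f3"}]
print(global_sync_ratios(events, flows))   # -> [0.5, 1.0, 0.25]
]]></artwork></figure>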


        <!--
          <t>With the introduction of AQM schemes, the packet loss
synchronization can be reduced. This is one original goal of AQMs, as
explained in <xref target="RFC2309"> </xref>.</t>
        <t>The synchronization ratio is defined as the degree of
synchronization of loss events between two TCP flows on the same path:
this metric is determined largely by the traffic mix on the congested
link and by the AQM mechanism introduced <xref
target="IRTF-TOOLS-5"></xref>.</t>
        <t>The overall synchronization ratio (Sij) is defined for two
flows i and j that lose packets in the same time slot.
Sij=max(Si_j,Sj_i), where Sk_n denotes the fraction of loss events of
flow k in which flow n (!=k) also suffers packet loss.</t>
        <t>More details on the other metrics that can evaluate the
packet loss synchronization can be found in <xref
target="HASS2008"></xref>.</t>
        -->

        <!--
        <t>It is important to evaluate this metric in order to check
whether an AQM mechanism fairly drops packets of two flows or not. The
introduction of AQM impacts on this metric has already been measured
in <xref target="LOSS-SYNCH-AQM-08"></xref> and should be considered
while evaluating an AQM proposal.</t>
        <t>These guidelines propose to quantify the loss
synchronization by the utilization of three possible metrics:</t>
	<t><list style="symbols">
	<t>overall synchronization ratio (Sij): this metric is defined for
two flows i and j that lose packets in the same time slot.
Sij=max(Si_j,Sj_i), where Sk_n denotes the fraction of loss events of
flow k in which flow n (!=k) also suffers packet loss.</t>
	<t>synchronization rate (Li): proportion of the total loss events at
which i sees a packet loss.</t>
	<t>global synchronization rate (Rl): proportion of flows losing
packets during loss event l.</t>
        </list></t>-->	

	</section>
	
        <section anchor="subsec:e2e_metrics_goodput" title="Goodput">
        <t>The goodput has been defined in section 3.17 of <xref
target="RFC2647"> </xref> as the number of bits per unit of time
forwarded to the correct destination interface, minus any bits lost or
retransmitted. This definition implies that the test setup needs to be
qualified to ensure that it is not generating losses on its own.</t>
          <t>Measuring the end-to-end goodput provides an appreciation
of how well an AQM scheme improves transport and application
performance. The measured end-to-end goodput is linked to the
dropping/marking policy of the AQM scheme -- e.g., the fewer the
number of packet drops, the fewer packets need retransmission,
minimizing the impact of AQM on transport and application performance.
Additionally, an AQM scheme may resort to Explicit Congestion
Notification (ECN) marking as an initial means to control delay.
Again, marking packets instead of dropping them reduces the number of
packet retransmissions and increases goodput. End-to-end goodput
values help to evaluate the effectiveness of an AQM scheme in
minimizing packet drops that impact application performance
and to estimate how well the AQM scheme works with ECN.</t>
            <!-- Additionally, an AQM scheme may resort to Explicit
Congestion Notification (ECN) marking as an initial means to control
delay. Again, marking packets instead of dropping them reduces number
of packet retransmissions and increases goodput. Overall, end-to-end
goodput values help evaluate the AQM scheme's effectiveness in
minimizing packet drops that impact application performance and
estimate how well the AQM scheme works with ECN. </t> -->
        <!-- <t>If scheduling comes into play, a measure of how
individual queues are serviced may be necessary: the scheduling
introduced on top of the AQM may starve some flows and boost others.
The utilization of the link does not cover this, as the utilization
would be the same, whereas the goodput lets the tester see if some
flows are starved or not.</t> -->
        <!--<t>The guidelines advice that the tester SHOULD determine
the minimum, average and maximum measurements of the goodput and the
coefficient of variation for the average value as well.</t>-->
        <t>The measurement of the goodput allows the tester to evaluate
to what extent an AQM is able to maintain a high bottleneck
utilization. This metric should also be obtained frequently during an
experiment, as the long-term goodput is relevant for steady-state
scenarios only and may not necessarily reflect how the introduction of
an AQM actually impacts the link utilization during a certain
period of time. Fluctuations in the values obtained from these
measurements may depend on factors other than the introduction of an
AQM, such as link layer losses due to external noise or corruption,
fluctuating bandwidths (802.11 WLANs), heavy congestion levels or the
transport layer's rate reduction by its congestion control mechanism.</t>
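        <t>As a non-normative illustration, the following Python sketch
computes per-interval goodput samples from a trace of non-duplicate
data delivered to the correct destination; the trace format is an
assumption made for the example only.</t>
        <figure><artwork><![CDATA[
def goodput_per_interval(deliveries, interval_s):
    """deliveries: list of (timestamp_s, payload_bytes) records for
    non-duplicate data delivered to the correct destination.
    Returns the goodput [bit/s] measured over each interval."""
    if not deliveries:
        return []
    end = max(t for t, _ in deliveries)
    n_bins = int(end // interval_s) + 1
    bits = [0] * n_bins
    for t, size in deliveries:
        bits[int(t // interval_s)] += 8 * size
    return [b / interval_s for b in bits]

# Three 1500-byte packets delivered within the first two seconds,
# measured over 1-second intervals:
samples = [(0.1, 1500), (0.4, 1500), (1.2, 1500)]
print(goodput_per_interval(samples, 1.0))  # -> [24000.0, 12000.0]
]]></artwork></figure>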
	</section>

        <section anchor="subsec:e2e_metrics_latency" title="Latency and jitter">
          <t>The latency, or the one-way delay metric, is discussed in
<xref target="RFC2679"> </xref>. There is a consensus on an adequate
metric for the jitter, that represents the one-way delay variations
for packets from the same flow: the Packet Delay Variation (PDV),
detailed in <xref target="RFC5481"> </xref>, serves well all use
cases.</t>
	<t>The end-to-end latency includes components other than just the
queuing delay, such as the signal processing delay, transmission delay
and the processing delay. Moreover, the jitter
   is caused by variations in queuing and processing delay (e.g.,
   scheduling effects). The introduction of an AQM scheme would impact
these metrics (end-to-end latency and jitter) and therefore they
should be considered in the end-to-end evaluation of performance.</t>
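	<t>As a non-normative illustration, the one-way delay and a PDV-style
variation metric can be derived from matched send/receive timestamps as
sketched below in Python; see <xref target="RFC5481"> </xref> for the
formal definition of PDV.</t>
	<figure><artwork><![CDATA[
def one_way_delays(send_times_s, recv_times_s):
    """Per-packet one-way delay from matched send/receive timestamps."""
    return [r - s for s, r in zip(send_times_s, recv_times_s)]

def packet_delay_variation(delays_s):
    """Per-packet delay in excess of the minimum observed one-way
    delay (a PDV-style metric; see RFC 5481)."""
    d_min = min(delays_s)
    return [d - d_min for d in delays_s]

d = one_way_delays([0.0, 0.1, 0.2], [0.050, 0.165, 0.270])
print(packet_delay_variation(d))   # -> approx. [0.0, 0.015, 0.020]
]]></artwork></figure>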
	<!-- <t>The tester SHOULD determine the minimum, average and maximum
measurements for end-to-end latency and jitter, and also the
coefficient of variation for their average values.</t> -->
	</section>

        <section anchor="subsec:e2e_metrics_tradeoff"
title="Discussion on the trade-off between latency and goodput">
	<t>The metrics presented in this section may be considered as
explained in the rest of this document, in order to discuss and
quantify the trade-off between latency and goodput.</t>
        <!-- <t>This trade-off can also be illustrated with figures
following the recommendations of section 5 of <xref
target="HAYE2013"></xref>. Each of the end-to-end delay and the
goodput SHOULD be measured frequently for every fixed time
interval.</t> -->
	<t>With regards to the goodput, and in addition to the long-term
stationary goodput value, it is RECOMMENDED to take measurements every
multiple of the minimum RTT (minRTT) between A and B. It is suggested
to take measurements at least every K x minRTT (to smooth out the
fluctuations), with K=10. Higher values for K are encouraged whenever
it is more appropriate for the presentation of the results. The value
for K may depend on the network's path characteristics. The
measurement period MUST be disclosed for each experiment and when
results/values are compared across different AQM schemes, the
comparisons SHOULD use exactly the same measurement periods. With
regards to latency, it is RECOMMENDED to take the samples on a
per-packet basis whenever possible, depending on the features provided
by hardware/software and the impact of sampling itself on the hardware
performance. It is generally RECOMMENDED to provide at least 10
samples per RTT.</t>
	<t>From each of these sets of measurements, the cumulative
distribution function (CDF) of the considered metrics SHOULD be
computed. If the considered scenario introduces dynamically varying
parameters, the temporal evolution of the metrics could also be
generated. For each scenario, the following graph may be generated: the
x-axis shows the queuing delay (that is, the average per-packet delay
in excess of the minimum RTT), and the y-axis the goodput. Ellipses are
computed as detailed in <xref target="WINS2014"></xref>: "We take each individual
[...] run [...] as one point, and then compute the 1-epsilon elliptic
contour of the maximum-likelihood 2D Gaussian distribution that
explains the points. [...] we plot the median per-sender throughput
and queueing delay as a circle. [...] The orientation of an ellipse
represents the covariance between the throughput and delay measured
for the protocol." This graph provides part of a better understanding
of (1) the delay/goodput trade-off for
 a given congestion control mechanism <xref target="sec:perf"></xref>,
and (2) how the goodput and average queue delay vary as a function of
the traffic load <xref target="subsec:stability_tests"></xref>.</t>
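	<t>As a non-normative illustration of the graph described above, the
following Python sketch (assuming NumPy and SciPy are available)
computes the median point and the parameters of the 1-epsilon ellipse
from per-run (queuing delay, goodput) samples; the full methodology is
the one of <xref target="WINS2014"></xref>.</t>
	<figure><artwork><![CDATA[
import numpy as np
from scipy.stats import chi2

def delay_goodput_ellipse(delays, goodputs, epsilon=0.05):
    """Median (delay, goodput) point and the semi-axes/orientation of
    the (1 - epsilon) contour of the maximum-likelihood 2D Gaussian
    fitted to the samples."""
    pts = np.column_stack([delays, goodputs])
    median = np.median(pts, axis=0)
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    scale = chi2.ppf(1.0 - epsilon, df=2)       # contour level
    half_axes = np.sqrt(scale * eigvals)        # ellipse semi-axes
    major = eigvecs[:, 1]                       # major-axis direction
    angle_deg = np.degrees(np.arctan2(major[1], major[0]))
    return median, half_axes, angle_deg

delays_ms = [12.0, 15.0, 11.0, 18.0, 14.0]      # per-run queuing delay
goodputs_mbps = [9.1, 8.7, 9.3, 8.2, 8.9]       # per-run goodput
print(delay_goodput_ellipse(delays_ms, goodputs_mbps))
]]></artwork></figure>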
	
        <!-- <t>The end-to-end trade-off MUST be considered:</t>
        <t><list style="symbols">
          <t> end-to-end delay vs. goodput: the x-axis shows the
end-to-end delay and the y-axis the average goodput;</t>
	  <t>drop rate vs. end-to-end delay: the x-axis shows the end-to-end
delay and the y-axis the drop rate.</t>
        </list></t>
          <t>Each of the end-to-end delay, goodput and drop
probability should be measured every second. From each of this sets of
measurements, the 10th and 90th percentile and the median value should
be computed. For each scenario case, an ellipse can be generated from
the measurement of the percentiles and a point for the median value
can be plotted.</t>
          <t>This pair of graphs provide part of a better
understanding (1) of the delay/goodput/drop-rate trade-off for a given
congestion control mechanism, and (2) of how the goodput and average
queue size vary as a function of the traffic load.</t> -->
	</section>

        </section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

	<section anchor="sec:discuss_setting" title="Generic setup for evaluations">
          <t>This section presents the topology that can be used for
each of the following scenarios and the corresponding notation, and
discusses various assumptions that have been made in the document.</t>

	  <section anchor="subsec:discuss_setting_topo_nota" title="Topology
and notations">
	  <figure anchor="fig:topology" title="Topology and notations">
            <artwork>

+---------+                                        +-----------+
|senders A|                                        |receivers B|
+---------+                                        +-----------+

+--------------+                                +--------------+
|traffic class1|                                |traffic class1|
|--------------|                                |--------------|
| SEN.Flow1.1 +---------+            +-----------+ REC.Flow1.1 |
|        +     |        |            |          |        +     |
|        |     |        |            |          |        |     |
|        +     |        |            |          |        +     |
| SEN.Flow1.X +-----+   |            |  +--------+ REC.Flow1.X |
+--------------+    |   |            |  |       +--------------+
     +            +-+---+---+     +--+--+---+            +
     |            |Router L |     |Router R |            |
     |            |---------|     |---------|            |
     |            | AQM     |     |         |            |
     |            | BuffSize|     | BuffSize|            |
     |            | (Bsize) +-----+ (Bsize) |            |
     |            +-----+--++     ++-+------+            |
     +                  |  |       | |                   +
+--------------+        |  |       | |          +--------------+
|traffic classN|        |  |       | |          |traffic classN|
|--------------|        |  |       | |          |--------------|
| SEN.FlowN.1 +---------+  |       | +-----------+ REC.FlowN.1 |
|        +     |           |       |            |        +     |
|        |     |           |       |            |        |     |
|        +     |           |       |            |        +     |
| SEN.FlowN.Y +------------+       +-------------+ REC.FlowN.Y |
+--------------+                                +--------------+
		</artwork>
	  </figure>
	  <t><xref target="fig:topology"></xref> is a generic topology where:</t>
	  <t><list style="symbols">
	    <t>senders with different traffic characteristics (i.e., traffic
profiles) can be introduced;</t>
	    <t>the timing of each flow could be different (i.e., when does
each flow start and stop);</t>
	    <t>each traffic profile can comprise a varying number of flows;</t>
	    <t>each link is characterized by a couple (one-way delay, capacity);</t>
	    <t>flows are generated at A and sent to B, sharing a bottleneck
(the link between routers L and R);</t>
            <t>the tester SHOULD consider both scenarios of asymmetric
and symmetric bottleneck links in terms of bandwidth. In the case of an
asymmetric link, the capacity from senders to receivers is higher than
the one from receivers to senders; the symmetric link scenario
provides a basic understanding of the operation of the AQM mechanism
whereas the asymmetric link scenario evaluates an AQM mechanism in a
more realistic setup;</t>
            <t>in asymmetric link scenarios, the tester SHOULD study
the bi-directional traffic between A and B (downlink and uplink) with
the AQM mechanism deployed on one direction only. The tester MAY
additionally consider a scenario with AQM mechanism being deployed on
both directions. In each scenario, the tester SHOULD investigate the
impact of drop policy of the AQM on TCP ACK packets and its impact on
the performance.</t>

          </list></t>
          <t>Although this topology may not perfectly reflect actual
topologies, the simple topology is commonly used in the world of
simulations and small testbeds. It can be considered as adequate to
evaluate AQM proposals, similarly to the topology proposed in <xref
target="I-D.irtf-iccrg-tcpeval"></xref>. Testers ought to pay
attention to the topology that has been used to evaluate an AQM scheme
when comparing this scheme with a newly proposed AQM scheme.</t>
	  </section>

          <section anchor="subsec:discuss_setting_buff_size"
title="Buffer size">
            <t>The size of the buffers should be carefully chosen, and
is to be set to the bandwidth-delay product; the bandwidth being the
bottleneck capacity and the delay the largest RTT in the considered
network. The size of the buffer can impact the AQM performance and is
a dimensioning parameter that will be considered when comparing AQM
proposals.</t>
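            <t>As a non-normative illustration, the bandwidth-delay
product rule above translates into the following Python sketch; the
parameter names are illustrative only.</t>
            <figure><artwork><![CDATA[
def bdp_buffer_size_bytes(bottleneck_capacity_bps, largest_rtt_s):
    """Bandwidth-delay product: the bottleneck capacity multiplied by
    the largest RTT in the considered network, expressed in bytes."""
    return bottleneck_capacity_bps * largest_rtt_s / 8.0

# Example: 10 Mbit/s bottleneck and 100 ms largest RTT -> 125000 bytes.
print(bdp_buffer_size_bytes(10000000, 0.100))
]]></artwork></figure>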
            <t> If a specific buffer size is required, the tester MUST
justify and detail the way the maximum queue size is set. Indeed, the
maximum size of the buffer may affect the AQM's performance and its
choice SHOULD be elaborated for a fair comparison between AQM
proposals. While comparing AQM schemes, the buffer size SHOULD remain
the same across the tests.</t>
          </section>

          <section anchor="subsec:discuss_setting_congestion_control"
title="Congestion controls">
          <t>This document considers running three different
congestion control algorithms between A and B:</t>
	  <t><list style="symbols">
              <t>Standard TCP congestion control: the base-line
congestion control is TCP NewReno with SACK, as explained in  <xref
target="RFC5681"> </xref>.</t>
              <t>Aggressive congestion controls: a base-line
congestion control for this category is TCP Cubic <xref
target="I-D.ietf-tcpm-cubic"> </xref>.</t>
              <t>Less-than Best Effort (LBE) congestion controls: an
LBE congestion control 'results in smaller bandwidth and/or delay
impact on standard TCP than standard TCP itself, when sharing a
bottleneck with it.' <xref target="RFC6297"> </xref></t>
          </list></t>
          <t>Other transport congestion controls can OPTIONALLY be
evaluated in addition. Recent transport layer protocols are not
mentioned in the following sections, for the sake of simplicity.</t>
          </section>
	</section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

        <section anchor="sec:discussion" title="Methodology, Metrics,
AQM Comparisons, Packet Sizes, Scheduling and ECN">
          <section anchor="subsec:discussion_methodology" title="Methodology">

	<t>One key objective behind formulating the guidelines is to help
ascertain whether a specific AQM is not only better than drop-tail
(with BDP-sized buffer) but also safe to deploy. Testers therefore
need to provide a reference document for their proposal discussing
performance and deployment compared to those of drop-tail.</t>

        <t>A description of each test setup SHOULD be detailed to
allow this test to be compared with other tests. This also
        allows others to replicate the tests if needed. This test setup SHOULD
        detail software and hardware versions. The tester could make its data
        available.</t>

            <t>The proposals SHOULD be evaluated on real-life systems,
or they MAY be evaluated with event-driven simulations (such as ns-2,
ns-3, OMNET, etc). The proposed scenarios are not bound to a
particular evaluation toolset.</t>
            <t>The tester is encouraged to make the detailed test
setup and the results publicly available.</t>

          </section>

          <section anchor="subsec:discussion_metrics" title="Comments
on metrics measurement">
            <t>The document presents the end-to-end metrics that ought
to be used to evaluate the trade-off between latency and goodput in
<xref target="sec:e2e_metrics"></xref>. In addition to the end-to-end
metrics, the queue-level metrics (normally collected at the device
operating the AQM) provide a better understanding of the AQM behavior
under study and the impact of its internal parameters. Whenever it is
possible (e.g., depending on the features provided by the
hardware/software), these guidelines advise considering queue-level
metrics, such as link utilization, queuing delay, queue size or packet
drop/mark statistics in addition to the AQM-specific parameters.
However, the evaluation MUST be primarily based on externally observed
end-to-end metrics.</t>

            <t>These guidelines do not aim to detail how these metrics
can be measured, since this is expected to depend on the
        evaluation toolset.</t>
            <!--NK: I am not sure whether we should refer to IPPM or not-->
          </section>

          <section anchor="subsec:discussion_comp_aqm"
title="Comparing AQM schemes">
            <t>This document recognizes that these guidelines may be
used for comparing AQM schemes.</t>
            <t>AQM schemes need to be compared against both
performance and deployment categories. In addition, this section
details how best to achieve a fair comparison of AQM schemes by
avoiding certain pitfalls.</t>
            <section anchor="subsubsec:discussion_comp_aqm_perf"
title="Performance comparison">
              <t>AQM schemes MUST be compared against all the generic
scenarios presented in this memo. AQM schemes MAY be compared for
specific network environments such as data centers, home networks,
etc. If an AQM scheme has parameter(s) that were externally tuned for
optimization or other purposes, these values MUST be disclosed.</t>
              <t>AQM schemes belong to different varieties such as
queue-length based schemes (e.g., RED) or queueing-delay based schemes
(e.g., CoDel, PIE). AQM schemes expose different control knobs
associated with different semantics. For example, while both PIE and
CoDel are queueing-delay based schemes and each expose a knob to
control the queueing delay -- PIE's "queueing delay reference" vs.
CoDel's "queueing delay target", the two tuning parameters of the
          two schemes have different semantics, resulting in different control
          points. Such differences in AQM schemes can be easily
overlooked while making comparisons.</t>
              <t>This document RECOMMENDS the following procedures for
a fair performance comparison between the AQM schemes: </t>
              <t> <list style="numbers">
                  <t>comparable control parameters and comparable
input values: carefully identify the set of parameters that control
similar behavior between the two AQM schemes and ensure these
parameters have comparable input values. For example, to compare how
well a queue-length based AQM scheme controls queueing delay vs. a
queueing-delay based AQM scheme, a tester can identify the parameters
of the schemes that control queue delay and ensure that their input
values are comparable. Similarly, to compare how well two AQM schemes
accommodate packet bursts, the tester can identify burst-related
control parameters and ensure they are configured with similar values.
Additionally, it would be preferable if an AQM proposal listed such
parameters and discussed how each relates to network characteristics
such as capacity, average RTT etc. </t>
                  <t>compare over a range of input configurations:
there could be situations when the set of control parameters that
affect a specific behavior have different semantics between the two
AQM schemes. As mentioned above, PIE has tuning parameters to control
queue delay that have different semantics from those used in CoDel.
In such situations, these schemes need to be compared over a range of
input configurations. For example, compare PIE vs. CoDel over the
range of target delay input configurations.</t>
              </list> </t>
            </section>
            <section anchor="subsec:discussion_comp_aqm_deploy"
title="Deployment comparison">
              <t>AQM schemes MUST be compared against deployment
criteria such as the parameter sensitivity (<xref
target="subsec:stability_param_sensitivity"></xref>), auto-tuning
(<xref target="sec:control_knobs"></xref>) or implementation cost
(<xref target="sec:imple_cost"></xref>).</t>
            </section>
          </section>

          <section anchor="subsec:discussion_packet_size"
title="Packet sizes and congestion notification">
            <t>An AQM scheme may consider packet sizes while
generating congestion signals.  <xref target="RFC7141"></xref>
discusses the motivations behind this. For example, control packets
such as DNS requests/responses, TCP SYNs/ACKs are small, but their
loss can severely impact the application performance. An AQM scheme
may therefore be biased towards small packets by dropping them with
smaller probability compared to larger packets. However, such an AQM
scheme is unfair to data senders generating larger packets. Data
senders, malicious or otherwise, are motivated to take advantage of
such an AQM scheme by transmitting smaller packets, which could result
in unsafe deployments and unhealthy transport and/or application
designs.</t>
        <t>An AQM scheme SHOULD adhere to the recommendations outlined in
        <xref target="RFC7141"></xref>, and SHOULD NOT provide undue advantage
        to flows with smaller packets <xref
        target="RFC7567"></xref>.</t> 	
        </section>

        <section anchor="sec:interaction_ecn" title="Interaction with ECN">
	  <t>Deployed AQM algorithms SHOULD implement Explicit Congestion
	Notification (ECN) as well as loss to signal congestion to endpoints <xref
	target="RFC7567"></xref>. ECN <xref target="RFC3168"></xref> is an alternative
	that allows AQM schemes to signal receivers about network congestion that does
	not use packet drop. The benefits of providing ECN support for an AQM scheme
	are described in <xref target="WELZ2015"></xref>.  Section 3 of <xref
	target="WELZ2015"></xref> describes expected operation of routers enabling ECN.
	AQM schemes SHOULD NOT drop or remark packets solely because the ECT(0) or
	ECT(1) codepoints are used, and when ECN-capable SHOULD set a CE-mark on
	ECN-capable packets in the presence of incipient congestion.</t>

	  <t>If the tested AQM scheme can support ECN <xref
	target="RFC7567"></xref>, the testers MUST discuss and describe the support of
	ECN. Since these guidelines can be used to evaluate the performance
of the tested
	AQM with and without ECN markings, they could also be used to
	quantify the benefit of enabling ECN.</t>

        </section>

         <section anchor="sec:interaction_scheduling"
title="Interaction with Scheduling">
          <t>A network device may use per-flow or per-class queuing with a
          scheduling algorithm to either prioritize certain applications or
          classes of traffic, limit the rate of transmission, or to provide
          isolation between different traffic flows within a common class <xref
          target="RFC7567"></xref>.</t>

	  <t>The scheduling and the AQM conjointly impact the end-to-end
	performance. Therefore, an AQM proposal MUST discuss the feasibility
	of adding scheduling combined with the AQM algorithm. As an instance,
	this discussion MAY explain whether the dropping policy is applied when
	packets are being enqueued or dequeued.</t>

	  <t>This document does not propose guidelines to assess the
	performance of scheduling algorithms. Indeed, as opposed to characterizing AQM
	schemes, which is related to their capacity to control the queuing delay in a
	queue, characterizing scheduling schemes is related to the scheduling itself
	and its interaction with the AQM scheme. As one example, the scheduler may
	create sub-queues and the AQM scheme may be applied on each of the sub-queues,
	and/or the AQM could be applied on the whole queue. Also, schedulers, such as
	FQ-CoDel <xref target="HOEI2015"></xref> or FavorQueue <xref
	target="ANEL2014"></xref>, might introduce flow prioritization. In these cases,
	specific scenarios should be proposed to ascertain that these scheduler schemes
	not only help in tackling the bufferbloat, but also are robust under a wide
	variety of operating conditions. This is out of the scope of this document, which
	focuses on dropping and/or marking AQM schemes.</t>
        </section>

	</section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

        <section anchor="sec:perf" title="Transport Protocols">
          <!--<t>This section presents the set of scenarios that MUST
be considered to evaluate the performance of an AQM scheme and
quantify the trade-off between latency and goodput. For each selected
scenario, the metrics presented in <xref
target="sec:e2e_metrics"></xref> should be considered. While
presenting the performance of an AQM algorithm for the selected
scenarios, the tester MUST provide any parameter that had to be set
beforehand. Moreover, the values for these parameters MUST be
explained and justified as detailed in <xref
target="subsec:stability_param_sensitivity"></xref>.</t>-->
          <!--<t>The tester SHOULD compare its proposal's performance
and deployment with those of drop-tail: basically, these guidelines
provide the tools to understand the cost (in terms of deployment)
versus the potential gain in performance of the introduction of the
proposed scheme.</t>-->
          <!--<t>This section does not present a large set of
scenarios to evaluate the performance of an AQM in specific contexts,
such as Wi-Fi, rural broadband or data-centers. These guidelines
provide generic scenarios for performance evaluations that MUST be
considered.</t>-->
            <t>Network and end-devices need to be configured with a
reasonable amount of buffer space to absorb transient bursts. In some
situations, network providers tend to configure devices with large
buffers to avoid packet drops triggered by a full buffer and to
maximize the link utilization for standard loss-based TCP traffic.</t>

      <t>AQM algorithms are often evaluated by considering Transmission Control
      Protocol (TCP) <xref target="RFC0793"></xref> with a limited number of
      applications. TCP is a widely deployed transport. It fills up
available buffers
      until a sender transferring a bulk flow with TCP receives a signal
      (packet drop) that reduces the sending rate. The larger the
buffer, the higher
      the buffer occupancy, and therefore the queuing delay. An
efficient AQM scheme
      sends out early congestion signals to TCP to bring the queuing delay under
      control.</t>

      <t>Not all endpoints (or applications) using TCP use the same flavor of
      TCP. A variety of senders generate different classes of traffic
      which may not react to congestion signals (aka non-responsive
      flows <xref target="RFC7567"></xref>) or may not reduce their
      sending rate as expected (aka Transport Flows that are less
      responsive than TCP <xref target="RFC7567"></xref>, also called
      "aggressive flows").
      In these cases, AQM schemes seek to control the queuing delay.</t>
	
            <t>This section provides guidelines to assess the
performance of an AQM proposal for various traffic profiles --
different types of senders (with different TCP congestion control
variants, unresponsive, aggressive).</t>
            <!--
                 <section
anchor="subsubsec:eval_generic_traff_profil_topo" title="Topology
Description">
              <t>The topology is presented in <xref
target="fig:topology"></xref>. In this section, the capacities of the
links MUST be set to 10Mbps and the RTT between the senders and the
receivers to 100ms.</t>
            </section>
            -->
            <section
anchor="subsubsec:eval_generic_traff_profil_single_TCP"
title="TCP-friendly sender">

              <section
anchor="subsubsubsec:eval_generic_traff_profil_same_init_cwnd"
title="TCP-friendly sender with the same initial congestion window">
              <t>This scenario helps to evaluate how an AQM scheme
reacts to a TCP-friendly transport sender. A single long-lived, non
application-limited, TCP NewReno flow, with an Initial congestion
Window (IW) set to 3 packets, transfers data between sender A and
receiver B.<!-- during 100s.--> Other TCP-friendly congestion control
schemes, such as TCP-friendly rate control <xref target="RFC5348">
</xref>, MAY also be considered.</t>
              <!-- <t>For each TCP-friendly transport considered, the
graphs described in <xref target="subsec:e2e_metrics_tradeoff"></xref>
MUST be generated.</t> -->
              <t>For each TCP-friendly transport considered, the graph
described in <xref target="subsec:e2e_metrics_tradeoff"></xref> could
be generated.</t>
              <!--We expect that an AQM proposal exhibit similar
behavior for all the TCP-friendly transports considered.</t>-->
            </section>

            <section
anchor="subsubsubsec:eval_generic_traff_profil_init_cwnd"
title="TCP-friendly sender with different initial congestion windows">
              <t>This scenario can be used to evaluate how an AQM
scheme adapts to a traffic mix consisting of TCP flows with different
values of the IW.</t>
              <!-- <t><list style="symbols">
                  <t>TCP: Cubic and/or NewReno;</t>
                  <t>IW: 3 or 10 packets.</t>
              </list></t> -->
              <t>For this scenario, two types of flows MUST be
generated between sender A and receiver B:</t>
              <t><list style="symbols">
                  <t>A single long-lived non application-limited TCP
NewReno flow;</t>
                  <t>A single application-limited TCP NewReno flow,
with an IW set to 3 or 10 packets. The size of the data transferred
must be strictly higher than 10 packets and should be lower than 100
packets.</t>
              </list></t>
              <t>The transmission of the non application-limited flow
must start before the transmission of the application-limited flow, and
the application-limited flow must start only after the steady state has
been reached by the non application-limited flow.</t>
              <t>For each of these scenarios, the graph described in
<xref target="subsec:e2e_metrics_tradeoff"></xref> could be generated
for each class of traffic (application-limited and non
application-limited). The completion time of the application-limited
TCP flow could be measured.</t>
            </section>
            </section>

            <section
anchor="subsubsec:eval_generic_traff_profil_aggress" title="Aggressive
transport sender">
              <t>This scenario helps testers to evaluate how an AQM
scheme reacts to a transport sender that is more aggressive than a
single TCP-friendly sender. We define 'aggressiveness' as a higher
increase factor than standard upon a successful transmission and/or a
lower than standard decrease factor upon an unsuccessful transmission
(e.g., in the case of congestion controls based on the Additive-Increase
Multiplicative-Decrease (AIMD) principle, larger AI and/or MD
factors).

A single long-lived, non application-limited, TCP Cubic flow transfers
data between sender A and receiver B.<!-- during 100s--> Other
aggressive congestion control schemes MAY also be considered. </t>
              <!-- <t>For each flavor of aggressive transport, the
graphs described in <xref target="subsec:e2e_metrics_tradeoff"></xref>
MUST be generated.</t> -->
              <t>For each flavor of aggressive transport, the graph
described in <xref target="subsec:e2e_metrics_tradeoff"></xref> could
be generated.</t>
            </section>

            <section
anchor="subsubsec:eval_generic_traff_profil_unresp"
title="Unresponsive transport sender">
              <t>This scenario helps testers to evaluate how an AQM
scheme reacts to
        a transport sender that is less responsive than TCP. Note that faulty
        transport implementations on an end host and/or faulty network
        elements en-route that "hide" congestion signals in packet headers
        <xref target="RFC7567"></xref> may also lead to a
        similar situation, such that the AQM scheme needs to adapt to
        unresponsive traffic. To this end, these guidelines propose the two
        following scenarios.</t>
              <t>The first scenario can be used to evaluate queue build up. It
        considers unresponsive flow(s) whose sending rate is greater than the
        bottleneck link capacity between routers L and R. This scenario
        consists of a long-lived, non application-limited UDP flow that
        transmits data <!--with an aggregate rate of 12Mbps--> between sender A and
        receiver B.<!--during 100s.--> Graphs described in <xref
        target="subsec:e2e_metrics_tradeoff"></xref> <!--MUST--> could be
        generated.</t>
              <t>The second scenario can be used to evaluate if the
AQM scheme is
        able to keep the responsive fraction under control. This scenario
        considers a mixture of TCP-friendly and unresponsive traffic. It
        consists of a long-lived UDP flow from an unresponsive application
        and a single long-lived, non application-limited (unlimited data
        available to the transport sender from the application layer)
        TCP NewReno flow that transmit
        data between sender A and receiver B. As opposed to the first
        scenario, the rate of the UDP traffic should not be greater than the
        bottleneck capacity, and should be higher than half of the
        bottleneck capacity. For each type of traffic, the graph described in
        <xref target="subsec:e2e_metrics_tradeoff"></xref> could be
        generated.</t>
            </section>

            <section
anchor="subsubsec:eval_generic_traff_profil_delay" title="Less-than
Best Effort transport sender">
            <t>This scenario helps to evaluate how an AQM scheme
reacts to LBE congestion controls that 'results in smaller bandwidth
and/or delay impact on standard TCP than standard TCP itself, when
sharing a bottleneck with it.' <xref target="RFC6297"> </xref>. The
potential fateful interaction when AQM and LBE techniques are combined
has been shown in <xref target="GONG2014"></xref>; this scenario helps
to evaluate whether the coexistence of the proposed AQM and LBE
techniques may be possible.</t>
            <t>A single long-lived non application-limited TCP NewReno
flow transfers data between sender A and receiver B. Other
TCP-friendly congestion control schemes MAY also be considered. Single
long-lived non application-limited LEDBAT <xref
target="RFC6817"></xref> flows transfer data between sender A and
receiver B. It is recommended to set the target delay and gain values
of LEDBAT to 5 ms and 10, respectively <xref target="TRAN2014"></xref>.
Other LBE congestion control schemes, such as those listed in <xref
target="RFC6297"></xref>, MAY also be considered.</t>
              <t>For each of the TCP-friendly and LBE transports, the
graph described in <xref target="subsec:e2e_metrics_tradeoff"></xref>
could be generated.</t>
            </section>
        </section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

        <section anchor="sec:rtt_fairness" title="Round Trip Time Fairness">
          <section anchor="subsec:rtt_fairness_motivation" title="Motivation">
            <t>An AQM scheme's congestion signals (via drops or ECN
marks) must reach the transport sender so that a responsive sender can
initiate its congestion control mechanism and adjust the sending rate.
This procedure is thus dependent on the end-to-end path RTT. When the
RTT varies, the onset of congestion control is impacted, which in turn
impacts the ability of an AQM scheme to control the queue. It
        is therefore important to assess the AQM schemes for a set of
RTTs between A and B
        (e.g., from 5 ms to 200 ms).</t>
            <t>The asymmetry in intrinsic RTT between various paths
sharing the same bottleneck SHOULD be considered so that the fairness
between the flows can be discussed: in this scenario, a flow traversing
a shorter RTT path may react faster to congestion and recover faster
from it than another flow on a longer RTT path. The introduction of AQM
schemes may potentially improve this type of fairness.</t>
            <t>Conversely, introducing an AQM scheme may cause unfairness
between the flows, even if the RTTs are identical. This potential
unfairness SHOULD be investigated as well.</t>
          </section>
          <section anchor="subsec:rtt_fairness_tests" title="Recommended tests">
            <t>The RECOMMENDED topology is detailed in <xref
target="fig:topology"></xref>.</t>
            <!-- <t><list style="symbols"> -->
            <t>To evaluate the RTT fairness, each run uses two flows
divided into two categories. For Category I, the RTT between sender A
and receiver B SHOULD be 100 ms. For Category II, the RTT between sender
A and receiver B should be in [5 ms; 560 ms]. The maximum value for the
RTT represents the RTT of a satellite link that, according to Section 2
of <xref target="RFC2488"></xref>, should be at least 558 ms.</t>
                <!-- <t>To evaluate the impact of the RTT value on the
AQM performance and the intra-protocol fairness (the fairness for the
flows using the same paths/congestion control), for each run, two
flows (Flow1 and Flow2) should be introduced. For each experiment, the
set of RTT SHOULD be the same for the two flows and in
[5ms;560ms].</t>
            </list></t> -->
            <t>A set of evaluated flows MUST use the same congestion
control algorithm: all the generated flows could be single long-lived
non application-limited TCP NewReno flows.</t>
          </section>
          <section anchor="subsubsec:rtt_fariness_metrics"
title="Metrics to evaluate the RTT fairness">
            <!-- <t>The outputs that MUST be measured are:</t> -->
            <!-- <t><list style="symbols"> -->
            <t>The outputs that MUST be measured are: (1) the
cumulative average goodput of the flow from Category I, goodput_Cat_I
(<xref target="subsec:e2e_metrics_goodput"></xref>); (2) the
cumulative average goodput of the flow from Category II,
goodput_Cat_II (<xref target="subsec:e2e_metrics_goodput"></xref>);
(3) the ratio goodput_Cat_II/goodput_Cat_I; (4) the average packet
drop rate for each category (<xref
target="subsec:e2e_metrics_loss"></xref>).</t>
              <!--  <t>for the intra-protocol RTT fairness: (1) the
cumulative average goodput of the two flows (<xref
target="subsec:e2e_metrics_goodput"></xref>); (2) the average packet
drop rate for the two flows (<xref
target="subsec:e2e_metrics_loss"></xref>).</t>
              </list></t> -->
          </section>
        </section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

        <section anchor="sec:burst_absorption" title="Burst Absorption">
      <t>"AQM mechanisms need to control the overall queue sizes, to ensure
      that arriving bursts can be accommodated without dropping packets" <xref
      target="RFC7567"></xref>.</t>

          <section anchor="subsec:burst_absorption_motivation"
title="Motivation">
        <t>An AQM scheme can face bursts of packet arrivals due to
        various reasons. Dropping one or more packets from a burst can result
        in performance penalties for the corresponding flows, since dropped
        packets have to be retransmitted. Performance penalties can result in
        failing to meet SLAs and be a disincentive to AQM adoption.</t>

        <t>The ability to accommodate bursts translates to larger queue length
        and hence more queuing delay. On the one hand, it is important that an
        AQM scheme quickly brings bursty traffic under control. On the other
        hand, a peak in the packet drop rates to bring a packet burst quickly
        under control could result in multiple drops per flow and severely
        impact transport and application performance. Therefore, an AQM scheme
        ought to bring bursts under control by balancing both aspects -- (1)
        queuing delay spikes are minimized and (2) performance penalties for
        ongoing flows in terms of packet drops are minimized.</t>

        <t>An AQM scheme that maintains short queues allows some remaining
        space in the buffer for bursts of arriving packets. The tolerance to
        bursts of packets depends upon the number of packets in the queue,
        which is directly linked to the AQM algorithm. Moreover, an AQM
        scheme may implement a feature controlling the maximum size of
        accepted bursts, that can depend on the buffer occupancy or the
        currently estimated queuing delay. The impact of the buffer size on
        the burst allowance may be evaluated.</t>
          </section>

          <section anchor="subsec:burst_absorption_tests"
title="Recommended tests">
            <!-- <t>The topology is presented in <xref
target="fig:topology"></xref>. For this scenario, the capacities of
the links MUST be set to 10Mbps and the RTT between senders and
receivers to 100ms.</t> -->

            <!--
            <t>The required tests presented in this section can be
divided into two scenarios: generic bursty traffic and realistic
bursty traffic. One of this scenario MUST be considered.</t>

            <section
anchor="subsubsec:burst_absorption_tests_generic_burst" title="Generic
bursty traffic">
              <t>For this scenario, the three following traffic MUST
be generated from sender A to receiver B in parallel:</t>
              <t><list style="symbols">
                  <t>One Constant bit rate UDP traffic: 1Mbps UDP flow;</t>
                  <t>One TCP bulk transfer: repeating 5MB file transmission;</t>
                  <t>Burst of packets: size of the burst from 5 to
MAX_BUFFER_SIZE packets.</t>
              </list></t>
            </section>

            <section
anchor="subsubsec:burst_absorption_tests_realistic_bursty"
title="Realistic bursty traffic">
            -->
              <t>For this scenario, the tester MUST evaluate how the AQM
performs with the following traffic generated from sender A to
receiver B:</t>
              <t><list style="symbols">
                  <t>Web traffic with IW10;</t>
                  <t>Bursty video frames;</t>
                  <t>Constant Bit Rate (CBR) UDP traffic;</t>
                  <t>A single non application-limited bulk TCP flow as
background traffic.</t>
              </list></t>
              <t><xref target="fig:burst_traffic"></xref> presents the
various cases for the traffic that MUST be generated between sender A
and receiver B.</t>
              <figure anchor="fig:burst_traffic" title="Bursty traffic
scenarios">
		<artwork>
+-------------------------------------------------+
|Case| Traffic Type                               |
|    +-----+------------+----+--------------------+
|    |Video|Web  (IW 10)| CBR| Bulk TCP Traffic   |
+----+-----+------------+----+--------------------+
|I   |  0  |     1      |  1 |         0          |
+----+-----+------------+----+--------------------+
|II  |  0  |     1      |  1 |         1          |
+----+-----+------------+----+--------------------+
|III |  1  |     1      |  1 |         0          |
+----+-----+------------+----+--------------------+
|IV  |  1  |     1      |  1 |         1          |
+----+-----+------------+----+--------------------+
		</artwork>
              </figure>

            <!-- <section
anchor="subsubsec:burst_absorption_tests_metrics" title="Metrics to
evaluate the burst absorption capacity"> -->
	    <t>A new web page download could start after the previous web
page download is finished. Each web page could be composed of at least
50 objects, and the size of each object should be at least 1 kB. Six
parallel TCP connections SHOULD be generated to download the objects,
each parallel connection having an initial congestion window set to 10
packets.</t>
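	    <t>As a non-normative illustration of this web traffic pattern,
the sketch below builds one web page of 50 objects of at least 1 kB each
and distributes the objects over 6 parallel connections; the object size
distribution and the round-robin assignment are assumptions made for the
example only.</t>
              <figure>
                <artwork><![CDATA[
# Illustrative sketch (non-normative): one web page download pattern as
# described above: at least 50 objects of at least 1 kB each, fetched
# over 6 parallel TCP connections with an initial window of 10 packets.
import random

NUM_OBJECTS = 50          # minimum number of objects per page
MIN_OBJECT_SIZE = 1000    # at least 1 kB per object
NUM_CONNECTIONS = 6       # parallel TCP connections
INITIAL_CWND = 10         # packets (IW10), configured on the sender

def build_page():
    # Object sizes: at least 1 kB; the spread above 1 kB is an assumption.
    return [MIN_OBJECT_SIZE + random.randint(0, 9000)
            for _ in range(NUM_OBJECTS)]

def assign_to_connections(objects):
    # Round-robin assignment of objects to the parallel connections
    # (the assignment policy is an assumption of this example).
    conns = [[] for _ in range(NUM_CONNECTIONS)]
    for i, size in enumerate(objects):
        conns[i % NUM_CONNECTIONS].append(size)
    return conns

page = build_page()
print("bytes per connection:", [sum(c) for c in assign_to_connections(page)])
]]></artwork>
              </figure>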
              <t>For each of these scenarios, the graph described in
<xref target="subsec:e2e_metrics_tradeoff"></xref> could be generated
for each application. Metrics such as end-to-end latency, jitter, and flow
completion time MAY be generated. The details of the frame generation for
the bursty video traffic and of the chosen web traffic pattern, as well as
their presentation, are left to the testers.</t>
              <!--</section>-->
            <!-- </section> -->
            </section>
        </section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

        <section anchor="sec:stability" title="Stability">
          <section anchor="subsec:stability_motivation" title="Motivation">
            <t>The safety of an AQM scheme is directly related to its
stability under varying operating conditions such as varying traffic
profiles and fluctuating network conditions. Since operating
conditions can vary often, the AQM needs to remain stable under these
conditions without the need for additional external tuning.</t>

            <t>Network devices can experience varying operating
conditions depending on factors such as time of the day, deployment
scenario, etc. For example:</t>
            <t><list style="symbols">
                <t>Traffic and congestion levels are higher during
peak hours than off-peak hours.</t>
                <t>In the presence of a scheduler, the draining rate of a queue
            can vary depending on the occupancy of other queues: a low load on
            a high priority queue implies a higher draining rate for the lower
            priority queues.</t>
                <t>The capacity available can vary over time
           (e.g., a lossy channel, a link supporting traffic in a higher
            diffserv class).</t>
            </list></t>
            <t>Whether or not the target context is a stable environment,
the ability of an AQM scheme to maintain its control over the queuing
delay and buffer occupancy can be challenged. This document proposes
guidelines to assess the behavior of AQM schemes under varying
congestion levels and varying draining rates.</t>
          </section>

          <section anchor="subsec:stability_tests" title="Recommended tests">
	    <t>Note that the traffic profiles explained below comprise non
application-limited TCP flows. For each of the scenarios below, the
graphs described in <xref target="subsec:e2e_metrics_tradeoff"></xref>
SHOULD be generated, and the goodput of the various flows should be
cumulated. For <xref
target="subsubsec:stability_tests_net_varying"></xref> and <xref
target="subsubsec:stability_tests_vary_dr_rate"></xref> they SHOULD
incorporate the results in per-phase basis as well.</t>
	     <t>Wherever the notion of time has explicitly mentioned in this
subsection, time 0 starts from the moment all TCP flows have already
reached their congestion avoidance phase.</t>
            <!--  <t>The topology is presented in <xref
target="fig:topology"></xref>. For this scenario, the capacities of
the links MUST be set to 10Mbps and the RTT between senders and
receivers to 100ms.</t> -->

            <section anchor="subsubsec:def_cong_level"
title="Definition of the congestion Level">
              <t>In these guidelines, the congestion levels are
represented by the projected packet drop rate, had a drop-tail queue
been chosen instead of an AQM scheme. When the bottleneck is shared
among non application-limited TCP flows, l_r, the loss rate projection,
can be expressed as a function of N, the number of bulk TCP flows, and
S, the sum of the bandwidth-delay product and the maximum buffer size,
both expressed in packets, based on Eq. 3 of <xref
target="MORR2000"></xref>:</t>
              <t>l_r = 0.76 * N^2 / S^2 </t>
              <t>N = S * sqrt(1/0.76) * sqrt (l_r) </t>
	      <t>These guidelines use the loss rate to define the different
congestion levels, but they do not stipulate that, in other
circumstances, measuring the congestion level gives an accurate
estimation of the loss rate, or vice versa.</t>
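	      <t>As an illustration only (and not a requirement of these
guidelines), the sketch below computes the number of bulk TCP flows
corresponding to a target projected loss rate, following the equations
above; the value chosen for S is a purely hypothetical example.</t>
              <figure>
                <artwork><![CDATA[
# Illustrative sketch (non-normative): number of bulk TCP flows N
# needed to reach a target projected drop rate l_r for a drop-tail
# queue, following l_r = 0.76 * N^2 / S^2, i.e., N = S * sqrt(l_r/0.76).
import math

def flows_for_loss_rate(l_r, S):
    """S: bandwidth-delay product plus maximum buffer size, in packets."""
    return round(S * math.sqrt(l_r / 0.76))

# Hypothetical example: S = 208 packets (e.g., 10 Mbps x 100 ms plus a
# 125-packet buffer, with 1500-byte packets).
S = 208
for label, l_r in [("mild", 0.001), ("medium", 0.005), ("heavy", 0.01)]:
    print(label, flows_for_loss_rate(l_r, S))

# The coefficients 0.036, 0.081, and 0.114 used in the next subsections
# are sqrt(l_r / 0.76) for l_r = 0.1%, 0.5%, and 1%, respectively.
]]></artwork>
              </figure>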
            </section>

            <section anchor="subsubsec:stability_tests_net_mild"
title="Mild congestion">
              <t>This scenario can be used to evaluate how an AQM
scheme reacts to a light load of incoming traffic resulting in mild
congestion -- packet drop rates around 0.1%. The number of bulk flows
required to achieve this congestion level, N_mild, is then:</t> <!--
The scenario consists of 4-5 TCP NewReno flows between sender A and
receiver B. All TCP flows start at random times during the initial
second of the experiment. --><!-- during 100s.-->
              <t>N_mild = round(0.036*S)</t>
            </section>

            <section anchor="subsubsec:stability_tests_net_medium"
title="Medium congestion">
              <t>This scenario can be used to evaluate how an AQM
scheme reacts to incoming traffic resulting in medium congestion --
packet drop rates around 0.5%. The number of bulk flows required to
achieve this congestion level, N_med, is then:<!--The scenario
consists of 10-20 TCP NewReno flows between sender A and receiver B.
All TCP flows start at random times during the initial second of the
experiment.--><!-- during 100s.--></t>
              <t> N_med = round (0.081*S)</t>
            </section>

            <section anchor="subsubsec:stability_tests_net_heavy"
title="Heavy congestion">
              <t>This scenario can be used to evaluate how an AQM
scheme reacts to incoming traffic resulting in heavy congestion --
packet drop rates around 1%. The number of bulk flows required to
achieve this congestion level, N_heavy, is then: <!--The scenario
consists of 30-40 TCP NewReno flows between sender A and receiver B.
All TCP flows start at random times during the initial second of the
experiment. --><!-- during 100s.--></t>
              <t> N_heavy = round (0.114*S)</t>
            </section>

            <section anchor="subsubsec:stability_tests_net_varying"
title="Varying the congestion level">
              <t>This scenario can be used to evaluate how an AQM
scheme reacts to incoming traffic resulting in various levels of
congestion during the experiment. In this scenario, the congestion
level varies within a large time-scale. The following phases may be
considered: phase I - mild congestion during 0-20s; phase II - medium
congestion during 20-40s; phase III - heavy congestion during 40-60s;
phase I again, and so on. <!--The scenario consists of 30-40 TCP
NewReno flows between sender A and receiver B. All TCP flows start at
random times during the initial second of the experiment. --><!--
during 100s.--></t>
            </section>

            <section anchor="subsubsec:stability_tests_vary_dr_rate"
title="Varying available capacity">
              <t>This scenario can be used to help characterize how
the AQM behaves and adapts to bandwidth changes. The experiments are
not meant to reflect the exact conditions of Wi-Fi environments since
it is hard to design repetitive experiments or accurate simulations
for such scenarios.</t>
              <t>To emulate varying draining rates, the bottleneck
capacity between nodes 'Router L' and 'Router R' varies over the
course of the experiment as follows:</t>
              <t><list style="symbols">
                  <t>Experiment 1: the capacity varies between two
values within a large time-scale. As an example, the following phases
may be considered: phase I - 100Mbps during 0-20s; phase II - 10Mbps
during 20-40s; phase I again, and so on.</t>
                  <t>Experiment 2: the capacity varies between two
values within a short time-scale. As an example, the following phases
may be considered: phase I - 100Mbps during 0-100ms; phase II - 10Mbps
during 100-200ms; phase I again, and so on.</t>
              </list></t>
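              <t>A minimal, non-normative sketch of the phase alternation
of Experiment 1 is given below; the capacity-setting function is a
hypothetical placeholder, since the actual mechanism (e.g., a traffic
shaper on Router L or a simulator API) depends on the testbed.</t>
              <figure>
                <artwork><![CDATA[
# Illustrative sketch (non-normative): alternate the bottleneck capacity
# between two values on a fixed phase interval (Experiment 1 above).
import time

PHASE_DURATION_S = 20     # 20 s phases for the large time-scale case
RATES_MBPS = [100, 10]    # phase I: 100 Mbps, phase II: 10 Mbps

def set_bottleneck_capacity(rate_mbps):
    # Hypothetical placeholder: reconfigure the Router L - Router R link,
    # e.g., through a traffic shaper or the simulator's API.
    print("setting bottleneck capacity to", rate_mbps, "Mbps")

def run(num_phases):
    for phase in range(num_phases):
        set_bottleneck_capacity(RATES_MBPS[phase % 2])
        time.sleep(PHASE_DURATION_S)

# For Experiment 2, PHASE_DURATION_S would be 0.1 (100 ms phases).
run(num_phases=6)
]]></artwork>
              </figure>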
	      <t>The tester MAY choose a phase time-interval value different
from what is stated above, if the network's path conditions (such as
bandwidth-delay product) necessitate it. In this case, the choice of such
a time-interval value SHOULD be stated and elaborated.</t>
	      <t>The tester MAY additionally evaluate the two mentioned
scenarios (short-term and long-term capacity variations) during
and/or including the TCP slow-start phase.</t>
              <t>More realistic fluctuating capacity patterns MAY be
considered. The tester MAY choose to incorporate realistic scenarios
with regard to common fluctuations of bandwidth in state-of-the-art
technologies.</t>
              <t>The scenario consists of TCP NewReno flows between
sender A and receiver B.<!-- All TCP flows start at random times
during the initial second. Each TCP flow transfers a large file for a
period of 150s. --> To better assess the impact of draining rates on
the AQM behavior, the tester MUST compare the AQM's performance with
that of drop-tail and SHOULD
          provide a reference document for their proposal discussing
          performance and deployment compared to those of drop-tail.
Bursty traffic, such as that presented in <xref
target="subsec:burst_absorption_tests"></xref>, could also be
considered to assess the impact of varying available capacity on the
burst absorption of the AQM.</t>
            </section>
          </section>

          <section anchor="subsec:stability_param_sensitivity"
title="Parameter sensitivity and stability analysis">
            <t>The control law used by an AQM is the primary means by
which the queuing delay is controlled. Hence, understanding the control
law is critical to understanding the behavior of the AQM scheme. The
control law could include several input parameters whose values affect
the AQM scheme's output behavior and its stability. Additionally, AQM
schemes may auto-tune parameter values in order to maintain stability
under different network conditions (such as different congestion
levels, draining rates or network environments). The stability of
these auto-tuning techniques is also important to understand.</t>

        <t>Transports operating under the control of AQM experience the effect
        of multiple control loops that react over different timescales. It is
        therefore important that proposed AQM schemes are seen to be stable
        when they are deployed at multiple points of potential congestion
        along an Internet path. The pattern of congestion signals (loss or
        ECN-marking) arising from AQM methods also need to not adversely
        interact with the dynamics of the transport protocols that they
        control.</t>

        <t>AQM proposals SHOULD provide background material showing a
        control-theoretic analysis of the AQM control law and the input
        parameter space within which the control law operates as expected, or
        could discuss the stability of the control law in another way. For
        parameters that are auto-tuned, the material SHOULD include stability
        analysis of the auto-tuning mechanism(s) as well. Such analysis helps
        to better understand an AQM<!-- and packet scheduling --> control law
        and the network conditions/deployments under which the AQM is
        stable.</t>
            <!--<t>The impact of every externally tuned parameter MUST
be discussed. As an example, if an AQM proposal needs various external
tuning to work on different scenarios, these external modifications
MUST be clear for deployment issues. Also, the frequency at which some
parameters are re-configured MUST be evaluated, as it may impact the
capacity of the AQM to absorb incoming bursts of packets.</t>-->
          </section>
        </section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->
        <section anchor="sec:traff" title="Various Traffic Profiles">
           <t>This section provides guidelines to assess the
performance of an AQM proposal for various traffic profiles such as
traffic with different applications or bi-directional traffic.</t>
            <section anchor="subsubsec:eval_generic_traff_profil_mix"
title="Traffic mix">
              <t>This scenario can be used to evaluate how an AQM
scheme reacts to a traffic mix consisting of different applications
such as:</t>
              <t><list style="symbols">
                  <t>Bulk TCP transfer</t> <!--: (continuous file
transmission (the tester may consider an LBE congestion control), or
repeating 5MB file transmission);</t>-->
                  <t>Web traffic</t> <!--(repeated download of 700kB);</t>-->
                  <t>VoIP</t> <!-- (each of them 87kbps UDP stream);</t>-->
                  <t>Constant Bit Rate (CBR) UDP traffic</t> <!--
(1Mbps UDP flow);</t>-->
                  <t>Adaptive video streaming</t> <!-- (2Mb/s and 4s
chunks (1MB file size for each chunk), chunks can be sent at 4s
intervals and their size may vary with standard deviation);</t>-->
              </list></t>	
              <t>Various traffic mixes can be considered. These
guidelines RECOMMEND examining at least the following example: 1
bi-directional VoIP flow; 6 web page downloads (such as detailed in <xref
target="subsec:burst_absorption_tests"></xref>); 1 CBR flow; 1 adaptive
video stream; 5 bulk TCP flows. Any other combinations could be considered and
should be carefully documented.</t>
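              <t>For instance, the recommended example mix above could be
captured as a simple configuration structure for a traffic generator; the
sketch below is only an illustration, and the field names are assumptions
of the example.</t>
              <figure>
                <artwork><![CDATA[
# Illustrative sketch (non-normative): the recommended example traffic
# mix expressed as a configuration structure for a traffic generator.
RECOMMENDED_MIX = {
    "voip_bidirectional": 1,  # bi-directional VoIP flow
    "web_page_downloads": 6,  # web pages as in the burst-absorption tests
    "cbr_udp": 1,             # constant bit rate UDP flow
    "adaptive_video": 1,      # adaptive video streaming session
    "bulk_tcp": 5,            # long-lived bulk TCP flows
}

# Any other combination could be evaluated; it should then be documented
# in the same way, e.g., alongside the reported results.
for traffic_type, count in RECOMMENDED_MIX.items():
    print(traffic_type, count)
]]></artwork>
              </figure>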
              <t>For each scenario, the graph described in <xref
target="subsec:e2e_metrics_tradeoff"></xref> could be generated for
each class of traffic. Metrics such as end-to-end latency, jitter and
flow completion time MAY be reported.</t>
<!--
              <t><xref target="fig:traffic_mix"></xref> presents the
various cases for the traffic that MUST be generated between sender A
and receiver B.</t>
              <figure anchor="fig:traffic_mix" title="Traffic mix scenarios">
                <artwork>
	 		+____+_____________________________+
	 		|Case| Number of flows             |
	 		+    +____+____+____+_________+____+
	 		|    |VoIP|Webs|CBR |AdaptVid |FTP |
	 		+____+____+____+____+_________+____+
	 		|I   |  1 |  1 |  0 |      0  |  0 |
			|    |    |    |    |         |    |
	 		|II  |  1 |  1 |  0 |      0  |  1 |
	 		|    |    |    |    |         |    |
	 		|III |  1 |  1 |  0 |      0  |  5 |
	 		|    |    |    |    |         |    |
	 		|IV  |  1 |  1 |  1 |      0  |  5 |
	 		|    |    |    |    |         |    |
	 		|V   |  1 |  1 |  0 |      1  |  5 |
	 		|    |    |    |    |         |    |
	 		+____+____+____+____+_________+____+
		</artwork>
              </figure>
-->
            </section>

            <section anchor="subsubsec:bidir_traff_profil"
title="Bi-directional traffic">
              <t>Control packets such as DNS requests/responses and TCP
SYNs/ACKs are small, but their loss can severely impact the
application performance. The scenario proposed in this section will
help in assessing whether the introduction of an AQM scheme increases
the loss probability of these important packets.</t>
              <t>For this scenario, traffic MUST be generated in both
downlink and uplink, such as defined in <xref
target="subsec:discuss_setting_topo_nota"></xref>. These guidelines
RECOMMEND considering a mild congestion level and the traffic
presented in <xref target="subsubsec:stability_tests_net_mild"></xref>
in both directions. In this case, the metrics reported MUST be the
same as in <xref target="subsec:stability_tests"></xref> for each
direction.</t>
              <t>The traffic mix presented in <xref
target="subsubsec:eval_generic_traff_profil_mix"></xref> MAY also be
generated in both directions.</t>

            </section>

        </section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->
        <section anchor="sec:consec-aqm" title="Multi-AQM Scenario">
          <section anchor="subsec:consec-aqm_motivation" title="Motivation">
	   <t>Transports operating under the control of AQM experience the
	effect of multiple control loops that react over different timescales. It is
	therefore important that proposed AQM schemes are seen to be stable when they
	are deployed at multiple points of potential congestion along an Internet path.
	The pattern of congestion signals (loss or ECN-marking) arising from AQM
	methods also needs to not adversely interact with the dynamics of the transport
	protocols that they control.</t>
	  </section>
          <section anchor="subsec:consec-aqm_test" title="Details on
the evaluation scenario">

	  <figure anchor="fig:topology-multi" title="Topology for the
Multi-AQM scenario">
            <artwork>
+---------+                              +-----------+
|senders A|---+                      +---|receivers A|
+---------+   |                      |   +-----------+
        +-----+---+  +---------+  +--+-----+
        |Router L |--|Router M |--|Router R|
        |AQM      |  |AQM      |  |No AQM  |
        +---------+  +--+------+  +--+-----+
+---------+             |            |   +-----------+
|senders B|-------------+            +---|receivers B|
+---------+                              +-----------+
		</artwork>
	  </figure>

	   <t>This scenario can be used to evaluate how having AQM schemes in
sequence impacts the induced latency reduction, the induced goodput
maximization and the trade-off between these two. The topology
presented in <xref target="fig:topology-multi"></xref> could be used.
AQM schemes introduced in Router L and Router M should be the same;
any other configurations could be considered. For this scenario, it is
recommended to consider a mild congestion level, the number of flows
specified in <xref target="subsubsec:stability_tests_net_mild"></xref>
being equally shared among senders A and B. Any other relevant
combination of congestion levels could be considered. We recommend
measuring the metrics presented in <xref
target="subsec:stability_tests"></xref>.</t>
	  </section>
	</section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

        <section anchor="sec:imple_cost" title="Implementation cost">
          <section anchor="subsec:imple_cost_motivation" title="Motivation">
            <!-- NK: do we keep that section ?? It is difficult to
have implementation cost evaluations in these guidelines:
recommendations for evaluation guidelines ?-->
            <t>Successful deployment of AQM is directly related to its
cost of implementation. Network devices may need hardware or software
implementations of the AQM mechanism. Depending on a device's
capabilities and limitations, the device may or may not be able to
implement some or all parts of the AQM logic.</t>
            <t>AQM proposals SHOULD provide pseudo-code for the
complete AQM scheme, highlighting generic implementation-specific
aspects of the scheme such as "drop-tail" vs. "drop-head", inputs
(e.g., current queuing delay, queue length), computations involved,
need for timers, etc. This helps to identify costs associated with
implementing the AQM scheme on a particular hardware or software
device. This also facilitates discussions around which kinds of devices
can easily support the AQM <!-- and packet scheduling --> and which
cannot.</t>
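            <t>As one possible shape for such pseudo-code (a generic,
non-normative sketch that does not correspond to any particular AQM
proposal), a queuing-delay-based scheme could be outlined as follows,
making the inputs, computations, and timer needs explicit.</t>
            <figure>
              <artwork><![CDATA[
# Generic, non-normative sketch of the kind of pseudo-code an AQM
# proposal could provide: a queuing-delay-based drop decision applied
# at enqueue time.  All names and the probability update rule are
# illustrative assumptions, not those of a real scheme.
import random

class GenericDelayBasedAqm:
    def __init__(self, target_delay_s=0.005, update_interval_s=0.015):
        self.target_delay_s = target_delay_s        # macroscopic parameter
        self.update_interval_s = update_interval_s  # requires a timer
        self.drop_probability = 0.0

    def on_timer(self, measured_queuing_delay_s):
        # Periodic computation: adjust the drop probability from the
        # deviation between measured and target queuing delay.
        error = measured_queuing_delay_s - self.target_delay_s
        self.drop_probability = min(1.0, max(0.0,
                                    self.drop_probability + 0.1 * error))

    def on_enqueue(self, packet):
        # Drop (or CE-mark, if ECN-capable) decision taken on arrival.
        if random.random() < self.drop_probability:
            return None   # drop or mark
        return packet     # enqueue
]]></artwork>
            </figure>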
          </section>

          <section anchor="subsec:imple_cost_tests" title="Recommended
discussion">
            <t>AQM proposals SHOULD highlight parts of their AQM logic
that are device dependent and discuss if and how AQM behavior could be
impacted by the device. For example, a queuing-delay-based AQM scheme
requires the current queuing delay as input from the device. If the device
already maintains this value, then it can be trivial to implement the
AQM logic on the device. If the device provides indirect means
to estimate the queuing delay (for example, timestamps or dequeuing
rate), then the AQM behavior is sensitive to the precision of the
queuing delay estimation on that device. Highlighting the
sensitivity of an AQM scheme to queuing delay estimations helps
implementers to identify appropriate means of implementing the
mechanism on a device.</t>
          </section>
        </section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

        <section anchor="sec:control_knobs" title="Operator Control
and Auto-tuning">
          <section anchor="subsec:control_operation_motivation"
title="Motivation">
          <t>One of the biggest hurdles of RED deployment was/is its
parameter sensitivity to operating conditions -- how difficult it is
to tune RED parameters for a deployment to achieve acceptable benefit
from using RED. Fluctuating congestion levels and network conditions
add to the complexity. Incorrect parameter values lead to poor
performance.</t>
          <t>Any AQM scheme is likely to have parameters whose values
affect the control law and behavior of an AQM. Exposing all these
parameters as control parameters to a network operator (or user) can
easily result in an unsafe AQM deployment. Unexpected AQM behavior
ensues when parameter values are set improperly. A minimal number of
control parameters minimizes the number of ways a user can break a
system where an AQM scheme is deployed. Fewer control parameters
make the AQM scheme more user-friendly and easier to deploy and
debug.</t>
          <t><xref target="RFC7567"></xref> states "AQM
      algorithms SHOULD NOT require tuning of initial or configuration
      parameters in common use cases." A scheme ought to expose only those
      parameters that control the macroscopic AQM behavior such as queue delay
      threshold, queue length threshold, etc.</t>
          <t>Additionally, the safety of an AQM scheme is directly
related to its stability under varying operating conditions such as
varying traffic profiles and fluctuating network conditions, as
described in <xref target="sec:stability"></xref>. Operating
conditions vary often and
      hence the AQM needs to remain stable under these conditions without the
      need for additional external tuning. If AQM parameters require tuning
      under these conditions, then the AQM must self-adapt necessary parameter
      values by employing auto-tuning techniques.</t>
      </section>

      <section anchor="subsec:control_operation_discussion"
title="Recommended discussion">
	<t>In order to understand an AQM's deployment considerations and
performance under a specific environment, AQM proposals SHOULD
describe the parameters that control the
	macroscopic AQM behavior, and identify any parameters that require
	tuning to operational conditions. It could also be interesting to discuss
	whether, even if an AQM scheme does not adequately auto-tune its parameters,
	the resulting performance is not optimal but remains close to something
	reasonable.</t>
	<t>If there are any fixed parameters within the AQM, their setting
SHOULD be discussed
	and justified, to help understand whether a fixed parameter value is
applicable for a particular environment. </t>
	<t> If an AQM scheme is
        evaluated with parameter(s) that were externally tuned for
        optimization or other purposes, these values MUST be disclosed.</t>
      </section>

        </section>

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

<!--
        <section anchor="sec:interaction_ecn" title="Interaction with ECN">
      <t>Deployed AQM algorithms SHOULD implement Explicit Congestion
      Notification (ECN) as well as loss to signal congestion to endpoints <xref
      target="RFC7567"></xref>. The benefits of providing ECN
      support for an AQM scheme are described in <xref
target="WELZ2015"></xref>.
      Section 3 of <xref target="WELZ2015"></xref> describes expected
operation of
      routers enabling ECN. AQM schemes SHOULD NOT drop or remark packets solely
      because the ECT(0) or ECT(1) codepoints are used, and when
ECN-capable SHOULD
      set a CE-mark on ECN-capable packets in the presence of
incipient congestion.</t>

         <section anchor="subsec:interaction_ecn_motivation" title="Motivation">
                    <t>ECN <xref target="RFC3168"></xref> is an
alternative that allows
        AQM schemes to signal receivers about network congestion that does not
        use packet drop.</t>

          </section>
          <section anchor="subsec:interaction_ecn_discussion"
title="Recommended discussion">
        <t>An AQM scheme can support ECN <xref
        target="RFC7567"></xref>, in which case testers
        MUST discuss and describe the support of ECN.</t>

          </section>
        </section>
-->

        <!-- ######################################################-->
        <!-- New section -->
        <!-- ######################################################-->

<!--
        <section anchor="sec:interaction_scheduling"
title="Interaction with Scheduling">
          <t>A network device may use per-flow or per-class queuing with a
          scheduling algorithm to either prioritize certain applications or
          classes of traffic, limit the rate of transmission, or to provide
          isolation between different traffic flows within a common class <xref
          target="RFC7567"></xref>.</t>

          <section anchor="subsec:interaction_scehduling_motivation"
title="Motivation">
            <t>Coupled with an AQM scheme, a router may schedule the
transmission of packets in a specific manner by introducing a
scheduling scheme. This algorithm may create sub-queues and integrate
a dropping policy on each of these sub-queues. Another scheduling
policy may modify the way packets are sequenced, modifying the
timestamp of each packet.</t>
          </section>

	  <section anchor="subsec:interaction_scheduling_discussion"
title="Recommended discussion">
            <t>The scheduling and the AQM conjointly impact on the
end-to-end performance. During the characterization process of a
dropping policy, the tester MUST discuss the feasibility to add
scheduling combined with the AQM algorithm. This discussion as an
instance, MAY explain whether the dropping policy is applied when
packets are being enqueued or dequeued.</t>
	  </section>

	  <section anchor="subsec:interaction_scheduling_assessing"
title="Assessing the interaction between AQM and scheduling">
	  <t>These guidelines do not propose guidelines to assess the
performance of scheduling algorithms. Indeed, as opposed to
characterizing AQM schemes that is related to their capacity to
control the queuing delay in a queue, characterizing scheduling
schemes is related to the scheduling itself and its interaction with
the AQM scheme. As one example, the scheduler may create sub-queues
and the AQM scheme may be applied on each of the sub-queues, and/or
the AQM could be applied on the whole queue. Also, schedulers might,
such as FQ-CoDel <xref target="HOEI2015"></xref> or FavorQueue <xref
target="ANEL2014"></xref>, introduce flow prioritization. In these
cases, specific scenarios should be proposed to ascertain that these
scheduler schemes not only helps in tackling the bufferbloat, but also
are robust under a wide variety of operating conditions. This is out
of the scope of this document that focus on dropping and/or marking
AQM schemes.</t>
          </section>
        </section>
-->

        <!-- ######################################################-->
        <!-- ######################################################-->
        <!-- Tail of the document -->
        <!-- ######################################################-->

	<section anchor="sec:conclusion" title="Conclusion">
		<t><xref target="fig:conclusive-table"></xref> lists the scenarios
and their requirements.</t>
		<figure anchor="fig:conclusive-table" title="Summary of the
scenarios and their requirements">
                <artwork>
+------------------------------------------------------------------+
|Scenario                   |Sec.  |Requirement                    |
+------------------------------------------------------------------+
+------------------------------------------------------------------+
|Interaction with ECN       | 4.5  |MUST be discussed if supported |
+------------------------------------------------------------------+
|Interaction with Scheduling| 4.6  |Feasibility MUST be discussed  |
+------------------------------------------------------------------+
|Transport Protocols        |5.    |                               |
| TCP-friendly sender       | 5.1  |Scenario MUST be considered    |
| Aggressive sender         | 5.2  |Scenario MUST be considered    |
| Unresponsive sender       | 5.3  |Scenario MUST be considered    |
| LBE sender                | 5.4  |Scenario MAY be considered     |
+------------------------------------------------------------------+
|Round Trip Time Fairness   | 6.2  |Scenario MUST be considered    |
+------------------------------------------------------------------+
|Burst Absorption           | 7.2  |Scenario MUST be considered    |
+------------------------------------------------------------------+
|Stability                  |8.    |                               |
| Varying congestion levels | 8.2.5|Scenario MUST be considered    |
| Varying available capacity| 8.2.6|Scenario MUST be considered    |
| Parameters and stability  | 8.3  |This SHOULD be discussed       |
+------------------------------------------------------------------+
|Various Traffic Profiles   |9.    |                               |
| Traffic mix               | 9.1  |Scenario is RECOMMENDED        |
| Bi-directional traffic    | 9.2  |Scenario MAY be considered     |
+------------------------------------------------------------------+
|Multi-AQM                  | 10.2 |Scenario MAY be considered     |
+------------------------------------------------------------------+
|Implementation Cost        | 11.2 |Pseudo-code SHOULD be provided |
+------------------------------------------------------------------+
|Operator Control           | 12.2 |Tuning SHOULD NOT be required  |
+------------------------------------------------------------------+

		</artwork>
              </figure>
	</section>
	
	<!-- Nicolas: end of the new conclusive section -->

	<section anchor="sec:acknowledgements" title="Acknowledgements">
	<t>This work has been partially supported by the European Community
under its Seventh Framework Programme through the Reducing Internet
Transport Latency (RITE) project (ICT-317700).</t>
	</section>

	<section anchor="sec:contributors" title="Contributors">
	<t> Many thanks to S. Akhtar, A.B. Bagayoko, F. Baker, R. Bless, D.
Collier-Brown, G. Fairhurst, J. Gettys, T. Hoiland-Jorgensen, K.
Kilkki, C. Kulatunga, W. Lautenschlager, A.C. Morton, R. Pan, D. Taht
and M. Welzl for detailed and wise feedback on this document.</t>
	</section>

	<section anchor="sec:IANA" title="IANA Considerations">
	<t>This memo includes no request to IANA.</t>
	</section>

	<section anchor="sec:ecurity" title="Security Considerations">
	<t>Some security considerations for AQM are identified in <xref
        target="RFC7567"></xref>.This document, by itself, presents no
new privacy nor security issues.<!--See <xref target="RFC3552">RFC
3552</xref> for a guide.--></t>
	</section>
	</middle>

	<!--  *****BACK MATTER ***** -->
	<back>
	<!-- References split into informative and normative -->
	<!-- There are 2 ways to insert reference entries from the citation libraries:
	1. define an ENTITY at the top, and use "ampersand character"RFC2629;
here (as shown)
	2. simply use a PI "less than character"?rfc
include="reference.RFC.2119.xml"?> here
	(for I-Ds: include="reference.I-D.narten-iana-considerations-rfc2434bis.xml")

	Both are cited textually in the same manner: by using xref elements.
	If you use the PI option, xml2rfc will, by default, try to find
included files in the same
	directory as the including file. You can also define the XML_LIBRARY
environment variable
	with a value containing a set of directories to search.  These can be
either in the local
	filing system or remote ones accessed by http (http://domain/dir/... ).-->

	<references title="Normative References">

	<?rfc include="reference.RFC.7567.xml"?>
	<?rfc include="reference.RFC.5348.xml"?>
	<?rfc include="reference.RFC.5681.xml"?>
	<?rfc include="reference.RFC.6297.xml"?>
	<?rfc include="reference.RFC.2488.xml"?>
	<?rfc include="reference.RFC.2679.xml"?>
	<?rfc include="reference.RFC.2680.xml"?>
	<?rfc include="reference.RFC.2544.xml"?>
	<?rfc include="reference.RFC.3611.xml"?>
	<?rfc include="reference.RFC.2647.xml"?>
	<?rfc include="reference.RFC.5481.xml"?>
	<?rfc include="reference.RFC.3168.xml"?>
	<?rfc include="reference.RFC.0793.xml"?>
	<?rfc include="reference.RFC.6817.xml"?>

        <?rfc include="reference.I-D.ietf-tcpm-cubic.xml"?>
        <?rfc include="reference.I-D.irtf-iccrg-tcpeval.xml"?>

        <!-- <?rfc include="reference.I-D.ietf-aqm-recommendation.xml"?> -->
        <!-- <?rfc
include="reference.I-D.ietf-tsvwg-byte-pkt-congest.xml"?> -->

	<reference anchor="RFC7141">
	<front>
	<title>Byte and Packet Congestion Notification</title>
	<author initials="B" surname="Briscoe">
	</author>
	<author initials="J" surname="Manner">
	</author>
        <date year="2014" />
	</front>
	<seriesInfo name="RFC" value="7141" />
	</reference>

        <!--
	<reference anchor="IRTF-TOOLS-5">
	<front>
	<title>Tools for the Evaluation of Simulation and Testbed Scenarios</title>
	<author initials="S" surname="Floyd">
	<organization></organization>
	</author>
	<author initials="E" surname="Kohler">
	<organization></organization>
	</author>
	<date year="2008" />
	</front>
	<seriesInfo name="TMRG-TOOLS" value="05" />
	</reference>
        -->

	<reference anchor="RFC2119">
	<front>
	<title>Key words for use in RFCs to Indicate Requirement Levels</title>
	<author initials="S" surname="Bradner">
	<organization></organization>
	</author>
	<date year="1997" />
	</front>
	<seriesInfo name="RFC" value="2119" />
	</reference>

        <!--
	<reference anchor="RFC5136">
	<front>
	<title>Defining Network Capacity</title>
	<author initials="P" surname="Chimento">
	<organization>JHU Applied Physics Lab</organization>
	</author>
	<author initials="J" surname="Ishac">
	<organization>NASA Glenn Research Center</organization>
	</author>
	<date year="2008" />
	</front>
	<seriesInfo name="RFC" value="5136" />
	</reference>
        -->

	</references>

	<references title="Informative References">
	
 	<reference anchor="TRAN2014">
	<front>
	<title>On The Existence Of Optimal LEDBAT Parameters</title>
	<author initials="S.Q.V" surname="Trang">
	</author>
	<author initials="N" surname="Kuhn">
	</author>
	<author initials="E" surname="Lochin">
	</author>
	<author initials="C" surname="Baudoin">
	</author>
	<author initials="E" surname="Dubois">
	</author>
	<author initials="P" surname="Gelard">
	</author>
	<date year="2014" />
	</front>
	<seriesInfo name="IEEE ICC 2014 - Communication QoS, Reliability and
Modeling Symposium" value="" />
	</reference>

 	<reference anchor="GONG2014">
	<front>
	<title>Fighting the bufferbloat: on the coexistence of AQM and low
priority congestion control</title>
	<author initials="Y" surname="Gong">
	</author>
	<author initials="D" surname="Rossi">
	</author>
	<author initials="C" surname="Testa">
	</author>
	<author initials="S" surname="Valenti">
	</author>
	<author initials="D" surname="Taht">
	</author>
	<date year="2014" />
	</front>
	<seriesInfo name="Computer Networks, Elsevier, 2014, 60, pp.115 -
128" value="" />
	</reference>

<reference anchor='RFC2309'>
<front>
<title>Recommendations on Queue Management and Congestion Avoidance in the Internet</title>
<author initials='B.' surname='Braden'>
</author>
<author initials='D.D.' surname='Clark'>
</author>
<author initials='J.' surname='Crowcroft'>
</author>
<author initials='B.' surname='Davie'>
</author>
<author initials='S.' surname='Deering'>
</author>
<author initials='D.' surname='Estrin'>
</author>
<author initials='S.' surname='Floyd'>
</author>
<author initials='V.' surname='Jacobson'>
</author>
<author initials='G.' surname='Minshall'>
</author>
<author initials='C.' surname='Partridge'>
</author>
<author initials='L.' surname='Peterson'>
</author>
<author initials='K.K.' surname='Ramakrishnan'>
</author>
<author initials='S.' surname='Shenker'>
</author>
<author initials='J.' surname='Wroclawski'>
</author>
<author initials='L.' surname='Zhang'>
</author>
<date year='1998' month='April' />
</front>
<seriesInfo name='RFC' value='2309' />
</reference>




        <!--
        <reference anchor="QOEVOICE2013">
        <front>
        <title>Voice quality prediction models and their application
in VoIP networks</title>
        <author initials="L." surname="Sun">
        </author>
        <author initials="E.C." surname="Ifeachor">
        </author>
        <date year="2006" />
        </front>
        <seriesInfo name="IEEE Transactions on Multimedia" value="" />
        </reference>	

        <reference anchor="QOEVID2013">
        <front>
        <title>Model for estimating QoE of Video delivered using HTTP
Adaptive Streaming</title>
        <author initials="J." surname="De Vriendt">
        </author>
        <author initials="D." surname="Robinson">
        </author>
        <date year="2013" />
        </front>
        <seriesInfo name="IFIP/IEEE International Symposium on
Integrated Network Management (IM 2013)" value="" />
        </reference>	
        -->

 	<reference anchor="JAY2006">
	<front>
	<title>A preliminary analysis of loss synchronisation between
concurrent TCP flows</title>
	<author initials="P" surname="Jay">
	</author>
	<author initials="Q" surname="Fu">
	</author>
	<author initials="G" surname="Armitage">
	</author>
	<date year="2006" />
	</front>
	<seriesInfo name="Australian Telecommunication Networks and
Application Conference (ATNAC)" value="" />
	</reference>


	<reference anchor="WINS2014">
	<front>
	<title>Transport Architectures for an Evolving Internet</title>
	<author initials="K" surname="Winstein">
	</author>
	<date year="2014" />
	</front>
	<seriesInfo name="PhD thesis, Massachusetts Institute of Technology"
value="" />
	</reference>

	<!--
	<reference anchor="HAYE2013">
	<front>
	<title>Common TCP Evaluation Suite</title>
	<author initials="D" surname="Hayes">
	</author>
	<author initials="D" surname="Ros">
	</author>
	<author initials="L.L.H" surname="Andrew">
	</author>
	<author initials="S" surname="Floyd">
	</author>
	<date year="2013" />
	</front>
	<seriesInfo name="IRTF (Work-in-Progress)" value="" />
	</reference>
	-->

        <!--
	<reference anchor="LOSS-SYNCH-AQM-08">
	<front>
	<title>Loss synchronization, router buffer sizing and high-speed TCP
versions: Adding RED to the mix</title>
	<author initials="S" surname="Hassayoun">
	</author>
	<author initials="D" surname="Ros">
	</author>
	<date year="2008" />
	</front>
	<seriesInfo name="IEEE LCN" value="" />
        </reference>-->

	<reference anchor="PAN2013">
	<front>
        <title>PIE: A lightweight control scheme to address the
bufferbloat problem</title>
	<author initials="R" surname="Pan">
	</author>
	<author initials="P" surname="Natarajan">
        </author>
	<author initials="C" surname="Piglione">
        </author>
	<author initials="MS" surname="Prabhu">
        </author>
	<author initials="V" surname="Subramanian">
        </author>
	<author initials="F" surname="Baker">
        </author>
       	<author initials="B" surname="VerSteeg">
        </author>
        <date year="2013" />
	</front>
	<seriesInfo name="IEEE HPSR" value="" />
	</reference>

        <reference anchor="NICH2012">
	<front>
        <title>Controlling Queue Delay</title>
	<author initials="K" surname="Nichols">
	</author>
	<author initials="V" surname="Jacobson">
	</author>
	<date year="2012" />
	</front>
	<seriesInfo name="ACM Queue" value="" />
	</reference>

	<reference anchor="MORR2000">
	<front>
	<title>Scalable TCP congestion control</title>
	<author initials="R" surname="Morris">
	</author>
	<date year="2000" />
	</front>
	<seriesInfo name="IEEE INFOCOM" value="" />
	</reference>

	<reference anchor="HASS2008">
	<front>
	<title>Loss Synchronization and Router Buffer Sizing with High-Speed
Versions of TCP</title>
	<author initials="S" surname="Hassayoun">
	</author>
	<author initials="D" surname="Ros">
	</author>
	<date year="2008" />
	</front>
	<seriesInfo name="IEEE INFOCOM Workshops" value="" />
	</reference>

	<reference anchor="BB2011">
	<front>
	<title>BufferBloat: what's wrong with the internet?</title>
	<author initials="" surname="">
	</author>
	<date year="2011" />
	</front>
	<seriesInfo name="ACM Queue" value="vol. 9" />
	</reference>

	<reference anchor="ANEL2014">
	<front>
	<title>FavorQueue: a Parameterless Active Queue Management to Improve
TCP Traffic Performance</title>
	<author initials="P" surname="Anelli">
	</author>
	<author initials="R" surname="Diana">
	</author>
	<author initials="E" surname="Lochin">
	</author>
	<date year="2014" />
	</front>
	<seriesInfo name="Computer Networks" value="vol. 60" />
	</reference>
	
	<!-- <reference anchor="DOCSIS2013">
	<front>
	<title>Active Queue Management Algorithms for DOCSIS 3.0</title>
	<author initials="G" surname="White">
	</author>
	<author initials="D" surname="Rice">
	</author>
	<date year="2013" />
	</front>
	<seriesInfo name="Technical report - Cable Television Laboratories" value="" />
	</reference> -->
      <reference anchor="WELZ2015">
        <front>
          <title>The Benefits to Applications of using Explicit Congestion
          Notification (ECN)</title>

          <author fullname="M Welzl" initials="M" surname="Welzl">
            <organization></organization>

            <address>
              <postal>
                <street></street>

                <city></city>

                <region></region>

                <code></code>

                <country></country>
              </postal>

              <phone></phone>

              <facsimile></facsimile>

              <email></email>

              <uri></uri>
            </address>
          </author>

          <author fullname="G Fairhurst" initials="G" surname="Fairhurst">
            <organization></organization>
          </author>

          <date day="23" month="June" year="2015" />
        </front>

        <seriesInfo name="IETF (Work-in-Progress)" value="" />
      </reference>

      <reference anchor="HOEI2015">
        <front>
          <title>FlowQueue-Codel</title>

          <author fullname="T Hoeiland-Joergensen" initials="T"
surname="Hoeiland-Joergensen">
          </author>

          <author fullname="P McKenney" initials="P" surname="McKenney">
          </author>

          <author fullname="D Taht" initials="D" surname="Taht">
          </author>

          <author fullname="J Gettys" initials="J" surname="Gettys">
          </author>

          <author fullname="E Dumazet" initials="E" surname="Dumazet">
          </author>

          <date day="13" month="January" year="2015" />
        </front>

        <seriesInfo name="IETF (Work-in-Progress)" value="" />
      </reference>

	
	</references>

        <!--
	<section anchor="app-additional" title="Additional Stuff">
	<t>This becomes an Appendix.</t>
        </section>-->

	</back>
</rfc>