Re: [aqm] the cisco pie patent and IETF IPR filing

KK <> Wed, 04 March 2015 19:43 UTC

Date: Wed, 04 Mar 2015 11:49:01 -0800
From: KK <>
To: David Lang <>
Cc: bloat <>, "" <>, Vishal Misra <>, Dave Taht <>
Subject: Re: [aqm] the cisco pie patent and IETF IPR filing

On 3/4/15, 1:42 AM, "David Lang" <> wrote:

>On Wed, 4 Mar 2015, KK wrote:
>> I think a combination of PI/PIE/fq_codel with ECN would enable us to
>> a) be less dependent on the physical amount of buffering that is
>> implemented on the intermediate devices
>> b) allow us to use buffering for what it is meant to do - ride out
>> transient variations in traffic, at points where there is a mismatch in
>> available capacity
>The question is how much of a burst should the buffer be able to handle?
>Right now buffers routinely hold 10+ seconds worth of traffic (and Dave T
>showed the airline system buffering 10+ MINUTES of traffic).
>The problem is that if you buffer too much, you break the TCP link speed
>probing, and if you buffer even more you end up with the sender generating
>a new packet to deliver while you are still buffering the old one.
>Buffers need to hold less than one second worth of traffic, and empirical
>testing is showing that much less is desirable (others can post more
>numbers, but I believe that somewhere between 1/100 of a second and 1/10
>of a second is a reasonable range).
I agree with what you're saying. Excessive buffering and the resultant
queueing are indeed bad and undesirable. And I am glad all are recognizing
that this needs to be overcome, and that the AQM and bufferbloat efforts
are a step in that direction.
Once you have mechanisms that provide a signal when the queue builds up,
rather than when the buffer fills up, this helps tremendously. And under
good circumstances, loss is just as good an indicator, because that loss
can be overcome by all the associated mechanisms we have built around it
to recover and mitigate its impact. ECN provides the same benefit, except
that it also continues to work when circumstances aren't all that ideal
(the start of a flow, small windows, etc.).

Once we have these (AQM, feedback) mechanisms, vendors can make a more
sensible judgement of how much buffer to put in, and more than likely
reduce buffering (and cost)...
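The distinction above (signalling on queue build-up rather than buffer occupancy) is the essence of a PIE-style controller: a drop/mark probability is driven by queuing delay, and with ECN the signal becomes a mark instead of a loss. A rough sketch follows; the gains and target are illustrative placeholders, not the tuned values from the PIE draft:

```python
import random

# Illustrative constants, NOT the draft's tuned parameters.
ALPHA, BETA = 0.125, 1.25   # gains on delay error and delay trend
TARGET = 0.015              # target queuing delay: 15 ms

class PieLike:
    """Sketch of a PIE-style AQM: probability tracks queuing delay,
    independent of how much physical buffer the device has."""

    def __init__(self) -> None:
        self.prob = 0.0
        self.old_delay = 0.0

    def update(self, cur_delay_s: float) -> None:
        """Periodic update: raise prob when delay exceeds the target
        or is trending upward; decay it otherwise."""
        self.prob += (ALPHA * (cur_delay_s - TARGET)
                      + BETA * (cur_delay_s - self.old_delay))
        self.prob = min(max(self.prob, 0.0), 1.0)
        self.old_delay = cur_delay_s

    def on_enqueue(self, ecn_capable: bool) -> str:
        """With ECN, mark instead of dropping: same congestion signal,
        no loss to recover from."""
        if random.random() < self.prob:
            return "mark" if ecn_capable else "drop"
        return "enqueue"
```

The point of the sketch is the separation of concerns: the buffer can stay large enough to absorb bursts, while the controller keeps the *standing* delay near the target.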
>> c) allow us to support different types of links, including wireless
>> links
>If a retry is fast and has a very high probability of succeeding, then it
>may be worth holding the packet and doing a link-level retry. But the
>existing mess that is wifi is hardly a good example of this being the
>right thing to do.
Exactly. If you have good recovery mechanisms for loss (and are not
necessarily interpreting it as congestion), then you don't have to make
link-layer mechanisms work 'overtime' to keep retrying to avoid that
loss. Decoupling congestion indications from loss seems to me a good
thing - unless we know there are unintended consequences to doing so.
>David Lang
>> d) as we wrote in the ECN RFC, allow even short-lived transfers to not
>> suffer
>> Thanks,