Re: [aqm] AQM schemes: Queue length vs. delay based

Andrew Mcgregor <andrewmcgr@google.com> Fri, 15 November 2013 18:05 UTC

From: Andrew Mcgregor <andrewmcgr@google.com>
To: "Fred Baker (fred)" <fred@cisco.com>
Cc: Michael Welzl <michawe@ifi.uio.no>, Anoop Ghanwani <anoop@alumni.duke.edu>, Naeem Khademi <naeem.khademi@gmail.com>, Preethi Natarajan <preethi.cis@gmail.com>, "aqm@ietf.org" <aqm@ietf.org>
Subject: Re: [aqm] AQM schemes: Queue length vs. delay based

TCP as deployed can be pretty bursty; TSO often results in 44-packet bursts
(later in a connection, once the window has opened wide enough), which is
one reason IW10 caused very little harm.  I wouldn't say Google favours
that; as Yuchung's presentation in TCPM showed, the TCP team here is
working hard to reduce that burstiness.  However, that is the deployed
reality, and will remain so for some years on the net at large, however
undesirable it is in terms of excessive packet loss and buffer pressure.
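
(Just to anchor that number: a back-of-the-envelope sketch, assuming a
64 KiB TSO/GSO super-segment limit and a 1448-byte MSS -- common but not
universal defaults -- lands in the same mid-40s ballpark as the 44-packet
figure above.)

    # Rough arithmetic only; the limit and MSS below are assumptions, and
    # actual burst sizes vary with NIC, driver and kernel settings.
    TSO_LIMIT = 64 * 1024   # assumed 64 KiB super-segment limit
    MSS = 1448              # 1500-byte MTU minus IP/TCP headers and timestamps

    segments_per_burst = TSO_LIMIT // MSS
    print(segments_per_burst)   # -> 45, i.e. a burst of roughly 44-45 full-size packets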

I very much think we should be thinking about DCTCP-style ECN; it seems
more deployable and offers a significant benefit, unlike the ambiguous
benefit of classic ECN.
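
(For anyone framing the comparison, here is a minimal Python sketch of the
DCTCP-style reaction as described in the DCTCP paper; the threshold K, the
gain g and the function names are illustrative, not any particular
implementation.)

    # Illustrative sketch of DCTCP-style ECN, not production code.
    K = 20        # switch marking threshold in packets (assumed value)
    g = 1.0 / 16  # EWMA gain for the marked-fraction estimate

    def switch_should_mark(queue_len_packets):
        # DCTCP marks on the *instantaneous* queue exceeding K, unlike
        # classic RED/ECN, which marks based on an averaged queue length.
        return queue_len_packets > K

    def update_alpha(alpha, marked_pkts, acked_pkts):
        # Once per RTT the sender updates its estimate of the fraction of
        # packets that carried a CE mark.
        F = marked_pkts / max(acked_pkts, 1)
        return (1 - g) * alpha + g * F

    def cwnd_after_marks(cwnd, alpha):
        # Classic ECN halves cwnd on any mark; DCTCP scales the cut by alpha,
        # so light marking gives a proportionally small window reduction.
        return cwnd * (1 - alpha / 2)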


On 15 November 2013 08:03, Fred Baker (fred) <fred@cisco.com> wrote:

>
> On Nov 13, 2013, at 2:43 PM, Anoop Ghanwani <anoop@alumni.duke.edu> wrote:
>
> On Wed, Nov 13, 2013 at 12:14 PM, Naeem Khademi <naeem.khademi@gmail.com> wrote:
>
>>
>> Agreed only in general terms -- but what would be considered a "packet
>> burst", and how would it be defined? This will probably have a subjective
>> answer: e.g. one can argue that a TCP sawtooth's worth of data is a burst
>> and that we therefore need a BDP of buffering for it (which is what has
>> actually happened, implicitly, over the past decade). On the other hand,
>> the definition of a "burst" is likely to correspond to the application
>> generating it (e.g. video frames, IW10, etc.), and so its size (and even
>> its pattern) is somewhat application/transport dependent.
>>
>
> There's one more case: the incast problem.  The sources themselves may
> not be "bursty", but in a high-port-count switch you could have tens (or
> more) of ports sending traffic to a single output port at around the same time.
>
> Perhaps it is useful to add a description of what we mean when we say
> bursty.
>
>
> Possibly, but what you just highlighted is that there are at least two
> definitions. At the Transport layer, it's not unusual for TCP to send a
> number of segments back to back - our specifications say 2-4, but Google
> seems to favor as many as ten, and I'm told that measurements suggest that
> some on the net are sending as many as 44. That would be in a single
> session. What you're discussing is what I call "lemmings"; a single thread
> in a map/reduce or multi-cache application might simultaneously open short
> sessions (either opening new TCP sessions or using standing TCP sessions)
> with hundreds or thousands of neighbors, each of which now sends a
> transport-layer burst with effects as you describe. That is an
> application-layer burst.
>
> Bob's DCTCP suggestion could be useful there.
>
> I wonder if we need a separate term for the application layer behavior,
> however. Maybe "flash crowd"?
>
> Anoop
>
> _______________________________________________
> aqm mailing list
> aqm@ietf.org
> https://www.ietf.org/mailman/listinfo/aqm
>


-- 
Andrew McGregor | SRE | andrewmcgr@google.com | +61 4 8143 7128