Re: http/2 prioritization/fairness bug with proxies

Roberto Peon <> Wed, 13 February 2013 21:25 UTC

Date: Wed, 13 Feb 2013 13:23:18 -0800
From: Roberto Peon <>
To: Nico Williams <>
Cc: Yoav Nir <>, HTTP Working Group <>
Subject: Re: http/2 prioritization/fairness bug with proxies

On Wed, Feb 13, 2013 at 12:57 PM, Nico Williams <> wrote:

> On Tue, Feb 12, 2013 at 12:18 PM, Roberto Peon <> wrote:
> > The problem that we have here is that the TCP API isn't sufficiently
> > rich to allow us to do the right thing (e.g. read bytes without allowing
> > the sender to send more). As a result, we have to have another level of
> > flow control
>
> That's not necessary here.
>
> There are two issues here:
> a) flow [and congestion] control;
> b) prioritization of "interactive" or "control" traffic over bulk traffic.

We have more levels of prioritization than just that, but yes.

> These are exactly the same issues that have been faced in SSHv1 and SSHv2.
> TCP can handle (a), but if you multiplex traffic of different QoS over
> one TCP connection you run into the issues that SSHv1 and v2 have run
> into.

Agreed -- varying QoS for packets on a single in-order stream (i.e.
connection) basically doesn't help, even if the network did the right thing
with them, which it may not.
Even if the network does the right thing and the bytes have arrived, TCP's
API still only lets you access the packets in-order.
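To make the in-order constraint concrete, here is a toy model (not real TCP,
and not anyone's proposed API) of a single in-order stream: a high-priority
frame queued behind bulk data cannot be read until every earlier byte has
been consumed, regardless of any QoS marking on the packets.

```python
# Toy illustration of head-of-line blocking on one in-order stream.
# Class and frame names are hypothetical, purely for illustration.
from collections import deque

class InOrderStream:
    """Minimal model of TCP's in-order delivery API."""
    def __init__(self):
        self.frames = deque()  # frames, strictly in send order

    def send(self, priority, payload):
        self.frames.append((priority, payload))

    def recv(self):
        # The receiver cannot skip ahead, whatever the priority.
        return self.frames.popleft() if self.frames else None

stream = InOrderStream()
stream.send("bulk", b"xxxx")   # large low-priority frame first
stream.send("high", b"ping")   # urgent frame stuck behind it

first = stream.recv()
second = stream.recv()
assert first[0] == "bulk"      # bulk data must be drained first
assert second[0] == "high"     # only then is the urgent frame readable
```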

> There are two ways to address these issues: either don't do it (it ==
> multiplex diff QoS traffic over the same TCP conn.) or try hard never
> to write more than one BDP's worth of bulk without considering higher
> priority traffic.

QoS for packets on multiple connections also doesn't work -- each entity
owning a connection sends at what it believes is its max rate, induces
packet loss, gets throttled appropriately, and then takes too many RTTs to
recover. You end up not fully utilizing the channel(s).

>  Determining BDP is non-trivial and it can vary, but
> it's reasonable to estimate it by looking at round-trip times (it'd be
> nice if TCP could expose that to apps so they don't have to measure it
> redundantly!) and growing send bandwidth until receive bandwidth stops
> growing -- not exactly trivial, but reasonable.
The hard part is "considering higher priority traffic" when that traffic is
being sent from a different machine, as would occur in the multiple
connection case.
With a single connection, this is easy to coordinate. Agreed that
estimating BDP isn't trivial (however it is something that TCP effectively
has to do).

> Now, in practice browsers already use multiple TCP connections to the
> same server anyways, so... what's wrong with per-priority TCP
> connections?  (see below)
> > which we'd otherwise be able to do without. Unfortunately, per priority
> > connections don't work well for large loadbalancers where each of these
> > connections will likely be terminating at a different place. This would
> > create a difficult synchronization problem server side, full of races and
> > complexity, and likely quite a bit worse in complexity than getting flow
> > control working well.
> I think you're saying that because of proxies it's difficult to ensure
> per-priority TCP connections, but this is HTTP/2.0 we're talking
> about.  We have the power to dictate that HTTP/2.0 proxies replicate
> the client's per-priority TCP connection scheme.

No, I'm saying that it is somewhere between difficult and impossible to
ensure that separate connections from a client end up on one machine in the
modern loadbalancer world.
From a latency perspective, opening up the multiple connections can be a
loss as well -- it increases server load for both CPU and memory and vastly
increases the chance that you'll get a lost packet on the SYN, which takes
far longer to recover from as it requires an RTO before the RTT has likely
been estimated.

> > Note that the recommendation will be that flow control be effectively
> > disabled unless you know what you're doing, and have a good reason
> (memory
> > pressure) to use it.
> Huh?  Are you saying "we need and will specify flow control.  It won't
> work.  Therefore we'll have it off by default."  How can that help?!
> I don't see how it can.
Everyone will be required to implement the flow control mechanism as a
sender. Only those people who have effective memory limitations will require
its use when receiving (since the receiver dictates policy for flow control).
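A minimal sketch of that receiver-dictated scheme, assuming a credit-based
window in the style later standardized for HTTP/2 (class and method names
here are hypothetical): the sender must always honor the window, but a
receiver with no memory pressure can keep granting a huge window, making
the mechanism effectively a no-op.

```python
# Hedged sketch of receiver-driven, credit-based flow control.
# Not wire-accurate; illustrates the policy split described above.

class FlowControlledSender:
    def __init__(self, initial_window):
        self.window = initial_window      # bytes we may still send

    def send(self, data):
        n = min(len(data), self.window)   # never exceed the receiver's grant
        self.window -= n
        return data[:n]                   # bytes actually transmitted

    def on_window_update(self, increment):
        self.window += increment          # receiver granted more credit

sender = FlowControlledSender(initial_window=10)
sent = sender.send(b"x" * 25)     # memory-limited receiver: only 10 allowed
assert len(sent) == 10 and sender.window == 0

sender.on_window_update(1 << 20)  # unconstrained receiver: enormous grant
assert len(sender.send(b"x" * 25)) == 25   # flow control is now a no-op
```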


> Nico
> --