Re: http/2 prioritization/fairness bug with proxies

Roberto Peon <> Wed, 13 February 2013 22:50 UTC

Date: Wed, 13 Feb 2013 14:43:53 -0800
From: Roberto Peon <>
To: Nico Williams <>
Cc: Yoav Nir <>, HTTP Working Group <>

SCTP: Unfortunately not deployable due to consumer NAT interactions.

Bulk traffic: There are a number of different levels of traffic we're
prioritizing; it isn't just 'bulk' or 'high-priority'.
Certain features require synchronization between control data and payload
(e.g. server push).
It is not possible to demux these without additional complexity from a
protocol standpoint.

From an implementation standpoint: I'm already running out
of ephemeral port space. I *do not* want to use more connections.
From an implementation standpoint: it is often impossible for a
loadbalancer to figure out that k connections belong to a single client at
all without sacrificing orders of magnitude of performance. Orders of
magnitude, not just a factor of X.
There isn't even a guarantee that multiple connections will go to the same
load balancer.

Browsers have used multiple connections because of limitations in HTTP,
which we're already solving with multiplexing.
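The multiplexing above can be sketched as a toy frame scheduler (illustrative Python only; the class and method names are my own, not from any draft): each stream carries a priority, and the connection always emits the next frame from the highest-priority stream that has data pending, so high-priority traffic is never stuck behind bulk on the same connection.

```python
from collections import deque

class Mux:
    """Toy priority-aware frame scheduler for one connection.

    Streams are chopped into fixed-size frames; next_frame() always
    serves the highest-priority stream (lower number = higher priority)
    that still has frames queued, interleaving bulk and non-bulk.
    """

    def __init__(self, frame_size=4):
        self.frame_size = frame_size
        self.streams = {}  # stream_id -> (priority, deque of frames)

    def submit(self, stream_id, priority, data):
        frames = deque(data[i:i + self.frame_size]
                       for i in range(0, len(data), self.frame_size))
        self.streams[stream_id] = (priority, frames)

    def next_frame(self):
        # Consider only streams with data pending.
        ready = [(prio, sid) for sid, (prio, q) in self.streams.items() if q]
        if not ready:
            return None
        _, sid = min(ready)  # highest priority wins
        return sid, self.streams[sid][1].popleft()
```

For example, if a large bulk response is submitted first and a small high-priority one second, the high-priority frames still go out first; with separate connections the sender has no such cross-stream scheduling point.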

The flow control is window-update based; however, a receiver can indicate to
a sender a value which effectively means: never limit yourself on account of
flow control.
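A minimal sketch of that window-update scheme (the names and the 2^31-1 constant are assumptions for illustration, not quoted from any draft): the sender decrements its window as it sends and may only send while the window is positive; the receiver grows it with window updates, and a receiver that does not want to exercise flow control simply advertises an effectively unbounded window.

```python
# Advertising the maximum window effectively disables flow control
# (illustrative value; the real limit depends on the wire format).
UNLIMITED = 2**31 - 1

class SendWindow:
    """Sender-side flow-control accounting for one stream.

    The sender may transmit n bytes only while n <= window; the
    receiver replenishes the window by sending window updates.
    """

    def __init__(self, initial):
        self.window = initial

    def can_send(self, n):
        return n <= self.window

    def on_send(self, n):
        assert self.can_send(n), "sender must respect the window"
        self.window -= n

    def on_window_update(self, delta):
        self.window += delta
```

With this shape, flow-control policy lives entirely at the receiver: a memory-constrained receiver advertises a small window and replenishes it as it drains buffers, while everyone else advertises UNLIMITED and never throttles the sender.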

On Wed, Feb 13, 2013 at 1:48 PM, Nico Williams <> wrote:

> On Wed, Feb 13, 2013 at 3:23 PM, Roberto Peon <> wrote:
> > On Wed, Feb 13, 2013 at 12:57 PM, Nico Williams <>
> > wrote:
> > Even if the network does the right thing and the bytes have arrived,
> TCP's
> > API still only lets you access the packets in-order.
> Are we brave enough to try SCTP instead of TCP for HTTP/2.0?
> I didn't think so.
> :) (that should be a sad smiley, actually)
> >> There's two ways to address these issues: either don't do it (it ==
> >> multiplex diff QoS traffic over the same TCP conn.) or try hard never
> >> to write more than one BDP's worth of bulk without considering higher
> >> priority traffic.
> >
> > QoS for packets on multiple connections also doesn't work- each entity
> > owning a connection sends at what it believes is its max rate, induces
> packet loss, gets throttled appropriately, and then takes too many RTTs
> to
> > recover. You end up not fully utilizing the channel(s).
> No, no, all bulk traffic should be sent over one connection at max
> rate.  Multiple bulk flows can be multiplexed safely over one TCP
> connection, therefore they should be.
> High priority traffic _generally_ means "non-bulk", therefore "max
> rate" for non-bulk is generally much, much less than for bulk and,
> therefore, non-bulk traffic can be multiplexed safely over a single
> TCP connection, being careful to move to a bulk connection when a
> non-bulk flow changes nature.
> The sender will know whether a message is a bulk message or not.
> One complication here is that many requests will be non-bulk but their
> responses will be.  I.e., you might want to write the responses to
> requests on a different connection from the request!  And now you need
> an XID or some such, but you probably want one anyways so that
> responses can be interleaved.
> (For example by analogy, if we were talking about doing this as an
> SSHv2 extension we might migrate a pty stdout channel to a bulk
> connection when the user does a cat(1) of a huge file.  This is much
> harder in SSHv2 because we have logical octet streams for interactive,
> high-priority data, but we don't have such a thing in HTTP, so this is
> not a concern at all.  This is just an analogy to illustrate the
> point.)
> > The hard part is "considering higher priority traffic" when that traffic
> is
> > being sent from a different machine, as would occur in the multiple
> > connection case.
> Are you talking about proxies aggregating traffic from multiple
> clients into one [set of] TCP connection[s] to a given server?  Sure,
> but all the proxy needs is to know whether a given request (or
> response) is bulk or not.
> > With a single connection, this is easy to coordinate. Agreed that
> estimating
> > BDP isn't trivial (however it is something that TCP effectively has to
> do).
> A single connection is a bad idea.  We already use multiple
> connections today in _browsers_.  Of course, for non-browser apps
> multiple connections may be quite a change, but that should be a)
> optional, b) acceptable anyways.
> >> > which we'd otherwise be able to do without. Unfortunately, per
> priority
> >> > TCP
> >> > connections don't work well for large loadbalancers where each of
> these
> >> > connections will likely be terminating at a different place. This
> would
> >> > create a difficult synchronization problem server side, full of races
> >> > and
> >> > complexity, and likely quite a bit worse in complexity than getting
> flow
> >> > control working well.
> >>
> >> I think you're saying that because of proxies it's difficult to ensure
> >> per-priority TCP connections, but this is HTTP/2.0 we're talking
> >> about.  We have the power to dictate that HTTP/2.0 proxies replicate
> >> the client's per-priority TCP connection scheme.
> >
> > No, I'm saying that it is somewhere between difficult and impossible to
> > ensure that separate connections from a client end up on one machine in
> the
> > modern loadbalancer world.
> I don't think it should be difficult, much less impossible, for
> HTTP/_2.0_.  What you need for this is to identify flows so their
> requests/responses can be grouped.  The main thing that comes to mind
> is that the load balancer needs to understand Chunked PUTs/POSTs and
> get them to go to the same end server -- surely this is handled
> already in HTTP/1.1 load balancers.
> > From a latency perspective, opening up the multiple connections can be a
> > loss as well-- it increases server load for both CPU and memory and
> vastly
> > increases the chance that you'll get a lost-packet on the SYN which takes
> > far longer to recover from as it requires an RTO before RTT has likely
> been
> > computed.
> Well, sure, but the sender could share one connection for multiple QoS
> traffic types while the additional connections come up, and hope for
> the best -- mostly it should work out.
> >> > Note that the recommendation will be that flow control be effectively
> >> > disabled unless you know what you're doing, and have a good reason
> >> > (memory
> >> > pressure) to use it.
> >>
> >> Huh?  Are you saying "we need and will specify flow control.  It won't
> >> work.  Therefore we'll have it off by default."  How can that help?!
> >> I don't see how it can.
> >
> > Everyone will be required to implement the flow control mechanism as a
> > sender.
> > Only those people who have effective memory limitations will require its
> use
> > when receiving (since the receiver dictates policy for flow control).
> So this is a source quench type flow control?  (As opposed to window
> size type, as in SSHv2.)  But note that the issue isn't the need to
> quench fast sources from slow sinks.  The issue is that by the time
> you notice that you have a source/sink bandwidth mismatch it's too
> late and TCP flow control has kicked in.  Of course, the receiver can
> recover by quenching the sender and then reading and buffering
> whatever's left on the wire, thus freeing up bandwidth on the wire for
> other sources, but the cost is lots of buffer space on the receiver,
> unless you can tell the sender to resend later and then you can throw
> away instead of buffer.
> Nico
> --