Re: http/2 prioritization/fairness bug with proxies

Roberto Peon <> Thu, 14 February 2013 21:44 UTC

From: Roberto Peon <>
To: Nico Williams <>
Cc: Yoav Nir <>, HTTP Working Group <>

On Thu, Feb 14, 2013 at 12:26 PM, Nico Williams <> wrote:

> On Wed, Feb 13, 2013 at 4:43 PM, Roberto Peon <> wrote:
> > SCTP: Unfortunately not deployable due to consumer NAT interactions.
> I know :(
> > Bulk-traffic: There are a number of different levels of traffic we're
> > prioritizing. It isn't just 'bulk' or 'highpri'
> I believe there's really only two or three categories of traffic:
> bulk, non-bulk w/ Nagle algorithm, non-bulk w/o Nagle.  That's really
> it.  If there are multiple bulk flows where it is not desirable for
> one slow/stuck sink to cause all the other bulk flows to stop, then
> you need a TCP connection per-bulk flow (or at least that one
> possibly-slow flow).
> But it's possible that we're talking about different things.  One thing
> is priority for server processing of requests.  Another is for proxies
> -- here we have to start worrying about the multiplexing issues that
> SSHv1 and SSHv2 have had, and since I think we are talking about
> proxies I keep coming back to these issues.

Priority and flow control are separate issues.
Priorities are an expression of the order in which the browser wants/needs
the responses.
Flow control exists so that memory-constrained receivers can be sure
they won't have their memory limits overrun.
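The distinction can be sketched in a few lines of Python (a toy model, not the SPDY or HTTP/2 wire protocol; all names and the 64 KiB default window are illustrative assumptions): priority decides *which* stream sends next, while the flow-control window caps *how much* it may send.

```python
class Stream:
    def __init__(self, stream_id, priority, data):
        self.stream_id = stream_id
        self.priority = priority      # lower value = more urgent (assumption)
        self.data = data
        self.window = 65536           # receiver-advertised flow-control window

    def sendable(self):
        return len(self.data) > 0 and self.window > 0

def next_frame(streams, max_frame=16384):
    """Pick the most urgent stream that still has window, and consume
    up to min(window, max_frame) bytes from it."""
    ready = [s for s in streams if s.sendable()]
    if not ready:
        return None
    s = min(ready, key=lambda s: s.priority)
    n = min(s.window, max_frame, len(s.data))
    frame, s.data = s.data[:n], s.data[n:]
    s.window -= n                     # replenished later by a window update
    return (s.stream_id, frame)
```

Note how a stream with zero window simply stops being eligible: flow control never reorders anything, and priority never overrides the receiver's memory limit.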

> > Certain features require synchronization between control data and payload
> > (e.g. server push).
> > It is not possible to demux these without additional complexity from a
> protocol standpoint.
> I don't see why.  Can you explain in more detail?

If you want to synchronize things which are not part of a sequence, you
must do additional buffering, above and beyond what TCP may do, and you
must also include synchronization identifiers in both streams.
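A toy sketch of why this costs buffering and extra identifiers (every name here is invented for illustration): if control messages and payload travel on separate connections, the receiver must buffer whichever half arrives first and join the two halves on an explicit synchronization id.

```python
control_buf = {}   # sync_id -> control message, buffered until payload arrives
payload_buf = {}   # sync_id -> payload bytes, buffered until control arrives

def on_control(sync_id, msg, deliver):
    if sync_id in payload_buf:
        deliver(msg, payload_buf.pop(sync_id))
    else:
        control_buf[sync_id] = msg   # extra buffering, above what TCP does

def on_payload(sync_id, data, deliver):
    if sync_id in control_buf:
        deliver(control_buf.pop(sync_id), data)
    else:
        payload_buf[sync_id] = data
```

On a single multiplexed connection, frame ordering provides this pairing for free; with demuxed streams, both the buffers and the sync_id field in each stream are added protocol complexity.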

> > From an implementation standpoint: I'm already running out of ephemeral
> port
> > space. I *do Not* want to use more connections.
> It's certainly what browsers do already, and have for many years.
> What's the problem?

Browsers do this because HTTP effectively limits you to
one outstanding request per connection, which means roughly one request
per RTT per connection.
Allowing a per-connection concurrency greater than 1 is exactly what the
multiplexing does.

Browsers are most often not the ones likely to run out of ephemeral port
space-- that is a problem for proxies and servers.
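A back-of-envelope illustration of the concurrency point (the 100 ms RTT, 6 connections, and 100 streams are assumed numbers, not figures from this thread):

```python
rtt = 0.1                 # seconds; assume a 100 ms round trip

# HTTP/1 without pipelining: 6 connections, 1 request in flight on each.
http1_rps = 6 * (1 / rtt)

# Multiplexed: 1 connection carrying 100 concurrent streams.
mux_rps = 1 * 100 * (1 / rtt)

print(http1_rps)   # 60.0  requests/second ceiling across 6 connections
print(mux_rps)     # 1000.0 requests/second ceiling on a single connection
```

The multiplexed connection gets the higher ceiling while consuming one ephemeral port at the proxy instead of six.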

> > From an implementation standpoint: It is often impossible to figure out
> that
> > k connections belong to a single client at all loadbalancers without
> > sacrificing orders of magnitude of performance. Orders of magnitude, not
> > just a factor of X.
> But I'm not saying you need to do that.  Nor am I implying it.

If there are multiple connections, then the cost of coordination for this
kind of loadbalancing at large sites is orders of magnitude higher.
You're telling me that you wish to use multiple connections. Unless my
statement above is a lie, it must then follow that it will cost orders of
magnitude more for the same coordination.

> > There isn't even a guarantee that multiple connections will go to the
> same
> > load balancer.
> All that matters is that bulk traffic be on separate TCP connections
> from non-bulk.  That's one bit in the request/response headers.

This is not what browsers need. Browsers need to control the order in which
the server sends responses effectively enough that they can simply send
requests for resources as soon as they know about them.
Today they cannot, and instead attempt to control this by using heuristics
to decide when to send a request (instead of simply sending it as soon as
the browser realizes it needs the resource). This causes an underutilization
of the available bandwidth.
Removal of this heuristic, without prioritization to replace it, increases
the amount of time it takes to load the page.
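The behavior described above can be sketched as a toy priority queue (hypothetical code, not any browser's actual scheduler): with server-honored priorities, the browser emits every request the moment it discovers the resource, tagging it with a priority instead of holding it back.

```python
import heapq
import itertools

request_queue = []        # (priority, discovery order, url); lower sends first
_seq = itertools.count()  # tie-breaker preserving discovery order

def discover(url, priority):
    # No heuristic delay: enqueue immediately with an explicit priority.
    heapq.heappush(request_queue, (priority, next(_seq), url))

discover("/app.js", priority=1)
discover("/hero.jpg", priority=3)
discover("/style.css", priority=1)

send_order = [heapq.heappop(request_queue)[2] for _ in range(len(request_queue))]
print(send_order)   # ['/app.js', '/style.css', '/hero.jpg']
```

The critical resources go out first even though the image was discovered before the stylesheet, so the link stays fully utilized without the browser having to guess when it is "safe" to send.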

> > Browsers have used multiple connections because of limitations in HTTP,
> > which we're already solving with multiplexing.
> I strongly suspect that you're not solving them.  It's not just "oh,
> let's multiplex".  You need to watch out for the issues other
> protocols that multiplex different flows on single TCP flows have had.
>  You haven't demonstrated a grasp of those issues.

Nico, I've been working closely with browser folks for quite a while
(years) and have a good grounding in what is suboptimal today, after all,
SPDY started as a project with a server/proxy developer (me) and a browser
developer (Mike Belshe).
Will Chan and others have been continuing the work on the Chrome side, and
he has already replied to you telling you what browsers need.
I've done several presentations and talks, several of them with browser
folks, which reference the primary browser issue of when to generate
requests.

You are a smart guy and likely have things to contribute, but, please
research before making assertions about what I do or do not know.