Re: http/2 prioritization/fairness bug with proxies

Amos Jeffries <> Wed, 13 February 2013 04:42 UTC

On 13/02/2013 7:18 a.m., Roberto Peon wrote:
> The problem that we have here is that the TCP API isn't sufficiently 
> rich to allow us to do the right thing (e.g. read bytes without 
> allowing the sender to send more). As a result, we have to have 
> another level of flow control which we'd otherwise be able to do 
> without. Unfortunately, per priority TCP connections don't work well 
> for large loadbalancers where each of these connections will likely be 
> terminating at a different place. This would create a difficult 
> synchronization problem server side, full of races and complexity, and 
> likely quite a bit worse in complexity than getting flow control 
> working well.
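(To illustrate the distinction Roberto is drawing: reading from a TCP socket drains the kernel buffer and implicitly opens the TCP window, inviting the sender to send more. An application-level scheme instead hands out explicit credit, so the receiver can keep reading frames without granting more send budget. A minimal sketch, with hypothetical names, in the spirit of HTTP/2's credit-based WINDOW_UPDATE mechanism:)

```python
class StreamWindow:
    """Credit-based flow-control window (hypothetical sketch)."""

    def __init__(self, initial=65535):
        self.credit = initial

    def consume(self, n):
        # Sender side: may only transmit while credit remains.
        if n > self.credit:
            raise RuntimeError("flow-control violation: %d > %d" % (n, self.credit))
        self.credit -= n

    def grant(self, n):
        # Receiver side: credit is returned only after the application
        # has actually processed the data, not merely read it off the wire.
        self.credit += n


win = StreamWindow(initial=10)
win.consume(10)         # sender exhausts its credit
assert win.credit == 0  # sender must stall, even though TCP would accept more
win.grant(10)           # receiver processed the data; sender may resume
assert win.credit == 10
```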

As opposed to HTTP/1.x, where each request may have its own TCP 
connection, end up at a different endpoint, and face an identical situation?
Sure, this is not a solution to *that* problem, but nothing will be.

1) The load balancer's very purpose is to *create* that problem.

2) HTTP is meant to be stateless, remember, so no request depends on any 
other for semantic handling, including the high-priority and low-priority 
requests you speak of.

3) The frames where statefulness and priority actually matter are 
per-hop control frames, which the endpoint needing to send them is 
perfectly capable of injecting for delivery between any other frames it 
may have queued, regardless of how or when any other party along the 
channel believes they should be delivered.
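(Point 3 can be sketched as a simple outbound queue where per-hop control frames jump ahead of whatever data frames happen to be buffered; the class and frame names below are hypothetical, for illustration only:)

```python
from collections import deque


class FrameQueue:
    """Hypothetical outbound frame queue for one hop."""

    def __init__(self):
        self._control = deque()  # per-hop control frames
        self._data = deque()     # payload frames, already prioritized

    def queue_data(self, frame):
        self._data.append(frame)

    def inject_control(self, frame):
        self._control.append(frame)

    def next_frame(self):
        # Control frames always go out between whatever data frames
        # are queued, regardless of the data frames' priorities.
        if self._control:
            return self._control.popleft()
        return self._data.popleft() if self._data else None


q = FrameQueue()
q.queue_data("DATA:stream1")
q.queue_data("DATA:stream2")
q.inject_control("SETTINGS")
assert q.next_frame() == "SETTINGS"
assert q.next_frame() == "DATA:stream1"
```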

So I agree with Nico on this one. Separate TCP connections *are* an 
option. It just has to be made clear in the spec how to identify them at 
the server and how the server is expected to prioritize those connections 
relative to any other, more important ones it may be handling in 
parallel. Any endpoint screwups caused by load balancing routing two 
*independent* requests via different pathways are the responsibility of 
the server admin, site author, or network admins to sort out, not of the 
HTTP protocol.