Re: http/2 prioritization/fairness bug with proxies

Nico Williams <> Tue, 12 February 2013 16:18 UTC

Date: Tue, 12 Feb 2013 10:16:41 -0600
From: Nico Williams <>
To: Yoav Nir <>
Cc: HTTP Working Group <>

On Tue, Feb 12, 2013 at 1:13 AM, Yoav Nir <> wrote:
> On Feb 12, 2013, at 1:59 AM, Nico Williams <> wrote:
>> Right.  Don't duplicate the SSHv2 handbrake (Peter Gutmann's term) in HTTP/2.0.
>> Use percentages of BDP on the sender side.  Have the receiver send
>> control frames indicating the rate at which it's receiving to help
>> estimate BDP, or ask TCP.  But do not flow control.
>> Another possibility is to have the sender (or a proxy) use
>> per-priority TCP connections.
> I don't think that one solves the problem. A server has to consider priority as relative to the TCP connection, so that high-priority requests trump low-priority requests within the same connection, but not low-priority requests in another connection. Otherwise we have a fairness issue even without proxies.

Clearly with per-priority TCP connections there's no need for explicit
priority labels.  The reason for wanting multiple flows is to avoid
the situation where bulk transfers block the smaller requests (and
responses) needed for applications to remain responsive to user input.
The moment traffic with different QoS requirements is multiplexed over one
TCP connection, we need either nested flow control (bad!) or other
cooperation between the sender and receiver (hop-by-hop too) to ensure
timely delivery of non-bulk, high-priority requests.
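The receiver-rate-report alternative mentioned above could look something
like the following sketch. This is not from the thread; all names are
hypothetical, and the EWMA smoothing and 20%-share figure are illustrative
assumptions. The idea is that the sender budgets each priority class a
percentage of the estimated BDP in flight, instead of enforcing a hard
per-stream window that can stall it:

```python
# Hypothetical sketch: sender-side pacing from receiver rate reports,
# as an alternative to SSHv2-style per-channel windows.

class BdpEstimator:
    """Estimate bandwidth-delay product from periodic receiver rate
    reports combined with a sender-measured RTT sample."""

    def __init__(self, rtt_seconds):
        self.rtt = rtt_seconds      # smoothed RTT, measured by the sender
        self.recv_rate = 0.0        # bytes/sec, from receiver control frames

    def on_rate_report(self, bytes_per_sec):
        # EWMA so a single report doesn't swing the estimate.
        alpha = 0.25
        self.recv_rate = (1 - alpha) * self.recv_rate + alpha * bytes_per_sec

    def bdp(self):
        # Bytes the path can hold in flight: rate * RTT.
        return self.recv_rate * self.rtt

    def quota(self, priority_share):
        # Each priority class gets a fraction of the BDP in flight.
        return self.bdp() * priority_share


est = BdpEstimator(rtt_seconds=0.1)
for _ in range(20):
    est.on_rate_report(10_000_000)   # receiver reports ~10 MB/s
print(round(est.bdp()))              # in-flight budget, converging on BDP
print(round(est.quota(0.2)))         # 20% share for one priority class
```

The point of the sketch is that nothing here ever forces the sender to
stop entirely while the TCP connection still has capacity; the budget
merely shapes how the capacity is shared.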

> So you're effectively creating several streams, each with all requests having the same priority. The server will then try to be fair to all connections, effectively giving the same performance to high-priority and low-priority requests.

Not necessarily.  First, small requests can get through on their own
connections even while the bulk-transfer connection is backed up enough
that they would otherwise be badly delayed (possibly because I/O
problems with bulk sinks/sources on the server side cause flow control
to kick in).  Second, the server can apply application-specific
(possibly heuristic) rules to prioritize processing of some requests
over others, regardless of which TCP connection they arrived over.
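That second point could be sketched roughly as below. This is a
hypothetical illustration, not anything from the thread: a server-side
picker that orders requests by an application-assigned priority
regardless of which TCP connection delivered them, with FIFO ordering
within a priority level:

```python
# Hypothetical sketch: cross-connection request prioritization on the
# server side, independent of per-connection arrival order.
import heapq
import itertools

class RequestScheduler:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tiebreak within a priority

    def submit(self, conn_id, request, priority):
        # Lower number = higher priority.  Application heuristics
        # (e.g. "small request", "interactive path") would set this.
        heapq.heappush(self._heap,
                       (priority, next(self._seq), conn_id, request))

    def next_request(self):
        priority, _, conn_id, request = heapq.heappop(self._heap)
        return conn_id, request

sched = RequestScheduler()
sched.submit(conn_id=1, request="GET /bulk.iso", priority=5)
sched.submit(conn_id=2, request="GET /app/ping", priority=0)
sched.submit(conn_id=1, request="GET /app/css", priority=1)
print(sched.next_request())   # the interactive request is served first,
                              # whichever connection it arrived on
```

The bulk transfer still completes; it just never starves the small,
latency-sensitive requests behind it.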

I'm not advocating per-priority TCP connections.  I'm specifically
arguing against SSHv2-style per-channel flow control -- a performance
disaster -- and offering and supporting alternatives.