Re: Experiences with HTTP/2 server push

Alcides Viamontes E <> Fri, 05 August 2016 19:20 UTC

From: Alcides Viamontes E <>
Date: Fri, 05 Aug 2016 21:07:56 +0200
Message-ID: <>
To: Tom Bergan <>
Cc: Martin Thomson <>, HTTP Working Group <>, Vladir Parrado Cruz <>, Nejc Vukovic <>, Ludvig Bohlin <>

> Let's say the server wants to prioritize a subset of streams differently
> than the priorities specified by the client, or differently from the
> default priorities. How should it actually implement this? The simplest
> implementation is to mutate the H2 priority tree directly. This makes the
> H2 priority tree the single prioritization data structure in the server.
> It's also attractive because H2 priorities can be communicated to lower
> layers like QUIC
> <>.
> We are aware of a few servers that update the priority tree like this,
> e.g., see Apache's h2_session_set_prio
> <>
> .
> However, if the server does this, it has a problem: the H2 priority tree
> is a shared data structure. If it makes a change, its local copy of the
> data structure can be out-of-sync relative to the client's copy. A future
> PRIORITY frame from the client may have a different meaning than intended
> if the server has already changed its tree locally. The sentence you quoted
> describes the reactions of a naive server to this problem: Maybe I can keep
> the client's tree in sync by sending a PRIORITY frame? (Sorry for not
> making this more clear.) Of course, this doesn't actually solve the
> problem, since the server's PRIORITY frames could race with the client's.
> (Note that we're not aware of any servers that actually do this; we were
> just hoping to prevent any from trying.)

Hi. Great work over there.

If the browser and server are far apart, a re-prioritization may arrive at the
server too late to be effective. Our solution to both the race conditions and
the RTT problem is to have our server learn the priorities and dependencies,
build a delivery plan once, and use it many times. In that sense, priorities
and dependencies as specified in HTTP/2 today are good enough for us. And the
implementation complexity is about the same as implementing on-the-fly
re-prioritization.
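The "learn once, use many times" idea can be sketched roughly as follows. This is an illustrative sketch only, not ShimmerCat's actual code; the names (`DeliveryPlanner`, `record_fetch`, `plan_for`) and the weight-based ordering are assumptions made for the example:

```python
from collections import defaultdict


class DeliveryPlanner:
    """Learns which assets a page pulls in and with what weight, then
    serves them in a fixed order on later visits, instead of reacting
    to PRIORITY frames live (so late frames cannot race the plan)."""

    def __init__(self):
        # page URL -> list of (asset URL, weight) observed in traffic
        self._plans = defaultdict(list)

    def record_fetch(self, page, asset, weight):
        # Called while observing real browser traffic for `page`.
        entry = (asset, weight)
        if entry not in self._plans[page]:
            self._plans[page].append(entry)

    def plan_for(self, page):
        # Highest-weight assets first; computed from recorded history,
        # reused for every subsequent request to `page`.
        return [asset for asset, _ in
                sorted(self._plans[page], key=lambda e: -e[1])]


planner = DeliveryPlanner()
planner.record_fetch("/index.html", "/style.css", weight=32)
planner.record_fetch("/index.html", "/app.js", weight=16)
print(planner.plan_for("/index.html"))  # ['/style.css', '/app.js']
```

The point of the design is that the plan is a private, immutable-per-request data structure, so the shared-tree synchronization problem described in the quoted text never arises.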

> RFC 7540 talks about another kind of race: removing closed streams from
> the tree. The solution proposed by the RFC is to keep closed streams in the
> tree as long as possible. The RFC does not discuss this other kind of race
> -- reprioritizing streams on the server -- and this seems like something
> servers are very interested in doing. AFAIK, no one has really studied the
> impacts of this race nor provided guidance about how to avoid it. We don't
> have any great solutions, either, we just wanted to raise the problem to be
> sure that server implementors are aware of it.

This business with closed streams takes some getting used to. We have to be
very careful to discard as much information as possible about closed streams
as early as possible, yet still keep some of it around for a little while so
that we know which stream references from the browser are valid. Since we are
not handling PRIORITY frames online, the amount of information we have to
retain is relatively small ("was this stream ever used?"), but if we were
following the letter of the spec this would be a very worrying issue.
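The minimal bookkeeping described above can be sketched like this. It is a hypothetical illustration (not ShimmerCat's implementation) that relies only on the fact that client-initiated stream identifiers are odd and strictly increasing, so "was this stream ever used?" reduces to comparing against the highest identifier seen:

```python
class StreamTracker:
    """Tracks just enough state to validate stream references from the
    browser after the full per-stream state has been discarded."""

    def __init__(self):
        self._highest_seen = 0  # client streams: odd, strictly increasing

    def open_stream(self, stream_id):
        if stream_id % 2 == 0 or stream_id <= self._highest_seen:
            # HTTP/2 treats reused / decreasing ids as a connection error
            raise ValueError("invalid client stream id")
        self._highest_seen = stream_id

    def is_valid_reference(self, stream_id):
        # Valid if it is stream 0 or a stream that was ever opened;
        # anything above the highest id seen was never used.
        return stream_id == 0 or stream_id <= self._highest_seen


t = StreamTracker()
t.open_stream(1)
t.open_stream(3)
print(t.is_valid_reference(1))  # True
print(t.is_valid_reference(5))  # False
```

A server that did handle PRIORITY frames online would instead need to retain (a bounded window of) closed streams' tree positions, which is exactly the burden the text is describing.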

> Our team has been experimenting with H2 server push at Google for a few
> months. We found that it takes a surprising amount of careful reasoning
> to understand why your web page is or isn't seeing better performance
> with H2 push.

Oh, but it is a lot of fun :-)

In our experience as well, the biggest performance killers for HTTP/2 push are
TCP slow start and the fact that push promises are bulky: send many of them
and an ACK round-trip will be needed.

However, HTTP/2 push *is* useful at other times as well. For example, if the
server is using cache digests via cookies and it knows positively that the
browser doesn't have a script referenced at the bottom of the page, like this:

      <script src="/jquery.js?vh=3fhhwq"></script>,

it can pause the HTML stream a little bit before "<script src", send a push
promise for "/jquery.js?vh=3fhhwq", and resume sending the HTML document.
Chances are that the TCP window is bigger by then.
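The pause-and-push idea above can be sketched as splitting the HTML right before the `<script>` tag so a PUSH_PROMISE goes out between the two DATA chunks. The `send_data` / `send_push_promise` callables below stand in for a real HTTP/2 framing layer and are assumptions of the example:

```python
def stream_html_with_push(html, pushed_asset, send_data, send_push_promise):
    """Send `html` on a stream, pausing just before the first
    '<script src' to promise `pushed_asset`, then resuming."""
    marker = "<script src"
    cut = html.find(marker)
    if cut == -1:
        send_data(html)              # nothing to push ahead of
        return
    send_data(html[:cut])            # first part of the document
    send_push_promise(pushed_asset)  # promise before the reference appears
    send_data(html[cut:])            # rest, including the <script> tag


frames = []
stream_html_with_push(
    '<html><body>...'
    '<script src="/jquery.js?vh=3fhhwq"></script></body></html>',
    "/jquery.js?vh=3fhhwq",
    send_data=lambda chunk: frames.append(("DATA", chunk)),
    send_push_promise=lambda url: frames.append(("PUSH_PROMISE", url)),
)
print([kind for kind, _ in frames])  # ['DATA', 'PUSH_PROMISE', 'DATA']
```

Note that, per RFC 7540, a PUSH_PROMISE for a resource must be sent before the response data that references it, which is exactly what the ordering here guarantees.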

Notice also a related scenario, which inverts a counter-pattern from the
HTTP/1.1 days: instead of assembling one big HTML file that includes all parts
of a page, use (the closest thing to) HTML imports. If the elements of a page
that seldom change, like the navigation bar and the visual footer, are made
imports, then they can be cached and traffic to the server can be reduced.
Nobody does that today because of the extra latency. Using HTTP/2 push in the
way described above, it becomes possible at no performance cost.

Looked at that way, HTTP/2 push is a big deal for web components. And it is
not far-fetched: we are planning to release these features with ShimmerCat
1.7. The only thing we require from browsers is that they check whether there
is a push promise for a resource strictly -- but as late as possible -- before
starting a fetch.

The same can be done with hierarchies of scripts, although we will have to
wait a bit for people to stop making big .js blobs...

Alcides Viamontes E.
Chief Executive Officer, Zunzun AB
(+46) 722294542
( is a property of Zunzun AB)