Re: Experiences with HTTP/2 server push

Alcides Viamontes E <alcidesv@shimmercat.com> Fri, 05 August 2016 19:20 UTC

In-Reply-To: <CA+3+x5FpRGm9XQz2PdvFs6Kfiz3eMH1QLJ0fAeaeqQOSF2c9sw@mail.gmail.com>
References: <CACZw55mUg_VjN3Q6TqPCb6udo3mQpoWQVNV5e2iYiNj=hC-2kw@mail.gmail.com> <CABkgnnX=6ZjnFJsh-07SDt+LMprsJ9w7tgSjaeaMKeEgihsD4g@mail.gmail.com> <CA+3+x5FpRGm9XQz2PdvFs6Kfiz3eMH1QLJ0fAeaeqQOSF2c9sw@mail.gmail.com>
From: Alcides Viamontes E <alcidesv@shimmercat.com>
Date: Fri, 05 Aug 2016 21:07:56 +0200
Message-ID: <CAAMqGzbwaFiMXy6r+r2avvv+ESG+sN0MK5FLdNZ8tB2xb=r3uA@mail.gmail.com>
To: Tom Bergan <tombergan@chromium.org>
Cc: Martin Thomson <martin.thomson@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Vladir Parrado Cruz <vparrado@gmail.com>, Nejc Vukovic <nejc.vukovic@gmail.com>, Ludvig Bohlin <ludvigbohlin@gmail.com>
Subject: Re: Experiences with HTTP/2 server push
Archived-At: <http://www.w3.org/mid/CAAMqGzbwaFiMXy6r+r2avvv+ESG+sN0MK5FLdNZ8tB2xb=r3uA@mail.gmail.com>

>
> Let's say the server wants to prioritize a subset of streams differently
> than the priorities specified by the client, or differently from the
> default priorities. How should it actually implement this? The simplest
> implementation is to mutate the H2 priority tree directly. This makes the
> H2 priority tree the single prioritization data structure in the server.
> It's also attractive because H2 priorities can be communicated to lower
> layers like QUIC
> <https://tools.ietf.org/html/draft-hamilton-early-deployment-quic-00#section-9>.
> We are aware of a few servers that update the priority tree like this,
> e.g., see Apache's h2_session_set_prio
> <https://github.com/icing/mod_h2/blob/master/mod_http2/h2_session.c#L1245>
> .
>
> However, if the server does this, it has a problem: the H2 priority tree
> is a shared data structure. If it makes a change, its local copy of the
> data structure can be out-of-sync relative to the client's copy. A future
> PRIORITY frame from the client may have a different meaning than intended
> if the server has already changed its tree locally. The sentence you quoted
> describes the reactions of a naive server to this problem: Maybe I can keep
> the client's tree in sync by sending a PRIORITY frame? (Sorry for not
> making this more clear.) Of course, this doesn't actually solve the
> problem, since the server's PRIORITY frames could race with the client's.
> (Note that we're not aware of any servers that actually do this; we were
> just hoping to prevent any from trying.)
>

Hi. Great work over there.

If the browser and server are a bit far apart, a re-prioritization may arrive
too late at the server to be effective. Our solution to both the race
conditions and the RTT problem is to have our server learn the priorities and
dependencies, build a delivery plan once, and use it many times. In that
sense, priorities and dependencies as they stand in the HTTP/2 spec today are
good enough for us. And the implementation complexity is about the same as
implementing on-the-fly re-prioritization.
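
To make the "build a delivery plan once, use it many times" idea concrete,
here is a rough sketch (not our production code; the names and the averaging
heuristic are just for illustration):

    from collections import defaultdict

    class DeliveryPlanner:
        """Sketch: learn from past page loads which resources a page needs
        and in which order, then replay that ordering for later visitors
        instead of re-prioritizing on the fly from PRIORITY frames."""

        def __init__(self):
            # page url -> list of observed loads, each [(resource, seconds), ...]
            self._observations = defaultdict(list)
            self._plans = {}  # page url -> ordered list of resources

        def record_load(self, page_url, fetches):
            self._observations[page_url].append(fetches)
            self._plans.pop(page_url, None)  # new data invalidates the plan

        def plan_for(self, page_url):
            # Build the plan once, reuse it for every later request.
            if page_url not in self._plans:
                timings = defaultdict(list)
                for load in self._observations[page_url]:
                    for resource, seconds in load:
                        timings[resource].append(seconds)
                # Resources needed earlier on average come first. Because the
                # plan is fixed per page, client PRIORITY frames cannot race
                # with a server-side mutation of the priority tree.
                self._plans[page_url] = sorted(
                    timings, key=lambda r: sum(timings[r]) / len(timings[r]))
            return self._plans[page_url]

The plan is rebuilt only when new observations arrive, so the hot path is a
dictionary lookup.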


>
> RFC 7540 talks about another kind of race: removing closed streams from
> the tree. The solution proposed by the RFC is to keep closed streams in the
> tree as long as possible. The RFC does not discuss this other kind of race
> -- reprioritizing streams on the server -- and this seems like something
> servers are very interested in doing. AFAIK, no one has really studied the
> impacts of this race nor provided guidance about how to avoid it. We don't
> have any great solutions, either, we just wanted to raise the problem to be
> sure that server implementors are aware of it.
>

This thing with closed streams takes a bit of getting used to. We have to be
very careful to discard as much information as possible about closed streams
as early as possible, yet keep some of it around for a little while so that
we can tell which stream references from the browser are valid. Since we are
not handling PRIORITY frames online, the amount of information we have to
save is relatively small ("was this stream ever used?"), but if we were
following the letter of the spec this would be a very worrying issue.
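
Roughly, something like the following is enough (a simplified sketch, not our
production code):

    class StreamTracker:
        """Minimal server-side bookkeeping about closed streams.
        Client-initiated streams are odd-numbered and strictly increasing,
        so a high-water mark plus the set of currently open streams is
        enough to tell whether a stream reference can possibly be valid."""

        def __init__(self):
            self._highest_seen = 0   # highest client stream id ever opened
            self._open = set()       # client streams currently open

        def on_headers(self, stream_id):
            self._highest_seen = max(self._highest_seen, stream_id)
            self._open.add(stream_id)

        def on_close(self, stream_id):
            self._open.discard(stream_id)

        def classify(self, stream_id):
            if stream_id in self._open:
                return "open"
            if stream_id <= self._highest_seen:
                # At or below the high-water mark: either used and closed,
                # or implicitly closed because a higher id was opened first.
                return "closed"
            return "idle"            # never opened by the client so far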


> Our team has been experimenting with H2 server push at Google for a few
>> > months. We found that it takes a surprising amount of careful reasoning
>> to
>> > understand why your web page is or isn't seeing better performance with
>> H2
>> > push.
>>
>
Oh, but it is a lot of fun :-)

In our experience as well, the biggest performance killers of HTTP/2 Push are
TCP slow start and the fact that push promises are bulky: send many of them
and an extra ACK round-trip will be needed.
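
A back-of-envelope calculation shows why (the sizes below are illustrative
assumptions, not measurements):

    # Illustrative only: assumed sizes, not measured ones.
    mss = 1460                    # typical TCP maximum segment size, bytes
    initcwnd = 10 * mss           # IW10: ~14.6 KB before the first ACK
    push_promise_size = 300       # assumed size of one PUSH_PROMISE frame
    n_promises = 20

    promise_bytes = n_promises * push_promise_size   # 6000 bytes of promises
    left_for_data = initcwnd - promise_bytes         # ~8.6 KB for the page
    print(promise_bytes, "bytes of promises leave", left_for_data,
          "bytes for response data before an ACK round-trip is needed")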

However, HTTP/2 Push *is* useful in other situations as well. For example,
say the server is using cache digests via cookies and it positively knows
that the browser doesn't have a script referenced at the bottom of the page,
like this:

      ...something
      <script src="/jquery.js?vh=3fhhwq"></script>

Then it can pause the HTML stream a little before "<script src", send a push
promise for "/jquery.js?vh=3fhhwq", and resume sending the HTML document.
Chances are that the TCP window is bigger by then.
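
In rough pseudo-Python, the interleaving looks like this (a sketch only;
"conn", "digest" and their methods stand in for whatever HTTP/2 framing layer
and cache-digest lookup the server has):

    # Sketch: conn and digest are hypothetical stand-ins, not a real API.
    def serve_page(conn, stream_id, html_bytes, digest):
        conn.send_headers(stream_id, [(":status", "200"),
                                      ("content-type", "text/html")])
        marker = b'<script src="/jquery.js?vh=3fhhwq"'
        cut = html_bytes.find(marker)
        if cut != -1 and not digest.client_has("/jquery.js?vh=3fhhwq"):
            # Send the HTML up to just before the script tag...
            conn.send_data(stream_id, html_bytes[:cut])
            # ...promise the script once the TCP window has had time to grow...
            promised = conn.push(stream_id, "/jquery.js?vh=3fhhwq")
            # ...then resume the HTML; the pushed bytes follow afterwards.
            conn.send_data(stream_id, html_bytes[cut:], end_stream=True)
            conn.send_pushed_response(promised,
                                      open("static/jquery.js", "rb").read())
        else:
            conn.send_data(stream_id, html_bytes, end_stream=True)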

Also notice a related scenario, which runs counter to a pattern from the
HTTP/1.1 era: instead of making one big HTML file that includes all parts of
a page, use (the closest thing to) HTML imports. If elements of a page that
seldom change, like the navigation bar and the visual footer, are made
imports, then they can be cached and traffic to the server can be reduced.
Nobody does that today because of the latency it adds. Using HTTP/2 Push in
the way described above, it becomes possible at no performance cost.
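
On the server side that could look roughly like this (same hypothetical
framing-layer methods as above; the fragment paths are invented for the
example):

    # Sketch: push only the rarely-changing fragments the browser is missing.
    FRAGMENTS = ["/fragments/navbar.html", "/fragments/footer.html"]

    def serve_composed_page(conn, stream_id, shell_html, digest):
        conn.send_headers(stream_id, [(":status", "200"),
                                      ("content-type", "text/html")])
        for path in FRAGMENTS:
            if not digest.client_has(path):
                # Promise the missing fragment before the browser parses the
                # import reference in the small shell document.
                promised = conn.push(stream_id, path)
                conn.send_pushed_response(promised,
                                          open("static" + path, "rb").read())
        # The shell only references the imports, so it stays small and the
        # cacheable parts of the page never travel twice.
        conn.send_data(stream_id, shell_html, end_stream=True)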

Looked at that way, HTTP/2 Push is a big deal for web components. And it is
not far-fetched: we are planning to release these features with ShimmerCat
1.7. The only thing we require from browsers is that they check whether there
is a push promise for a resource strictly -- but as late as possible --
before starting a fetch.

The same can be done with hierarchies of scripts, although we will have to
wait a bit for people to stop making big .js blobs....


-- 
Alcides Viamontes E.
Chief Executive Officer, Zunzun AB
(+46) 722294542
(www.shimmercat.com is a property of Zunzun AB)