Re: Experiences with HTTP/2 server push

Alcides Viamontes E <> Mon, 08 August 2016 20:09 UTC

From: Alcides Viamontes E <>
Date: Mon, 08 Aug 2016 22:04:35 +0200
Message-ID: <>
To: Tom Bergan <>
Cc: HTTP Working Group <>, Stefan Eissing <>, Martin Thomson <>

> If you can come up with a complete delivery plan on the server, that is
> definitely ideal. There is some academic work that tries to do this (see
> Klotski: I also liked your
> "Coordinated Image Loading" article as an example of why server-defined
> priorities can be a good idea. I am curious though -- how do you deal with
> cases where the page is heavily dynamic and some of the important content
> may be fetched by XHRs? Do you expect that the full dependency information
> is available a priori, before the main response is served to the client?

Thanks for the link! I will take a closer look.

About dynamic content, so far we have focused on XHR content that is fetched
during the initial page load. From version 1.0 up to 1.4 we have included a
learning mode that introduces small, artificial random pauses between the
DATA frames of an HTTP/2 stream. We also make the frames purposely small and
variable, say around 512 bytes in length. Then we record the time when each
frame is delivered (as accurately as our server can know through the blanket
of abstractions that the OS provides). During the initial page load multiple
streams are downloaded in parallel, and the pauses improve (up to a point,
before the law of large numbers kicks in) the overall randomness of the
process. Therefore, on repeated loads of the same page, frames from
different streams that are almost always delivered very close in time
suggest a correlation between their streams. We order the correlated pairs
by possible causality, and we postulate that there is a dependency between
the two streams. This works well even for XHR content, where the browser
doesn't provide explicit dependency information.
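To make the learning-mode idea above concrete, here is a minimal Python sketch. This is not ShimmerCat's actual code; the function names, the 20 ms gap threshold, and the use of first-frame gaps as the correlation signal are all illustrative assumptions:

```python
import random
import statistics
from collections import defaultdict
from itertools import combinations

def jittered_frame_sizes(total_bytes, base=512, spread=128):
    """Split a response into purposely small, variable-sized DATA frames."""
    sizes, remaining = [], total_bytes
    while remaining > 0:
        size = min(remaining, base + random.randint(-spread, spread))
        sizes.append(size)
        remaining -= size
    return sizes

def infer_dependencies(delivery_times, threshold_ms=20.0):
    """delivery_times[load_id][stream_id] -> list of frame timestamps (ms).

    Streams whose frames consistently arrive close together across repeated
    loads are postulated to be dependent; the stream whose frames come
    earlier is taken as the cause (the 'order of possible causality')."""
    gaps = defaultdict(list)
    for load in delivery_times.values():
        for a, b in combinations(sorted(load), 2):
            gaps[(a, b)].append(load[b][0] - load[a][0])
    deps = []
    for (a, b), gs in gaps.items():
        mean_gap = statistics.mean(gs)
        if 0 < abs(mean_gap) < threshold_ms:
            deps.append((a, b) if mean_gap > 0 else (b, a))
    return deps
```

With two recorded loads where stream 2's first frame always lands a few milliseconds after stream 1's, `infer_dependencies` reports the pair `(1, 2)` while unrelated streams (hundreds of milliseconds apart) produce no pair.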

In simpler words, we use statistics to infer dependencies from the
browser's behavior. The main downside of our technique is that it is too
computationally expensive to run in under 10 seconds on a developer's
laptop, because each page of the site that differs significantly from the
others needs to be fetched several times by a typical user agent, e.g., a
browser. It would work great as a cloud service though, and it is also
good enough to run as part of CI workflows.

>>> Our team has been experimenting with H2 server push at Google for a few
>>> months. We found that it takes a surprising amount of careful reasoning to
>>> understand why your web page is or isn't seeing better performance with H2
>>> push.
>> Oh, but it is a lot of fun :-)
> It is for us too :-), but I imagine many devs would find it frustrating.
> Hence our attempt to try to distill our experiences into a "best practices"
> doc.
>> In our experience as well the biggest performance killer of HTTP/2 Push
>> is TCP slow start and the fact that push promises are bulky. Send many of
>> them and an ACK round-trip will be needed.
> Interesting point about needing ACK round-trips just to send the push
> promises. We hadn't run across that problem specifically. Is this because
> you're sending many push promises? Is there some reason why hpack cannot
> compress a sequence of push promises?
Yes, in our early prototypes we just wanted to see how far the technique
would take us, so we grabbed a bloated HTML template and tried to push all
of it in the order our algorithms were spitting out ;-).
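To illustrate why a large batch of push promises can cost an ACK round-trip, here is a back-of-the-envelope Python sketch. The 60-byte figure for HPACK-compressed request headers and the 10-segment initial congestion window are assumptions for illustration, not measurements:

```python
FRAME_HEADER = 9          # HTTP/2 frame header is 9 bytes (RFC 7540)
PROMISED_STREAM_ID = 4    # 4-byte field inside the PUSH_PROMISE payload

def promises_before_ack(n_promises, hpacked_headers=60,
                        init_cwnd_segments=10, mss=1460):
    """Estimate how many PUSH_PROMISE frames fit in the initial
    congestion window, and the total bytes the full batch needs."""
    per_promise = FRAME_HEADER + PROMISED_STREAM_ID + hpacked_headers
    budget = init_cwnd_segments * mss
    fit = budget // per_promise
    return min(n_promises, fit), per_promise * n_promises

fit, total = promises_before_ack(250)
# At ~73 bytes per promise, 250 promises are ~18 KB, exceeding the
# ~14.6 KB initial window: the tail must wait for an ACK round-trip.
```

Even with HPACK deduplicating repeated header fields across promises, the per-frame overhead alone adds up when the server pushes an entire bloated template at once.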

> If you've released any data about HTTP/2 Push performance in ShimmerCat,
> I'd be interested to read it. I did notice one article on your site,
> although that only dealt with one page with a fixed network connection:
What would you consider an interesting, standard measure for this case? We
are mainly interested in reducing the impact of latency on page load time,
so we tend to measure how much the page load time decreases for a client
with a latency of around 120 ms. That's easy to standardize, but the other
variable is the website itself, and that one is harder to standardize. In
our consultancy projects we tend to set a 30 to 50% improvement in load
time and time-to-start-reading over baseline as an achievable target, but
each project is quite unique :-(.