Re: Experiences with HTTP/2 server push

Kazuho Oku <kazuhooku@gmail.com> Thu, 01 September 2016 21:38 UTC

In-Reply-To: <CA+3+x5F8M+xbH2YjD9m87WPOyQPCTVNV8evBJQHn9gicch+1TQ@mail.gmail.com>
References: <CACZw55mUg_VjN3Q6TqPCb6udo3mQpoWQVNV5e2iYiNj=hC-2kw@mail.gmail.com> <CANatvzz0rBjkgV4yS+2hgspUj7wZ6y=NqojPyzHiPpvZVXzwEA@mail.gmail.com> <CA+3+x5F8M+xbH2YjD9m87WPOyQPCTVNV8evBJQHn9gicch+1TQ@mail.gmail.com>
From: Kazuho Oku <kazuhooku@gmail.com>
Date: Fri, 02 Sep 2016 06:33:34 +0900
Message-ID: <CANatvzwdmz4L06DbzxBgm1+YeVcpYXDck2QBEXoytHfoJBDzZg@mail.gmail.com>
To: Tom Bergan <tombergan@chromium.org>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Subject: Re: Experiences with HTTP/2 server push
Archived-At: <http://www.w3.org/mid/CANatvzwdmz4L06DbzxBgm1+YeVcpYXDck2QBEXoytHfoJBDzZg@mail.gmail.com>

Hi,

Thank you for your response.

2016-09-02 1:38 GMT+09:00 Tom Bergan <tombergan@chromium.org>:
> Thanks for the feedback and link to that workshop talk! A few comments
> inline.
>
> On Wed, Aug 31, 2016 at 9:57 PM, Kazuho Oku <kazuhooku@gmail.com> wrote:
>>
>> Consider the case where a large HTML file that loads a CSS file is sent
>> over the wire. In a typical implementation, the server will pass a block
>> of HTML much larger than INITCWND to the TCP stack before recognizing
>> the request for the CSS. So the client would need to wait for multiple
>> RTTs before starting to receive the CSS.
>
>
> Unrelated to your above comment -- I think servers should use a higher
> initcwnd with H2, and I know that some servers do this. The experiments in
> our doc used Linux's default initcwnd (10 packets). If you compare that to
> H1, where browsers use 6 concurrent connections, the effective initcwnd for
> H1 is 60 packets (well, not exactly, since the browser only makes one
> request initially, but as soon as the browser starts making additional
> requests, cwnd effectively grows much faster than it would with a single
> connection).
>
>>
>> That said, as discussed at the workshop, it is possible to implement an
>> HTTP/2 server that is not affected by head-of-line blocking (HoB)
>> between the different streams (see
>> https://github.com/HTTPWorkshop/workshop2016/blob/master/talks/tcpprog.pdf).
>>
>> I would suggest that regardless of whether or not push is used, server
>> implementors should consider adopting such an approach to minimize the
>> impact of HoB.
>
>
> This is really interesting. To summarize: the idea is to use getsockopt to
> compute the number of available bytes in cwnd so that sizeof(kernel buffer)
> = cwnd. I rejected this idea without thinking about it much because it
> seemed like it would increase kernel/user round-trips and perform poorly in
> bursty conditions. But, your idea to restrict this optimization to cases
> where it matters most makes sense. Do you have performance measurements of
> this idea under heavy load?

Unfortunately not.

I agree that it would be interesting to collect metrics based on real
workload, both on the client side and the server side.

OTOH, let me note that since we enable the optimization only for
connections whose RTT is substantially higher than the time spent by a
single iteration of the event loop, we expect no performance penalty
when facing a burst. The server would simply switch to the ordinary
way.
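To make the idea concrete, here is a minimal sketch of the
poll-then-write sizing from the tcpprog talk. It assumes Linux's
struct tcp_info field layout; the function name and fallback value are
illustrative, not H2O's actual API:

```python
import socket
import struct

def cwnd_limited_write_size(sock, fallback=65536):
    """Cap the next write() to roughly what fits in cwnd, so unsent
    data queues in the server process rather than in the kernel send
    buffer. A server would enable this only when the connection RTT
    clearly exceeds one event-loop iteration; otherwise it falls back
    to ordinary buffered writes."""
    try:
        # struct tcp_info; 104 bytes covers the fields we read below.
        ti = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    except (AttributeError, OSError):
        return fallback  # optimization unavailable on this platform
    snd_mss = struct.unpack_from("I", ti, 16)[0]    # tcpi_snd_mss (bytes)
    unacked = struct.unpack_from("I", ti, 24)[0]    # tcpi_unacked (packets)
    snd_cwnd = struct.unpack_from("I", ti, 80)[0]   # tcpi_snd_cwnd (packets)
    free_packets = max(snd_cwnd - unacked, 0)
    # Permit at least one MSS so the connection keeps making progress.
    return max(free_packets * snd_mss, snd_mss) or fallback
```

The offsets above match the layout documented in linux/tcp.h; a
production server would of course use the C struct directly rather
than unpacking by hand.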

> Are you using TCP_NOTSENT_LOWAT for cases where
> the optimization cannot be used?

No. I'm not sure whether restricting the amount of unsent data to a
fixed value is generally a good thing, or whether it has a practical
impact on performance.

Personally, for connections that have left the slow-start phase, I
prefer an amount calculated in proportion to the current CWND value,
which IIRC is the default behavior of Linux.
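For reference, the fixed-value capping being discussed is a one-line
socket option on Linux. A small sketch (the helper name and the 16 KB
limit are illustrative; 25 is the Linux value of TCP_NOTSENT_LOWAT,
used when the Python build does not export the constant):

```python
import socket

# Linux defines TCP_NOTSENT_LOWAT as 25 in linux/tcp.h.
TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)

def cap_unsent_bytes(sock, limit=16384):
    """Ask the kernel to stop reporting the socket as writable once
    more than `limit` bytes sit unsent in the send buffer. Returns
    True if the option was applied, False if unsupported."""
    try:
        sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, limit)
        return True
    except OSError:
        return False
```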

>>
>> It should also be noted that with QUIC such HoB would not be an issue
>> since there would no longer be a send buffer within the kernel.
>
>
> Yep, this is definitely an advantage of QUIC.
>
>> "Rule 2: Push Resources in the Right Order"
>>
>> My take is that the issue can / should be solved by clients sending
>> PRIORITY frames for pushed resources when they observe how the
>> resources are used, and that until then servers should schedule the
>> pushed streams separately from the client-driven prioritization tree
>> (built by using the PRIORITY frames).
>>
>> Please refer to the discussion in the other thread for details:
>> https://lists.w3.org/Archives/Public/ietf-http-wg/2016JulSep/0453.html
>
>
> To make sure I understand the idea: Suppose you send HTML, then push
> resources X and Y. You will continue pushing X and Y until you get requests
> from the client, at which point you switch to serving requests made by the
> client (which may or may not include X and Y, as the client may not know
> about those resources yet, depending on what you decided to push). These
> client requests are served via the client-driven priority tree.
>
> Is that right? If so, you've basically implemented rule #1

Actually not.

My interpretation of rule #1 (or the solution proposed for rule #1)
was that it discusses the impact of TCP-level head-of-line blocking,
whereas rule #2 seemed to discuss the issues caused by pushed streams
not being appropriately prioritized against the pulled streams.

And the solution for rule #2 that I revisited here was for a server to
prioritize _some_ of the pushed streams outside the client-driven
priority tree.

I am not copy-pasting the scheme described in
https://lists.w3.org/Archives/Public/ietf-http-wg/2016JulSep/0453.html
for fear that doing so might lose context, but as an example, it would
go like this.

Suppose you are sending HTML (in response to a pull), as well as
pushing two asset files: one is CSS and one is an image.

Of the two assets, it is fair for a server to anticipate that the CSS
is likely to block the rendering of the HTML. Therefore, the server
sends the CSS before the HTML (but does not send a PRIORITY frame for
the CSS, since the PRIORITY frame is a tool for controlling
client-driven prioritization). OTOH, an image is not likely to block
the rendering. Therefore, it is scheduled as specified by the HTTP/2
specification (so that it would be sent after the HTML).

This out-of-client-driven-prioritization-tree scheduling should be
performed until the server receives a PRIORITY frame adjusting the
precedence of a pushed stream. At that point, the server should
reprioritize the pushed stream (i.e. the CSS) if it considers the
client's knowledge of how the streams should be prioritized superior
to its own.
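As a toy model of that scheme (the class and the FIFO standing in for
the priority tree are simplifications for illustration, not H2O's
actual internals): blocking pushes jump ahead of everything else until
a PRIORITY frame for that stream hands control back to the
client-driven tree.

```python
from collections import deque

class PushScheduler:
    """Toy out-of-tree scheduler: blocking pushed assets (e.g. CSS)
    are sent before streams in the client-driven tree; a PRIORITY
    frame for a pushed stream moves it back under client control."""

    BLOCKING_TYPES = ("text/css", "application/javascript")

    def __init__(self):
        self.before_html = deque()  # pushed streams sent ahead of the HTML
        self.tree = deque()         # client-driven tree, simplified to FIFO

    def push(self, stream, content_type):
        # Anticipated render-blocking types jump the queue;
        # everything else follows normal HTTP/2 scheduling.
        if content_type in self.BLOCKING_TYPES:
            self.before_html.append(stream)
        else:
            self.tree.append(stream)

    def on_priority(self, stream):
        # The client has expressed how it wants this pushed stream
        # ranked: defer to the client-driven tree from now on.
        if stream in self.before_html:
            self.before_html.remove(stream)
            self.tree.append(stream)

    def next_stream(self):
        if self.before_html:
            return self.before_html.popleft()
        return self.tree.popleft() if self.tree else None
```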

> -- the push lasts
> while the network is idle, then you switch to serving client requests
> afterwards. It's nice to see that we came to the same high-level conclusion
> :-). But, I like the way you've phrased the problem. Instead of computing a
> priori how much data you should push, which we suggested, you start
> pushing an arbitrary number of things, then you'll automatically stop
> pushing as soon as you get the next client request.
>
> One more clarification: what happens when the client loads two pages
> concurrently and the network is effectively never idle? I assume push won't
> happen in this case?
>
> Next, I think you're arguing that push order doesn't matter as long as you
> have a solution for HoB. I don't think this is exactly right. Specifically:
>
> - Head-of-line blocking (HoB) can happen due to network-level bufferbloat.
> The above solution only applies to kernel-level bufferbloat. You need some
> kind of bandwidth-based pacing to avoid network-level bufferbloat.

That's correct.

OTOH, I would like to point out that the issue is not specific to push.

A client would issue requests in the order it notices the URLs that it
should fetch. And it cannot update the priority of the links found in
LRP (link rel=preload) headers until it observes how the resources are
actually used.

So if the preload links include low-priority assets, bufferbloat can
(or will) cause issues for both pull and push.

> - If you're pushing X and Y, and you know the client will use X before Y,
> you should push in that order. The opposite order is sub-optimal and can
> eliminate the benefit of push in some cases, even ignoring HoB.

Agreed.

And my understanding is that both Apache and H2O do this, based on the
content-type of the pushed response.

Just having two (or three) levels of precedence (send before HTML vs.
send after HTML vs. send along with HTML) is not as complex as what
HTTP/2's prioritization tree provides, but I think it is sufficient
for optimizing the time spent until first render.

The best way to prioritize the blocking assets (i.e. an asset that
needs to be sent before the HTML, e.g. CSS) is where Apache and H2O
disagree. My proposal (and what H2O does in that respect) is that a
server should schedule such pushed streams outside the prioritization
tree (i.e. my response to rule #2).

>> As a server implementor, I have always dreamt of cancelling a push
>> after sending a PUSH_PROMISE. In case a resource we want to push
>> exists on a dedicated cache that cannot be reached synchronously from
>> the HTTP/2 server, the server needs to send the PUSH_PROMISE without
>> the guarantee that it would be able to push a valid response.
>>
>> It would be great if we could have an error code that can be sent
>> using RST_STREAM to notify the client that it should discard the
>> PUSH_PROMISE being sent, and issue a request by itself.
>
>
> Yes, +1. I've wanted this feature. It sucks that the client won't reissue
> the requests if they get a RST_STREAM. (At least, Chrome won't do this, I
> don't know about other browsers.)



-- 
Kazuho Oku