Re: Experiences with HTTP/2 server push

Tom Bergan <tombergan@chromium.org> Thu, 01 September 2016 17:31 UTC

From: Tom Bergan <tombergan@chromium.org>
Date: Thu, 01 Sep 2016 10:26:08 -0700
X-Gmail-Original-Message-ID: <CA+3+x5FQqB8vfrgq-5yZyLn11OgYMooQAi5veTugu5CRHRZB+w@mail.gmail.com>
Message-ID: <CA+3+x5FQqB8vfrgq-5yZyLn11OgYMooQAi5veTugu5CRHRZB+w@mail.gmail.com>
To: Kazuho Oku <kazuhooku@gmail.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Subject: Re: Experiences with HTTP/2 server push
Archived-At: <http://www.w3.org/mid/CA+3+x5FQqB8vfrgq-5yZyLn11OgYMooQAi5veTugu5CRHRZB+w@mail.gmail.com>

On Thu, Sep 1, 2016 at 9:38 AM, Tom Bergan <tombergan@chromium.org> wrote:

> Thanks for the feedback and link to that workshop talk! A few comments
> inline.
>
> On Wed, Aug 31, 2016 at 9:57 PM, Kazuho Oku <kazuhooku@gmail.com> wrote:
>>
>> Consider the case where a large HTML file that loads a CSS file is sent
>> over the wire. In a typical implementation, the server will pass a block of
>> HTML much larger than INITCWND to the TCP stack before recognizing the
>> request for the CSS. So the client would need to wait for multiple RTTs
>> before starting to receive the CSS.
>>
>
> Unrelated to your above comment -- I think servers should use a higher
> initcwnd with H2, and I know that some servers do this. The experiments in
> our doc used Linux's default initcwnd (10 packets). If you compare that to
> H1, where browsers use 6 concurrent connections, the effective initcwnd for
> H1 is 60 packets (well, not exactly, since the browser only makes one
> request initially, but as soon as the browser starts making additional
> requests, cwnd effectively grows much faster than it would with a single
> connection).
>
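For reference, raising initcwnd on Linux is usually a per-route setting via
iproute2. A sketch only, with placeholder gateway/device/window values rather
than recommendations:

  # 192.0.2.1, eth0, and 32 are placeholders; pick values for your deployment.
  ip route change default via 192.0.2.1 dev eth0 initcwnd 32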
>
>> That said, as discussed at the workshop, it is possible to implement an
>> HTTP/2 server that does not get affected by HoB between the different
>> streams (see
>> https://github.com/HTTPWorkshop/workshop2016/blob/master/talks/tcpprog.pdf).
>>
>> I would suggest that regardless of whether or not push is used, server
>> implementors should consider adopting such an approach to minimize the
>> impact of HoB.
>>
>
> This is really interesting. To summarize: the idea is to use getsockopt to
> compute the number of available bytes in cwnd so that sizeof(kernel buffer)
> = cwnd. I rejected this idea without thinking about it much because it
> seemed like it would increase kernel/user round-trips and perform poorly in
> bursty conditions. But, your idea to restrict this optimization to cases
> where it matters most makes sense. Do you have performance measurements of
> this idea under heavy load? Are you using TCP_NOTSENT_LOWAT for cases where
> the optimization cannot be used?
>
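For anyone following along, here is my rough understanding of that approach as
a sketch (Linux-specific, untested, the arithmetic is only an estimate, and the
helper name is mine, not from any real server):

  /* Rough sketch: estimate how many bytes can be handed to the kernel right
   * now without exceeding cwnd, so DATA for one stream doesn't sit in the
   * send buffer ahead of later, higher-priority streams. On error (or when
   * nothing fits), the caller should simply not write yet. */
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <sys/ioctl.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <linux/sockios.h>   /* SIOCOUTQNSD */

  static ssize_t cwnd_budget(int fd)
  {
      struct tcp_info ti;
      socklen_t len = sizeof(ti);
      int unsent = 0;   /* bytes buffered in the kernel but not yet sent */

      if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) != 0)
          return -1;
      if (ioctl(fd, SIOCOUTQNSD, &unsent) != 0)
          return -1;

      /* Treat tcpi_unacked (packets in flight) as the part of cwnd that is
       * already used; everything here is an approximation in MSS units. */
      long cwnd_bytes     = (long)ti.tcpi_snd_cwnd * ti.tcpi_snd_mss;
      long inflight_bytes = (long)ti.tcpi_unacked  * ti.tcpi_snd_mss;

      return (ssize_t)(cwnd_bytes - inflight_bytes - unsent);
  }

The server would then write at most cwnd_budget(fd) bytes per round,
presumably falling back to TCP_NOTSENT_LOWAT when the extra syscalls aren't
worth it, which is what my question above was getting at.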
>
>> It should also be noted that with QUIC such HoB would not be an issue
>> since there would no longer be a send buffer within the kernel.
>>
>
> Yep, this is definitely an advantage of QUIC.
>
> "Rule 2: Push Resources in the Right Order"
>>
>> My take is that the issue can / should be solved by clients sending
>> PRIORITY frames for pushed resources when they observe how the
>> resources are used, and that until then servers should schedule the
>> pushed streams separately from the client-driven prioritization tree
>> (built by using the PRIORITY frames).
>>
>> Please refer to the discussion in the other thread for details:
>> https://lists.w3.org/Archives/Public/ietf-http-wg/2016JulSep/0453.html
>
>
> To make sure I understand the idea: Suppose you send HTML, then push
> resources X and Y. You will continue pushing X and Y until you get requests
> from the client, at which point you switch to serving requests made by the
> client (which may or may not include X and Y, as the client may not know
> about those resources yet, depending on what you decided to push). These
> client requests are served via the client-driven priority tree.
>
> Is that right? If so, you've basically implemented rule #1 -- the push
> lasts while the network is idle, then you switch to serving client requests
> afterwards. It's nice to see that we came to the same high-level conclusion
> :-). But, I like the way you've phrased the problem. Instead of computing a
> priori how much data you should push, which we suggested, you start
> pushing an arbitrary number of things, then you'll automatically stop
> pushing as soon as you get the next client request.
>
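If I've understood the policy correctly, the scheduling core would be
something like this (purely hypothetical types and names, not taken from any
real server):

  /* Sketch of the policy described above: pushed streams only fill
   * otherwise-idle time; client-requested streams always win. */
  struct stream;                        /* opaque, hypothetical */

  struct scheduler {
      struct stream *client_tree_head;  /* next stream in the client-driven
                                           priority tree, NULL if none */
      struct stream *push_queue_head;   /* pushed streams, server-chosen order */
  };

  /* Pick the stream to send the next DATA frame for. */
  static struct stream *next_stream(const struct scheduler *s)
  {
      if (s->client_tree_head != NULL)
          return s->client_tree_head;   /* a client request is pending */
      return s->push_queue_head;        /* otherwise keep pushing */
  }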

On second thought: this doesn't solve the full problem. What if you push X,
but the client starts requesting other resources because it doesn't know
about X yet? e.g., this might happen if X is a script loaded via
document.write and the browser hasn't evaluated that code yet. The priority
tree has a default position for X (RFC 7540, Section 5.3.5) and the server
cannot know if the client is happy with this default priority for X or if
the client hasn't corrected that priority because it doesn't know about X
yet.


> One more clarification: what happens when the client loads two pages
> concurrently and the network is effectively never idle? I assume push won't
> happen in this case?
>
> Next, I think you're arguing that push order doesn't matter as long as you
> have a solution for HoB. I don't think this is exactly right. Specifically:
>
> - Head-of-line blocking (HoB) can happen due to network-level bufferbloat.
> The above solution only applies to kernel-level bufferbloat. You need some
> kind of bandwidth-based pacing to avoid network-level bufferbloat (rough
> sketch after this list).
>
> - If you're pushing X and Y, and you know the client will use X before Y,
> you should push in that order. The opposite order is sub-optimal and can
> eliminate the benefit of push in some cases, even ignoring HoB.
>
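To be concrete about the pacing point in the first bullet (sketch only;
Linux-specific, assumes the fq qdisc is installed, and the bandwidth estimate
itself has to come from somewhere else):

  /* Ask the kernel to pace this connection at (an estimate of) the path
   * bandwidth, so bursts of pushed data don't pile up in router buffers. */
  #include <sys/socket.h>

  #ifndef SO_MAX_PACING_RATE
  #define SO_MAX_PACING_RATE 47   /* value from the Linux uapi headers */
  #endif

  static int set_pacing_rate(int fd, unsigned int bytes_per_sec)
  {
      return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                        &bytes_per_sec, sizeof(bytes_per_sec));
  }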
>> As a server implementor, I have always dreamt of cancelling a push
>> after sending a PUSH_PROMISE. In case a resource we want to push
>> exists on a dedicated cache that cannot be reached synchronously from
>> the HTTP/2 server, the server needs to send PUSH_PROMISE without the
>> guarantee that it would be able to push a valid response.
>>
>> It would be great if we could have an error code that can be sent
>> using RST_STREAM to notify the client that it should discard the
>> PUSH_PROMISE being sent, and issue a request by itself.
>>
>
> Yes, +1. I've wanted this feature. It sucks that the client won't reissue
> the request if it gets a RST_STREAM. (At least, Chrome won't do this; I
> don't know about other browsers.)
>
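To spell out the flow we would both like to see (the error code name below is
made up; nothing like it exists today):

  server: PUSH_PROMISE on stream 1, promising stream 2 for /style.css
  server: ...discovers the dedicated cache for /style.css is unreachable...
  server: RST_STREAM on stream 2, error code = (hypothetical) PUSH_UNAVAILABLE
  client: issues its own GET /style.css instead of treating it as a hard failure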