Re: Experiences with HTTP/2 server push
Tom Bergan <tombergan@chromium.org> Thu, 01 September 2016 16:43 UTC
From: Tom Bergan <tombergan@chromium.org>
Date: Thu, 01 Sep 2016 09:38:14 -0700
Message-ID: <CA+3+x5F8M+xbH2YjD9m87WPOyQPCTVNV8evBJQHn9gicch+1TQ@mail.gmail.com>
To: Kazuho Oku <kazuhooku@gmail.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Subject: Re: Experiences with HTTP/2 server push
Archived-At: <http://www.w3.org/mid/CA+3+x5F8M+xbH2YjD9m87WPOyQPCTVNV8evBJQHn9gicch+1TQ@mail.gmail.com>
Thanks for the feedback and link to that workshop talk! A few comments inline.

On Wed, Aug 31, 2016 at 9:57 PM, Kazuho Oku <kazuhooku@gmail.com> wrote:

> Consider the case where a large HTML that loads a CSS is sent over the
> wire. In a typical implementation, the server will pass a block of
> HTML much larger than INITCWND to the TCP stack before recognizing the
> request for CSS. So the client would need to wait for multiple RTTs
> before starting to receive the CSS.

Unrelated to your above comment -- I think servers should use a higher initcwnd with H2, and I know that some servers do this. The experiments in our doc used Linux's default initcwnd (10 packets). If you compare that to H1, where browsers use 6 concurrent connections, the effective initcwnd for H1 is 60 packets. (Well, not exactly, since the browser only makes one request initially, but as soon as the browser starts making additional requests, cwnd effectively grows much faster than it would with a single connection.)

> That said, as discussed at the workshop, it is possible to implement a
> HTTP/2 server that does not get affected by HoB between the different
> streams (see
> https://github.com/HTTPWorkshop/workshop2016/blob/master/talks/tcpprog.pdf).
>
> I would suggest that regardless of whether or not push is used, server
> implementors should consider adopting such approach to minimize the
> impact of HoB.

This is really interesting. To summarize: the idea is to use getsockopt to compute the number of available bytes in cwnd so that sizeof(kernel buffer) = cwnd. I rejected this idea without thinking about it much because it seemed like it would increase kernel/user round-trips and perform poorly in bursty conditions. But your idea to restrict this optimization to the cases where it matters most makes sense. Do you have performance measurements of this idea under heavy load? Are you using TCP_NOTSENT_LOWAT for cases where the optimization cannot be used?
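For concreteness, here is how I picture the "size writes to cwnd" trick. This is a rough Linux-only sketch, not what any shipping server actually does: the TCP_INFO option number and the struct tcp_info field offsets are Linux-specific assumptions.

```python
import socket
import struct

TCP_INFO = 11  # Linux-only socket option number

def cwnd_budget(sock):
    """Bytes we can hand to the kernel right now without the send buffer
    outgrowing cwnd. Offsets follow the Linux `struct tcp_info` layout:
    8 one-byte fields followed by 24 u32 fields (104 bytes total)."""
    buf = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)
    u32 = struct.unpack("=8B24I", buf[:104])[8:]
    snd_mss, unacked, snd_cwnd = u32[2], u32[4], u32[18]
    capacity = snd_cwnd * snd_mss  # cwnd is counted in packets
    inflight = unacked * snd_mss   # bytes sent but not yet acked
    return max(capacity - inflight, 0)

if __name__ == "__main__":
    # Demo on a loopback connection: a server loop would write at most
    # cwnd_budget() bytes per pass, re-querying before each write.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    print("bytes writable within cwnd:", cwnd_budget(cli))
    cli.close(); conn.close(); srv.close()
```

The obvious cost is an extra getsockopt per write, which is why restricting it to the cases where it matters (and falling back to TCP_NOTSENT_LOWAT elsewhere) seems like the right call.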
> It should also be noted that with QUIC such HoB would not be an issue
> since there would no longer be a send buffer within the kernel.

Yep, this is definitely an advantage of QUIC.

"Rule 2: Push Resources in the Right Order"

> My take is that the issue can / should be solved by clients sending
> PRIORITY frames for pushed resources when they observe how the
> resources are used, and that until then servers should schedule the
> pushed streams separately from the client-driven prioritization tree
> (built by using the PRIORITY frames).
>
> Please refer to the discussion in the other thread for details:
> https://lists.w3.org/Archives/Public/ietf-http-wg/2016JulSep/0453.html

To make sure I understand the idea: suppose you send HTML, then push resources X and Y. You continue pushing X and Y until you get requests from the client, at which point you switch to serving the requests made by the client (which may or may not include X and Y, as the client may not know about those resources yet, depending on what you decided to push). These client requests are served via the client-driven priority tree. Is that right?

If so, you've basically implemented rule #1 -- the push lasts while the network is idle, then you switch to serving client requests afterwards. It's nice to see that we came to the same high-level conclusion :-). And I like the way you've phrased the problem: instead of computing a priori how much data you should push, which we suggested, you start pushing an arbitrary number of things, then automatically stop pushing as soon as you get the next client request.

One more clarification: what happens when the client loads two pages concurrently and the network is effectively never idle? I assume push won't happen in this case?

Next, I think you're arguing that push order doesn't matter as long as you have a solution for HoB. I don't think this is exactly right.
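As an aside, here is how I picture your switch-over policy in code -- a toy model for illustration only, surely not what your server actually does; all the names are hypothetical:

```python
from collections import deque

class PushScheduler:
    """Toy model of the policy discussed above: pushed streams are served
    only while no client request is pending; the first client request
    preempts any pushes that have not finished."""

    def __init__(self):
        self.pushes = deque()    # server-initiated (pushed) streams
        self.requests = deque()  # client-requested streams, in priority-tree order

    def promise_push(self, stream_id):
        self.pushes.append(stream_id)

    def client_request(self, stream_id):
        self.requests.append(stream_id)

    def next_stream(self):
        """Pick the next stream to write from: client requests always win."""
        if self.requests:
            return self.requests.popleft()
        if self.pushes:
            return self.pushes.popleft()
        return None
```

So if the server pushes X and Y after the HTML and a request for Z then arrives, Z is served before the remaining push of Y. Anyway, back to why I think order still matters.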
Specifically:

- Head-of-line blocking (HoB) can happen due to network-level bufferbloat. The above solution only applies to kernel-level bufferbloat. You need some kind of bandwidth-based pacing to avoid network-level bufferbloat.

- If you're pushing X and Y, and you know the client will use X before Y, you should push in that order. The opposite order is sub-optimal and can eliminate the benefit of push in some cases, even ignoring HoB.

> As a server implementor, I have always dreamt of cancelling a push
> after sending a PUSH_PROMISE. In case a resource we want to push
> exists on a dedicated cache that cannot be reached synchronously from
> the HTTP/2 server, the server needs to send PUSH_PROMISE without the
> guarantee that it would be able to push a valid response.
>
> It would be great if we could have an error code that can be sent
> using RST_STREAM to notify the client that it should discard the
> PUSH_PROMISE being sent, and issue a request by itself.

Yes, +1. I've wanted this feature. It sucks that the client won't reissue the requests if they get a RST_STREAM. (At least, Chrome won't do this; I don't know about other browsers.)
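Emitting the frame itself is the easy part; what's missing is an error code with "discard the promise and re-request" semantics. A sketch of the RFC 7540 wire format, with CANCEL (0x8) standing in for the hypothetical new code:

```python
import struct

RST_STREAM = 0x3  # frame type per RFC 7540 section 6.4
CANCEL = 0x8      # stand-in; the proposal would define a new "re-request" code

def rst_stream_frame(stream_id, error_code):
    """Serialize an RST_STREAM frame: 9-octet header (24-bit length,
    8-bit type, 8-bit flags, 31-bit stream id) + 32-bit error code."""
    payload = struct.pack(">I", error_code)
    header = (struct.pack(">I", len(payload))[1:]           # 24-bit length
              + bytes([RST_STREAM, 0])                      # type, no flags
              + struct.pack(">I", stream_id & 0x7FFFFFFF))  # R bit clear
    return header + payload
```

Today, sending this on a promised stream just kills the push; the browser has no signal that it should turn around and request the resource itself, which is exactly the gap the proposed error code would fill.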