Re: HTTP/2 and TCP CWND

William Chan (陈智昌) <willchan@chromium.org> Fri, 26 April 2013 02:11 UTC

Date: Thu, 25 Apr 2013 19:09:27 -0700
Message-ID: <CAA4WUYgpmYyBu337vsjhh=9toTNrb-nT9MQkk7j3inOs4GZwCA@mail.gmail.com>
From: "William Chan (陈智昌)" <willchan@chromium.org>
To: Peter Lepeska <bizzbyster@gmail.com>
Cc: Roberto Peon <grmocg@gmail.com>, "Eggert, Lars" <lars@netapp.com>, Gabriel Montenegro <Gabriel.Montenegro@microsoft.com>, "Simpson, Robby (GE Energy Management)" <robby.simpson@ge.com>, Eliot Lear <lear@cisco.com>, Robert Collins <robertc@squid-cache.org>, Jitu Padhye <padhye@microsoft.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>, "Brian Raymor (MS OPEN TECH)" <Brian.Raymor@microsoft.com>, Rob Trace <Rob.Trace@microsoft.com>, Dave Thaler <dthaler@microsoft.com>, Martin Thomson <martin.thomson@skype.net>, Martin Stiemerling <martin.stiemerling@neclab.eu>
Subject: Re: HTTP/2 and TCP CWND
Archived-At: <http://www.w3.org/mid/CAA4WUYgpmYyBu337vsjhh=9toTNrb-nT9MQkk7j3inOs4GZwCA@mail.gmail.com>

Cwnd is an internal implementation state variable maintained on a
per-connection basis. It's never explicitly advertised on the wire; you can
only guess at it from the number of packets in flight. I've already explained
that the SSL handshake does not saturate IW10, due to Google's smallish
certificate chain. Can you clarify what you mean by send window? Did you mean
receive window? And were you accounting for window scaling? Please clarify
further; I don't see the 2*MSS that you're talking about.
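
If it helps, here's roughly how I eyeball bytes-in-flight from a capture.
This is just a quick sketch (not anything from Chrome), using scapy; the
trace filename and server address are placeholders, and it ignores SACK,
loss, and sequence wraparound:

from scapy.all import rdpcap, IP, TCP

SERVER = "203.0.113.10"  # placeholder: the server side of your trace

highest_seq = 0      # highest sequence byte the server has sent
highest_ack = 0      # highest byte the client has acked
peak_in_flight = 0

for pkt in rdpcap("trace.pcap"):  # placeholder: a single-connection trace
    if IP not in pkt or TCP not in pkt:
        continue
    tcp = pkt[TCP]
    if pkt[IP].src == SERVER and len(tcp.payload) > 0:
        # data from the server: advance the send high-water mark
        highest_seq = max(highest_seq, tcp.seq + len(tcp.payload))
    elif pkt[IP].dst == SERVER and tcp.flags & 0x10:  # client ACK
        highest_ack = max(highest_ack, tcp.ack)
    if highest_seq and highest_ack:
        peak_in_flight = max(peak_in_flight, highest_seq - highest_ack)

print("peak bytes in flight (rough ceiling on cwnd):", peak_in_flight)

Keep in mind the peak only bounds cwnd when the sender is actually
congestion-window-limited, not application-limited -- which is exactly why
the SSL handshake doesn't show you IW10.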

Why are you saying SPDY is slower? Please, you don't actually think SPDY is
slower here, do you? :P Hehe, J/K, Chrome's got some issues here for sure.
What's probably confusing you, though, is that IE9 fires the load event much
earlier, so if you evaluate strictly by when the load event fires, IE9
certainly looks faster. But look at when all these resources actually load:
SPDY is rockin' nearly a second earlier (~5.8s compared to ~6.5s for IE9).
Don't let the browsers fool you; look at the waterfalls yourself to see when
the resource downloads complete.

You can look at the tcpdumps if you re-run WPT with tcpdumps enabled. Check
out my run: http://www.webpagetest.org/result/130426_NX_1ZA/. I've captured
the tcpdump for you.

Let's not get sidetracked analyzing this waterfall. IW10 is a reality, and
highly parallelized web sites are out there. An effective initcwnd >= 120 is
not uncommon (see the arithmetic sketch below). If you need more examples, I
can provide more URLs offlist (I don't want to derail this thread).
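
To spell out the arithmetic, here's a quick back-of-the-envelope sketch
(the MSS and shard count are illustrative, not measured from any particular
site):

IW = 10                 # IW10: 10-segment initial window (RFC 6928)
MSS = 1430              # illustrative MSS in bytes
CONNS_PER_HOST = 6      # typical browser per-host connection limit
SHARDS = 4              # illustrative 4-way domain sharding

connections = SHARDS * CONNS_PER_HOST        # 24 connections
effective_initcwnd = connections * IW        # 240 segments
first_rtt_bytes = effective_initcwnd * MSS   # 343,200 bytes (~343 KB)

print(connections, effective_initcwnd, first_rtt_bytes)

Even 2-way sharding gets you 12 connections and an effective initcwnd of
120 before a single ACK comes back.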

On Thu, Apr 25, 2013 at 12:16 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:

> I don't see evidence of IW10 in these packet captures. In fact, I'm not
> seeing the send window increase at all from the initial 2 MSS --
> http://cloudshark.org/captures/35ff23aa38e1?filter=tcp.stream%20eq%202.
> These connections are running very slowly.
>
> This effect might be what's killing SPDY (Chrome) performance for the same
> site over DSL, which is more than 2x slower than IE --
> http://www.webpagetest.org/result/130425_VZ_ZC1/1/details/. My guess is
> that because SPDY operates over a single connection, it's SOL when that
> connection gets stuck in a low gear like these appear to be. IE9 uses too
> many connections and suffers from retransmissions, but it is more robust to
> this effect b/c it has such high TCP connection concurrency. Again, this is
> just a guess, since I can't look at the WPT captures from the Chrome test.
>
> Peter
>
> On Apr 25, 2013, at 12:21 PM, William Chan (陈智昌) <willchan@chromium.org>
> wrote:
>
> There's this small internet company that has a cute kitten photo search
> product, check it out: https://www.google.com/search?q=kittens&tbm=isch.
>
> I just kicked off a WebPageTest run so you can analyze it for yourself:
> http://www.webpagetest.org/result/130425_7X_TQT/
> It shards 4 ways, with 6 connections per host shard. That's 24 connections
> with IW10. That's effective initcwnd=240. And I'm not even mentioning the
> other connections it has open. Check out how freakishly long the SSL
> handshakes take when you model this on a "DSL" connection with dummynet.
> Ouch.
> You can see how it triggers wonderful TCP level behavior in
> http://cloudshark.org/captures/20818577e6b9/graphs/~?filters=tcp.analysis.retransmission%2C%21%28tcp.analysis.retransmission%29+%7Bother%7D.
> That's a graph of the retransmitted packets vs non-retransmitted.
>
> Despite doing 4-way sharding, this example isn't as bad as it could be,
> because the SSL SERVER_HELLO and CERTIFICATE messages don't use the full
> available IW10 cwnd. When you examine sites that don't use HTTPS and blast
> out an image object, you see this get way worse. I've got a lot of those
> URLs if you want to see this too.
>
> Patrick and I have been discussing this off and on and how to ameliorate
> this in our respective browsers. We've got different takes, and I'm
> watching his work closely to see how it turns out :)
>
>
> On Thu, Apr 25, 2013 at 8:58 AM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>
>> I have not seen initcwnds of 120+. Can you send me a URL that would have
>> that behavior?
>>
>> Thanks,
>>
>> Peter
>>
>> On Apr 24, 2013, at 3:27 PM, William Chan (陈智昌) <willchan@chromium.org>
>> wrote:
>>
>> On Wed, Apr 24, 2013 at 11:52 AM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>>
>>>
>>> On Apr 24, 2013, at 12:36 PM, William Chan (陈智昌) <willchan@chromium.org>
>>> wrote:
>>>
>>> On Wed, Apr 24, 2013 at 8:40 AM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>>>
>>>> Not sure this has been proposed before, but better than caching would
>>>> be dynamic initial CWND based on web server object size hinting.
>>>>
>>>> Web servers often know the size of the object that will be sent to the
>>>> browser. The web server therefore can help the transport make smart initial
>>>> CWND decisions. For instance, if an object is less than 20KB, which is true
>>>> for the majority of objects on web pages, the web server could tell the
>>>> transport to increase the CWND to a size that would allow the object to be
>>>> sent in the initial window.
>>>>
>>>
>>> In the HTTP/2 case where we often are multiplexing, this doesn't seem to
>>> make as much sense. Also, I'm not sure that it's a reasonable argument to
>>> select initcwnd in the absence of any congestion information... or were you
>>> suggesting merely tweaking the initcwnd a little bit if that little bit
>>> would make a difference in terms of fitting the whole object in the
>>> initcwnd?
>>>
>>>
>>> Right. A small number of multiplexed connections transfer less of a
>>> given page's data in slow start, so this will have less impact for those
>>> connections. However, it's worth noting that often the first object
>>> requested over the multiplexed channel will be the root object alone, and
>>> of course the number of round trips to download the root object directly
>>> impacts page load time.
>>>
>>
>> We should move away from this assumption that the first request is for
>> the root object. I've been advising companies on how to do SPDY
>> deployments, and a common scenario is origin server hosting the root doc +
>> SPDY capable CDN for the subresources (primarily images served on the
>> edge). For these CDNs, they're going to serve a burst of traffic
>> immediately, and those subresources often have high impact on the above the
>> fold perceived latency (in many of today's websites, images form a big part
>> of the initial viewport's content, so serving these images quickly is
>> vital). In today's non-SPDY / HTTP2 case, they just domain shard and do 6 *
>> [2-4] sharded hosts, for 12-24 connections with IW10, starting out with
>> effective initcwnds of 120+. They are gaming initcwnd to the benefit of
>> their users that don't have a congested path, and to the severe detriment
>> of users whose paths cannot handle such high bursts. This situation sucks.
>>
>>
>>>
>>> Caching attempts to reuse old congestion information, although it has
>>> been reasonably pointed out that the validity of that information is
>>> questionable. It's an open research question as far as I'm concerned, and
>>> I'd love to see any data people had.
>>>
>>>
>>>>
>>>> For larger objects, the benefit of a large CWND is minimal so the web
>>>> server could tell the transport to use the default and let the connection
>>>> ramp slowly.
>>>>
>>>
>>> I'm not sure this makes sense. GMail and Google+ and I'm sure other
>>> large web apps have rather large scripts and stylesheets, but they still
>>> care about their initial page load latency. Perhaps you're making the
>>> assumption that large objects imply the user does not have interactivity
>>> / low-latency expectations? If so, that's invalid. Those roundtrips still
>>> matter, and I can tell you our Google app teams work very hard to eliminate
>>> them. Or maybe your definition of large is larger than what I'm thinking.
>>>
>>>
>>> The threshold is tunable. My point here is if the TCP connection is
>>> going to be used to download a 100 MB file, or stream a video, then slow
>>> start has a negligible impact on overall download time for the file.
>>>
>>
>> Sure, if you're doing non-interactive large data transfers, then the slow
>> start latency isn't going to matter much. I don't view that conversation as
>> very interesting, and no one's agitating for change there. The contentious
>> and more interesting discussion is how to safely, yet quickly start up TCP
>> connections for interactive bursty traffic like web browsing. I include
>> video web sites like YouTube in that category, even if their objects are
>> large, since the time to start viewing the video is still important.
>>
>>
>>>
>>>
>>>
>>>> Peter
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Apr 15, 2013 at 8:16 PM, Roberto Peon <grmocg@gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Apr 15, 2013 at 4:03 PM, Eggert, Lars <lars@netapp.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>
>>>>>> On Apr 15, 2013, at 15:56, Roberto Peon <grmocg@gmail.com> wrote:
>>>>>> > The interesting thing about the client mucking with this data is
>>>>>> > that, so long as the server's TCP implementation is smart enough
>>>>>> > not to kill itself (and some simple limits accomplish that), the
>>>>>> > only one the client harms is itself...
>>>>>>
>>>>>> I fail to see how you'd be able to achieve this. If the server uses a
>>>>>> CWND that is too large, it will inject a burst of packets into the network
>>>>>> that will overflow a queue somewhere. Unless you use WFQ or something
>>>>>> similar on all bottleneck queues (not generally possible), that burst will
>>>>>> likely cause packet loss to other flows, and will therefore impact them.
>>>>>>
>>>>>
>>>>> The most obvious way is that the server doesn't use a CWND which is
>>>>> larger than the largest currently active window to a similar RTT. The other
>>>>> obvious way is to limit it to something like 32, which is about what we'd
>>>>> see with the opening of a mere 3 regular HTTP connections! This at least
>>>>> makes the one connection competitive with the circumventions that HTTP/1.X
>>>>> currently exhibits.
>>>>>
>>>>>
>>>>>> TCP is a distributed resource sharing algorithm to allocate capacity
>>>>>> throughout a network. Although the rates for all flows are computed in
>>>>>> isolation, the effect of that computation is not limited to the flow in
>>>>>> question, because all flows share the same queues.
>>>>>>
>>>>>
>>>>> Yes, that is what I've been arguing w.r.t. the many connections that
>>>>> the application-layer currently opens :)
>>>>> It becomes a question of which dragon is actually most dangerous.
>>>>>
>>>>> -=R
>>>>>
>>>>>
>>>>>>
>>>>>> Lars
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>>
>
>