Peter Lepeska <> Wed, 24 April 2013 18:54 UTC

From: Peter Lepeska <>
Date: Wed, 24 Apr 2013 14:52:51 -0400
Cc: Roberto Peon <>, "Eggert, Lars" <>, Gabriel Montenegro <>, "Simpson, Robby (GE Energy Management)" <>, Eliot Lear <>, Robert Collins <>, Jitu Padhye <>, "" <>, "Brian Raymor (MS OPEN TECH)" <>, Rob Trace <>, Dave Thaler <>, Martin Thomson <>, Martin Stiemerling <>
To: William Chan (陈智昌) <>
Subject: Re: HTTP/2 and TCP CWND

On Apr 24, 2013, at 12:36 PM, William Chan (陈智昌) <> wrote:

> On Wed, Apr 24, 2013 at 8:40 AM, Peter Lepeska <> wrote:
>> Not sure this has been proposed before, but better than caching would be a dynamic initial CWND based on web-server object-size hinting.
>> Web servers often know the size of the object that will be sent to the browser. The web server therefore can help the transport make smart initial CWND decisions. For instance, if an object is less than 20 KB, which is true for the majority of objects on web pages, the web server could tell the transport to increase the CWND to a size that would allow the object to be sent in the initial window.
> In the HTTP/2 case, where we are often multiplexing, this doesn't seem to make as much sense. Also, I'm not sure that it's a reasonable argument to select initcwnd in the absence of any congestion information... or were you suggesting merely tweaking the initcwnd a little bit if that little bit would make a difference in terms of fitting the whole object in the initcwnd?

Right. A small number of multiplexed connections transfers less of a given page's data in slow start, so this will have less impact for those connections. However, it's worth noting that often the first object requested over the multiplexed channel will be the root object alone, and of course the number of round trips needed to download the root object directly impacts page load time.
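The object-size hint being discussed can be sketched in a few lines. This is a hypothetical illustration, not an API that exists anywhere: the MSS value, the 32-segment cap, and the function names are all assumptions chosen to match the numbers in the thread (RFC 6928's initial window of 10 segments, and objects under roughly 20 KB fitting in a modestly boosted window).

```python
# Hypothetical sketch of the object-size hint described above. None of
# these names correspond to a real TCP API; MSS, DEFAULT_IW, and
# HINT_CAP are illustrative constants.

MSS = 1460          # typical Ethernet-path maximum segment size, in bytes
DEFAULT_IW = 10     # RFC 6928 initial window, in segments
HINT_CAP = 32       # upper bound a cautious server might enforce

def segments_needed(object_size: int, mss: int = MSS) -> int:
    """Number of MSS-sized segments required to send the object."""
    return -(-object_size // mss)  # ceiling division

def initcwnd_hint(object_size: int) -> int:
    """Suggest an initial window: boost small objects so they fit in the
    first round trip; fall back to the default for large ones, where the
    extra slow-start round trips are a negligible share of transfer time."""
    needed = segments_needed(object_size)
    if needed <= HINT_CAP:
        return max(DEFAULT_IW, needed)
    return DEFAULT_IW

print(initcwnd_hint(20 * 1024))      # 20 KB object fits in 15 segments
print(initcwnd_hint(100 * 1024**2))  # 100 MB file: keep the default of 10
```

Under these assumptions, a 20 KB response gets a hint of 15 segments and completes in a single round trip, while the 100 MB download keeps the default window and ramps through ordinary slow start.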

> Caching attempts to reuse old congestion information, although it has been reasonably pointed out that the validity of that information is questionable. It's an open research question as far as I'm concerned, and I'd love to see any data people had.
>> For larger objects, the benefit of a large CWND is minimal, so the web server could tell the transport to use the default and let the connection ramp slowly.
> I'm not sure this makes sense. GMail and Google+ and I'm sure other large web apps have rather large scripts and stylesheets, but they still care about their initial page load latency. Perhaps you're making the assumption that large objects imply the user does not have interactivity / low-latency expectations? If so, that's invalid. Those round trips still matter, and I can tell you our Google app teams work very hard to eliminate them. Or maybe your definition of large is larger than what I'm thinking of.

The threshold is tunable. My point here is that if the TCP connection is going to be used to download a 100 MB file, or stream a video, then slow start has a negligible impact on overall download time for the file.
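That claim is easy to sanity-check with a back-of-the-envelope model of slow start. This is a sketch under idealized assumptions (cwnd doubles every round trip, no loss, no receive-window limit), with illustrative constants:

```python
# Idealized slow-start model: cwnd starts at IW segments and doubles
# each round trip. MSS and IW are illustrative assumptions.

MSS = 1460  # assumed segment size, in bytes
IW = 10     # RFC 6928 initial window, in segments

def slow_start_rounds(object_size: int, iw: int = IW) -> int:
    """Round trips an idealized slow start needs to deliver an object."""
    segments = -(-object_size // MSS)  # ceiling division
    sent, cwnd, rounds = 0, iw, 0
    while sent < segments:
        sent += cwnd   # send a full window this round trip
        cwnd *= 2      # slow start: window doubles per RTT
        rounds += 1
    return rounds

print(slow_start_rounds(20 * 1024))      # small object: 2 round trips
print(slow_start_rounds(100 * 1024**2))  # 100 MB file: 13 round trips
```

In this model, shaving a round trip off the 20 KB object halves its slow-start delay, while the 13 round trips for the 100 MB file are dwarfed by the bandwidth-bound transfer time — which is the point being made above.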

>> Peter
>> On Mon, Apr 15, 2013 at 8:16 PM, Roberto Peon <> wrote:
>>> On Mon, Apr 15, 2013 at 4:03 PM, Eggert, Lars <> wrote:
>>>> Hi,
>>>> On Apr 15, 2013, at 15:56, Roberto Peon <> wrote:
>>>>> The interesting thing about the client mucking with this data is that, so long as the server's TCP implementation is smart enough not to kill itself (and some simple limits accomplish that), the only one the client harms is itself...
>>>> I fail to see how you'd be able to achieve this. If the server uses a CWND that is too large, it will inject a burst of packets into the network that will overflow a queue somewhere. Unless you use WFQ or something similar on all bottleneck queues (not generally possible), that burst will likely cause packet loss to other flows, and will therefore impact them.
>>> The most obvious way is that the server doesn't use a CWND larger than the largest currently active window to a similar RTT. The other obvious way is to limit it to something like 32, which is about what we'd see with the opening of a mere 3 regular HTTP connections! This at least makes the one connection competitive with the circumventions that HTTP/1.x currently exhibits.
>>>> TCP is a distributed resource-sharing algorithm that allocates capacity throughout a network. Although the rates for all flows are computed in isolation, the effect of that computation is not limited to the flow in question, because all flows share the same queues.
>>> Yes, that is what I've been arguing w.r.t. the many connections that the application layer currently opens :)
>>> It becomes a question of which dragon is actually most dangerous.
>>> -=R
>>>> Lars