[Tmrg] Proposal to increase TCP initial CWND

krasnoj at gmx.at (Stefan Hirschmann) Mon, 19 July 2010 22:13 UTC

From: krasnoj at gmx.at (Stefan Hirschmann)
Date: Tue, 20 Jul 2010 00:13:27 +0200
Subject: [Tmrg] Proposal to increase TCP initial CWND
In-Reply-To: <AANLkTil937lyUzRvUtdqd2qdl9RN7AZ-Mo_cT-dtmqXz@mail.gmail.com>
References: <AANLkTil937lyUzRvUtdqd2qdl9RN7AZ-Mo_cT-dtmqXz@mail.gmail.com>
Message-ID: <4C44CE07.7080503@gmx.at>

Hi all!

Lachlan Andrew wrote:
> Greetings TMRG folk,
> At the last IETF meeting, some folk from Google proposed increasing
> the initial TCP CWND from the current value (2-4 depending on MSS) to
> 10.  The current draft is at
> <http://tools.ietf.org/html/draft-hkchu-tcpm-initcwnd-01>.
> The discussion at the ICCRG meeting raised some modelling/evaluation
> issues which fit well with the expertise at TMRG.  Since TMRG is
> chartered as a mailing-list-only RG, let's discuss this now in lieu of
> a meeting at IETF 78 in Maastricht.

OK, I will try to focus only on the TMRG-relevant points and not talk
too much about the drawbacks of the draft.

"while the narrowband (< 256Kbps) usage has dropped to 5%" (quote from
the draft [1])
The consequence is not: forget 5% of all users. The consequence has to
be: simulations for narrowband are still important, because TCP should
work for everybody and not only for the privileged ones, even if those
are 95% of all users. Also, I don't believe the number. My cell phone is
officially a UMTS one, which is fast, but most of the time I use it over
GPRS because that is more stable and uses less energy. [4] stated that
64 Kbps is widely used in Africa, even as ISP backbone, and we call the
"WWW" the World Wide Web and not the "World With fast Web". I still
believe in the idea of TCP: everything (and over every wire) over
TCP/IP.
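To make the narrowband concern concrete, here is a small sketch of my own
(these numbers are my assumptions, not from the draft): how long a single
flow's initial-window burst occupies a slow link, assuming a 1460-byte MSS.

```python
# Back-of-the-envelope (my own assumption-laden sketch): how long does
# one flow's initial-window burst occupy a narrowband link?

def burst_drain_seconds(init_segments, link_kbps, mss_bytes=1460):
    """Time to drain an initial-window burst at the given link rate."""
    burst_bits = init_segments * mss_bytes * 8
    return burst_bits / (link_kbps * 1000)

for kbps in (64, 256):             # GPRS-class link and the draft's narrowband cutoff
    for iw in (3, 10):             # roughly RFC 3390 vs the proposed IW10
        t = burst_drain_seconds(iw, kbps)
        print(f"IW={iw:2d} segments at {kbps:3d} kbps -> {t:.2f} s")
```

On a 64 kbps link, one IW10 burst alone already blocks the link for almost
two seconds, before any competing traffic is considered.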

HTTP/1.1 parallel
"  This trend is to remedy HTTP serialized download to achieve
    parallelism and higher performance. But it also implies today most
    access links are severely under-utilized, hence having multiple TCP
    connections improves performance most of the time."

This is not a well-founded conclusion. What about the other explanation:
a TCP handshake takes a long time, and with parallelisation the time
wasted on handshakes (and connection teardown) can be reduced. Also, the
browser starts to render the page while the packets are still being
transmitted.
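A toy model (my own simplification, not from the draft) illustrates the
handshake-amortisation explanation: assume each object costs one handshake
RTT plus its transfer time, and parallel connections overlap those
handshakes. The numbers below (object count, sizes, RTT, bandwidth) are
hypothetical.

```python
# Toy model: N small objects, each costing one handshake RTT plus its
# transfer time. Parallel connections overlap the handshakes. This
# ignores slow start, pipelining and bandwidth sharing between the
# parallel connections, so it is only a rough illustration.

def fetch_time(n_objects, obj_bytes, rtt_s, bw_bps, parallel):
    """Very rough page completion time under the assumptions above."""
    per_obj = rtt_s + (obj_bytes * 8) / bw_bps   # handshake + transfer
    rounds = -(-n_objects // parallel)           # ceiling division
    return rounds * per_obj

# 21 objects of 4 KB over a 1 Mbit/s link with 100 ms RTT:
serial = fetch_time(21, 4096, 0.100, 1_000_000, parallel=1)
sixway = fetch_time(21, 4096, 0.100, 1_000_000, parallel=6)
print(f"serial: {serial:.2f} s, six connections: {sixway:.2f} s")
```

Even in this crude model, most of the serial time is handshake latency,
which parallelism removes without any link being "severely under-utilized".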

The question about the size of the web objects.
I used Firebug to monitor a Google search request. There were 21 web
transfers and only 8 of these were larger than 4 KB (the current initial
window allows 4380 bytes, i.e. three full-sized 1460-byte segments, in
the first burst). An analysis in 2008 [2] showed that 93% of all
connections send less than the 4380 bytes. In other words: for 93% of
all connections, there is no congestion control at all.

Traffic analyses (like [21]) showed that around half of transferred data
come from flows transferring fewer than 4036 bytes, and it is allowed to
send 4036 bytes in the first RTT without using any congestion control
[RFC 3390].
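For reference, the 4380-byte figure comes straight from the RFC 3390 upper
bound on the initial window; a one-line sketch:

```python
def rfc3390_initial_window(mss):
    """RFC 3390 initial window in bytes: min(4*MSS, max(2*MSS, 4380))."""
    return min(4 * mss, max(2 * mss, 4380))

for mss in (536, 1460, 2190):
    print(mss, rfc3390_initial_window(mss))
```

For the common Ethernet MSS of 1460 bytes this gives exactly 4380 bytes,
i.e. three full-sized segments in the first RTT.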
I believe it is a good idea to simulate each scenario with a UDP
cross-traffic source consuming 50% of the available bandwidth.

The Google approach of simply counting the number of web objects is
worthless, because this approach leads to highly unstable results. E.g.
if I count a page which contains the extracted Linux source code (more
than 20K files), I will get a different average size than if I analyse
the average file size of www.youtube.com; and if you look at
rapidshare.com, the number of large files should be enough to raise the
mean of 300 sites outside of the slow-start area.
Also the draft's referenced results are in contrast to the network
traffic analyses in [2] and [3], and I trust those analyses much more
than simple object counting.
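The instability of the mean is easy to demonstrate with made-up numbers (the
sample sizes below are purely hypothetical, chosen only to show the effect):

```python
import statistics

# Hypothetical object-size samples (bytes): a hundred typical small web
# objects plus one single rapidshare-style download. The mean jumps by
# orders of magnitude; the median barely notices.
small_pages = [200, 900, 2_000, 3_500, 4_000] * 20
one_big = [700_000_000]

mixed = small_pages + one_big
print("median:", statistics.median(mixed))
print("mean:  ", round(statistics.mean(mixed)))
```

One large file out of a hundred objects drags the mean into the megabytes,
which is exactly why averaging object sizes across 300 sites says little
about typical transfers.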

Five: A Warning
A too large initial window leads to congestion, packet drops and
retransmissions, which in turn lead to a larger transfer time. A 56K
modem is able to transfer 7 KByte/s. Firefox 3.5.3 uses 6 parallel,
persistent connections per server. An initial window of 4 KByte allows
sending 24 KBytes at once, so the connection is filled for 3.42 seconds
without any congestion control. It is therefore a bad idea to increase
the initial window to a larger value. Or does somebody think it is OK to
need a buffer size of more than 12 seconds (12.8 < 6*15/7)?
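The arithmetic above, spelled out (the 7 KByte/s, 6 connections and ~15 KB
for ten 1500-byte segments are the assumed numbers from the paragraph):

```python
# Reproducing the modem arithmetic: how many seconds of modem time does
# one simultaneous first-RTT burst across all connections occupy?
MODEM_KBPS = 7        # KByte/s a 56K modem can carry
CONNECTIONS = 6       # Firefox's default per-server connection limit
IW_CURRENT_KB = 4     # roughly the RFC 3390 initial window
IW_PROPOSED_KB = 15   # ten segments of ~1500 bytes

def first_rtt_backlog_seconds(iw_kb):
    """Seconds the modem needs to drain one burst from all connections."""
    return CONNECTIONS * iw_kb / MODEM_KBPS

print(f"current : {first_rtt_backlog_seconds(IW_CURRENT_KB):.2f} s")
print(f"proposed: {first_rtt_backlog_seconds(IW_PROPOSED_KB):.2f} s")
```

The proposed window pushes the uncontrolled backlog from about 3.4 seconds
to almost 13 seconds of modem time.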

Six: Parallel connections
HTTP pipelining is AFAIK not widely used by now. In my Google search
test mentioned before, there were 3 transfers with less than 200 bytes
(yeah, bytes not KB). The change proposed by the draft does not affect
these transfers, so a browser vendor who reduced the number of parallel
connections would slow down the page load. So I am sure the browsers
won't reduce this number.
For TMRG the consequence is: we have to model it with six parallel
connections, because this will be the real world.

Seven: How to model
Simplified: two 10-segment flows are as aggressive as five 4-KB flows.
This simplification is only valid for the first RTT.
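A quick check of that first-RTT equivalence, assuming a 1460-byte MSS and
~3 segments per 4-KB window (my assumptions; the totals are only roughly
equal, which is why the simplification is approximate even in the first RTT):

```python
MSS = 1460  # assumed segment size in bytes

def first_rtt_bytes(flows, iw_segments):
    """Aggregate bytes all flows may inject in their first RTT."""
    return flows * iw_segments * MSS

ten_seg_pair = first_rtt_bytes(2, 10)  # two IW=10 flows
small_five = first_rtt_bytes(5, 3)     # five flows with ~4 KB (3-segment) windows
print(ten_seg_pair, small_five)
```

After the first RTT the comparison breaks down entirely, because each flow
grows its window independently from its own starting point.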

These are a few thoughts about this idea. I am very doubtful whether
this draft is a good idea.

Cheers Stefan

[1] http://tools.ietf.org/html/draft-hkchu-tcpm-initcwnd-01

[2] Mark Allman. A collection of slow start thoughts, April 2008.
Mailing list of iccrg-slowstart.

[3] Stefan Hirschmann. Re: A collection of slow start thoughts, May
2008. Mailing list of iccrg-slowstart.

[4] Lachlan Andrew, Cesar Marcondes, Sally Floyd, Lawrence Dunn,
Romaric Guillier, Wang Gang, Lars Eggert, Sangtae Ha and Injong Rhee.
Towards a Common TCP Evaluation Suite. In Protocols for Fast, Long
Distance Networks (PFLDnet), 5-7 Mar 2008.