Re: [tcpm] Increasing the Initial Window - Notes

Mark Allman <mallman@icir.org> Thu, 11 November 2010 17:42 UTC

To: Hagen Paul Pfeifer <hagen@jauu.net>
From: Mark Allman <mallman@icir.org>
In-Reply-To: <20101110152857.GA5094@hell>
Organization: International Computer Science Institute (ICSI)
Song-of-the-Day: Layla
Date: Thu, 11 Nov 2010 12:42:18 -0500
Message-Id: <20101111174218.3568A247BF2B@lawyers.icir.org>
Cc: tmrg <tmrg-interest@ICSI.Berkeley.EDU>, Matt Mathis <mattmathis@google.com>, tcpm <tcpm@ietf.org>
Subject: Re: [tcpm] Increasing the Initial Window - Notes

I have inhaled the initial window thread and have some thoughts ...

  + Altering the initial window is not a *fundamental* change to TCP.
    It is a *parameter* tweak.  Certainly parameter tweaks can have
    negative and broad implications, but let's show some perspective
    here.

  + Likewise, changing the initial cwnd **does not remove CC**.  As a
    connection gets feedback from the network path it does what it has
    always done and reacts to its observations.  So, a problematic
    initial window size will in fact get corrected.

  + It is not true that somehow using an initial cwnd of 10 packets
    makes that the minimum cwnd size.  TCP has mechanisms to lower the
    cwnd if that is necessary.
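
    For instance, here is a toy sketch of the RFC 5681-style reaction
    (segment units, not any real stack's code): wherever cwnd starts,
    the first loss signal pulls it back down.

        # Toy congestion-control reaction: IW is only a starting
        # guess; feedback from the path overrides it immediately.
        def react(cwnd, ssthresh, event):
            if event == "ack":
                if cwnd < ssthresh:             # slow start
                    cwnd += 1
                else:                           # congestion avoidance
                    cwnd += 1.0 / cwnd
            elif event == "triple_dupack":      # fast retransmit/recovery
                ssthresh = max(cwnd // 2, 2)
                cwnd = ssthresh
            elif event == "timeout":            # RTO: back to a window of 1
                ssthresh = max(cwnd // 2, 2)
                cwnd = 1
            return cwnd, ssthresh

        # An IW of 10 that immediately hits a timeout drops to a
        # window of 1, just as an IW of 3 would:
        print(react(10, 64, "timeout"))         # -> (1, 5)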

  + Nobody is mandating using a cwnd of 10 packets.  If a host is in a
    particular situation where less would be better, then it can use
    less (either by tuning its own initial cwnd or by appropriately
    setting the advertised window).
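
    For instance, here is a sketch of the advertised-window option
    using the standard sockets API (how a buffer size maps onto the
    advertised window, including any overhead accounting, is an OS
    detail):

        import socket

        # A receiver that cannot absorb a 10-segment burst can cap its
        # receive buffer *before* the handshake; the window it
        # advertises can never exceed that buffer, whatever IW the
        # sender might prefer.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 3 * 1460)
        s.bind(("", 8080))
        s.listen(5)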

  + Is this HTTP-centric?  Well, maybe.  It is medium transfer
    size-centric.  HTTP is a big component of those transfers at
    present.  But, I don't follow why this is a big deal.  This seems
    like a red herring to me.
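
    To make "medium transfer size-centric" concrete, here is a
    back-of-the-envelope sketch (idealized slow start: no loss, no
    delayed ACKs, 1460-byte segments; the sizes are just
    illustrative):

        import math

        def slowstart_rtts(segments, iw):
            """Round trips to deliver `segments`, doubling from `iw`."""
            rtts, cwnd, sent = 0, iw, 0
            while sent < segments:
                sent += cwnd
                cwnd *= 2
                rtts += 1
            return rtts

        for kb in (4, 30, 1000):       # small / medium / large object
            segs = math.ceil(kb * 1024 / 1460)
            print(kb, "KB:", slowstart_rtts(segs, 3), "RTTs at IW=3,",
                  slowstart_rtts(segs, 10), "at IW=10")
        # 4 KB: 1 vs 1.  30 KB: 4 vs 2.  1000 KB: 8 vs 7.  The medium
        # transfers see the biggest relative win; tiny ones fit in any
        # IW and huge ones amortize the startup.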

  + Adjusting a parameter is not renouncing the conservativeness
    principle.  It is perfectly natural for a parameter like this to
    evolve over time as networks evolve.  Just because there used to be
    excessively low-speed networks doesn't mean that somehow we need to
    be bound by that for the entire future.  And, adjusting for
    "inflation" doesn't somehow abdicate the principle of being
    conservative.  Put another way, conservativeness and aggressiveness
    are relative and we are merely trying to find a reasonable tradeoff
    here.

  + Further, there are always *current* networks that lag behind.  At
    some point we have to decide that our engineering is going to be for
    the 99.9% common case and the outliers are going to have to deal
    with it.  People lodged the complaint that there were slow networks
    when we moved from one packet to three.  That will always be the 
    case.  I'm not saying that if you're in the minority we forget
    about you; if this change were good for only 51% of the cases then
    it would not be a good change.  But, the fact that we have an
    existence proof of some spot that cannot cope with a 10 packet
    burst does not mean this is a bad idea.

  + Just because IW=10 causes loss doesn't mean it hurts performance.
    Jerry et al. have testbed experiments that show this.  The steady
    state loss rate of a TCP is a key component of performance.  But,
    that doesn't mean loss in the initial window dictates long-term
    performance.  If a network can only support a window of three
    packets then the performance is going to be lousy regardless of the
    initial window.  A larger initial window is not likely to change the
    overall performance---as the experiments illustrate.

    (There is the chance that, in the case of multiple congested
    gateways, the extra packets will consume scarce resources and
    therefore impact other flows.  But, with excessively low-capacity
    networks it is really hard to envision the initial cwnd being the
    thing that makes the network lousy.)
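
    To put a number on the "window of three packets" point, here is a
    sketch using the well-known steady-state approximation of Mathis
    et al., rate ~ (MSS/RTT) * 1.22/sqrt(p); the RTT and loss rate
    below are assumed purely for illustration:

        from math import sqrt

        def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
            """Steady-state TCP throughput estimate (Mathis et al.)."""
            return (mss_bytes * 8) * 1.22 / (rtt_s * sqrt(loss_rate))

        # Loss heavy enough to pin the average window near 3 segments:
        # W ~ 1.22/sqrt(p)  =>  p ~ (1.22/3)**2 ~ 0.165
        bps = mathis_throughput_bps(1460, 0.1, 0.165)
        print(round(bps / 1000), "kbit/s")   # ~351 kbit/s, whether the
                                             # connection opened at IW=3
                                             # or IW=10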
  
  + Maybe this exacerbates the case of long queues in home networks.
    But, (1) I doubt it's a big deal and (2) that isn't exactly TCP's
    problem.  On my DSL line I never build a perceptible queue on the
    downlink.  On the uplink I do.  If I am doing the math right (see
    the sketch at the end of this item), my uplink would build a ~1
    second queue with 10 parallel connections using IW=10 all starting
    at the same time.  But, that seems like a pretty
    thin use case.  I.e., I never start that many connections and pump
    data into them in that fashion.  I might start that many connections
    per second, but I put 300 bytes of HTTP GET in there or the like.
    In the cases I can think of where you would start connections at a
    high rate (at least in a burst) and then pump data into them
    (e.g., BitTorrent) you're going to build this queue anyway.  And, it
    just isn't clear to me that we need to somehow make TCP "solve" this
    problem because fundamentally we just need less buffering.
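
    Redoing my uplink math explicitly (the link rate is an assumption
    about my DSL line; the arithmetic is the point):

        # 10 simultaneous connections each bursting IW=10 full-sized
        # segments into a slow uplink: how much queue does that build?
        uplink_bps = 1_000_000             # assumed ~1 Mbit/s uplink
        flows, iw, pkt_bytes = 10, 10, 1500
        burst_bits = flows * iw * pkt_bytes * 8
        print(burst_bits / uplink_bps, "seconds of queue")   # -> 1.2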

  + Allowing apps to tune the IW via a socket option seems pretty
    pointless to me.  I doubt the application is in any sort of position
    to pick wisely.  If they do anything but turn it to the maximum
    allowed it would be pretty surprising, I think.

While perhaps we can say the onus is on the proposers to convince us, it
seems as though Jerry et al. have done all the experiments that comments
have pointed to.  If you have a concrete problem with the proposal it'd
be nice to see that in concrete terms---experiments---instead of hand
waves.

FWIW.

allman