[Tmrg] Now, where were we...?

lachlan.andrew at gmail.com (Lachlan Andrew) Thu, 19 November 2009 22:39 UTC

From: "lachlan.andrew at gmail.com"
Date: Fri, 20 Nov 2009 09:39:49 +1100
Subject: [Tmrg] Now, where were we...?
In-Reply-To: <4B05C021.5000105@ifi.uio.no>
References: <BE0E1358-7C27-46A8-AF1E-D8D7CC834A52@ifi.uio.no> <4B05A502.4050402@gmx.at> <4B05C021.5000105@ifi.uio.no>
Message-ID: <aa7d2c6d0911191439n2b08ac6l7d4d6f7ed5f58830@mail.gmail.com>

2009/11/20 Michael Welzl <michawe at ifi.uio.no>:
>
>>>
>>> Can delay ever be worse as a congestion indicator than
>>> loss is?
>>
>> Another bad effect is due to the burstiness of TCP:
>> If the queue at a bottleneck is empty and a traffic burst arrives, the
>> first packet of the burst has a lower RTT than the last packets of the
>> burst (the first packet is transmitted immediately; the later packets
>> see an increasing queuing delay).
>
> ... but a shorter queue would cause the same effect with a loss-based
> controller.
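
(To put numbers on the burst example above: at, say, 10 Mbit/s, each
queued 1500-byte packet takes 1.2 ms to serialize, so the tenth packet
of a burst waits roughly 11 ms longer than the first.)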

Linux takes care of sub-RTT burstiness by simply responding to the
minimum RTT observed over a window of packets.  Compared with using
the average delay, this has the disadvantage of not indicating
congestion until it is quite severe, but the same is true of loss.
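
Roughly, the filter looks like this (a toy C sketch of the idea, not
the actual Linux code; the window size and units are made up):

/* Keep the minimum RTT sample seen over the last WINDOW ACKs, so a
 * sub-RTT burst that inflates individual samples does not register
 * as congestion. */
#include <stdio.h>

#define WINDOW 8                   /* illustrative window, in ACKs */

static unsigned int samples[WINDOW];
static int idx, count;

/* Feed one RTT sample (in microseconds); return the windowed minimum. */
unsigned int min_rtt_update(unsigned int rtt_us)
{
    unsigned int min;
    int i;

    samples[idx] = rtt_us;
    idx = (idx + 1) % WINDOW;
    if (count < WINDOW)
        count++;
    min = samples[0];
    for (i = 1; i < count; i++)
        if (samples[i] < min)
            min = samples[i];
    return min;
}

int main(void)
{
    /* A burst: the first packet sees the base RTT; each later packet
     * picks up another 1.2 ms of queuing delay (the 10 Mbit/s example
     * above). */
    unsigned int burst[] = {10000, 11200, 12400, 13600, 14800};
    int i;

    for (i = 0; i < 5; i++)
        printf("sample %u -> filtered min %u\n",
               burst[i], min_rtt_update(burst[i]));
    return 0;
}

The controller then compares the filtered minimum against its delay
threshold, so only a delay increase that persists across the whole
window is treated as congestion.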

I'm not aware of any similar tricks in the case Michael pointed out,
where sub-RTT burstiness and small buffers cause loss.

Cheers,
Lachlan

-- 
Lachlan Andrew  Centre for Advanced Internet Architectures (CAIA)
Swinburne University of Technology, Melbourne, Australia
<http://caia.swin.edu.au/cv/landrew> <http://netlab.caltech.edu/lachlan>
Ph +61 3 9214 4837