[Tmrg] Now, where were we...?

johnwheffner at gmail.com (John Heffner) Tue, 24 November 2009 18:17 UTC

From: "johnwheffner at gmail.com"
Date: Tue, 24 Nov 2009 13:17:16 -0500
Subject: [Tmrg] Now, where were we...?
In-Reply-To: <4B05A502.4050402@gmx.at>
References: <BE0E1358-7C27-46A8-AF1E-D8D7CC834A52@ifi.uio.no> <4B05A502.4050402@gmx.at>
Message-ID: <1e41a3230911241017p26af6480sd8b4ff358e2363ef@mail.gmail.com>

Another issue with delay as a signal is that, absent a means to
measure one-way delay, it measures the forward and reverse paths
together, whereas loss will be from only the forward path.  (That is,
unless there is severe loss on the reverse path resulting in
timeouts.)  Delay measurements are also very problematic on any
multi-path flow.
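
To make the asymmetry concrete, here is a toy sketch (names and
numbers are illustrative, not from any real stack).  The usual
RTT-based estimate lumps forward- and reverse-path queueing into one
number, which the sender cannot split apart without one-way delay
measurements (e.g. synchronized timestamps):

    # Hypothetical sketch: queueing delay inferred from RTT samples.
    def queueing_delay_estimate(rtt_samples):
        base_rtt = min(rtt_samples)        # propagation-delay estimate
        return rtt_samples[-1] - base_rtt  # forward + reverse queueing, mixed

    samples = [0.100, 0.102, 0.140, 0.155]   # RTTs in seconds
    print(queueing_delay_estimate(samples))  # ~0.055 -- but on which path?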

  -John


On Thu, Nov 19, 2009 at 3:05 PM, Stefan Hirschmann <krasnoj at gmx.at> wrote:
> Michael Welzl wrote:
>> Hi,
>>
>> This prompts me to ask a question that I've been pondering
>> ever since a hallway conversation that I had with Stanislav
>> Shalunov at the Stockholm IETF:
>>
>>
>>> 2. How reliable are implicit congestion indicators? ?The prevailing
>>> wisdom in the IETF seems to be that "ECN=loss = congestion, delay =
>>> noise, nothing else is useful for congestion control". ?What criteria
>>> would "delay" have to satisfy in order to be a useful indicator of
>>> congestion? ?Should we listen to the average delay, the frequency with
>>> which delay exceeds a threshold, or the jitter?
>>
>> Can delay ever be worse as a congestion indicator than
>> loss is?
>
> Yes. It can be wrong in two ways:
>
>
> If there is physical corruption and data link repeating of the signals,
> any correlation between congestion and delay is just random.
>
> Error 1: There data link retransmissions (due to physical corruption /
> checksum errors) are increasing and the delay increases. Reality: Same
> state of congestion, but assumption that congestion increased.
>
> Error 2: There are less data link retransmissions (compared to beginning
> of connection). This leads to the wrong assumption that the congestion
> (the queue delay) is also less.
>
> In my opinion, delay should only be considered as limiting factor, but
> never as increasing factor: cwnd = min( f(delay), g(loss) ).
>
>
>
> Another bad effect is due to the burstiness of TCP:
> If a queue of a bottleneck is empty and a traffic burst arrives, the
> first packet of the burst have less RTT than the last packets of the
> burst (first packet is transported immediately, the other packets have
> an increasing queuing delay). A delay based approach can already lead to
> reduction of cwnd even if cwnd = 10% BDP.
>
>
> Cheers,
> Stefan
> _______________________________________________
> Tmrg-interest mailing list
> Tmrg-interest at ICSI.Berkeley.EDU
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest
>