[Tmrg] Is it possible that IW doesn't affect other traffic

mattmathis at google.com (Matt Mathis) Mon, 23 August 2010 02:55 UTC

From: mattmathis at google.com (Matt Mathis)
Date: Sun, 22 Aug 2010 22:55:58 -0400
Subject: [Tmrg] Is it possible that IW doesn't affect other traffic
Message-ID: <AANLkTik8MVCPYAmoD+o1ga08_kevtVUc=2M-tF7EgEqQ@mail.gmail.com>

I suspect that our inability to detect the intuitively expected
result, that IW10 hurts IW3, may actually be a symptom of a deeper
theoretical result: it really doesn't matter, because the system is
memoryless.  If the system is memoryless, then all that matters is
the total number of packets delivered, not when they were sent.
Queuing systems generally become memoryless when they have idle
periods, but I believe that queue overflows also cause loss of memory.

Consider the following thought experiment: you have a busy queue, and
two bursts arrive, separated by a small time interval.  Arrange to
run multiple trials with exactly the same traffic, varying only the
size of the first burst.  If there is idle* between the bursts, then
clearly the loss fraction for the second burst does not depend on the
size of the first burst.  It is also true that if there are
sufficient* losses due to queue overflow between the bursts, then the
loss fraction for the second burst does not depend on the size of the
first burst.

*For both cases, the precondition has to hold true across the entire
range of sizes for the first burst, e.g. IW10 must still be followed
by idle, or IW3 must also be followed by queue overflows.
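
Here is a deliberately crude, open-loop sketch of that experiment in
Python.  The drop-tail queue, the Bernoulli background traffic, and
every parameter value are hypothetical choices for illustration; this
is not a TCP model, just a way to poke at the idle-vs-overflow cases.

import random

def run_trial(first_burst, second_burst=3, queue_limit=64, standing_q=48,
              background_load=0.98, gap_pkts=200, seed=1):
    """Return (loss fraction of the second burst, overflow drops in the gap).

    Time is counted in bottleneck packet times: each tick the queue drains
    one packet and background traffic adds one packet with probability
    background_load.  Both bursts arrive back to back.
    """
    random.seed(seed)              # identical background traffic in every trial
    q = standing_q                 # the queue starts out busy
    gap_drops = 0

    def offer(n):                  # enqueue n packets, return how many dropped
        nonlocal q
        dropped = 0
        for _ in range(n):
            if q < queue_limit:
                q += 1
            else:
                dropped += 1
        return dropped

    offer(first_burst)                           # first burst arrives
    for _ in range(gap_pkts):                    # the gap between the bursts
        gap_drops += offer(1 if random.random() < background_load else 0)
        q = max(q - 1, 0)                        # bottleneck drains one packet
    return offer(second_burst) / second_burst, gap_drops

# Identical background traffic (same seed); only the first burst's size varies:
for iw in (3, 10):
    print("first burst =", iw, "->", run_trial(first_burst=iw))

Varying background_load and queue_limit moves the system between the
two memoryless cases: light load gives idle between the bursts, heavy
load against a small queue gives overflows, and in both the second
burst's loss fraction becomes independent of the first burst's size.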

This situation is especially likely if the arrival time for the entire
burst (e.g. IW10 at 1 Gb/s) is no larger than a packet time at the
bottleneck link (e.g. 100 Mb/s).  In this case the tail of the burst
gets smashed if there is not sufficient headroom in the queue, and no
other flow can even detect how many packets were lost.  In this
situation (servers >10x faster than the bottleneck) you are
automatically memoryless if the queue is mostly full most of the time.
This seems particularly likely for the Africa case.
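
To put rough numbers on the arrival-time comparison above (my
arithmetic, assuming 1500-byte packets): IW10 arriving at 1 Gb/s takes
10 * 1500 * 8 / 1e9 = 120 microseconds, which is exactly the
serialization time of one 1500-byte packet at 100 Mb/s (1500 * 8 / 1e8
= 120 microseconds).  From the bottleneck's point of view, the whole
burst lands within about one packet time.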

Note that this observation also implies that the operating region
where the size of the first burst matters is extremely small: the
link has to be congested enough that the second burst might cause
queue overflow, but the total losses between the start of the first
burst and the start of the second burst must be smaller than the
change in size of the first burst.
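
Restated in symbols (my notation, reusing the names from the sketch
above): with delta_W = 10 - 3 = 7 the change in first-burst size and
gap_drops the overflow losses between the starts of the two bursts,
the first burst's size can only influence the second burst's loss
fraction when, roughly,

    gap_drops < delta_W

and the queue is still within a few packets of queue_limit when the
second burst arrives.  With delta_W only 7 packets, that is a narrow
band of queue occupancies.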

There are also some situations where these assumptions clearly don't
apply, such as Fred's long-running bulk flow at relatively low
aggregation.  His example system will spend long periods of time with
neither idle nor queue overflows, and thus has a very long memory.
This long-memory property is a direct consequence of both low
aggregation and large queue space (relative to the RTT).  With either
more aggregation or less queue space, the system is likely to quickly
lose memory.
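
For what it's worth, the toy sketch above can be pushed into that
regime too, e.g. something like run_trial(first_burst=10,
queue_limit=1000, standing_q=100, background_load=1.0, gap_pkts=200):
over the gap the queue neither empties nor overflows, so its depth
when the second burst arrives still reflects the size of the first
burst, i.e. the system has remembered it.  Being open loop, the
sketch only gestures at the closed-loop dynamics of a real
long-running flow.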

Comments anyone?

I should point out that I am not a mathematician......  Can somebody
send a pointer to a good discussion of memory full vs memoryless
queuing systems?

Thanks,
--MM--
The best way to predict the future is to create it.  - Peter Drucker