[Tmrg] Is it possible that IW doesn't affect other traffic

fred at cisco.com (Fred Baker) Mon, 23 August 2010 04:58 UTC

From: fred at cisco.com (Fred Baker)
Date: Sun, 22 Aug 2010 21:58:24 -0700
Subject: [Tmrg] Is it possible that IW doesn't affect other traffic
In-Reply-To: <AANLkTik8MVCPYAmoD+o1ga08_kevtVUc=2M-tF7EgEqQ@mail.gmail.com>
References: <AANLkTik8MVCPYAmoD+o1ga08_kevtVUc=2M-tF7EgEqQ@mail.gmail.com>
Message-ID: <47A9D2A6-1B12-4926-8D87-EA78B8C3AEEB@cisco.com>

On Aug 22, 2010, at 7:55 PM, Matt Mathis wrote:

> I suspect that our inability to detect the intuitively expected
> result, that IW10 hurts IW3, may actually be a symptom of a deeper
> theoretical result that it really doesn't matter, because the system
> is memoryless.   If the system is memoryless, then all that matters is
> the total number of packets delivered, and not when they were sent.

well, that may be true, especially with respect to links with RED or ECN that trend toward the knee of the curve. Adding packets to the queue when the queue averages "empty" is quite a bit different from adding packets to the queue when it is full. I absolutely believe that in the *average* case, the *average* might be pretty reasonable.

Note the conjecture we have heard: that a single system might, in displaying a single web page, open tens of HTTP sessions, each of which gets IW=10, and the average size of which might well be in the 20 KB (>10 segment) neighborhood.
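As a rough check on that size estimate (the ~20 KB response and 1460-byte MSS here are illustrative assumptions, not measurements), a 20 KB response spans more segments than a single IW=10 window holds:

```python
import math

# Assumed illustrative numbers: a ~20 KB average HTTP response and a
# 1460-byte TCP MSS (1500-byte MTU minus IP and TCP headers).
RESPONSE_BYTES = 20_000
MSS_BYTES = 1460

segments = math.ceil(RESPONSE_BYTES / MSS_BYTES)
print(segments)        # → 14, full-size segments per response
print(segments > 10)   # → True: more than one IW=10 initial window
```

So even one such connection overflows its initial window slightly, and tens of them in parallel put on the order of a hundred segments into the network in the first round trip.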

Is your conjecture based on data, or is it a conjecture? I asked about the behavior of the system when there is a severe bottleneck in the link, such as a 56 or 64 KBPS link. The presentation in response to my question looked hard at data derived from a 20 MBPS link. I'm still looking for data from the low bitrate links that MTN and UTL more generally deploy.

The question isn't really about memoryless links and what happens on the third RTT of what might be a single-RTT transaction. If the conjectured tens of sessions result in hundreds of segments in flight, megabit and gigabit links will take it in stride, but links measured in kilobits are likely to introduce serious delays and potentially loss to competing sessions.
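The arithmetic behind that concern can be sketched as a back-of-envelope drain-time calculation. All numbers are assumptions for illustration (ten sessions, IW=10, 1500-byte packets, the whole burst arriving at once), and this ignores queue limits and competing traffic entirely:

```python
def drain_seconds(burst_bytes: int, link_bits_per_s: float) -> float:
    """Time to serialize a burst onto a link, ignoring all other traffic."""
    return burst_bytes * 8 / link_bits_per_s

# Assumed scenario: 10 concurrent sessions, each bursting IW=10 packets
# of 1500 bytes, all arriving at the bottleneck together.
burst = 10 * 10 * 1500   # 150,000 bytes

for label, rate in [("64 kbps", 64e3), ("20 Mbps", 20e6), ("1 Gbps", 1e9)]:
    print(f"{label}: {drain_seconds(burst, rate):.4f} s")
# 64 kbps: 18.75 s, 20 Mbps: 0.06 s, 1 Gbps: 0.0012 s
```

Under these assumptions the same burst that a 20 Mbps link absorbs in 60 ms holds a 64 kbps link for nearly 19 seconds, which is the kilobit-versus-megabit asymmetry the question is about.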

I really don't want to be a broken record in this, but I'll remind you that Jerry asked me after the ICCRG meeting why I felt that the data presented was non-responsive, and I answered that it was because it answered kilobit questions with megabit samples. I don't want to in any way denigrate the Google researchers or the data they are looking at - it's more data than I have. But I'll also remind you that when I was looking for a TCP/SCTP congestion avoidance algorithm for networks built on 802.15.4g and related technologies in which non-congestive loss is normal, you told me that in your wired approaching-terabit world you had no way to measure or simulate such a link. I suspect that you still live in an approaching-terabit world, and it doesn't map to the world your consumers live in. You really need to get out of the ivory tower.

I'll shut up when you answer the question with samples that respond to the question asked.