Re: Comparing an old flow snapshot with some packet size data

John Hawkinson <jhawk@bbnplanet.com> Tue, 06 August 1996 05:29 UTC

Received: from ietf.org by ietf.org id aa00973; 6 Aug 96 1:29 EDT
Received: from cnri by ietf.org id aa00969; 6 Aug 96 1:29 EDT
Received: from murtoa.cs.mu.OZ.AU by CNRI.Reston.VA.US id aa01890; 6 Aug 96 1:29 EDT
Received: from mailing-list by murtoa.cs.mu.OZ.AU (8.6.9/1.0) id OAA09746; Tue, 6 Aug 1996 14:56:33 +1000
Received: from munnari.OZ.AU by murtoa.cs.mu.OZ.AU (8.6.9/1.0) with SMTP id OAA09719; Tue, 6 Aug 1996 14:45:49 +1000
Received: from poblano.near.net by munnari.OZ.AU with SMTP (5.83--+1.3.1+0.56) id EA05860; Tue, 6 Aug 1996 14:41:55 +1000 (from jhawk@bbnplanet.com)
Subject: Re: Comparing an old flow snapshot with some packet size data
To: "Kent W. England" <kwe@6sigmanets.com>
Date: Tue, 06 Aug 1996 00:41:41 -0400
Sender: ietf-archive-request@ietf.org
From: John Hawkinson <jhawk@bbnplanet.com>
Cc: big-internet@munnari.oz.au
In-Reply-To: <2.2.32.19960806000836.00d194b8@mail.cts.com> from "Kent W. England" at Aug 5, 96 05:08:36 pm
X-Mailer: ELM [version 2.4 PL23]
Mime-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Content-Length: 3816
Message-Id: <9608060041.aa02539@poblano.bbnplanet.com>
Precedence: bulk

> From: "Kent W. England" <kwe@6sigmanets.com>
> Subject: Comparing an old flow snapshot with some packet size data

> Back in January, Sean Doran and Dorian Kim posted some cisco IP flow stats
> to this list. I haven't seen any since, but my big-internet mail delivery
> seems spotty so I may have missed some messages. I'd be interested in seeing
> some more flow stats, if Sean or Dorian or anyone has been collecting more
> data.  Sean or Dorian, would you care to post some more flow stats?

Just to provide you with a lack of baselines for comparison (:-)), here
are the top packet sizes on one of our transit FDDI rings between
1200 and 1300 EDT today:

Size     %Packets        %Bytes
40       36.4837         4.5397
552      19.2812         33.1087
576      9.56957         17.1468
1500     4.84203         22.5937
44       4.00251         0.547841
41       2.48799         0.317323
52       0.573903        0.0928349
60       0.505717        0.0943905
48       0.484214        0.0723015
72       0.467757        0.104766
56       0.435227        0.0758182
42       0.400778        0.0523627
296      0.340765        0.313773
84       0.326612        0.0853456
45       0.305438        0.0427568
43       0.297758        0.0398292
588      0.297319        0.543838

Binned into 10-byte histogram buckets:

Size            %Packets        %Bytes
40-50           45.020109       5.693618
550-560         19.326785       33.187304
570-580         9.604513        17.209135
1500-1510       4.842249        22.594727
50-60           2.133512        0.362158
60-70           1.659997        0.326584
70-80           1.633336        0.374121
80-90           1.030689        0.269704
290-300         0.555307        0.510068
140-150         0.522942        0.234360
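
(For the curious, the binning is just integer division of the packet
size by the bucket width. A minimal C sketch follows, with a few rows
of the per-size table above hard-coded as sample input; the struct and
sample values are illustrative, not our actual collection tool.)

#include <stdio.h>

struct rec { int size; double pkts, bytes; };   /* per-size %Packets, %Bytes */

int main(void) {
    /* A few rows from the per-size table above, as sample input. */
    struct rec recs[] = {
        {   40, 36.4837,  4.5397   },
        {   44,  4.00251, 0.547841 },
        {   41,  2.48799, 0.317323 },
        { 1500,  4.84203, 22.5937  },
    };
    int n = sizeof recs / sizeof recs[0];
    double pkt_bin[151] = {0}, byte_bin[151] = {0};   /* bins 0-10 ... 1500-1510 */

    for (int i = 0; i < n; i++) {
        int b = recs[i].size / 10;          /* sizes 40..49 land in bin "40-50" */
        pkt_bin[b]  += recs[i].pkts;
        byte_bin[b] += recs[i].bytes;
    }
    for (int b = 0; b < 151; b++)
        if (pkt_bin[b] > 0)
            printf("%d-%d\t%f\t%f\n", 10*b, 10*b + 10, pkt_bin[b], byte_bin[b]);
    return 0;
}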

> Would there be any improvement if hosts used path MTU discovery, or would it
> add up to about the same thing? I'm not sure whether you can do path MTU
> discovery at the same time you are starting a TCP session or whether, as is
> more likely, it is a separate process and uses an RTT or more before
> starting the TCP session.

There would be QUITE A LOT of improvement if everyone used Path MTU
Discovery. There would be quite a lot of improvement if everyone
changed the TCP default MSS on their unix boxes to 1460 instead of
576.

In the former case, most implementations assume that the interface
MTU minus the IP header is the maximum length, and will send that as
the MSS when they open a TCP connection. They will send any data up
to that size in a single packet with the DF bit set, and will only
fall back to smaller packets if they get back an indication (an ICMP
"fragmentation needed" message) that this is necessary. There are so
few links in the Internet that don't support a 1500-byte MTU that the
occasional extra RTT is well worth it. Further, those hosts that
don't have 1500-byte MTUs tend to be behind slow links (e.g. dialup
links) where an extra RTT is probably not all that significant. This
is the standard way of implementing PMD, and it's how it works
in Solaris, for instance. There is no initial-RTT cost for setup in the
general (non-fragmented) case.
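
For illustration, here's roughly what enabling this per socket looks
like under Linux's (admittedly later) API; the peer address and port
are hypothetical, and other stacks, Solaris included, spell this
differently or simply enable PMD by default:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int val = IP_PMTUDISC_DO;     /* always set DF; never fragment locally */

    if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof val) < 0)
        perror("IP_MTU_DISCOVER");

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(80);                   /* hypothetical peer */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

    if (connect(fd, (struct sockaddr *)&dst, sizeof dst) == 0) {
        int mtu;
        socklen_t len = sizeof mtu;
        /* The kernel lowers this estimate as ICMP "fragmentation
           needed" messages come back for this destination. */
        if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
            printf("path MTU estimate: %d\n", mtu);
    }
    close(fd);
    return 0;
}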

If you don't have PMD and just up the max segment size, you do the
same thing except you don't set Don't Fragment on your packets. This
may actually be more efficient because it causes fragmentation
to happen at the places in the network where low-MTU links exist.
If you assume that those are few and far between, and are special
cases that should be willing to bear the cost of doing fragmentation
themselves, this is a good thing. It doesn't work so well if your
host is FDDI-connected, because many Internet links can't support
the FDDI MSS. But you can set your FDDI link to the Ethernet MSS
and still see a good improvement. Of course, this methodology doesn't
work for IPv6, but PMD is required there, anyhow.
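
Bumping the MSS is a one-line setsockopt() before connect(). A
minimal sketch (the 1460 assumes Ethernet's 1500-byte MTU less 20
bytes each of IP and TCP header; the peer address is again
hypothetical):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int mss = 1460;   /* 1500-byte Ethernet MTU - 20 IP - 20 TCP */

    /* Must be set before connect() so the SYN advertises it. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof mss) < 0)
        perror("TCP_MAXSEG");

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(80);                   /* hypothetical peer */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
    connect(fd, (struct sockaddr *)&dst, sizeof dst);

    close(fd);
    return 0;
}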

--jhawk
  John Hawkinson