Re: Comparing an old flow snapshot with some packet size data

Paul Ferguson <pferguso@cisco.com> Wed, 07 August 1996 03:25 UTC

Received: from ietf.org by ietf.org id aa01184; 6 Aug 96 23:25 EDT
Received: from cnri by ietf.org id aa01180; 6 Aug 96 23:25 EDT
Received: from murtoa.cs.mu.OZ.AU by CNRI.Reston.VA.US id aa00476; 6 Aug 96 23:24 EDT
Received: from mailing-list by murtoa.cs.mu.OZ.AU (8.6.9/1.0) id MAA11114; Wed, 7 Aug 1996 12:41:53 +1000
Received: from munnari.OZ.AU by murtoa.cs.mu.OZ.AU (8.6.9/1.0) with SMTP id MAA11076; Wed, 7 Aug 1996 12:23:51 +1000
Received: from lint.cisco.com by munnari.OZ.AU with SMTP (5.83--+1.3.1+0.56) id CA09003; Wed, 7 Aug 1996 12:23:46 +1000 (from pferguso@cisco.com)
Received: from pferguso-pc.cisco.com (c1robo8.cisco.com [171.68.13.8]) by lint.cisco.com (8.6.12/CISCO.SERVER.1.1) with SMTP id TAA05610; Tue, 6 Aug 1996 19:24:44 -0700
Message-Id: <199608070224.TAA05610@lint.cisco.com>
X-Sender: pferguso@lint.cisco.com
X-Mailer: Windows Eudora Pro Version 2.1.2
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Tue, 06 Aug 1996 22:23:33 -0400
To: John Hawkinson <jhawk@bbnplanet.com>
Sender: ietf-archive-request@ietf.org
From: Paul Ferguson <pferguso@cisco.com>
Subject: Re: Comparing an old flow snapshot with some packet size data
Cc: "Kent W. England" <kwe@6sigmanets.com>, big-internet@munnari.oz.au
Precedence: bulk

Not that this isn't interesting data (it is), but it would be even
more valuable if there were a painless mechanism to derive the
arrival sequence of the various packet sizes in a timeline
relationship to the distributions we've seen thus far.
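
Something along the lines of this rough sketch would do, say,
assuming the collector can emit (timestamp, size) records; the
record format and all names here are purely illustrative:

    from collections import Counter

    WINDOW = 60   # seconds per timeline bucket; an arbitrary choice

    def timeline(records):
        """records: iterable of (timestamp-seconds, size-bytes) pairs."""
        buckets = {}
        for ts, size in records:
            buckets.setdefault(int(ts) // WINDOW, Counter())[size] += 1
        for slot in sorted(buckets):
            total = sum(buckets[slot].values())
            top = buckets[slot].most_common(3)
            print(slot * WINDOW,
                  [(s, round(100.0 * n / total, 2)) for s, n in top])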

Food for thought.

- paul


At 12:41 AM 8/6/96 -0400, John Hawkinson wrote:

>
>Just to provide you with a lack of baselines for comparison (:-)), here
>are the top packet sizes (in bytes) on one of our transit FDDI rings
>between 1200 and 1300 EDT today:
>
>Size     %Packets        %Bytes
>40       36.4837         4.5397
>552      19.2812         33.1087
>576      9.56957         17.1468
>1500     4.84203         22.5937
>44       4.00251         0.547841
>41       2.48799         0.317323
>52       0.573903        0.0928349
>60       0.505717        0.0943905
>48       0.484214        0.0723015
>72       0.467757        0.104766
>56       0.435227        0.0758182
>42       0.400778        0.0523627
>296      0.340765        0.313773
>84       0.326612        0.0853456
>45       0.305438        0.0427568
>43       0.297758        0.0398292
>588      0.297319        0.543838
>
>Binning into a histogram with 10-byte bins:
>
>Size            %Packets        %Bytes
>40-50           45.020109       5.693618
>550-560         19.326785       33.187304
>570-580         9.604513        17.209135
>1500-1510       4.842249        22.594727
>50-60           2.133512        0.362158
>60-70           1.659997        0.326584
>70-80           1.633336        0.374121
>80-90           1.030689        0.269704
>290-300         0.555307        0.510068
>140-150         0.522942        0.234360
>
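>To make the binning concrete, here is a rough sketch of how such a
>histogram could be computed (Python, purely for illustration; it
>assumes a flat list of observed packet sizes in bytes):
>
>    from collections import Counter
>
>    def bin_sizes(sizes, width=10):
>        """Report %Packets and %Bytes for each `width`-byte bin."""
>        pkts, byts = Counter(), Counter()
>        for s in sizes:
>            lo = (s // width) * width            # e.g. 45 -> bin 40-50
>            pkts[lo] += 1
>            byts[lo] += s
>        npkts, nbyts = sum(pkts.values()), sum(byts.values())
>        for lo, n in pkts.most_common():
>            print("%d-%d  %9.6f  %9.6f" % (lo, lo + width,
>                  100.0 * n / npkts, 100.0 * byts[lo] / nbyts))
>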
>> Would there be any improvement if hosts used path MTU discovery, or would it
>> add up to about the same thing? I'm not sure whether you can do path MTU
>> discovery at the same time you are starting a TCP session or whether, as is
>> more likely, it is a separate process and uses an RTT or more before
>> starting the TCP session.
>
>There would be QUITE A LOT of improvement if everyone used Path MTU
>Discovery. There would be quite a lot of improvement if everyone
>changed the TCP default MSS on their unix boxes to 1460 instead of
>576.
>
>In the former case, most implementations assume that the interface
>MTU minus the IP and TCP headers is the maximum segment size, and
>will send that as the MSS when they open a TCP connection. They will
>send any data up to that size in a single packet with the DF bit set,
>and will only back off to smaller packets if they get back an ICMP
>indication that fragmentation would be needed. There are few enough
>links in the Internet that don't support a 1500-byte MTU that it's
>well worth the extra RTT. Further, those hosts that don't have
>1500-byte MTUs tend to be behind slow links (e.g. dialup links) where
>an extra RTT is probably not all that significant. This is the
>standard way of implementing PMD, and it's how it works in Solaris,
>for instance. There is no initial-RTT cost for setup in the general
>(non-fragmented) case.
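>
>As a concrete illustration of that arithmetic (the socket options
>shown are Linux-specific and purely illustrative, not the 1996 APIs
>under discussion):
>
>    import socket
>
>    ETHERNET_MTU = 1500
>    mss = ETHERNET_MTU - 20 - 20   # minus IP and TCP headers -> 1460
>
>    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>    # Values from <linux/in.h>; the socket module does not always
>    # expose them by name.
>    IP_MTU_DISCOVER, IP_PMTUDISC_DO = 10, 2
>    # Ask the kernel to set DF and perform path MTU discovery.
>    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)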
>
>If you don't have PMD and just up the max segment size, you do the
>same thing except you don't set Don't Fragment on your packets. This
>may actually be more efficient because it causes fragmentation to
>happen at the places in the network where low-MTU links exist. If
>you assume that those are few and far between, and are special cases
>that should be willing to bear the cost of doing fragmentation
>themselves, this is a good thing. It doesn't work so well if your
>host is FDDI-connected, because many Internet links can't support
>the FDDI MSS. But you can set your FDDI link to the Ethernet MSS and
>still see a good improvement. Of course, this methodology doesn't
>work for IPv6, where routers never fragment in transit, but PMD is
>required there anyhow.
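>
>To see why the FDDI case hurts, a back-of-the-envelope sketch (the
>4352-byte figure is the IP MTU over FDDI per RFC 1390; the rest is
>standard IP fragmentation arithmetic):
>
>    def fragments(datagram_len, link_mtu, ihl=20):
>        """Pieces an IP datagram splits into at a low-MTU hop."""
>        per_frag = ((link_mtu - ihl) // 8) * 8   # offsets align to 8 bytes
>        payload = datagram_len - ihl
>        return -(-payload // per_frag)           # ceiling division
>
>    print(fragments(4352, 1500))   # FDDI-sized datagram -> 3 fragments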
>
>--jhawk
>  John Hawkinson
>