Re: Draft minutes for IETF Network Status Reports. - please comment.

Curtis Villamizar <curtis@wawa.ans.net> Tue, 16 August 1994 01:51 UTC

Sender: ietf-archive-request@IETF.CNRI.Reston.VA.US
From: Curtis Villamizar <curtis@wawa.ans.net>
Message-Id: <9408160109.AA20860@wawa.ans.net>
To: Gene Hastings <hastings@psc.edu>
Cc: almes@ans.net
Subject: Re: Draft minutes for IETF Network Status Reports. - please comment.
In-Reply-To: (Your message of Wed, 10 Aug 94 07:40:36 D.) <9408101140.AA06128@mailer.psc.edu>
Date: Tue, 16 Aug 1994 01:09:25 +0000
Resent-To: njm@merit.edu, nanog@merit.edu
Resent-From: Gene.Hastings@boole.ece.cmu.edu
Resent-Date: Mon, 15 Aug 1994 21:24:52 -0400

Gene,

Sounds like a great meeting.  Too bad I couldn't make it.

Some extremely minor corrections and additional information.

Guy presented this but I did the actual testing.

> UPDATES:
> ANS router software activity
>   Software enhancements:
>    RS960 buffering and queueing microcode updated
>
>    - increased number of buffers, also went from max MTU sized buffers
>      to 2+kB chainable buffers (max FDDI will fit in two buffers with
>      room to spare.)

We are using 2kB buffers.  An FDDI packet fits into 3 buffers.  The
advantage is that most real-world packets are still Ethernet MTU or
less and take up less space under the new scheme.  We still increased
the total buffering enough for full FDDI packets and used the full
FDDI MTU in testing.
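
To make the space argument concrete, here is a rough back-of-the-envelope
sketch (my illustration only, not the RS960 microcode; the FDDI MTU value
is approximate):

/* Sketch (not the actual RS960 microcode): compare buffer memory used
 * per packet with one max-MTU-sized buffer versus 2kB chainable
 * buffers.  FDDI MTU taken as roughly 4352 bytes, Ethernet as 1500.
 * The chained scheme uses a bit more memory for a full FDDI packet
 * but much less for the typical Ethernet-size-or-smaller packet. */
#include <stdio.h>

#define CHAIN_BUF   2048    /* 2kB chainable buffer */
#define FDDI_MTU    4352    /* approximate FDDI IP MTU */
#define ETHER_MTU   1500

/* number of chained 2kB buffers needed for a packet of this size */
static int bufs_needed(int pkt_len)
{
    return (pkt_len + CHAIN_BUF - 1) / CHAIN_BUF;   /* ceiling division */
}

int main(void)
{
    int sizes[] = { 64, 576, ETHER_MTU, FDDI_MTU };
    int i;

    for (i = 0; i < 4; i++) {
        int pkt = sizes[i];
        printf("%4d-byte packet: old scheme %d bytes, new scheme %d x %d = %d bytes\n",
               pkt, FDDI_MTU, bufs_needed(pkt), CHAIN_BUF,
               bufs_needed(pkt) * CHAIN_BUF);
    }
    return 0;
}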

> [ ... ]

> The conditions and results were summarized on two slides:
>
>  + Single flow Van Jacobson random early drop:
>
>     41Mbps at 384k MTU cross-country (PSC to SDSC?)

This was on our testnet.  We took the NY to Ann Arbor link down and
went by way of Texas (MCI), giving us a 68 msec RTT.  NY to SF is 70
msec, so it is roughly cross-country equivalent.  Of course we
couldn't step on poor unsuspecting users in the middle of the night
by congesting the net, and they couldn't provide a "realistic
background load" for our testing.  We'd like to see a (brief)
validation of results after various steps in deployment, and we have
support from PSC and (I think) SDSC to do this.

>     This code (V4.20L++) is likely to be deployed in a month or so.

It doesn't have an official name and has no firm deployment plans.  A
month or so would be very optimistic.  Some of the changes have
already been deployed since the Maui testing, others will deploy
soon, but others (RED) have no deployment plans (yet).  We'll
validate progress as this stuff gets deployed, and hopefully it will
all get deployed (soon).
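
For anyone not familiar with it, here is a minimal sketch of the
random early drop idea in the Floyd/Jacobson style.  This is not the
RS960 implementation; the thresholds, averaging weight, and maximum
drop probability are made-up illustrative values, and details such as
spacing drops by the count of packets since the last drop are omitted:

/* Minimal sketch of random early drop (RED) gateway logic.  NOT the
 * ANS/RS960 code; all constants below are illustrative only. */
#include <stdlib.h>

#define MIN_TH   5.0     /* min average queue (packets) before any drops */
#define MAX_TH  15.0     /* average queue where drop prob reaches MAX_P */
#define MAX_P    0.02    /* maximum early-drop probability */
#define W_Q      0.002   /* weight for the exponential queue average */

static double avg_q;     /* exponentially weighted average queue length */

/* Called per arriving packet; returns 1 to drop early, 0 to enqueue.
 * cur_q is the instantaneous queue length in packets. */
int red_should_drop(int cur_q)
{
    double p;

    /* update the moving average of the queue length */
    avg_q = (1.0 - W_Q) * avg_q + W_Q * cur_q;

    if (avg_q < MIN_TH)
        return 0;                      /* below min threshold: no drops */
    if (avg_q >= MAX_TH)
        return 1;                      /* above max threshold: drop */

    /* drop probability grows linearly between the two thresholds */
    p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH);
    return ((double)rand() / RAND_MAX) < p;
}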

> By way of comparison Maui Supercomputer center to SDSC was 31Mbps using
> an earlier version of code with 35 buffers.  Windowed ping with the same
> code did 41Mbps.

MHPCC (Maui) to SDSC is a 50 msec RTT.  So we went faster on a longer
RTT path.
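
As a rough sanity check (my arithmetic, not from the slides), the
window a single TCP flow needs to fill the pipe is about bandwidth
times RTT, so the longer path needs a noticeably larger window to
sustain the same or better rate:

/* Back-of-the-envelope bandwidth*RTT check using the RTTs and rates
 * quoted above (my arithmetic, not from the slides). */
#include <stdio.h>

static double window_kbytes(double mbps, double rtt_ms)
{
    double bits = mbps * 1e6 * (rtt_ms / 1000.0);   /* bits in flight */
    return bits / 8.0 / 1024.0;                     /* kilobytes */
}

int main(void)
{
    printf("41 Mb/s at 68 ms RTT needs about %.0f kB of window\n",
           window_kbytes(41.0, 68.0));   /* roughly 340 kB */
    printf("31 Mb/s at 50 ms RTT needs about %.0f kB of window\n",
           window_kbytes(31.0, 50.0));   /* roughly 190 kB */
    return 0;
}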

>  + Four flow Van Jacobson random early drop:
>
>     42Mbps at 96kB MTU.

>     All the numbers are with full forwarding tables in the RS960s

We inject full routing into the testnet but don't allow packet
forwarding between testnet and production net.

Curtis

BTW - I gave numbers to Guy that I've since revised down slightly.  I
changed the way I estimate link utilization in multiple-flow tests so
that I will generally be more accurate and will underestimate
performance if inaccurate.  It's closer to 40 Mb/s for 1 flow and 41
Mb/s for 4 flows (we did hit 42 Mb/s on 8 flows) on the above tests,
and those results were before we tested RED.  The RED code was
written just before IETF and wasn't performance tested until just
after.  In the enthusiasm over "we have RED" I may not have conveyed
that the graphs were from before we had a chance to test RED.  Could
you just quietly revise the numbers down by one in the minutes?
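
To give a feel for what I mean by a conservative estimate (a sketch
of the general idea only, not necessarily the exact accounting I
used): sum the bytes carried by all flows and divide by the span from
the earliest start to the latest finish, so any skew between flows
can only lower the number:

/* Sketch of a conservative aggregate throughput estimate for a
 * multi-flow test (illustrative only): total bytes over the span
 * from earliest start to latest finish. */
#include <stdio.h>

struct flow {
    double start;   /* seconds */
    double end;     /* seconds */
    double bytes;   /* bytes transferred */
};

static double conservative_mbps(const struct flow *f, int n)
{
    double t0 = f[0].start, t1 = f[0].end, bytes = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        if (f[i].start < t0) t0 = f[i].start;
        if (f[i].end   > t1) t1 = f[i].end;
        bytes += f[i].bytes;
    }
    return bytes * 8.0 / (t1 - t0) / 1e6;    /* megabits per second */
}

int main(void)
{
    /* made-up example: four slightly staggered flows; comes out
     * around 40-41 Mb/s */
    struct flow f[4] = {
        { 0.0, 60.2, 77.0e6 },
        { 0.3, 60.5, 77.5e6 },
        { 0.1, 60.1, 76.8e6 },
        { 0.2, 60.4, 77.2e6 },
    };
    printf("aggregate: %.1f Mb/s\n", conservative_mbps(f, 4));
    return 0;
}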

I plan to give a detailed update on the performance testing at NANOG.