RE: FW: [bmwg] Some questions/comments on the multicast draft.

"Conrad C. Nobili" <conrad@harvard.edu> Fri, 19 October 2001 12:53 UTC

Date: Thu, 18 Oct 2001 14:14:27 -0400
From: "Conrad C. Nobili" <conrad@harvard.edu>
To: Debby Stopp <dstopp@ixiacom.com>
cc: 'Kevin Dubray' <kdubray@juniper.net>, "Bmwg (E-mail)" <bmwg@ietf.org>
Subject: RE: FW: [bmwg] Some questions/comments on the multicast draft.

A binary search for a zero-loss rate (ZLR) with our old throughput
tests found a single value, to 1 PPS granularity.  It did this
relatively quickly for the slower link technologies that were the
order of the day.  As link speeds went up by factors of ten and a
hundred, maintaining the 1 PPS granularity meant that binary
searches for a ZLR took log2(10) (about 3.3) or log2(100) (about
6.6) more trials.  However, there are other tricks available if
you want to reduce the number of trials.  In the old Harvard NDTL
we had tweaked our software to "cheat" a bit on the binary search:
it went 80% of the way toward its next goal instead of halfway.
That way we found the common media-rate devices and bricks a bit
more quickly than we would have with a POBS.  So, there may be
some technique like that which can help some.
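
For what it's worth, that biased search is easy to sketch.  The
following is a hypothetical reconstruction, not the actual NDTL
software; `trial` stands in for whatever runs one trial at a given
PPS rate and reports whether it saw zero loss:

```python
def find_zlr(trial, max_rate, granularity=1, bias=0.8):
    """Find the highest zero-loss rate in [0, max_rate].

    Steps `bias` of the way toward the next probe instead of
    halfway; bias=0.5 is a plain old binary search (POBS).
    """
    lo, hi = 0, max_rate   # lo: known zero-loss; hi: known (or assumed) lossy bound
    while hi - lo > granularity:
        probe = lo + bias * (hi - lo)
        # Clamp so each probe strictly shrinks the interval and the
        # search terminates at the requested granularity.
        probe = max(lo + granularity, min(hi - granularity, round(probe)))
        if trial(probe):
            lo = probe     # zero loss: this rate is sustainable
        else:
            hi = probe     # loss seen: back off
    return lo

# A hypothetical device that forwards cleanly up to 97,000 PPS:
print(find_zlr(lambda r: r <= 97000, max_rate=100000))  # prints 97000
```

With bias at 0.8 the search reaches a high-performing device's
media rate in fewer trials than the 0.5 split, at the cost of
extra trials when the true ZLR sits low in the range.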

But these tests aren't really interesting in the cases of media
rate devices and bricks.  In those cases, neither a PLR nor a THRU
test gives you much more information than the other.  In the more
interesting cases, you really do get more information by having
both of the tests.  The PLR test, with its slices through the
performance space (traditionally at 10% intervals), gives you a
general outline of the performance curve.  The THRU test gives
you a single, more accurate ZLR number (traditionally to 1 PPS
granularity).  Remember,
we used to see all sorts of perverse performance behavior, and I
presume we still want these tests to find and show such behaviors
where they still exist.  So, if a device is a "dribbler" I want
to be able to find that, and to see that the PLR tests show
near-perfect performance at all rates but that the THRU test
gives a very low number.  If a device has a periodic or other
non-load-related issue, I also want to be able to see that
somewhere (in which case we need long trials and/or many of them).
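
To make the dribbler point concrete, here is a toy model (purely
hypothetical device behavior, not data from any real DUT) of why
the two tests are complementary:

```python
# Toy model of a "dribbler": a device that loses a tiny, fixed
# trickle of frames regardless of offered load.  (Hypothetical
# numbers, for illustration only.)

TRIAL_FRAMES = 1_000_000   # frames offered per trial

def frames_lost(load_pct):
    """Frames lost in one trial: 3 frames dribble away at any load."""
    return 3

# PLR-style sweep, 100% down to 10% in the traditional 10% slices:
for pct in range(100, 0, -10):
    loss = frames_lost(pct) / TRIAL_FRAMES
    print(f"{pct:3d}% load: {loss:.4%} frame loss")

# Every slice reports a near-perfect 0.0003% loss, so the PLR
# curve looks flat and clean.  But a THRU test requires *zero*
# loss, which this device never delivers, so its reported
# throughput is very low.  Having both numbers is what exposes
# the dribble.
```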

I'm not really proposing any specific changes or wording.  I just
thought these perspectives might help somehow.  (And a gentle prod
from Kevin helped to stir me, I confess.)

--cn, ex-NDTL, hoping this is somehow helpful ...

Conrad C. Nobili  Conrad@Harvard.EDU  Harvard University UIS/NOC

On Thu, 18 Oct 2001, Debby Stopp wrote:

> kevin wrote:
> > I have no problem modifying the collection algorithm to reduce
> > collection time; the issue that I have is that RFC 1242 defines
> > throughput as the _maximum_ lossless forwarding rate.  Your
> > proposed algorithm won't necessarily preserve this.
> > 
> > A hybrid approach might work here:  use your 10% (or smaller ;-)
> > decrements to narrow the binary search area.  Then apply the
> > binary-search-like technique to find the maximum, lossless
> > forwarding rate.
> > 
> > Then we don't recast RFC 1242's meaning of throughput via a
> > methodology statement.
> 
> 
> Hmm.  I don't believe I've actually stepped on RFC 1242's def of throughput
> by suggesting a different methodology, from 1242:
> 
> "Definition:
>                 The maximum rate at which none of the offered frames
>                 are dropped by the device.
>  
>         Discussion:
>                 The throughput figure allows vendors to report a
>                 single value which has proven to have use in the
>                 marketplace.  Since even the loss of one frame in a
>                 data stream can cause significant delays while
>                 waiting for the higher level protocols to time out,
>                 it is useful to know the actual maximum data
>                 rate that the device can support.  Measurements should
>                 be taken over an assortment of frame sizes.  Separate
>                 measurements for routed and bridged data in those
>                 devices that can support both.  If there is a checksum
>                 in the received frame, full checksum processing must
>                 be done."
> 
> There's no mention in there re: binary search or any other specific
> algorithm to find the "maximum rate at which none of the offered frames are
> dropped".
> 
> However, I have no problem compromising; how about this:
> 
> "Similar to the Frame loss rate test in RFC 2544, the first trial SHOULD be
> run for the frame rate that corresponds to 100% of the maximum offered load
> for the frame size on the input media. Repeat the procedure for the rate
> that corresponds to 90% of the maximum offered load, then again for 80% of
> this load and so on. This sequence SHOULD be continued (at reducing 10%
> intervals) until there are two successive trials in which no frames are
> lost. The maximum granularity of the trials MUST be 10% of the maximum
> offered load; a finer granularity is encouraged.
> 
> Alternately, the first trial MAY be run for a frame rate that is well known
> to correspond to 0% frame loss; consecutive trials MUST be run at regular
> increasing intervals of 10% of the maximum offered load.  When loss is
> encountered, perform a binary search between the loss point and the last
> zero-loss point to determine the throughput metric. "
> 
> Debby
> 
> 
> _______________________________________________
> bmwg mailing list
> bmwg@ietf.org
> http://www1.ietf.org/mailman/listinfo/bmwg
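
The two-phase procedure quoted above (step down in 10% decrements
until two successive zero-loss trials, then binary-search between
the last lossy rate and the adjacent zero-loss rate) can be
sketched as follows.  The names here are hypothetical, with
`trial_zero_loss` standing in for running one trial and reporting
whether it was loss-free:

```python
# Sketch of the quoted hybrid procedure (hypothetical names, not
# any official methodology code): sweep down in 10% steps, then
# refine with a plain binary search to recover RFC 1242's
# _maximum_ lossless rate.

def hybrid_throughput(trial_zero_loss, max_load, step_pct=10, granularity=1):
    """trial_zero_loss(load) -> True when a trial at `load` drops nothing."""
    last_lossy = None
    zero_streak = 0
    load = max_load
    while load > 0 and zero_streak < 2:
        if trial_zero_loss(load):
            zero_streak += 1       # counting successive zero-loss trials
        else:
            zero_streak = 0
            last_lossy = load      # remember the lowest lossy rate seen
        load -= max_load * step_pct / 100
    if last_lossy is None:
        return max_load            # zero loss even at full offered load
    # Binary search between the highest zero-loss point and the
    # lowest lossy point, down to the requested granularity:
    lo = last_lossy - max_load * step_pct / 100
    hi = last_lossy
    while hi - lo > granularity:
        mid = (lo + hi) / 2
        if trial_zero_loss(mid):
            lo = mid
        else:
            hi = mid
    return lo

# A hypothetical device that is loss-free up to 57,000 PPS on a
# 100,000 PPS medium; the result lands within 1 PPS below 57,000.
print(hybrid_throughput(lambda load: load <= 57000, 100000))
```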


