Re: [ippm] How should capacity measurement interact with shaping?

"MORTON, ALFRED C (AL)" <acm@research.att.com> Mon, 30 September 2019 11:39 UTC

From: "MORTON, ALFRED C (AL)" <acm@research.att.com>
To: "Ruediger.Geib@telekom.de" <Ruediger.Geib@telekom.de>, "ihameli@cnet.fi.uba.ar" <ihameli@cnet.fi.uba.ar>
CC: "ippm@ietf.org" <ippm@ietf.org>
Date: Mon, 30 Sep 2019 11:38:45 +0000
Message-ID: <4D7F4AD313D3FC43A053B309F97543CFA0AFBB8E@njmtexg5.research.att.com>
In-Reply-To: <LEJPR01MB1178633ADC0D6C54A649764A9C860@LEJPR01MB1178.DEUPRD01.PROD.OUTLOOK.DE>
Archived-At: <https://mailarchive.ietf.org/arch/msg/ippm/8oOQUlV0ihUxHOXCaCZ3ze6trqg>
Subject: Re: [ippm] How should capacity measurement interact with shaping?

Hi Ignacio and Rüdiger,
please see one reply below.

> -----Original Message-----
> From: ippm [mailto:ippm-bounces@ietf.org] On Behalf Of
> Ruediger.Geib@telekom.de
> Sent: Thursday, September 26, 2019 2:51 AM
> To: ihameli@cnet.fi.uba.ar
> Cc: ippm@ietf.org
> Subject: Re: [ippm] How should capacity measurement interact with shaping?
> 
> Dear Ignacio,
> 
> [IAH] My question is: why would one want to measure the "real" access speed if it
> cannot be available 100% of the time?
> 
> [RG] Regulators keep the industry busy with so-called access "speed tests". This
> also affects providers subject to regulation, some of whom are interested in
> standardizing IP access bandwidth measurements.
> 
> [RG] Some of your message seems to be concerned with detecting the available
> bottleneck bandwidth (e.g., in the presence of background traffic). That's an
> interesting subject too, but I think we should discuss it separately.
> 
> [RG] Contributions on detecting congestion indicated by queue build-up are
> welcome. RTT is one potential method, and its merits and flaws could become
> part of the draft, I think.
[acm] 
On this last point, it could be part of the draft where Ignacio has already
provided some discussion:
https://tools.ietf.org/html/draft-ietf-ippm-route-05#section-5
since this section aims to identify properties of paths, IMO.

Al
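
A minimal illustration of the RTT-based queue-detection idea mentioned above (not text from any draft): compare recent RTT samples against the baseline minimum and flag a sustained rise as likely queue build-up. The window sizes and the 15 ms threshold are arbitrary assumptions.

    import statistics

    def queue_buildup_suspected(rtt_samples_ms, baseline_window=20,
                                recent_window=5, threshold_ms=15.0):
        # Baseline: the minimum RTT observed early in the series.
        # Verdict: the median of the most recent samples exceeds the
        # baseline by more than threshold_ms, i.e. packets are queuing.
        if len(rtt_samples_ms) < baseline_window + recent_window:
            return False  # not enough data to decide
        baseline = min(rtt_samples_ms[:baseline_window])
        recent = statistics.median(rtt_samples_ms[-recent_window:])
        return (recent - baseline) > threshold_ms

    # Example: RTT grows from ~20 ms toward ~60 ms as a queue fills.
    samples = [20 + 0.5 * i for i in range(80)]
    print(queue_buildup_suspected(samples))  # True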

> 
> Regards,
> 
> Ruediger
> 
> 
> -----Original Message-----
> From: ippm <ippm-bounces@ietf.org> On Behalf Of J Ignacio Alvarez-Hamelin
> Sent: Thursday, 26 September 2019 00:21
> To: Matt Mathis <mattmathis=40google.com@dmarc.ietf.org>
> Cc: MORTON, ALFRED C (AL) <acm@research.att.com>; ippm@ietf.org
> Subject: Re: [ippm] How should capacity measurement interact with shaping?
> 
> Dear Matt,
> 
> I have been discussing some measurement issues in IP networks with Al and Rüdiger, and I would like to introduce another point of view (sorry to increase the "entropy"…). In my opinion, one part of the problem is that one would like to establish how good the Internet connection is, and that depends not only on the channel technology but on many other parameters (like "buckets", etc.). My question is: why would one want to measure the "real" access speed if it cannot be available 100% of the time? It is clearly not realistic for an ISP to provide 100 Mbps all the time for every customer; therefore, another kind of measurement could be considered.
> From the other perspective, BBR is a great technique for reaching the best possible capacity incredibly quickly, and there the problem is a different one. (By the way, I think BBR's difficulty is potentially related to the interactions among several concurrent flows; even though there are some studies, I think this is not entirely understood.) Then, there are situations on the user-application side where a high capacity is needed for some time (e.g., security software updates), and it could be interesting to provide protocols, like BBR, to ensure that.
> Which end do you pursue? Understanding what is happening in the network in order to bring a solution to this kind of application, or measuring the properties themselves?
> From my point of view (sorry, Al, for repeating this one more time), if we want to produce some information about the network status for the end user, we need to work on parameters that express "quality", which is quite hard.
> One idea along this path is to know whether my network can carry some "bursty" traffic at *every* time, which means measuring continuously, i.e., we cannot saturate the link all the time. This leads to another idea: to develop an active, low-impact measurement to assess that quality: can I send or receive some bursty traffic? I understand that how long and how "bursty" are not defined here, but we can figure that out (or at least try some reasonable numbers). I have a couple of ideas using RTTs and my experience measuring RTTs on IP networks.
> The central point here is that performing measurements at the bottleneck limit triggers different mechanisms, like the one you described, but potentially others that are more complex. (By the way, avoiding lost packets while controlling the delay is an exciting way to influence BBR.)
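
A rough sketch of the kind of low-impact burst probe described above, using only RTTs. The burst size, required rate, and margin are invented for illustration and are not values from this thread.

    def expected_drain_ms(burst_bytes, rate_mbps):
        # Time (ms) to drain the whole burst through a bottleneck
        # running at exactly rate_mbps with no spare capacity.
        return burst_bytes * 8 / (rate_mbps * 1e6) * 1e3

    def burst_absorbed(baseline_rtt_ms, post_burst_rtt_ms,
                       burst_bytes, required_rate_mbps, margin=0.5):
        # Crude verdict: if the RTT right after the burst is inflated by
        # much less than the full drain time at required_rate_mbps, the
        # path forwarded the burst faster than that rate (or it never queued).
        inflation = post_burst_rtt_ms - baseline_rtt_ms
        budget = expected_drain_ms(burst_bytes, required_rate_mbps)
        return inflation < margin * budget

    # Example: a 500 kB burst on a path we hope sustains at least 50 Mb/s.
    # Draining 500 kB at 50 Mb/s would take 80 ms; we saw only 12 ms of
    # RTT inflation, so the burst was handled comfortably.
    print(expected_drain_ms(500_000, 50))           # 80.0
    print(burst_absorbed(20.0, 32.0, 500_000, 50))  # True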
> 
> Best,
> 
> 	Ignacio
> 
> 
> _______________________________________________________________
> 
> Dr. Ing. José Ignacio Alvarez-Hamelin
> CONICET and Facultad de Ingeniería, Universidad de Buenos Aires
> Av. Paseo Colón 850 - C1063ACV - Buenos Aires - Argentina
> +54 (11) 5285 0716 / 5285 0705
> e-mail: ihameli@cnet.fi.uba.ar
> web: http://cnet.fi.uba.ar/ignacio.alvarez-hamelin/
> _______________________________________________________________
> 
> 
> 
> > On 21 Sep 2019, at 15:32, Matt Mathis
> <mattmathis=40google.com@dmarc.ietf.org> wrote:
> >
> > Yes, exactly, I am sure this is a provider's feature.   I have a
> receiver side pcap, and it is quite a bit more complicated than I thought:
> > - Zero losses in the entire trace.  It is dynamically shaped at a
> bottleneck with a long queue that is pacing packets.
> > - The initial part is really straight (looks like a hard limit)
> > - The rate (and packet headway) smoothly wanders irregularly all over
> the place in the latter part of the trace, from a low of about 1 Mb/s to
> peaks close to the max rate.   My earlier data was from BBR max_rate, so
> the fluctuating rate apparently has stable peaks.
> >
> > By smooth: it looked like a spline fit, suggesting the instantaneous
> packet headway was determined by a differential equation....
> >
> > This behavior is not an accident, but the result of a very sophisticated
> controller.   And we have seen other bottlenecks like it elsewhere in the
> US and Europe.
> >
> > As people may know, I do believe that shaping is the most appropriate
> way to deal with heavy hitters.  And that we (IPPM) really need a way to
> characterize shaping.  I do care about the asymptotic rate, and how
> quickly I fall out of the initial rate.
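
One way to reason about "how quickly I fall out of the initial rate" is the token-bucket relation: a full bucket of B bytes sustains a peak rate above the token rate for roughly B*8 / (peak - token). The sketch below is generic; the 2 MB bucket, 100 Mb/s peak, and 80 Mb/s token rate are made-up numbers, not the parameters of the trace discussed here.

    def time_at_peak_s(bucket_bytes, peak_mbps, token_mbps):
        # How long a full token bucket sustains peak_mbps before the
        # delivered rate must fall back to token_mbps (seconds).
        if peak_mbps <= token_mbps:
            return float("inf")  # bucket never drains
        return bucket_bytes * 8 / ((peak_mbps - token_mbps) * 1e6)

    def bucket_bytes_from_trace(peak_mbps, token_mbps, seconds_at_peak):
        # Invert the relation: estimate bucket depth from an observed
        # initial phase at peak_mbps that lasted seconds_at_peak.
        return (peak_mbps - token_mbps) * 1e6 * seconds_at_peak / 8

    # Hypothetical numbers (assumptions, not measurements):
    print(time_at_peak_s(2_000_000, 100, 80))     # 0.8 s at peak for a 2 MB bucket
    print(bucket_bytes_from_trace(100, 80, 3.0))  # 7500000.0: ~7.5 MB implied bucket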
> >
> > Thanks,
> > --MM--
> > The best way to predict the future is to create it.  - Alan Kay
> >
> > We must not tolerate intolerance;
> >        however our response must be carefully measured:
> >             too strong would be hypocritical and risks spiraling out of
> control;
> >             too weak risks being mistaken for tacit approval.
> >
> >
> > On Sat, Sep 21, 2019 at 8:34 AM MORTON, ALFRED C (AL)
> <acm@research.att.com> wrote:
> > Hi Matt,
> >
> >
> >
> > I had another thought about the 94 > 75 > 83 Mbps trace:
> > This might well be a service provider’s feature, where they favor new flows with a high rate for a short amount of time.  I remember some ISPs offering a “speed boost” to load web pages fast, but settling to a lower rate for longer transfers. The same strategy could help reduce the initial buffering time for video streams, perhaps with different time intervals and rates. This might be implemented by changing the token rate, or through some other means.
> >
> > From: Matt Mathis [mailto:mattmathis@google.com]
> > Sent: Thursday, September 19, 2019 9:43 PM
> > To: MORTON, ALFRED C (AL) <acm@research.att.com>
> > Cc: Ruediger.Geib@telekom.de; ippm@ietf.org; CIAVATTONE, LEN
> > <lc9892@att.com>
> > Subject: Re: How should capacity measurement interact with shaping?
> >
> >
> >
> > I am actually more interested in the philosophical questions about how
> this should be reported, and what should the language be about non-
> stationary available capacity.   One intersecting issue: BBR converges on
> both the initial and final rate in under 2 seconds (this was a long path,
> so startup took more than a second).   Do users want a quick (and
> relatively cheap) test that takes 2 seconds or a longer test that is more
> likely to discover the token bucket?  How long?  If we want to call them
> different names, what should they be?
> >
> > [acm]
> > So, if your trace is revealing a bimodal form of service rates, then it ought to be characterized with that in mind and allow for two modes of operation when reporting: 94 Mbps initial peak Capacity, 83 Mbps sustained Capacity, *when this behavior is demonstrated and repeatable*.
> >
> > Thanks for more insights about BBR, too.
> >
> > Al
> >
> > On the pure technical issues: BBR is still quite a moving target.   I
> have a paper in draft that will shed some light on this.  It is due to be
> unembargoed sometime in October.
> >
> > BBRv1 (released slightly after the cacm paper you mention) measures the
> max_BW every 8 RTT.  BBRv2 measures the max_BW on a sliding schedule that
> loosely matches CUBIC.  (In both, min_RTT is measured every 10 seconds, in
> the absence of organic low RTT samples).   BBRv2 uses additional signals
> and does a much better job of avoiding overshoot at the startup.
> >
> >
> >
> > In any case the best (most stable) BBR-based metric seems to be delta(snd.una)/elapsed_time, which is the progress as seen by upper layers.  If you look at short time slices (we happen to be using 0.25 seconds) you see a mostly crisp square wave.  If you average from the beginning of the connection to now, the peak rate happens at the moment the bucket runs out of tokens, and falls towards the token rate after that.
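
A small sketch of the two views described above, computed from (time, snd.una) samples: the per-slice goodput "square wave" and the running average from the start of the connection. The 0.25 s slicing follows the message; the function and the sample data are invented for illustration.

    def goodput_mbps(samples):
        # samples: list of (t_seconds, snd_una_bytes) pairs, ordered by time.
        # Returns (per_slice, cumulative) goodput in Mb/s:
        #   per_slice[i]  = rate between sample i and i+1 (the "square wave"),
        #   cumulative[i] = average rate from the first sample to sample i+1
        #                   (peaks when a token bucket runs dry, then decays
        #                   toward the token rate).
        t0, b0 = samples[0]
        per_slice, cumulative = [], []
        for (t_prev, b_prev), (t, b) in zip(samples, samples[1:]):
            per_slice.append((b - b_prev) * 8 / (t - t_prev) / 1e6)
            cumulative.append((b - b0) * 8 / (t - t0) / 1e6)
        return per_slice, cumulative

    # Toy data: 0.25 s slices, ~100 Mb/s for 2 s, then ~80 Mb/s for 2 s.
    samples, acked, t = [(0.0, 0)], 0, 0.0
    for i in range(16):
        rate_bps = 100e6 if i < 8 else 80e6
        t += 0.25
        acked += int(rate_bps * 0.25 / 8)
        samples.append((t, acked))
    per_slice, cumulative = goodput_mbps(samples)
    print([round(r) for r in per_slice])  # eight 100s, then eight 80s
    print(round(cumulative[-1]))          # 90: the average is pulled down after the drop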
> >
> >
> >
> > Thanks,
> >
> > --MM--
> > The best way to predict the future is to create it.  - Alan Kay
> >
> > We must not tolerate intolerance;
> >
> >        however our response must be carefully measured:
> >
> >             too strong would be hypocritical and risks spiraling out
> > of control;
> >
> >             too weak risks being mistaken for tacit approval.
> >
> >
> >
> >
> >
> > On Thu, Sep 19, 2019 at 3:35 PM MORTON, ALFRED C (AL)
> <acm@research.att.com> wrote:
> >
> > Thanks Matt!  This is an interesting trace to consider, and an important discussion to share with the group.
> >
> >
> > When I look at the equation for BBR:
> > https://cacm.acm.org/magazines/2017/2/212428-bbr-congestion-based-congestion-control/fulltext
> > both BBR and the Maximum IP-Layer Capacity Metric seek the Max over some time interval. The window seems smaller for BBR: 6 to 10 RTTs, where we’ve been using parameters that result in a rate measurement once a second and take the max of the 10 one-second measurements.
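
For concreteness, a toy version of the parameterization described above: per-second rate samples at the receiver, with the metric taken as the maximum of the ten one-second measurements. The per-second numbers below loosely mirror the trace discussed in this thread; the draft defines the actual metric and method.

    def max_ip_layer_capacity_mbps(bits_per_second_samples):
        # bits_per_second_samples: one IP-layer bit count per second of the
        # test, measured at the receiver (e.g., 10 entries for a 10 s test).
        # The metric is the maximum one-second rate.
        return max(bits_per_second_samples) / 1e6

    # Loosely mirroring the trace in this thread: ~94.5 Mb/s for 4 s,
    # a fluctuating second, then ~83 Mb/s for the rest of the test.
    per_second_bits = [94.5e6] * 4 + [75e6] + [83e6] * 5
    print(max_ip_layer_capacity_mbps(per_second_bits))  # 94.5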
> >
> >
> >
> > We’ve also evaluated several performance metrics when adjusting load, and that determines how high the sending rate will go (based on feedback from the receiver).
> > https://tools.ietf.org/html/draft-morton-ippm-capcity-metric-method-00#section-4.3
> >
> >
> >
> > So, the MAX delivered rate that we can all see for the 10-second test is 94.5 Mbps. This rate was sustained for more than a trivial amount of time, too. But if you are concerned that this rate was somehow inflated by a large buffer and a large burst tolerance in the shaper – that’s where the additional metrics and the slightly different sending rate control that we described in the draft (and the slides) might help.
> > https://datatracker.ietf.org/meeting/105/materials/slides-105-ippm-metrics-and-methods-for-ip-capacity-00
> >
> >
> >
> > IOW, it might well be that Max IP Capacity, measured as we designed and parameterized it, measures 83 Mbps for this path (assuming the 94.5 is the result of big overshoot at sender, and the fluctuating performance afterward seems to support that).
> >
> > When I was looking for background on BBR, I saw a paper comparing BBR and CUBIC during drive tests:
> > http://web.cs.wpi.edu/~claypool/papers/driving-bbr/
> > One pair of plots seemed to indicate that BBR sent lots of Bytes early-on, and grew the RTT pretty high before settling down (Figure 5, a & b).
> > This looks a bit like the case you described below, except 94.5 Mbps is a Received Rate – we don’t know what came out of the network, just what went in and filled a buffer before crashing down in the drive test.
> >
> > So, I think I did more investigation than justification for my answers, but I conclude that parameters like the individual measurement intervals and the overall time interval from which the Max is drawn, plus the rate control algorithm itself, play a big role here.
> >
> >
> >
> > regards,
> >
> > Al
> >
> >
> >
> >
> >
> > From: Matt Mathis [mailto:mattmathis@google.com]
> > Sent: Thursday, September 19, 2019 5:18 PM
> > To: MORTON, ALFRED C (AL) <acm@research.att.com>; Ruediger.Geib@telekom.de
> > Cc: ippm@ietf.org
> > Subject: Fwd: How should capacity measurement interact with shaping?
> >
> >
> >
> > Ok, moving the thread to IPPM
> >
> >
> >
> > Some background, we (Measurement Lab) are testing a new transport (TCP)
> performance measurement tool, based on BBR-TCP.   I'm not ready to talk
> about results yet (well ok, it looks pretty good).    (BTW the BBR
> algorithm just happens to resemble the algorithm described in draft-
> morton-ippm-capcity-metric-method-00.)
> >
> >
> >
> > Anyhow, we noticed some interesting performance features for a number of ISPs in the US and Europe, and I wanted to get some input on how these cases should be treated.
> >
> >
> >
> > One data point: a single trace saw ~94.5 Mbit/s for ~4 seconds, fluctuating performance around ~75 Mb/s for ~1 second, and then stable performance at ~83 Mb/s for the rest of the 10-second test.  If I were to guess, this is probably a policer (shaper?) with a 1 MB token bucket and a ~83 Mb/s token rate (these numbers are not corrected for header overheads, which actually matter with this tool).  What is weird about it is that different ingress interfaces to the ISP (peers or serving locations) exhibit different parameters.
> >
> >
> >
> > Now the IPPM measurement question:   Is the bulk transport capacity of
> this link ~94.5 Mbit/s or ~83Mb/s?   Justify your answer....?
> >
> >
> >
> > Thanks,
> >
> > --MM--
> > The best way to predict the future is to create it.  - Alan Kay
> >
> > We must not tolerate intolerance;
> >
> >        however our response must be carefully measured:
> >
> >             too strong would be hypocritical and risks spiraling out
> > of control;
> >
> >             too weak risks being mistaken for tacit approval.
> >
> >
> >
> > Forwarded Conversation
> > Subject: How should capacity measurement interact with shaping?
> > ------------------------
> >
> >
> >
> > From: Matt Mathis <mattmathis@google.com>
> > Date: Thu, Aug 15, 2019 at 8:55 AM
> > To: MORTON, ALFRED C (AL) <acm@research.att.com>
> >
> >
> >
> > We are seeing shapers with huge bucket sizes, perhaps as large as or larger than 100 MB.
> >
> >
> >
> > These are prohibitive to test by default, but can have a huge impact in
> some common situations.  E.g. downloading software updates.
> >
> >
> >
> > An unconditional pass is not good, because some buckets are small.  What
> counts as large enough to be ok, and what "derating" is ok?
> >
> >
> >
> > Thanks,
> >
> > --MM--
> > The best way to predict the future is to create it.  - Alan Kay
> >
> > We must not tolerate intolerance;
> >
> >        however our response must be carefully measured:
> >
> >             too strong would be hypocritical and risks spiraling out
> > of control;
> >
> >             too weak risks being mistaken for tacit approval.
> >
> >
> >
> > ----------
> > From: MORTON, ALFRED C (AL) <acm@research.att.com>
> > Date: Mon, Aug 19, 2019 at 5:08 AM
> > To: Matt Mathis <mattmathis@google.com>
> > Cc: CIAVATTONE, LEN <lc9892@att.com>, Ruediger.Geib@telekom.de <Ruediger.Geib@telekom.de>
> >
> >
> >
> > Hi Matt, currently cruising between Crete and Malta, with about 7 days of vacation remaining – adding my friend Len. You know Rüdiger. It appears I’ve forgotten how to type in 2 weeks, given the number of typos I’ve fixed so far...
> >
> >
> >
> > We’ve seen big buffers on a basic DOCSIS cable service (downlink >2 sec), but:
> >
> >   we have 1-way delay variation or RTT variation limits when searching for the max rate, so that not many packets queue in the buffer
> >
> >   we want the status messages that result in rate adjustment to return in a reasonable amount of time (50 ms + RTT)
> >
> >   we usually search for 10 seconds, but if we go back and test with a fixed rate, we can see the buffer growing if the rate is too high.
> >
> >   There will eventually be a discussion on the thresholds we use in the search // load rate control algorithm. The copy of Y.1540 I sent you has a simple one; we have moved beyond that now (see the slides I didn’t get to present at IETF).
> >
> >   There is value in having some of this discussion on IPPM-list, so we get some *agenda time at IETF-106*.
> >
> > We measure rate and performance, with some performance limits built-in.  Pass/Fail is another step, de-rating too (made sense with MBM “target_rate”).
> >
> >
> >
> > Al
> >
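A caricature of the search behavior sketched in the message above: raise the offered rate while receiver feedback is clean, and back off when RTT variation or loss indicates queue growth. The step size, the 30 ms variation limit, and the feedback sequence are invented; the actual load-adjustment algorithm is in the draft and the Y.1540 text mentioned above.

    def adjust_offered_rate(current_mbps, rtt_variation_ms, loss_seen,
                            step_mbps=5.0, rttvar_limit_ms=30.0):
        # One iteration of a load search: probe higher while feedback is
        # clean, back off when RTT variation (queue growth) or loss
        # crosses the limit.  All values are illustrative.
        if loss_seen or rtt_variation_ms > rttvar_limit_ms:
            return max(current_mbps - step_mbps, step_mbps)  # back off
        return current_mbps + step_mbps                      # probe higher

    # Example feedback for a path that starts queuing near ~90 Mb/s:
    rate = 50.0
    feedback = [(5, False), (6, False), (7, False), (8, False),
                (12, False), (18, False), (26, False), (45, False), (60, True)]
    for rttvar_ms, loss in feedback:
        rate = adjust_offered_rate(rate, rttvar_ms, loss)
    print(rate)  # 75.0 after backing off twice once queuing (then loss) appeared
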
> >
> >
> > ----------
> > From: <Ruediger.Geib@telekom.de>
> > Date: Mon, Aug 26, 2019 at 12:05 AM
> > To: <acm@research.att.com>
> > Cc: <lc9892@att.com>, <mattmathis@google.com>
> >
> >
> >
> > Hi Al,
> >
> > thanks for keeping me involved. I don’t have a precise answer and doubt that there will be a single universal truth.
> >
> > If the aim is only to determine the IP bandwidth of an access, then we aren’t interested in filling a buffer. Buffering events may occur, some of which are useful and to be expected, whereas others are not desired:
> >
> > 	• Sender shaping behavior may matter (is traffic at the source CBR or is it bursty?)
> > 	• Random collisions should be tolerated at the access whose bandwidth is to be measured.
> > 	• Limiting packet drop due to buffer overflow is a design aim or an important part of the algorithm, I think.
> > 	• Shared media might create bursts. I’m not an expert in the area, but in some cases there’s an “is bandwidth available” check between a central sender using a shared medium and the connected receivers. WiFi and maybe other wireless equipment also buffers packets to optimize wireless resource usage.
> > 	• It might be an idea to mark some flows with ECN, once there’s a guess at a sending bitrate at which to expect no or very little packet drop. Today, this is experimental. CE marks by an ECN-capable device should be expected roughly once queuing starts.
> >
> > Practically, the set-up should be configurable with commodity hard- and software, and all metrics should be measurable at the receiver. Burstiness of traffic needs to be captured, and queuing events which are to be expected need to be distinguished from (undesired) queue build-up. I hope that can be done with commodity hard- and software. I at least am not able to write down a simple metric distinguishing queues that are to be expected from (undesired) queue build-up causing congestion. The hard- and software to be used should be part of the solution, not part of the problem (bursty source traffic and timestamps with insufficient accuracy to detect queues are what I’d like to avoid).
> >
> > I’d suggest moving the discussion to the list.
> >
> >
> >
> > Regards,
> >
> >
> >
> > Rüdiger
> >
> >
> >
> > ----------
> > From: MORTON, ALFRED C (AL) <acm@research.att.com>
> > Date: Thu, Sep 19, 2019 at 7:01 AM
> > To: Ruediger.Geib@telekom.de <Ruediger.Geib@telekom.de>
> > Cc: CIAVATTONE, LEN <lc9892@att.com>, mattmathis@google.com <mattmathis@google.com>
> >
> >
> >
> > I’m catching up with this thread again, but before I reply:
> >
> >
> >
> > *** Any objection to moving this discussion to IPPM-list ?? ***
> >
> >
> >
> > @Matt – this is a question to you at this point...
> >
> >
> >
> > thanks,
> >
> > Al
> >
> >
> >
> _______________________________________________
> ippm mailing list
> ippm@ietf.org
> https://www.ietf.org/mailman/listinfo/ippm