Re: [ippm] How should capacity measurement interact with shaping?

<Ruediger.Geib@telekom.de> Thu, 26 September 2019 06:50 UTC

From: Ruediger.Geib@telekom.de
To: ihameli@cnet.fi.uba.ar
CC: ippm@ietf.org
Thread-Topic: [ippm] How should capacity measurement interact with shaping?
Date: Thu, 26 Sep 2019 06:50:30 +0000
Message-ID: <LEJPR01MB1178633ADC0D6C54A649764A9C860@LEJPR01MB1178.DEUPRD01.PROD.OUTLOOK.DE>
References: <CAH56bmBmywKg_AxsHnRf97Pfxu4Yjsp_fv_s4S7LXk1voQpV1g@mail.gmail.com> <4D7F4AD313D3FC43A053B309F97543CFA0ADC777@njmtexg4.research.att.com> <LEXPR01MB05607E081CB169E34587EEEF9CA10@LEXPR01MB0560.DEUPRD01.PROD.OUTLOOK.DE> <4D7F4AD313D3FC43A053B309F97543CFA0AF9184@njmtexg5.research.att.com> <CAH56bmC3gDEDF0wypcN2Lu+Ken3E7f_zXf_5yYbJGURBsju22w@mail.gmail.com> <4D7F4AD313D3FC43A053B309F97543CFA0AF94D0@njmtexg5.research.att.com> <CAH56bmBvaFb9cT+YQUhyA4gYywhjFuhk12snFh8atB9xAA5pWg@mail.gmail.com> <4D7F4AD313D3FC43A053B309F97543CFA0AF9BA5@njmtexg5.research.att.com> <CAH56bmDmFwpzmB3NeDoE3cE-er6jZzZg_p-St6fO5nu3Ls1fJQ@mail.gmail.com> <EE39896D-A7E6-4710-924F-418B5BD72E38@cnet.fi.uba.ar>
In-Reply-To: <EE39896D-A7E6-4710-924F-418B5BD72E38@cnet.fi.uba.ar>
Accept-Language: de-DE, en-US
Content-Language: de-DE
Archived-At: <https://mailarchive.ietf.org/arch/msg/ippm/fkFCcOMFsv2R0GPBSvDB7ObSeWk>
Subject: Re: [ippm] How should capacity measurement interact with shaping?
List-Id: IETF IP Performance Metrics Working Group <ippm.ietf.org>

Dear Ignacio,

[IAH] My question is: why would one want to measure the "real" access rate if it cannot be available 100% of the time at that specific speed?

[RG] Regulators keep an industry busy with so-called access "speed tests". This also involves providers which are subject to regulation, and some of these are interested in standardizing IP access bandwidth measurements.

[RG] Part of your message seems to be concerned with detecting the available bottleneck bandwidth (e.g., in the presence of background traffic). That's an interesting subject too, but I think we should discuss it separately.

[RG] Contributions on detecting congestion indicated by queue build-up are welcome. RTT is one potential method, and its merits and flaws can become part of the draft, I think.
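
As an illustration of the RTT-based idea, a minimal sketch (in Python; the 15 ms threshold, the smoothing factor and the sample values are assumptions for illustration, not values from the draft):

    # Flag queue build-up when the smoothed RTT rises well above the
    # baseline (minimum) RTT observed on the path.
    def detect_queue_buildup(rtt_samples_ms, threshold_ms=15.0, alpha=0.125):
        baseline = min(rtt_samples_ms)               # proxy for the uncongested RTT
        srtt = rtt_samples_ms[0]
        flags = []
        for rtt in rtt_samples_ms:
            srtt = (1 - alpha) * srtt + alpha * rtt  # EWMA smoothing, as in TCP's SRTT
            flags.append(srtt - baseline > threshold_ms)
        return flags

    # Example: a steady 20 ms path, then a growing queue.
    print(detect_queue_buildup([20, 21, 20, 22, 35, 55, 80, 110]))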

Regards,

Ruediger


-----Original Message-----
From: ippm <ippm-bounces@ietf.org> On Behalf Of J Ignacio Alvarez-Hamelin
Sent: Thursday, 26 September 2019 00:21
To: Matt Mathis <mattmathis=40google.com@dmarc.ietf.org>
Cc: MORTON, ALFRED C (AL) <acm@research.att.com>; ippm@ietf.org
Subject: Re: [ippm] How should capacity measurement interact with shaping?

Dear Matt,

I have been discussing some issues about measurements in IP networks with Al and Rüdiger.
I would like to introduce another point of view (sorry to increase the "entropy"…). In my opinion, one part of the problem is that we would like to establish how good the Internet connection is, and that depends not only on the channel technology but on a lot of other parameters (like "buckets", etc.). My question is: why would one want to measure the "real" access rate if it cannot be available 100% of the time at that specific speed? It is clearly not realistic that an ISP provides 100 Mbps all the time for every customer; therefore, another kind of measurement could be considered.
From the other perspective, BBR is a great technique to reach the best possible capacity incredibly quickly, and here the problem is a different one. (By the way, I think the difficulty with BBR is potentially related to the interactions among several concurrent flows, which, even though there are some studies, I think is not entirely understood.) Then there are situations, on the user-application side, where a high capacity is needed for some time (e.g., security software updates), and it could be interesting to provide protocols, like BBR, that ensure that.
What end do you pursue? To understand what is happening in the network and bring a solution to this kind of application, or to measure its properties?
From my point of view (sorry, Al, for repeating this one more time), if we want to produce some information about the network status for the end user, we need to work on parameters that express the "quality", which is quite hard.
One idea on this path is to know whether my network can carry some "bursty" traffic at *any* time, which means measuring continuously, i.e., we cannot saturate the link all the time. This leads to another idea: to develop an active, low-impact measurement to assess the quality: can I send or receive some bursty traffic? I understand that how long and how "bursty" is not defined here, but we can figure it out (or at least try some reasonable numbers). I have a couple of ideas using RTTs and my experience measuring RTTs on IP networks.
The central point here is that measuring at the bottleneck limit triggers different mechanisms, like the one you described, but potentially others that are more complex. (By the way, avoiding lost packets while controlling the delay is an exciting way to influence BBR.)
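
One way to picture the burst-probe idea above is a minimal sketch like the following (Python; the median-baseline comparison, the 5 ms tolerance and the sample RTTs are illustrative assumptions):

    # Compare RTTs observed while a short burst is in flight against the
    # quiet-period baseline; if the inflation stays within tolerance, the
    # access absorbed the burst without noticeable queuing.
    def burst_absorbed(baseline_rtts_ms, burst_rtts_ms, tolerance_ms=5.0):
        baseline = sorted(baseline_rtts_ms)[len(baseline_rtts_ms) // 2]  # median
        inflation = max(burst_rtts_ms) - baseline
        return inflation <= tolerance_ms, inflation

    ok, inflation = burst_absorbed([20, 21, 20, 22], [21, 24, 23, 22])
    print("burst absorbed:", ok, "- max RTT inflation (ms):", inflation)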

Best,

	Ignacio


_______________________________________________________________

Dr. Ing. José Ignacio Alvarez-Hamelin
CONICET and Facultad de Ingeniería, Universidad de Buenos Aires Av. Paseo Colón 850 - C1063ACV - Buenos Aires - Argentina
+54 (11) 5285 0716 / 5285 0705
e-mail: ihameli@cnet.fi.uba.ar
web: http://cnet.fi.uba.ar/ignacio.alvarez-hamelin/
_______________________________________________________________



> On 21 Sep 2019, at 15:32, Matt Mathis <mattmathis=40google.com@dmarc.ietf.org> wrote:
> 
> Yes, exactly, I am sure this is a provider's feature.   I have a receiver side pcap, and it is quite a bit more complicated than I thought:
> - Zero losses in the entire trace.  It is dynamically shaped at a bottleneck with a long queue that is pacing packets.
> - The initial part is really straight (looks like a hard limit)
> - The rate (and packet headway) smoothly wanders irregularly all over the place in the latter part of the trace, from a low of about 1 Mb/s to peaks close to the max rate.   My earlier data was from BBR max_rate, so the fluctuating rate apparently has stable peaks.
> 
> By smooth: it looked like a spline fit, suggesting the instantaneous packet headway was determined by a differential equation....
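
For reference, the headway and short-window rates can be recovered from a receiver-side capture roughly as in this sketch (it works over (timestamp, length) pairs; the 0.25 s window and the sample packets are arbitrary, and parsing the pcap itself is omitted):

    # Given (timestamp_s, length_bytes) pairs from a receiver-side capture,
    # compute inter-packet headway and the delivered rate per short window.
    def headways_and_rates(packets, window_s=0.25):
        headways = [t2 - t1 for (t1, _), (t2, _) in zip(packets, packets[1:])]
        start = packets[0][0]
        per_window = {}                          # window index -> bits/s
        for ts, length in packets:
            idx = int((ts - start) / window_s)
            per_window[idx] = per_window.get(idx, 0) + length * 8 / window_s
        return headways, [per_window[i] for i in sorted(per_window)]

    pkts = [(0.000, 1500), (0.001, 1500), (0.120, 1500), (0.260, 1500)]
    print(headways_and_rates(pkts))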
> 
> This behavior is not an accident, but the result of a very sophisticated controller.   And we have seen other bottlenecks like it elsewhere in the US and Europe.
> 
> As people may know, I do believe that shaping is the most appropriate way to deal with heavy hitters.  And that we (IPPM) really need a way to characterize shaping.  I do care about the asymptotic rate, and how quickly I fall out of the initial rate.
> 
> Thanks,
> --MM--
> The best way to predict the future is to create it.  - Alan Kay
> 
> We must not tolerate intolerance;
>        however our response must be carefully measured: 
>             too strong would be hypocritical and risks spiraling out of control;
>             too weak risks being mistaken for tacit approval.
> 
> 
> On Sat, Sep 21, 2019 at 8:34 AM MORTON, ALFRED C (AL) <acm@research.att.com> wrote:
> Hi Matt,
> 
>  
> 
> I had another thought about the 94 > 75 > 83 Mbps trace:
> 
> This might well be a service provider’s feature, where they favor new flows with a high rate for a short amount of time.  I remember some ISPs offering a “speed boost” to load web pages fast, but settling to a lower rate for longer transfers. The same strategy could help reduce the initial buffering time for video streams, perhaps with different time intervals and rates. This might be implemented by changing the token rate, or through some other means.
> 
>  
> 
>  
> 
> From: Matt Mathis [mailto:mattmathis@google.com]
> Sent: Thursday, September 19, 2019 9:43 PM
> To: MORTON, ALFRED C (AL) <acm@research.att.com>
> Cc: Ruediger.Geib@telekom.de; ippm@ietf.org; CIAVATTONE, LEN 
> <lc9892@att.com>
> Subject: Re: How should capacity measurement interact with shaping?
> 
>  
> 
> I am actually more interested in the philosophical questions about how this should be reported, and what should the language be about non-stationary available capacity.   One intersecting issue: BBR converges on both the initial and final rate in under 2 seconds (this was a long path, so startup took more than a second).   Do users want a quick (and relatively cheap) test that takes 2 seconds or a longer test that is more likely to discover the token bucket?  How long?  If we want to call them different names, what should they be?
> 
> [acm]
> 
> So, if your trace is revealing a bimodal form of service rates, then it ought to be characterized with that in mind and allow for two modes of operation when reporting:
> 
> 94 initial peak Capacity, 83 sustained Capacity, *when this behavior is demonstrated and repeatable*.
> 
>  
> 
> Thanks for more insights about BBR, too.
> 
> Al
> 
>  
> 
> On the pure technical issues: BBR is still quite a moving target.   I have a paper in draft that will shed some light on this.  It is due to be unembargoed sometime in October.
> 
> BBRv1 (released slightly after the cacm paper you mention) measures the max_BW every 8 RTT.  BBRv2 measures the max_BW on a sliding schedule that loosely matches CUBIC.  (In both, min_RTT is measured every 10 seconds, in the absence of organic low RTT samples).   BBRv2 uses additional signals and does a much better job of avoiding overshoot at the startup.
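
The max_BW filtering can be pictured as a windowed maximum over recent delivery-rate samples, roughly as in this sketch (Python; a fixed sample count stands in for the RTT-based window, and this is not BBR's actual code):

    from collections import deque

    # Windowed max filter over delivery-rate samples, in the spirit of a
    # max_BW estimate: the estimate is the largest sample seen recently.
    def windowed_max_bw(rate_samples_mbps, window=8):
        recent = deque(maxlen=window)
        estimates = []
        for sample in rate_samples_mbps:
            recent.append(sample)
            estimates.append(max(recent))
        return estimates

    # The early peak ages out of the window and the estimate settles lower.
    print(windowed_max_bw([90, 94, 92, 83, 83, 83, 83, 83, 83, 83, 83, 83]))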
> 
>  
> 
> In any case the best (most stable) BBR-based metric seems to be delta(snd.una)/elapsed_time, which is the progress as seen by upper layers.  If you look at short time slices (we happen to be using 0.25 seconds) you see a mostly crisp square wave.  If you average from the beginning of the connection to now, the peak rate happens at the moment the bucket runs out of tokens, and falls towards the token rate after that.
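
A minimal sketch of that calculation (Python; the 0.25 s slice and the cumulative acked-byte samples below are invented for illustration):

    # Progress as seen by upper layers: delta(snd.una) divided by elapsed
    # time, i.e. the average goodput from the start of the connection to
    # each sample point. The running average peaks roughly when the token
    # bucket runs out and then decays towards the token rate.
    def running_goodput_mbps(cum_acked_bytes, slice_s=0.25):
        rates = []
        for i, acked in enumerate(cum_acked_bytes[1:], start=1):
            rates.append(acked * 8 / (i * slice_s) / 1e6)
        return rates

    acked = [0, 3_000_000, 6_000_000, 8_500_000, 11_000_000]   # bytes acked so far
    rates = running_goodput_mbps(acked)
    print(rates, "- peak at slice", rates.index(max(rates)) + 1)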
> 
> 
> 
> Thanks,
> 
> --MM--
> The best way to predict the future is to create it.  - Alan Kay
> 
> We must not tolerate intolerance;
> 
>        however our response must be carefully measured: 
> 
>             too strong would be hypocritical and risks spiraling out 
> of control;
> 
>             too weak risks being mistaken for tacit approval.
> 
>  
> 
>  
> 
> On Thu, Sep 19, 2019 at 3:35 PM MORTON, ALFRED C (AL) <acm@research.att.com> wrote:
> 
> Thanks Matt!  This is an interesting trace to consider, and an important discussion to share with the group.
> 
> When I look at the equation for BBR:
> 
> https://cacm.acm.org/magazines/2017/2/212428-bbr-congestion-based-congestion-control/fulltext
> 
>  
> 
> both BBR and the Maximum IP-layer Capacity Metric seek the Max over some time interval. The window seems smaller for BBR: 6 to 10 RTTs, whereas we’ve been using parameters that result in a rate measurement once a second and take the max of the 10 one-second measurements.
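
In code form, taking the Max over the ten one-second measurements is simply the following (a sketch; bytes_per_second is an assumed input giving the IP-layer bytes delivered in each one-second interval):

    # Maximum IP-layer Capacity as the max of ten one-second delivered rates.
    def max_ip_capacity_mbps(bytes_per_second):
        return max(b * 8 / 1e6 for b in bytes_per_second)

    bytes_per_second = [11.5e6, 11.8e6, 11.7e6, 10.2e6, 10.4e6,
                        10.4e6, 10.4e6, 10.4e6, 10.4e6, 10.4e6]
    print(max_ip_capacity_mbps(bytes_per_second))   # -> 94.4 (Mbps)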
> 
>  
> 
> We’ve also evaluated several performance metrics when adjusting load, and that determines how high the sending rate will go (based on feedback from the receiver).
> 
> https://tools.ietf.org/html/draft-morton-ippm-capcity-metric-method-00#section-4.3
> 
>  
> 
> So, the MAX delivered rate for the 10-second test, as we can all see, is 94.5 Mbps. This rate was sustained for more than a trivial amount of time, too. But if you are concerned that this rate was somehow inflated by a large buffer and a large burst tolerance in the shaper – that’s where the additional metrics and slightly different sending rate control that we described in the draft (and the slides) might help.
> 
> https://datatracker.ietf.org/meeting/105/materials/slides-105-ippm-metrics-and-methods-for-ip-capacity-00
> 
>  
> 
> IOW, it might well be that Max IP Capacity, measured as we designed and parameterized it, measures 83 Mbps for this path (assuming the 94.5 is the result of big overshoot at sender, and the fluctuating performance afterward seems to support that).
> 
>  
> 
> When I was looking for background on BBR, I saw a paper comparing BBR and CUBIC during drive tests:
> 
> http://web.cs.wpi.edu/~claypool/papers/driving-bbr/
> 
> One pair of plots seemed to indicate that BBR sent lots of Bytes early on, and grew the RTT pretty high before settling down (Figure 5, a & b). This looks a bit like the case you described below, except 94.5 Mbps is a Received Rate – we don’t know what came out of the network, just what went in and filled a buffer before crashing down in the drive test.
> 
>  
> 
> So, I think I did more investigation than justification for my answers, but I conclude that parameters like the individual measurement intervals and the overall time interval from which the Max is drawn, plus the rate control algorithm itself, play a big role here.
> 
>  
> 
> regards,
> 
> Al
> 
>  
> 
>  
> 
> From: Matt Mathis [mailto:mattmathis@google.com]
> Sent: Thursday, September 19, 2019 5:18 PM
> To: MORTON, ALFRED C (AL) <acm@research.att.com>; 
> Ruediger.Geib@telekom.de
> Cc: ippm@ietf.org
> Subject: Fwd: How should capacity measurement interact with shaping?
> 
>  
> 
> Ok, moving the thread to IPPM
> 
>  
> 
> Some background, we (Measurement Lab) are testing a new transport (TCP) performance measurement tool, based on BBR-TCP.   I'm not ready to talk about results yet (well ok, it looks pretty good).    (BTW the BBR algorithm just happens to resemble the algorithm described in draft-morton-ippm-capcity-metric-method-00.)
> 
>  
> 
> Anyhow, we noticed some interesting performance features for a number of ISPs in the US and Europe, and I wanted to get some input on how these cases should be treated.
> 
>  
> 
> One data point, a single trace saw ~94.5 Mbit/s for ~4 seconds, fluctuating performance ~75 Mb/s for ~1 second and then stable performance at ~83Mb/s for the rest of the 10 second test.    If I were to guess this is probably a policer (shaper?) with a 1 MB token bucket and a ~83Mb/s token rate (these numbers are not corrected for header overheads, which actually matter with this tool).  What is weird about it is that different ingress interfaces to the ISP (peers or serving locations) exhibit different parameters.  
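
The boost-then-sustain shape is what a token-bucket shaper or policer with a deep bucket produces; a toy simulation (all parameters below are made up, not taken from the trace) illustrates the pattern:

    # Toy token-bucket shaper: traffic passes at the offered rate while
    # tokens remain; once the bucket empties, throughput settles at the
    # token rate. Parameters are illustrative only.
    def shaped_rates(offered_mbps, token_mbps, bucket_mbit, duration_s, step_s=0.5):
        tokens = bucket_mbit
        rates = []
        for _ in range(int(duration_s / step_s)):
            tokens += token_mbps * step_s
            sent = min(offered_mbps * step_s, tokens)
            tokens -= sent
            rates.append(round(sent / step_s, 1))
        return rates

    # Boost at the offered rate for a few seconds, then the token rate.
    print(shaped_rates(offered_mbps=100, token_mbps=80, bucket_mbit=50, duration_s=10))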
> 
>  
> 
> Now the IPPM measurement question:   Is the bulk transport capacity of this link ~94.5 Mbit/s or ~83Mb/s?   Justify your answer....?
> 
>  
> 
> Thanks,
> 
> --MM--
> The best way to predict the future is to create it.  - Alan Kay
> 
> We must not tolerate intolerance;
> 
>        however our response must be carefully measured: 
> 
>             too strong would be hypocritical and risks spiraling out 
> of control;
> 
>             too weak risks being mistaken for tacit approval.
> 
>  
> 
> Forwarded Conversation
> Subject: How should capacity measurement interact with shaping?
> ------------------------
> 
>  
> 
> From: Matt Mathis <mattmathis@google.com>
> Date: Thu, Aug 15, 2019 at 8:55 AM
> To: MORTON, ALFRED C (AL) <acm@research.att.com>
> 
>  
> 
> We are seeing shapers with huge bucket sizes, perhaps as large as or larger than 100 MB.
> 
>  
> 
> These are prohibitive to test by default, but can have a huge impact in some common situations.  E.g. downloading software updates.
> 
>  
> 
> An unconditional pass is not good, because some buckets are small.  What counts as large enough to be ok, and what "derating" is ok?
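
One way to reason about "large enough" is the time a test needs to drain the bucket and expose the token rate, roughly the bucket size divided by the rate excess; a back-of-the-envelope sketch (example numbers only):

    # Time to drain a token bucket when sending above the token rate:
    # t ~= bucket_bytes / (peak_rate - token_rate). With a 100 MB bucket
    # and a small rate excess, this quickly becomes prohibitively long.
    def seconds_to_drain(bucket_bytes, peak_mbps, token_mbps):
        excess_bytes_per_s = (peak_mbps - token_mbps) * 1e6 / 8
        return bucket_bytes / excess_bytes_per_s

    print(seconds_to_drain(100e6, peak_mbps=1000, token_mbps=500))   # 1.6 s
    print(seconds_to_drain(100e6, peak_mbps=550, token_mbps=500))    # 16.0 s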
> 
> 
> 
> Thanks,
> 
> --MM--
> The best way to predict the future is to create it.  - Alan Kay
> 
> We must not tolerate intolerance;
> 
>        however our response must be carefully measured: 
> 
>             too strong would be hypocritical and risks spiraling out 
> of control;
> 
>             too weak risks being mistaken for tacit approval.
> 
>  
> 
> ----------
> From: MORTON, ALFRED C (AL) <acm@research.att.com>
> Date: Mon, Aug 19, 2019 at 5:08 AM
> To: Matt Mathis <mattmathis@google.com>
> Cc: CIAVATTONE, LEN <lc9892@att.com>, Ruediger.Geib@telekom.de 
> <Ruediger.Geib@telekom.de>
> 
>  
> 
> Hi Matt, currently cruising between Crete and Malta, with about 7 days of vacation remaining – adding my friend Len. You know Rüdiger. It appears I’ve forgotten how to type in 2 weeks, given the number of typos I’ve fixed so far...
> 
>  
> 
> We’ve seen big buffers on a basic DOCSIS cable service (downlink >2 sec), but:
> 
>   we have 1-way delay variation or RTT variation limits when searching for the max rate, so that not many packets queue in the buffer
> 
>   we want the status messages that result in rate adjustment to return in a reasonable amount of time (50 ms + RTT)
> 
>   we usually search for 10 seconds, but if we go back and test with a fixed rate, we can see the buffer growing if the rate is too high.
> 
>   There will eventually be a discussion on the thresholds we use in the search // load rate control algorithm. The copy of Y.1540 I sent you has a simple one; we moved beyond that now (see the slides I didn’t get to present at IETF).
> 
>   There is value in having some of this discussion on IPPM-list, so we get some *agenda time at IETF-106*
> 
> We measure rate and performance, with some performance limits built-in.  Pass/Fail is another step, de-rating too (made sense with MBM “target_rate”).
> 
>  
> 
> Al
> 
>  
> 
> ----------
> From: <Ruediger.Geib@telekom.de>
> Date: Mon, Aug 26, 2019 at 12:05 AM
> To: <acm@research.att.com>
> Cc: <lc9892@att.com>, <mattmathis@google.com>
> 
>  
> 
> Hi Al,
> 
>  
> 
> thanks for keeping me involved. I don’t have a precise answer and doubt that there will be a single universal truth.
> 
>  
> 
> If the aim is only to determine the IP bandwidth of an access, then we aren’t interested in filling a buffer. Buffering events may occur, some of which are useful and to be expected, whereas others are not desired:
> 
>  
> 
> 	• Sender shaping behavior may matter (is traffic at the source CBR or is it bursty)
> 	• Random collisions should be tolerated at the access whose bandwidth is to be measured.
> 	• Limiting packet drop due to buffer overflow is a design aim or an important part of the algorithm, I think.
> 	• Shared media might create bursts. I’m not an expert in the area, but there’s an “is bandwidth available” check in some cases between a central sender using a shared medium and the connected receivers. WiFi and maybe other wireless equipment also buffers packets to optimize wireless resource usage.
> 	• It might be an idea to mark some flows by ECN, once there’s a guess at a sending bitrate at which to expect no or very little packet drop. Today, this is experimental. CE marks by an ECN-capable device should be expected roughly once queuing starts.
>  
> 
> Practically, the set-up should be configurable with commodity hardware and software, and all metrics should be measurable at the receiver. What needs to be distinguished is the burstiness of the traffic, and the difference between queuing events which are to be expected and (undesired) queue build-up. I hope that can be done with commodity hardware and software. I at least am not able to write down a simple metric distinguishing expected queues from (undesired) queue build-up causing congestion. The hardware and software used should be part of the solution, not part of the problem (bursty source traffic and timestamps with insufficient accuracy to detect queues are what I’d like to avoid).
> 
>  
> 
> I’d suggest moving the discussion to the list.
> 
>  
> 
> Regards,
> 
>  
> 
> Rüdiger
> 
>  
> 
> ----------
> From: MORTON, ALFRED C (AL) <acm@research.att.com>
> Date: Thu, Sep 19, 2019 at 7:01 AM
> To: Ruediger.Geib@telekom.de <Ruediger.Geib@telekom.de>
> Cc: CIAVATTONE, LEN <lc9892@att.com>, mattmathis@google.com 
> <mattmathis@google.com>
> 
>  
> 
> I’m catching-up with this thread again, but before I reply:
> 
>  
> 
> *** Any objection to moving this discussion to IPPM-list ?? ***
> 
>  
> 
> @Matt – this is a question to you at this point...
> 
>  
> 
> thanks,
> 
> Al
> 

_______________________________________________
ippm mailing list
ippm@ietf.org
https://www.ietf.org/mailman/listinfo/ippm