Re: [ippm] Fwd: How should capacity measurement interact with shaping?

<Ruediger.Geib@telekom.de> Wed, 25 September 2019 11:00 UTC

From: Ruediger.Geib@telekom.de
To: joachim.fabini@tuwien.ac.at
CC: ippm@ietf.org
Date: Wed, 25 Sep 2019 10:57:35 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/ippm/or1U5tDwIEgzwizG3driE0QiWNY>
Subject: Re: [ippm] Fwd: How should capacity measurement interact with shaping?

Hi Joachim,

the QoE measurement campaigns I mentioned were performed on both laboratory and public LTE networks. The researchers wanted to obtain results that relate to consumer experience. I regard QoE measurements as a source of general guidance on what is being done on a subject that also relates to QoS, but I don't expect them to serve as a blueprint for QoS measurements.

Pre-loading may be a useful approach to reach "stable" mobile network conditions. Hopefully there are ways to specify "stable" that allow separate but compatible implementations of a metric.

My closing statement below was meant to express that a metric determining a stable BTC should consider packet loss and queue build-up (i.e., increasing one-way latency during "stable" conditions) as indicators of congestion. The highest sending rate that does not cause congestion seems to me a useful basis for a BTC metric. Another one might be the burst size that can be transmitted while causing congestion only through queue build-up, but no packet drop.
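A minimal receiver-side sketch of such an indicator, assuming per-packet one-way delays are available; the function name and its thresholds are illustrative assumptions, not a proposed definition:

    # Minimal sketch (illustrative thresholds, not a proposed definition):
    # flag a fixed-rate interval as "congesting" if it shows packet loss
    # or a clear one-way-delay increase, i.e. queue build-up.
    def interval_congested(owd_ms, packets_sent,
                           owd_rise_ms=5.0, loss_frac=0.001):
        """owd_ms: one-way delays (ms) of packets received in the interval;
        packets_sent: packets sent into the same interval."""
        lost = packets_sent - len(owd_ms)
        if packets_sent and lost / packets_sent > loss_frac:
            return True                    # packet loss -> congestion
        if len(owd_ms) >= 8:
            q = len(owd_ms) // 4
            early = sum(owd_ms[:q]) / q    # delay at the start of the interval
            late = sum(owd_ms[-q:]) / q    # delay at the end of the interval
            if late - early > owd_rise_ms:
                return True                # sustained queue build-up
        return False

The "highest sending rate not causing congestion" would then simply be the largest tested rate for which this check stays False over the whole interval.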

I fully agree that receiver-based measurements, or the receiver's perspective respectively, should be taken into account.

Regards, Ruediger
 

-----Original Message-----
From: Joachim Fabini <joachim.fabini@tuwien.ac.at> 
Sent: Wednesday, 25 September 2019 11:55
To: Geib, Rüdiger <Ruediger.Geib@telekom.de>
Cc: ippm@ietf.org
Subject: Re: AW: [ippm] Fwd: How should capacity measurement interact with shaping?

Hi Ruediger,

thank you for your comments, answers inline.

On 24/09/2019 12:14, Ruediger.Geib@telekom.de wrote:
> Hi Joachim,
> 
> thanks for adding a discussion related to mobile access behavior. I've read some "streaming QoE over LTE" related publications. These are not BTC related, but the interaction of LTE and CCA also impacts the packet- and flow-related measurements performed. One conclusion of these publications is that throughput fluctuations characterize an LTE access network better than average throughput values alone.

Did these publications use isolated LTE networks or public ones?
Whenever (access) network capacity is shared between users (which is the case for most cellular users in a cell), it is a potential source of measurement non-determinism that is {hard|impossible} to control. Measurement results depend to a substantial extent on the traffic profile and timing of competing devices and users.

> 
> Your discussion of whether to consider an access showing 94.5 Mbit/s throughput for a short time and 83 Mbit/s for a longer interval may indicate that singleton, short-interval BTC measurements, used as samples to build a (or some) BTC statistics_to_be_defined, might be a useful approach.

Yes, I like your proposal. A BTC singleton could be measured after detecting stable conditions and for an interval that is sufficiently short to increase the likelihood of almost constant parameters. The need to reflect the "true" path BTC (i.e., the *receiver's* perspective) and detect/eliminate intermediate node buffering is challenging but imo crucial for such experiments.
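As a rough illustration of what such a singleton could look like from the receiver's perspective (the 100 ms sub-interval stability check and the spread threshold are assumptions of this sketch, not part of any draft):

    # Rough sketch: a receiver-side BTC singleton as bytes delivered over a
    # short interval, reported only if the rate stayed roughly constant.
    def btc_singleton_mbps(arrivals, t0, t1, max_rel_spread=0.1):
        """arrivals: list of (timestamp_s, payload_bytes) at the receiver."""
        in_win = [(t, b) for t, b in arrivals if t0 <= t < t1]
        total_bytes = sum(b for _, b in in_win)
        rate_mbps = total_bytes * 8 / (t1 - t0) / 1e6
        # crude stability check over 100 ms sub-intervals (placeholder)
        bins = {}
        for t, b in in_win:
            bins[int((t - t0) / 0.1)] = bins.get(int((t - t0) / 0.1), 0) + b
        sub_rates = [v * 8 / 0.1 / 1e6 for v in bins.values()]
        stable = sub_rates and \
            (max(sub_rates) - min(sub_rates)) <= max_rel_spread * rate_mbps
        return rate_mbps if stable else None   # None: conditions not "stable"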

> The above QoE experts use receiver statistics, as you suggest too; however, their activities are passive, whereas the BTC measurements discussed here are active. I add this to indicate that a mobile access IPPM BTC metric discussion will have to produce its own approaches.

The good news about BTC in mobile networks is that (at least in the past, when I did measurements) the scheduling/capacity algorithm predictability and determinism increased for greedy ("the more you ask for, the more you get") users. And BTC measurements are greedy by definition.

> I agree that defining "long term average data rate" in a way which is useful to an implementor's use case is desirable (the same holds for guidance on a short-term singleton BTC metric).

Yes, but again: in black-box measurements (without exact knowledge of the measurement path implementation and configuration) there is imo an imperative need for the *receiver's* perspective. The *sender* itself can't differentiate between the stateful policing, shaping and buffering that may happen on the network path.

> To start a discussion: a BTC singleton could consist of back-to-back packets with a total length corresponding to an important or expected bottleneck burst tolerance (and/or buffer depth). 

Hmmh. If buffering is known to happen on the path, I consider the BTC singleton that you mention to be a *prerequisite* for the actual BTC singleton measurement: the active measurement must send sufficient data to fill all buffers within the network path under test. Only subsequent packets allow inference on the path's true BTC.

As a side note: an identical approach ("preload") is used for active delay measurements in mobile networks: transferring a large data burst over an idle 3G/4G link makes the radio scheduler allocate substantial radio resources (i.e., high capacity) to this link. Sending a delay measurement packet immediately after the data burst yields delay values that are orders of magnitude lower than sending the same packets on an almost idle link.

I mentioned this detail because of two aspects:
1. The same mechanism could be used to create adequate measurement prerequisites for mobile cellular networks and for fixed access networks, for BTC and for delay measurements. The packets serve distinct purposes - forcing capacity allocation in cellular networks vs. filling buffers in fixed network paths - but the same method may work for both cases (see the sketch after this list).
2. Measurement results nowadays depend on the use case (i.e., the context in which they have been acquired). They reflect the performance of the network in a particular state that we can (sometimes) artificially enforce as a measurement prerequisite. If the state is the one encountered in typical operation, then the results are representative (which will typically be true for BTC). Otherwise they may be misleading.
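A bare-bones sketch of the preload idea; send_burst() and measure() are placeholder callables assumed to be provided by a test tool, not taken from any existing implementation:

    # Bare-bones sketch of "preload first, measure afterwards".
    def preload_then_measure(send_burst, measure, preload_bytes):
        send_burst(preload_bytes)  # cellular: drive the radio scheduler into
                                   # a high-capacity state; fixed path: fill
                                   # the buffers along the path under test
        return measure()           # infer BTC / delay only from packets sent
                                   # after the preload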

> These singletons should be spaced in a way that a loss and may be queue free long term BTC results. 

Sorry, I don't get this last sentence (something missing?).

Regards
Joachim




> -----Original Message-----
> From: Joachim Fabini <joachim.fabini@tuwien.ac.at>
> Sent: Tuesday, 24 September 2019 11:20
> To: Matt Mathis <mattmathis=40google.com@dmarc.ietf.org>; ALFRED C 
> (AL) <acm@research.att.com>; Geib, Rüdiger <Ruediger.Geib@telekom.de>
> Cc: ippm@ietf.org
> Subject: Re: [ippm] Fwd: How should capacity measurement interact with shaping?
> 
> Matt,
> 
> access technology was the first question I wanted to ask yesterday, you answered it in your last email.
> Some more comments, questions, and observations:
> - On-demand capacity allocation over time was/is common for mobile cellular access technologies. Superposition of algorithms at various layers (cellular access capacity management, buffering, users in the cell and CCA) generates challenging traces.
> - Some details on the measurement architecture could help: where did you measure BTC: at the receiver, at the sender, at intermediate nodes? Time granularity/resolution?
> - Al made several good points in his replies; the overshooting of BBR in Fig. 5a of http://web.cs.wpi.edu/~claypool/papers/driving-bbr/paper-final.pdf is a hint that *if* there are buffers on the path, BBR will attempt to use them. It's the sequence cache-memory-network: first you fill up all local buffers, then the ones at intermediate nodes on the path, and only later can you measure the network capacity. Still, 4 seconds for the initial 94.5 Mbit/s burst means truly large buffers...
> 
> You asked for an answer and justification - here we go:
> Boiling down to the essence, your question (which capacity to prefer,
> 94.5 or 83 Mb/s) imo addresses a weakness of RFC 3148. In particular, a more accurate definition and refinement of "long term" could help.
> Quoting RFC 3148: "The intuitive definition of BTC is the expected *long
> term* average data rate (bits per second) of a single ideal TCP implementation over the path in question". Is *long term* 4 seconds?
> 10s? 1000s?
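A quick back-of-envelope with the approximate numbers from Matt's trace (94.5 Mb/s for about 4 s, roughly 75 Mb/s for about 1 s, then 83 Mb/s for the rest of the 10 s test) shows how strongly the choice of averaging window changes the answer:

    # Approximate segments from the trace: (rate in Mb/s, duration in s).
    segments = [(94.5, 4.0), (75.0, 1.0), (83.0, 5.0)]

    def avg_rate_mbps(segments, window_s):
        """Average rate over the first window_s seconds of the trace."""
        used, mbits = 0.0, 0.0
        for rate, dur in segments:
            take = min(dur, window_s - used)
            mbits += rate * take
            used += take
            if used >= window_s:
                break
        return mbits / used

    print(avg_rate_mbps(segments, 4.0))    # ~94.5 Mb/s over the first 4 s
    print(avg_rate_mbps(segments, 10.0))   # ~86.8 Mb/s over the full 10 s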
> 
> My interpretation is that you meant a "time interval sufficiently large for CCA to converge and reach a steady state" (D1). However, this definition considers E2E control/CCA *only* and overlooks the pitfall that Al and I attempted to address in RFC 7312 for delay measurements in mobile networks: today's (access) networks are stateful (the copper-wire abstraction fails). This stateful network policing interacts with the end-to-end CCA and produces artificial, non-representative and non-repeatable measurement results. In DOCSIS (which I did not measure) it seems that buffering is the main issue.
> 
> Some more reasoning on your BTC measurements:
> - In your measurements, when BTC is measured at the sender, the informal definition (D1) seems to hold true for a flow that lasts for slightly less than 4 seconds. But do BTC measurements at the receiver's end yield the same results? They may or may not.
> - If the flow duration exceeds 4 seconds, the convergence criterion (as seen by the sender) is no longer fulfilled: conditions change.
> - If I'm asked whether 94.5 Mb/s or 83 Mb/s is "the" path BTC, my decision depends on the receiver view: (a) if some buffers on the network path fill up, but the data rate at the receiver is only 83 Mb/s, then the network path's BTC is 83 Mb/s. (b) If there's a shaper on the path that grants an initial "data credit" to the sender at full speed and only later on shapes, i.e., the receiver can measure 94.5 Mb/s for the first 4 seconds, then I'd agree to a BTC of 94.5 Mb/s - however, only for the first 4-second interval or for the first y Mbytes of data.
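A hypothetical helper along these lines (an assumption for discussion, not an established IPPM method) could estimate the shaper's token rate and initial "data credit" from a receiver-side rate trace:

    # Hypothetical helper: from a receiver-side rate trace, estimate the
    # shaper's long-term token rate (final stable rate) and the initial
    # credit, i.e. the token-bucket size.
    def estimate_shaper(trace):
        """trace: (rate_mbps, duration_s) segments at the receiver, in time
        order, e.g. [(94.5, 4.0), (75.0, 1.0), (83.0, 5.0)]."""
        token_mbps = trace[-1][0]            # assume the tail is the shaped rate
        credit_mbyte = sum((r - token_mbps) * d
                           for r, d in trace if r > token_mbps) / 8
        return token_mbps, credit_mbyte

This is only a rough receiver-view estimate: header overheads, transient dips and the granularity of the trace are all ignored.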
> 
> My point is that (unfortunately) the measurement of BTC for a network path must be tied to use cases to be meaningful. I'm truly convinced that with today's networks it's an illusion to have one single, representative value "BTC is x Mb/s". Decoupling BTC from the network path's state and its evolution with time may not be feasible.
> 
> Reasoning on objective BTC:
> First, the measurement of stateful-network BTC mandates taking into consideration parameters like (a) the network path implementation, configuration and behavior, (b) the measurement point (sender, receiver, intermediate point), and (c) the measurement flow characteristics (duration, total amount of data? any parameter that influences the network state wrt the flow?). These are just examples; it may make sense to structure and categorize these influencing parameters.
> 
> Second, the definition (D1) in RFC 3148 could be refined. The *long term* should be made explicit, covering two mandatory components: (a) CCA (end-to-end control) reaching a steady state and (b) the network path under observation reaching a steady state, both over (c) a well-specified interval.
> 
> In order to be meaningful (and scientifically sound), BTC as an intuitive metric should imho reflect a network path's ability to offer a sustained transfer capacity under (almost) constant conditions for a specified time interval (and/or for a specified amount of data?). Still, architecture and implementation of today's (access) networks make it challenging to identify those intervals during which we have constant conditions.
> 
> So the key question is whether one can provide a BTC definition that is accurate, representative and simple/usable for the broad mass. (*) Al and his co-authors have included some key aspects in the capacity-metric draft; some aspects of this thread may be added there, too. But BTC imo is different and even more complex to measure.
> 
> regards Joachim
> PS: (*) Leaving aside uncertainty that we will never be able to handle in measurements because of the capacity-sharing in today's access networks (users-in-a-cell, cell usage).
> 
> 
> 
> On 19/09/2019 23:17, Matt Mathis wrote:
>> Ok, moving the thread to IPPM
>>
>> Some background, we (Measurement Lab) are testing a new transport
>> (TCP) performance measurement tool, based on BBR-TCP.   I'm not ready 
>> to talk about results yet (well ok, it looks pretty good).    (BTW 
>> the BBR algorithm just happens to resemble the algorithm described in
>> draft-morton-ippm-capacity-metric-method-00.)
>>
>> Anyhow we noticed some interesting performance features for a number of 
>> ISPs in the US and Europe and I wanted to get some input for how 
>> these cases should be treated.
>>
>> One data point, a single trace saw ~94.5 Mbit/s for ~4 seconds, 
>> fluctuating performance ~75 Mb/s for ~1 second and then stable 
>> performance at ~83Mb/s for the rest of the 10 second test.    If I 
>> were to guess this is probably a policer (shaper?) with a 1 MB token 
>> bucket and a ~83Mb/s token rate (these numbers are not corrected for 
>> header overheads, which actually matter with this tool).  What is 
>> weird about it is that different ingress interfaces to the ISP (peers 
>> or serving locations) exhibit different parameters.
>>
>> Now the IPPM measurement question:   Is the bulk transport capacity 
>> of this link ~94.5 Mbit/s or ~83Mb/s?   Justify your answer....?
>>
>> Thanks,
>> --MM--
>> The best way to predict the future is to create it.  - Alan Kay
>>
>> We must not tolerate intolerance;
>>        however our response must be carefully measured:
>>             too strong would be hypocritical and risks spiraling out 
>> of control;
>>             too weak risks being mistaken for tacit approval.
>>
>> Forwarded Conversation
>> Subject: How should capacity measurement interact with shaping?
>> ------------------------
>>
>> From: *Matt Mathis* <mattmathis@google.com>
>> Date: Thu, Aug 15, 2019 at 8:55 AM
>> To: MORTON, ALFRED C (AL) <acm@research.att.com>
>>
>>
>> We are seeing shapers with huge bucket sizes, perhaps as large as or 
>> larger than 100 MB.
>>
>> These are prohibitive to test by default, but can have a huge impact 
>> in some common situations.  E.g. downloading software updates.
>>
>> An unconditional pass is not good, because some buckets are small. 
>> What counts as large enough to be ok, and what "derating" is ok?
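Some quick arithmetic on why 100 MB-class buckets are prohibitive to test by default; the peak and token rates below are assumptions chosen only for illustration:

    # Why a 100 MB-class bucket is prohibitive to test (illustrative rates).
    bucket_mbyte = 100.0        # bucket size being probed
    peak_mbps    = 100.0        # rate during the initial burst phase
    token_mbps   = 80.0         # long-term shaped rate
    burst_s = bucket_mbyte * 8 / (peak_mbps - token_mbps)
    data_mbyte = peak_mbps / 8 * burst_s + token_mbps / 8 * 10
    print(burst_s)              # 40.0 s at peak before the bucket drains
    print(data_mbyte)           # ~600 MB sent before 10 s of the post-bucket
                                # rate can even be observed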
>>
>> Thanks,
>> --MM--
>> The best way to predict the future is to create it.  - Alan Kay
>>
>> We must not tolerate intolerance;
>>        however our response must be carefully measured:
>>             too strong would be hypocritical and risks spiraling out 
>> of control;
>>             too weak risks being mistaken for tacit approval.
>>
>>
>> ----------
>> From: *MORTON, ALFRED C (AL)* <acm@research.att.com>
>> Date: Mon, Aug 19, 2019 at 5:08 AM
>> To: Matt Mathis <mattmathis@google.com>
>> Cc: CIAVATTONE, LEN <lc9892@att.com>, Ruediger.Geib@telekom.de
>>
>>
>> Hi Matt, currently cruising between Crete and Malta,
>> with about 7 days of vacation remaining – Adding my friend Len.
>> You know Rüdiger. It appears I’ve forgotten how to type in 2 weeks
>> given the number of typos I’ve fixed so far...
>>
>> We’ve seen big buffers on a basic DOCSIS cable service (downlink >2 sec)
>> but,
>>   we have 1-way delay variation or RTT variation limits
>>   when searching for the max rate, so that not many packets
>>   queue in the buffer
>>
>>   we want the status messages that result in rate adjustment to return
>>   in a reasonable amount of time (50ms + RTT)
>>
>>   we usually search for 10 seconds, but if we go back and test with
>>   a fixed rate, we can see the buffer growing if the rate is too high.
>>
>>   There will eventually be a discussion on the thresholds we use
>>   in the search // load rate control algorithm. The copy of
>>   Y.1540 I sent you has a simple one, we moved beyond that now
>>   (see the slides I didn’t get to present at IETF).
>>
>>   There is value in having some of this discussion on IPPM-list,
>>   so we get some **agenda time at IETF-106**
>>
>> We measure rate and performance, with some performance limits
>> built-in.  Pass/Fail is another step, de-rating too (made sense
>> with MBM “target_rate”).
>>
>> Al
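A much-simplified sketch of such a search // load rate control loop; the step size and the placeholder send_at_rate() helper are assumptions, and this is not the Y.1540 or draft algorithm:

    # Much-simplified sketch of a rate search bounded by delay-variation
    # and loss limits.
    def search_max_rate(send_at_rate, start_mbps, step_mbps, max_mbps,
                        max_rtt_var_ms=50.0, duration_s=1.0):
        """send_at_rate(rate_mbps, duration_s) -> (loss_frac, rtt_var_ms)."""
        rate, best = start_mbps, 0.0
        while rate <= max_mbps:
            loss, rtt_var = send_at_rate(rate, duration_s)
            if loss > 0 or rtt_var > max_rtt_var_ms:
                break              # limits exceeded: stop increasing the rate
            best, rate = rate, rate + step_mbps
        return best                # highest rate that stayed within the limits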
>>
>>
>>
>> ----------
>> From: <Ruediger.Geib@telekom.de>
>> Date: Mon, Aug 26, 2019 at 12:05 AM
>> To: <acm@research.att.com>
>> Cc: <lc9892@att.com>, <mattmathis@google.com>
>>
>>
>> Hi Al,
>>
>> thanks for keeping me involved. I don’t have a precise answer and
>> doubt that there will be a single universal truth.
>>
>> If the aim is only to determine the IP bandwidth of an access, then
>> we aren’t interested in filling a buffer. Buffering events may occur,
>> some of which are useful and to be expected, whereas others are not
>> desired:
>>
>>   * Sender shaping behavior may matter (is traffic at the source CBR or
>>     is it bursty)
>>   * Random collisions should be tolerated at the access whose bandwidth
>>     is to be measured.
>>   * Limiting packet drop due to buffer overflow is a design aim or an
>>     important part of the algorithm, I think.
>>   * Shared media might create bursts. I’m not an expert in the area, but
>>     there’s an “is bandwidth available” check in some cases between a
>>     central sender using a shared medium and the receivers connected.
>>     WiFi and maybe other wireless equipment also buffers packets to
>>     optimize wireless resource usage.
>>   * It might be an idea to mark some flows by ECN, once there’s a guess
>>     on a sending bitrate at which no or very little packet drop is to be
>>     expected. Today, this is experimental. CE marks by an ECN-capable
>>     device should be expected roughly once queuing starts.
>>
>> Practically, the set-up should be configurable with commodity hard-
>> and software, and all metrics should be measurable at the receiver.
>> Burstiness of traffic and a distinction between queuing events which
>> are to be expected and (undesired) queue build-up are what needs to
>> be distinguished. I hope that can be done with commodity hard- and
>> software. I at least am not able to write down a simple metric
>> distinguishing queues to be expected from (undesired) queue build-up
>> causing congestion. The hard- and software to be used should be part
>> of the solution, not part of the problem (bursty source traffic and
>> timestamps with insufficient accuracy to detect queues are what I’d
>> like to avoid).
>>
>> I’d suggest moving the discussion to the list.
>>
>> Regards,
>>
>> Rüdiger
>>
>>
>>
>> ----------
>> From: *MORTON, ALFRED C (AL)* <acm@research.att.com>
>> Date: Thu, Sep 19, 2019 at 7:01 AM
>> To: Ruediger.Geib@telekom.de
>> Cc: CIAVATTONE, LEN <lc9892@att.com>, mattmathis@google.com
>>
>>
>> I’m catching-up with this thread again, but before I reply:
>>
>> *** Any objection to moving this discussion to IPPM-list ?? ***
>>
>> @Matt – this is a question to you at this point...
>>
>> thanks,
>> Al
>>
>> *From:* Ruediger.Geib@telekom.de
>> *Sent:* Monday, August 26, 2019 3:05 AM
>> *To:* MORTON, ALFRED C (AL) <acm@research.att.com>
>> *Cc:* CIAVATTONE, LEN <lc9892@att.com>; mattmathis@google.com
>> *Subject:* AW: How should capacity measurement interact with shaping?
>>
>>
>> _______________________________________________
>> ippm mailing list
>> ippm@ietf.org
>> https://www.ietf.org/mailman/listinfo/ippm
>>
> 
>