Re: [ippm] Magnus Westerlund's Discuss on draft-ietf-ippm-capacity-metric-method-06: (with DISCUSS)

Ruediger.Geib@telekom.de Wed, 03 March 2021 10:38 UTC

From: Ruediger.Geib@telekom.de
To: magnus.westerlund@ericsson.com, acm@research.att.com
CC: tpauly@apple.com, ianswett@google.com, draft-ietf-ippm-capacity-metric-method@ietf.org, ippm-chairs@ietf.org, ippm@ietf.org, iesg@ietf.org
Thread-Topic: Magnus Westerlund's Discuss on draft-ietf-ippm-capacity-metric-method-06: (with DISCUSS)
Date: Wed, 03 Mar 2021 10:37:29 +0000
Message-ID: <FRYP281MB0112AA417F93CA05C0BC17429C989@FRYP281MB0112.DEUP281.PROD.OUTLOOK.COM>
References: <161426272345.2083.7668347127672505809@ietfa.amsl.com> <4D7F4AD313D3FC43A053B309F97543CF01476A0C0E@njmtexg5.research.att.com> <66f367953ae838c8ba7505c60e51367843117787.camel@ericsson.com> <4D7F4AD313D3FC43A053B309F97543CF01476A0FE3@njmtexg5.research.att.com> <HE1PR0702MB3772A66E2C0409F5A69DC7DA95999@HE1PR0702MB3772.eurprd07.prod.outlook.com>
In-Reply-To: <HE1PR0702MB3772A66E2C0409F5A69DC7DA95999@HE1PR0702MB3772.eurprd07.prod.outlook.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/ippm/qTCjn3uk4uXwRP4klYrUSTnw5aY>

Hi Magnus, hi Al

I'm available in the afternoon today. 

I agree that the presence and interpretation of sequence numbers may benefit from more text. I didn't implement or design the code. If sequence numbers are continuous across all intervals st, and each feedback message reports the number of packets received since the preceding feedback message plus the sequence number of the last packet received, this should be transparent (I'm not addressing re-ordering here, but continuous sequence numbers, and reporting them, help to some extent). I'm not sure whether the protocol can, or needs to be able to, detect that a first feedback message (or any other) was dropped. My take is: if the feedback channel is lossy (i.e., worse than infrequent random drops), any test designed to detect maximum capacity anywhere should stop as soon as possible. I prefer that to working out when a feedback message should have been present, whether it was lost, whether it could be re-transmitted, and so on.
The test preamble may also benefit from more text (after the preamble, an upper RTT limit should be known, and the sender may behave quite conservatively during the preamble phase).

[MW]: "For example, on the last point: I am worried when you define the step size as 1 Gbps above 10 Gbps. My fear is that 1 Gbps more traffic will increase the buffer occupancy so quickly"
The same step-width percentage is applied from 1 Mbps to 1 Gbps, in steps of 1 Mbps. In the Gbps range, the application is even more sensitive (1 Gbps steps start above 10 Gbps). An access of more than 10 Gbps is physically at least 20 Gbps (a bundle), and the likely shaper rate is often n*1 Gbps. st is set to 50 ms. From what I can tell, that buffer size is often available at that bandwidth. One could try to work with a smaller value of st at that bandwidth. The number of packets transmitted needs to ensure that a shaper or policer kicks in, i.e., it must exceed the burst tolerance of any traffic conditioner. That burst tolerance then roughly needs to be below st when expressed in ms at the shaper rate (simply fixing the number of packets sent during st might be a bad choice).
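To make the burst-tolerance condition concrete, here is a minimal sketch (Python; the shaper rate and token-bucket size below are assumed example values, not from the draft or this mail):

```python
def burst_exceeded(shaper_rate_bps: float, burst_bytes: float,
                   st_s: float = 0.050) -> bool:
    """True if sending at the shaper rate for one sub-interval st outlasts
    the traffic conditioner's burst tolerance, so shaping/policing engages."""
    burst_time_s = burst_bytes * 8 / shaper_rate_bps  # tolerance in seconds at the shaper rate
    return burst_time_s < st_s

# Assumed example: 1 Gbps shaper with a 1 MB token bucket -> 8 ms of burst,
# well below st = 50 ms, so the conditioner kicks in within the sub-interval.
print(burst_exceeded(1e9, 1_000_000))  # True
```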

Integration into an existing protocol - I leave this to implementers. I'd prefer the metric and method to be on the standards track, and making it part of an existing protocol shouldn't be a requirement. Al and Len have compared the implementation with TCP-based speed tests, and the implementation proved to measure access bandwidth with better accuracy and precision in many cases (under differing circumstances). I regard this as an important feature of a standard implementation. The IETF has so far not standardized a method to measure access speeds with high precision - I'm not happy with TCP-based speed tests as an informal standard while a method showing better precision is classified as "experimental" by the IETF (noting that your motivation derives from another issue here, but let's keep an eye on the whole thing). Please don't take this as "the purpose justifies the means" - implementations need to work reliably, of course.


Al, there's a nit in Section 6.5:

OLD
   A single constant interval dt SHOULD be
   chosen so that is an integer multiple of increasing values k times
                          ^^^^
   serialisation delay of a path MTU at the physical interface speed
   where traffic conditioning is expected.

NEW
   A single constant interval dt SHOULD be
   chosen so that it is an integer multiple of increasing values k times
                          ^^^^
   serialisation delay of a path MTU at the physical interface speed
   where traffic conditioning is expected.

Regards,

Ruediger

-----Original Message-----
From: Magnus Westerlund <magnus.westerlund@ericsson.com> 
Sent: Tuesday, 2 March 2021 11:27
To: acm@research.att.com; iesg@ietf.org
Cc: tpauly@apple.com; ianswett@google.com; draft-ietf-ippm-capacity-metric-method@ietf.org; ippm-chairs@ietf.org; ippm@ietf.org
Subject: RE: Magnus Westerlund's Discuss on draft-ietf-ippm-capacity-metric-method-06: (with DISCUSS)

Hi Al,

Would it be possible for you and your co-authors to call in to the TSV office hours tomorrow, Wednesday, at 16:30 CET to discuss this document?  

Meeting link:
    https://ietf.webex.com/ietf/j.php?MTID=m689aa12ae4b319f4e371988b8330a863 
Meeting number:
    185 768 2828
Password:
    PNvNTuQm823


Please see inline. 

> > > >
> > > > ------------------------------------------------------------------
> > > > ----
> > > > DISCUSS:
> > > > ------------------------------------------------------------------
> > > > ----
> > > >
> > > > A) Section 8. Method of Measurement
> > > >
> > > > I think the metrics are fine, what makes me quite worried here is
> > > > the measurement method. My concerns with it are the following.
> > > >
> > > > 1. The application of this measurement method is not clearly scoped.
> <snip>
> [acm] we agreed on text adding "access" applicability to the scope section.
> 
> > > > However in
> > > > that context I think the definition and protection against severe
> > > > congestion has significant short comings. The main reason is that
> > > > the during a configurable time period (default 1 s) the sender
> > > > will attempt to send at a specified rate by a table independently
> > > > on what happens during that second.
> > >
> > > [acm]
> > > Not quite, 1 second is the default measurement interval for
> > > Capacity, but sender rate adjustments occur much faster (and we add
> > > a default at 50ms). This is a an important point (and one that Ben
> > > also noticed, regarding variable F in section 8.1). So, I have added FT as a
> parameter in section 4:
> > >
> > > o FT, the feedback time interval between status feedback messages
> > > communicating measurement results, sent from the receiver to control
> > > the sender. The results are evaluated to determine how to adjust the
> > > current offered load rate at the sender (default 50ms)
> > >
> > > -=-=-=-=-=-=-=-
> > > Note that variable F in section 8.1 is redundant with parameter F in
> > Section
> > > 4,
> > > the number of flows (in-06). So we changed the section 8.1 variable
> > > F to
> > FT
> > > in the working text.
> >
> > Okay, that makes things clearer. With all the equal intervals in the
> > metrics I had misinterpreted that the transmission would also be
> > uniform during the measurement intervals.
> >
> > However, when rereading Section 8.1 I do have to wonder if the
> > non-cumulative feedback actually creates two issues. First, it appears
> > to lose information for reordering that crosses the time when the FT timer
> > fires, due to the reset.
> [acm]
> I don't understand how the sequence error counting "loses information"
> when reordered packets cross a measurement feedback boundary. I'm not
> sure what aspect of measurement you are "resetting", but I assume it is
>     "The accumulated statistics are then
>      reset by the receiver for the next feedback interval."
> 
> Suppose I have two measurement intervals and I receive:
> 
> ||  1  2  3  5  6 || 4  7  8  9 ...||
> 
> where || is the measurement feedback boundary.
> 
> Packet 4 arrives late enough from its original position to span the boundary.
> The 3->5 sequence is one sequence error, and the 4->7 sequence is another
> error.
> This example produces two sequence errors in different feedback intervals,
> but that's a typical measurement boundary problem. We can't get rid of
> measurement boundaries, and they affect many measurements.
> 
> Note that a reordered packet contributes to IP-Layer Capacity, by definition.
> 
> Perhaps you had some other scenario in mind?
> 

It was the above type of cases I was considering. I was worried that you don't detect some issues if you reset and have no memory of the last received packets. I think one obvious case that is not reported as out of order, for really short intervals, is when zero or a single packet is delivered per feedback interval. Then a reordering could look like this.

|| 1   || 3 || 2 || and, without a memory of the last received packet, this would not be an out-of-order event. Nor would a packet loss like this

|| 1 ||     ||  3 || 

So when FT is of the same magnitude as the sent packet interval, this measurement method appears to have issues. I have not analyzed whether a memory of the single last-seen packet number would solve most or all of these issues. 

At least it would also solve the issue of this

|| 1 2  3 4 || 6 7 8 ||, where I am also uncertain whether it is detected. 

So, I think more details are needed on what is required of the feedback mechanism to detect these types of issues. 
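A minimal sketch of the single last-seen packet number memory discussed above (hypothetical receiver-side code, not the draft's reference implementation), applied to the || 1 || 3 || 2 || case:

```python
class FeedbackCounter:
    """Receiver-side sequence tracking that keeps the last-seen sequence
    number across feedback intervals instead of resetting it."""

    def __init__(self):
        self.last_seen = 0   # sequence numbers assumed to start at 1
        self.anomalies = 0   # gaps/reordering seen in the current interval

    def on_packet(self, seq: int):
        if seq != self.last_seen + 1:          # gap or out-of-order arrival
            self.anomalies += 1
        self.last_seen = max(self.last_seen, seq)

    def feedback(self) -> int:
        """Report and reset the per-interval count; last_seen persists."""
        count, self.anomalies = self.anomalies, 0
        return count

# || 1 || 3 || 2 ||  -- one packet per feedback interval
fc = FeedbackCounter()
fc.on_packet(1); print(fc.feedback())  # 0
fc.on_packet(3); print(fc.feedback())  # 1: the 1 -> 3 gap is caught
fc.on_packet(2); print(fc.feedback())  # 1: the late packet 2 is flagged
```

The || 1 || || 3 || loss case is caught the same way: the empty interval reports 0, and the 1 -> 3 gap surfaces in the interval where packet 3 arrives.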

> 
> > In addition, if the feedback is not reliable, it loses the information
> > for that interval.
> [acm]
> That's right, and:
> 1. the sending rate does not increase or decrease without feedback
> 2. the feedback travels the reverse path, which the test does not congest
>    with test traffic
> 3. the running code has watchdog time-outs that *terminate the connection*
>    if either the sender or receiver goes quiet
> 
> In essence, the test method is not a reliable byte-stream transfer like TCP.
> We can shut down the test traffic very quickly if something is wrong and
> useful measurement is in question.
> 
> 

Al, I hope you understand that my concern here is that the document's description of the control algorithm is incomplete and is missing important protection aspects, as well as its behavior under different parameterizations in various conditions. 

> > And making feedback reliable could cause worse HOL issues when reacting
> > to later feedback that is received prior to the lost one.
> >
> [acm]
> So the alternative to un-reliable feedback can be worse?
> Good thing it's not planned.

So let's assume that the receiver measures two feedback intervals in a row in which there were reordering events etc. In case the first report is lost, would there then, from the sender's perspective, be an advantage in knowing that both intervals indicated issues at this rate? It at least indicates that the measurement that one is over capacity is consistent. 

Also, will the sender know that feedback is missing? This all depends on the protocol used for the measurements. 

So I think there are two existing protocols into which I could reasonably easily implement this capacity measurement: QUIC and RTP/UDP. By taking off-the-shelf implementations I would have frameworks that contain a number of the features that appear to be needed to do this measurement and receive the required feedback.


> 
> >
> > >
> > >
> > > >
> > > > 2. The algorithm for adjusting the rate is table-driven but gives no
> > > > guidance on how to construct the table or limitations on value changes
> > > > in the table. In addition, the algorithm discusses larger steps in the
> > > > table without any reflection of what these step sizes may represent in
> > > > offered load.
> > >
> > > [acm]
> > > We can add (Len suggested the following text addition):
> > > OLD
> > > 8.1. Load Rate Adjustment Algorithm
> > >
> > > A table SHALL be pre-built defining all the offered load rates that
> > > will be supported (R1 through Rn, in ascending order, corresponding
> > > to indexed rows in the table). Each rate is defined as datagrams of...
> > >
> > > NEW
> > > 8.1. Load Rate Adjustment Algorithm
> > >
> > > A table SHALL be pre-built defining all the offered load rates that
> > > will be supported (R1 through Rn, in ascending order, corresponding
> > > to indexed rows in the table). It is RECOMMENDED that rates begin with
> > > 0.5 Mbps at index zero, use 1 Mbps at index one, and then continue in
> > > 1 Mbps increments to 1 Gbps. Above 1 Gbps, and up to 10 Gbps, it is
> > > RECOMMENDED that 100 Mbps increments be used. Above 10 Gbps,
> > > increments of 1 Gbps are RECOMMENDED. Each rate is defined as...
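As an aside, the recommended table can be generated mechanically; a sketch (Python; the 20 Gbps upper bound is an arbitrary assumption for illustration):

```python
def build_rate_table(max_gbps: int = 20):
    """Offered-load rates in Mbps per the recommendation quoted above:
    0.5 at index 0, 1 Mbps steps to 1 Gbps, 100 Mbps steps to 10 Gbps,
    1 Gbps steps above (capped here at an assumed 20 Gbps)."""
    rates = [0.5]
    rates += list(range(1, 1001))               # 1 Mbps .. 1 Gbps
    rates += list(range(1100, 10001, 100))      # .. 10 Gbps
    rates += list(range(11000, max_gbps * 1000 + 1, 1000))  # .. max_gbps
    return rates

table = build_rate_table()
print(table[0], table[1], table[1000], table[-1])  # 0.5 1 1000 20000
```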
> >
> > Is this what you actually used in your test implementation?
> [acm]
> Yes, except that the current table stops at 10Gbps. We haven't had the
> opportunity to test >10Gbps.
> 
> > At my first glance this recommendation looks to suffer from rather severe
> > step effects and also makes the response to losses behave strangely around
> > the transitions. Wouldn't some type of logarithmic progression be more
> > appropriate here for initial probing?
> [acm]
> Len and I considered various algorithms for the search.
> 
> Logarithmic increase typically means more rate overshoot than linear
> increases.
> Unfortunately, a large rate overshoot means that the queues will fill and
> need
> a longer time to bleed-off, meaning that rate reductions will take you far
> from
> the "right neighborhood" again.
> 
> Our experience is that we avoid large overshoot with fast or slow linear
> increases.
> It means we've taken some care to keep the network running. We haven't
> broken
> any test path yet, and we bail-out quickly and completely if something goes
> wrong.

I understand that there are several considerations here that are contradictory.

1. Quickly finding roughly the existing capacity.
2. Avoiding significant overshoot that builds up too much queue, where possible.
3. Having sufficiently fine-grained control around the capacity so that the adjustment step finds it.

For example, on the last point: I am worried when you define the step size as 1 Gbps above 10 Gbps. My fear is that 1 Gbps more traffic will increase the buffer occupancy so quickly that one alternates between not filled and over-filled when stepping up and receiving the feedback in the control cycle. We have to remember that with an FT of 50 ms, the delay in the control cycle will be up to RTT + 50 ms.  
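The worry can be quantified with a back-of-envelope sketch (assumed example values): an overshoot sustained for one control-cycle delay of RTT + FT adds a fixed amount of queue.

```python
def queue_buildup_bytes(overshoot_bps: float, rtt_s: float,
                        ft_s: float = 0.050) -> float:
    """Bytes queued while the offered load exceeds capacity by
    `overshoot_bps` for one control-cycle delay (up to RTT + FT)."""
    return overshoot_bps / 8 * (rtt_s + ft_s)

# Assumed example: 1 Gbps overshoot, 50 ms RTT, FT = 50 ms -> 12.5 MB of queue
print(queue_buildup_bytes(1e9, 0.050) / 1e6, "MB")  # 12.5 MB
```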

So I am speculating, and you haven't tested this. If you put in table-construction recommendations, then I do think you should be very clear that these are untested. 


> 
> >
> > If I have 1 Gbps line rate, there are 1000 steps in the table to this value.
> > Even if I increase by the suggested 10 steps until the first congestion is
> > seen, it will take 100 steps, and with a 50 ms feedback interval that is
> > 5 seconds before it is in the right ballpark.
> [acm]
> Your math is correct.
> 
> Remember that the only assumption we made when building the table of
> sending rates
> is that the maximum is *somewhere between 500 kbps and 10 Gbps*. Our lab
> tests used
> unknown rates between 50Mbps and 10Gbps, as the "ground truth" that we
> asked UDP
> and TCP-based methods to measure correctly. Measurements on production
> networks
> encountered many different technologies. Some subscriber rates were 5 to
> 10 Mbps
> on upstream.
> 
> > And if I get one random loss at 10 Mbps, then it's 990 steps. In such a
> > situation the whole measurement period (10 s) would be over before one
> > has reached actual capacity.
> [acm]
> I'm sorry, that's not quite correct, assuming the delay range meets the criteria
> below, which would be consistent with "one random loss".
> 

Yes, my mistake, but let me replace one random loss with one congestion event due to a single intermittent transaction. That is harder to trigger if the capacity is large, and easier if it is low, where it matters less. So this reduces the issue significantly.

> The text says:
>   If the feedback indicates that sequence number anomalies were detected
> OR
>   the delay range was above the upper threshold, the offered load rate is
>   decreased.  (by one step)
> 
> But when the next feedback message arrives with no loss, and the
> "congestion"
> state has not been declared, the relevant text is:
> 
>   If the feedback indicates that no sequence number anomalies were
> detected AND
>   the delay range was below the lower threshold, the offered load rate is
> increased.
>   If congestion has not been confirmed up to this point, the offered load rate
> is
>   increased by more than one rate (e.g., Rx+10).
> 
> and we return to the high speed increases, because:
> 
>   Lastly, the method for inferring congestion is that there were sequence
>   number anomalies AND/OR the delay range was above the upper threshold
> for
>   *two* consecutive feedback intervals.
> 
> So, there is a single step back for the single random loss, but then
> immediately
> back to Rx+10 increases.
> 
> 

Ok. Good that this is less of an issue than I thought. 
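For reference, the adjustment rules quoted above can be sketched as follows (a simplification that collapses the loss and delay-threshold tests into a single per-interval "bad" flag; not the draft's reference code):

```python
def adjust(idx: int, bad_now: bool, bad_prev: bool, congested: bool,
           fast_step: int = 10):
    """One control-cycle decision. `bad_*`: sequence anomalies or delay
    range above the upper threshold in this/the previous feedback interval."""
    if bad_now and bad_prev:
        congested = True                        # two consecutive bad intervals
    if bad_now:
        idx = max(0, idx - 1)                   # single step back
    else:
        idx += 1 if congested else fast_step    # Rx+10 ramp until confirmed
    return idx, congested

idx, congested = adjust(0, False, False, False)       # -> 10 (fast ramp)
idx, congested = adjust(idx, True, False, congested)  # -> 9 (one step back)
idx, congested = adjust(idx, False, True, congested)  # -> 19 (fast ramp again)
print(idx, congested)  # 19 False
```

This reproduces the behavior Al describes: a single random loss costs one step back, after which the sender immediately resumes Rx+10 increases.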

> >
> > To me it appears that the probing (slow start) equivalent does need
> > logarithmic increase to reach the likely capacity quickly. Then how big
> > the adjustment is actually depends on what extra delay one considers the
> > target for the test.
> [acm]
> Our delay variation values are low, but need to accommodate the relatively
> high
> delay variation of some access technologies. We learned this during our
> testing
> on production networks.
> 
> > Having a step size of 1 Gbps when probing a 2.5 Gbps path would likely
> > make it very hard to keep the delay in the intended interval, when it
> > would fluctuate between 500 Mbps too much traffic and then 500 Mbps too
> > little. Sure, with a sufficiently short FT it will likely work in this
> > algorithm. However, I wonder about regulation stability here for
> > different RTTs, FTs and buffer-depth fluctuations.
> [acm]
> I'm sorry, that's not quite correct, the text we proposed to add says:
> 
>    It is RECOMMENDED that rates begin with 0.5 Mbps at index zero,
>    use 1 Mbps at index one, and then continue in 1 Mbps increments to 1
> Gbps.
>    Above 1 Gbps, and up to 10 Gbps, it is RECOMMENDED that 100 Mbps
> increments be used.
> and                                                        ^^^^^^^^
>    Above 10 Gbps, increments of 1 Gbps are RECOMMENDED.
> 
> Your example falls in the 1Gbps to 10Gbps range, where table increments are
> 100Mbps.
> 
> >
> > From my perspective I think this is an indication that the load rate
> > adjustment algorithm is not ready to be a standards track specification.
> [acm]
> Given several corrections above, the authors ask that you reconsider your
> position.
> Please read on.
> 
> >
> > I would recommend that you actually take out the control algorithm and
> write a
> > high level functional description of what needs to happen when measuring
> this
> > capacity.
> [acm]
> We worked for several weeks in December to make the current high-level
> description an accurate one. The IESG review has added some useful details,
> which I have shared along the way.
> 
> >
> > If I understand this correctly, the requirements on the measurement are
> > the following.
> >
> > - Need to seek the available capacity so that several measurement periods
> > are likely to be done at capacity.
> > - Must not create persistent congestion, as the capacity measurement
> > should be based on traffic capacity that doesn't cause more standing queue
> > than X, where X is some additional delay in ms compared to the minimal
> > one-way delay. And X is actually something that is configurable for a
> > measurement campaign, as capacity for a given one-way delay and delay
> > variation can be highly relevant to know.
> [acm]
> These are not identical to our requirements.  For example:
>  - Both the metric and method consider a number of measurement intervals,
> and the Maximum IP-Layer Capacity is determined from one (or more) of the
> intervals.



> 
> 
> >
> > What else is needed?
> >
> > Are synchronized clocks needed or just relative delay changes necessary?
> [acm]
> Just delay variation, and the safest is RT delay variation.

So sender-side transmission timestamping, with the receiver measuring delay variation based on these transmission times, would suffice. 

Good. 
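A minimal sketch of that idea (Python; illustrative numbers): the unknown clock offset between sender and receiver cancels when delay variation is measured relative to the minimum observed one-way delay.

```python
def delay_variation_ms(send_ts, recv_ts):
    """Per-packet delay variation from sender timestamps and receiver
    arrival times on unsynchronized clocks: the constant clock offset
    cancels relative to the minimum observed one-way delay (drift ignored)."""
    owd = [r - s for s, r in zip(send_ts, recv_ts)]  # includes unknown offset
    base = min(owd)
    return [d - base for d in owd]

# Receiver clock 1000 ms ahead of the sender; true delays 20, 25, 40 ms.
print(delay_variation_ms([0, 10, 20], [1020, 1035, 1060]))  # [0, 5, 20]
```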

> 
> >
> > >
> > > -=-=-=-=-=-=-=-
> > >
> > > >
> > > > 3. Third, the algorithm's reaction to any sequence number gaps is
> > > > dependent on delay and how it is related to unspecified delay
> > > > thresholds. Also, no text discusses how these thresholds should be
> > > > configured for safe operation.
> > >
> > > [acm]
> > > We can add some details in the paragraph below:
> > > OLD
> > > If the feedback indicates that sequence number anomalies were
> detected OR
> > > the delay range was above the upper threshold, the offered load rate is
> > > decreased.
> > > Also, if congestion is now ...
> > > NEW
> > > If the feedback indicates that sequence number anomalies were
> detected OR
> > > the delay range was above the upper threshold, the offered load rate is
> decreased.
> > > The RECOMMENDED values are 0 for sequence number gaps and 30-90
> ms for lower
> > > and upper delay thresholds. Also, if congestion is now ...
> >
> > Ok, but the delay values as I noted before highly dependent of what my
> goal with
> > the capacity metric is. If I want to figure out the capacity for like say XR or
> > cloud gaming applications that maybe have much lower OWD variances and
> absolute
> > values so maybe my values are 10-25 ms.
> [acm]
> We intend to measure the limit of the access technology, with a set of
> parameters
> that work well for all technologies we have tested so far.
> 
> Notice that I didn't type the word "application" above. Or "user experience".
> 
> Sure, there is sensitivity to the parameters chosen, and we supplied our
> well-tested defaults to maximize results comparability and technology
> coverage
> (with no twiddling).
> 
> 
> >
> > How much exploration have you done of the control stability over a range
> > of parameters? Do you have any material about that?
> [acm]
> Yes. There are several parameter ranges we examined.
> 
> If we set the delay thresholds high enough, we see the RTT grow as the
> queues
> fill to max and tail-drop finally restricts the rate. We can measure the
> extent of buffer bloat this way (if it is present). It's not our goal.
> 
> We have used lower thresholds of delay variation, which work fine on the
> PON 1Gbps access services.
> 
> In the collaborative testing of the Open Broadband Open Source project,
> one participant contributed tests with a 5G system that exhibited systematic
> low-level loss and reordering in his lab. For this unusual case, Len added
> the features to set a loss threshold above zero, and to tolerate reordered
> and duplicate packets with no penalty in rate adjustment.
> 
> We have tried a range of test durations (I=20, 30 for example).
> 
> We have tried different steepnesses of the ramp-up slope. Rate += 10 steps
> works well, even when measuring rates separated by 3 orders of magnitude.
> 
> 
> But for the co-authors, it was more important that the load adjustment
> search
> produce the correct Maximum IP-Layer Capacity for each of the lab
> conditions
> we created (including challenging conditions with competing traffic, long
> delay
> etc.), and the many access technologies we tested in production use
> (where again we encountered similar challenging conditions).
> 
> 

Thanks for the clarifications. 
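For readers following along, the reaction discussed above (decrease the offered load on sequence-number anomalies or a delay range above the upper threshold, fast ramp-up of 10 table steps on clean feedback) could be sketched roughly as follows. The threshold defaults are the ones quoted in this thread; the behavior between the two delay thresholds is an illustrative assumption, and the authoritative algorithm is in ITU-T Y.1540 Annexes A and B.

```python
LOW_THRESH_MS = 30    # lower delay-range threshold (quoted default)
HIGH_THRESH_MS = 90   # upper delay-range threshold (quoted default)
SEQ_GAP_THRESH = 0    # RECOMMENDED 0: any sequence gap triggers a decrease

def next_rate_index(index, seq_gaps, delay_range_ms):
    """Return the next offered-load table index given one feedback report."""
    if seq_gaps > SEQ_GAP_THRESH or delay_range_ms > HIGH_THRESH_MS:
        return max(index - 1, 0)   # impairment detected: back off one step
    if delay_range_ms < LOW_THRESH_MS:
        return index + 10          # clean feedback: fast ramp-up
    return index + 1               # between thresholds: cautious step (assumed)
```
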

> >
> >
> > >
> > > -=-=-=-=-=-=-=-
> > >
> > > Please also note many requirements for safe operation in Section 10,
> > > Security Considerations.
> > >
> > > >
> > > > B) Section 8. Method of Measurement
> > > >
> > > > There are no specification of the measurement protocol here that
> provides
> > > > sequence numbers, and the feedback channel as well as the control
> channel.
> > >
> > > [acm]
> > > That is correct. The Scope does not include protocol development.
> > >
> > > > Is this intended to use TWAMP?
> > >
> > > [acm]
> > > Maybe, but a lot of extensions would be involved.
> >
> >
> >
> > >
> > > >
> > > > From my perspective this document defines the metrics on standards
> track
> > > > level. However, the method for actually running the measurements are
> not
> > > > specified on a standards track level.
> > >
> > > [acm]
> > > In IPPM work, the methods of measurement are described more broadly
> than
> > > the metrics, as actions and operations the Src and Dst hosts perform to
> > > send and receive, and calculate the results.
> > >
> > > IPPM Methods of Measurement have not included protocol
> requirements in
> > > the past, in any of our Standards Track Metrics RFCs.  In fact, we
> developed
> > > measurement-specific criteria for moving our RFCs along the standards
> track
> > > that has nothing to do with protocols or interoperability.
> > > See BCP 176 aka RFC 6576:
> > > https://tools.ietf.org/html/rfc6576
> > > IP Performance Metrics (IPPM) Standard Advancement Testing
> > >
> > > > No one can build implementation.
> > >
> > > [acm]
> > > I'm sorry, but that is not correct.  Please see Section 8.4.
> >
> > Sorry, that was poorly formulated. I mean that you can't give this
> specification
> > to a guy on an island without external communication and have them
> > implement it and have it work with someone else's implementation.
> [acm]
> But then you are asking for protocol-level interoperability, Magnus.
> That is not our scope, or the scope of any IPPM Metric and Method RFCs.
> The procedures of BCP 176 tell us when independent implementations
> produce
> equivalent results, which is IETF's definition of "works with" for metrics
> and methods.
> 
> 
> > You have clearly
> > implemented a solution that works for some set of parameters. And I am
> asking how
> > much of the reasonable parameter space you have tested.
> [acm]
> Right. I answered this question qualitatively above, but the co-authors
> claim that an equally important question is the breadth of access
> technologies
> we have tested.
> 
> The tests conducted over 2+ years used the following production access
> types:
> 
> 1. Fixed: DOCSIS 3.0 cable modem with "triple-play" capability and embedded
> WiFi and
> Wired GigE switch (two manufacturers).
> 2. Mobile: LTE cellular phone with a Cat 12 modem (600 Mbps Downlink, 50
> Mbps uplink).
> 3. Fixed: passive optical network (PON) "F", 1 Gbps service.
> 4. Fixed: PON "T", 1000 Mbps Service.
> 5. Fixed: VDSL, service, at various rates <100 Mbps.
> 6. Fixed: ADSL, 1.5 Mbps.
> 7. Mobile: LTE enabled router with ETH LAN to client host
> 8. Fixed: DOCSIS 3.1 cable modem with "triple-play" capability and embedded
> WiFi and
> Wired GigE switch (two other manufacturers).
> 
> 
> >
> > Based on this discussion I don't think I can build an implementation that
> > fulfills the measurement goals, because I have questions about them. And I
> > suspect it would take substantial amount of experimentation to get it to
> > work correctly over a broader range of input parameters.
> [acm]
> Now we refer you to the references in the memo, particularly Appendix X
> of Y.1540:
> 
>    [Y.1540]   Y.1540, I. R., "Internet protocol data communication
>               service - IP packet transfer and availability performance
>               parameters", December 2019,
>               <https://www.itu.int/rec/T-REC-Y.1540-201912-I/en>.
> 
>    [Y.Sup60]  Morton, A., Rapporteur, "Recommendation Y.Sup60,
> Interpreting
>               ITU-T Y.1540 maximum IP-layer capacity measurements", June
>               2020, <https://www.itu.int/rec/T-REC-Y.Sup60/en>.
> 
> and Liaisons, where many of the experimental results are summarized:
> 
>    [LS-SG12-A]
>               12, I. S., "LS - Harmonization of IP Capacity and Latency
>               Parameters: Revision of Draft Rec. Y.1540 on IP packet
>               transfer performance parameters and New Annex A with Lab
>               Evaluation Plan", May 2019,
>               <https://datatracker.ietf.org/liaison/1632/>.
> 
>    [LS-SG12-B]
>               12, I. S., "LS on harmonization of IP Capacity and Latency
>               Parameters: Consent of Draft Rec. Y.1540 on IP packet
>               transfer performance parameters and New Annex A with Lab &
>               Field Evaluation Plans", March 2019,
>               <https://datatracker.ietf.org/liaison/1645/>.
> 
> Also, see our slides from the Hackathons at IETF 105 and 106, and the
> IPPM WG sessions slides beginning with IETF-105, July 2019.
> You might also look into the discussions on the mailing list.
> Some other results are available to those with ITU-T TIES accounts.
> 
> The load adjustment algorithm itself was improved after experimentation,
> adding the fast ramp-up with rate += 10 when feedback indicates no
> impairments. The original/current algorithms appear in Y.1540 Annexes A
> and B, respectively.
> 
> 
> >
> >
> >
> > >
> > > > And if the section is
> > > > intended to provide requirements on a protocol that performs these
> > > > measurements
> > > > I think several aspects are missing. There appear several ways forward
> here
> > > > to
> > > > resolve this; one is to split out the method of measurement and define
> it
> > > > separately to standard tracks level using a particular protocol, another
> > > > is to write it purely as requirements on a measurement protocols.
> > >
> > > [acm]
> > > As stated above, connecting a method with a single protocol is not IPPM's
> way.
> >
> > That is fine. However, I find the attempt to specify a specific load regulator
> > in the method of measurement to take this specification beyond a general
> method
> > of measurement. The high-level requirement appears to be to correctly
> find
> > the capacity, and that requires loading to the point where buffers are
> > filled sufficiently to introduce extra delay or where AQM starts dropping or
> > marking some of the load. Thus, I am questioning if the described algorithm
> will
> > adequately solve that issue over a wider range of parameters.
> >
> > So do you have more information showing at least the range over which
> > it has been proven to work, and with what input parameters?
> [acm]
> Yes. See ~10 references and replies above.
> 
> > I hope you understand that I
> > expect this load control algorithm to be scrutinized similarly to congestion
> > control algorithms that we standardize in IETF.
> [acm]
> Yes, although it is surprising at this point, we certainly understand your
> current position.
> 
> However, Rüdiger made a relevant point in our discussions (why our
> algorithm's
> role is different from Transport Area congestion control algorithms,
> and need not be subjected to the same scrutiny):
> 
>     This is a measurement method designed for infrequent and sensible
> maximum
>     capacity assessment, instantiated only in an OAM or diagnostic tool.
> 
>     It is not a blueprint for a congestion control algorithm (CCA) in a bulk
>     transfer protocol that runs by default and is globally deployed by
>     commodity stacks.
> 
> We don't want to re-create any TCP CCA: they weren't designed for accurate
> measurement of maximum rate (as the referenced measurements show).
> 
> It appears that the most recent (2018) standardized and widely used
> CCA is Cubic (RFC 8312 https://tools.ietf.org/html/rfc8312 ).
> 
> The great TCP CCA Census (2019)
> https://datatracker.ietf.org/meeting/109/materials/slides-109-iccrg-the-
> great-internet-tcp-congestion-control-census-00
> 
> finds that BBR versions account for greater popularity on Alexa-250 sites
> (25.2%) than CUBIC, and more than 40% of downstream traffic on the
> Internet
> (slide 14). I found some references to BBR in ICCRG drafts, but no RFC.
> I would guess that BBR has already provided CCA for more traffic than the
> test traffic complying with this memo ever will.
> 
> Our overall method works similarly to BBR: received rate per RTT is the
> feedback to the sender.
> 
> We added Applicability to the access portion, not the global Internet
> where standardized transport protocol CCAs must operate.
> 
> We are not specifying a transport CCA that must support many applications.
> Measurement is the *only* application (for an IP-layer metric).
> 
> Early impressions have been formed on several erroneous assumptions
> regarding algorithm stability (operation above 1Gbps) and
> suitability for purpose (one random loss case).

I agree that there have been misunderstandings and misinterpretations of what was defined. And I do have to say that some of these assumptions or guesses are due to the lack of clarity in the specification, which from my perspective strengthens my position that the load algorithm is not yet of standards-track quality. 

> 
> Ad-hoc methods resulting in TCP-based under-estimates of Internet Speed
> are the problem we attack here! Implementation of harmonized industry
> standards are the solution.
> 
> We believe in rough consensus and running code.
> 

So BBR is frequently discussed in ICCRG, and BBRv1 was found to have some truly horrible stable states, like one where it kept running at 30% packet loss without reacting, as it only considered delay. BBRv2 has addressed this, so it will react to higher degrees of packet loss. 

So I do believe in running code too, and as long as you are monitoring your impact, I think experimentation is the right way here. However, if we put the label "IETF Proposed Standard" on the load algorithm, people will assume that it is safe to use. That is my issue here, and I don't think we can give that recommendation in the current form, even with a limited applicability statement. 


> 
> We also ask that you understand our position, that tests with many different
> access technologies in production, and careful comparison with ad-hoc
> methods claiming to make similar measurements in the lab and the field,
> are equally important, if not more so, than further parameter
> investigations at this point.
> I have personally been running lab tests since September 2018 with various
> tools.
> Len released his first version of the code in Feb 2019,
> and we immediately focused on tests with his utility instead of UDP packet
> blasters like iPerf, and Trex with my own Binary Search with Loss verification
> algorithm that we use in device benchmarking (cross-over with BMWG).
> 
> 
> >
> > I would very much prefer to take out the load algorithm and place it in a
> > separate document where it can have a tighter scope description and more
> > discussion about whether it does its job.
> >
> > I hope this clarifies what my concerns are with this document in its
> > current form.
> [acm]
> Yes, and we have rather exhaustively argued to go ahead here, especially
> since a much less-frequently-used testing-only algorithm is a different
> situation
> than specifying a CCA for global TCP deployment: transport area's usual role.
> 
> We hope you can now appreciate the years of study, experimentation and
> running code that you apparently first encountered last Thursday,
> and will look into some more of the supporting background material.
> 

I do appreciate your work. However, I still don't think the load control algorithm description is sufficiently well specified and parameterized to be published in a standards track document. Maybe it can be made into that. However, I think this would make an excellent experimental specification with some additional clarifications on parameter ranges and defined responses when feedback is not received in a timely fashion. Also, I think you can clarify the protocol requirements for the measurement methodology. Then add a crystal-clear applicability statement for the load algorithm. 

Here I think we can see another point in favor of actually splitting out the load algorithm, and that is related to the applicability statement. Do the metric and the load algorithm have the same applicability? As a metric it is not truly restricted; it is just a question of the impact of measuring the metric across the Internet. 

Also, I think we should consider somewhat the update and evolution path that hopefully will occur here. I think the load algorithm is likely to benefit from further clarification in its specification in the future. Thus, having it in a separate specification makes it easier to update and maintain. 


Cheers

Magnus Westerlund