Re: [ippm] Magnus Westerlund's Discuss on draft-ietf-ippm-capacity-metric-method-06: (with DISCUSS)

"MORTON, ALFRED C (AL)" <> Sat, 27 February 2021 18:54 UTC

From: "MORTON, ALFRED C (AL)" <>
To: Magnus Westerlund <>, "" <>
CC: "" <>, "" <>, "" <>, "" <>, "" <>
Date: Sat, 27 Feb 2021 18:54:07 +0000
Subject: Re: [ippm] Magnus Westerlund's Discuss on draft-ietf-ippm-capacity-metric-method-06: (with DISCUSS)

Hi Magnus,
Thanks for your further clarifications. We have more work ahead,
so I am deleting the topics where we appear to agree on changes 
in the first exchange (Applicability statement).

Consolidated replies and good ideas (we think) from the co-authors 
are in-line below.

> -----Original Message-----
> From: Magnus Westerlund []
> Sent: Friday, February 26, 2021 5:47 AM
> Cc:;; draft-ietf-ippm-capacity-
> Subject: Re: Magnus Westerlund's Discuss on draft-ietf-ippm-capacity-
> metric-method-06: (with DISCUSS)
> (Resending as I got an error that it wasn't delivered due to signature)
> Hi Al,
> Please see inline.
> On Fri, 2021-02-26 at 06:05 +0000, MORTON, ALFRED C (AL) wrote:
> > Hi Magnus, Thanks for your review.
> >
> > Please see replies to your review and comments from Len and me,
> consolidated
> > and marked [acm] below,
> >
> > Al
> >
> > > -----Original Message-----
> > > From: Magnus Westerlund via Datatracker []
> > > Sent: Thursday, February 25, 2021 9:19 AM
> > > To: The IESG <>
> > > Cc:; ippm-
> > >; Ian Swett <>;;
> > >
> > > Subject: Magnus Westerlund's Discuss on draft-ietf-ippm-capacity-
> metric-
> > > method-06: (with DISCUSS)
> > >
> > > Magnus Westerlund has entered the following ballot position for
> > > draft-ietf-ippm-capacity-metric-method-06: Discuss
> > >
> >
> > ...
> > >
> > > ----------------------------------------------------------------------
> > > DISCUSS:
> > > ----------------------------------------------------------------------
> > >
> > > A) Section 8. Method of Measurement
> > >
> > > I think the metrics are fine, what makes me quite worried here is the
> > > measurement method. My concerns with it are the following.
> > >
> > > 1. The application of this measurement method is not clearly scoped.
[acm] we agreed on text adding "access" applicability to the scope section. 

> > > However in
> > > that context I think the definition and protection against severe congestion
> > > has significant shortcomings. The main reason is that during a
> > > configurable time period (default 1 s) the sender will attempt to send at
> > > a specified rate by a table, independently of what happens during that
> > > second.
> >
> > [acm]
> > Not quite, 1 second is the default measurement interval for Capacity, but
> > sender rate adjustments occur much faster (and we add a default at 50ms). This
> > is an important point (and one that Ben also noticed, regarding variable F
> > in section 8.1). So, I have added FT as a parameter in section 4:
> >
> > o FT, the feedback time interval between status feedback messages
> > communicating measurement results, sent from the receiver to control the
> > sender. The results are evaluated to determine how to adjust the current
> > offered load rate at the sender (default 50ms)
> >
> > -=-=-=-=-=-=-=-
> > Note that variable F in section 8.1 is redundant with parameter F in
> > Section 4, the number of flows (in -06). So we changed the section 8.1
> > variable F to FT in the working text.
> Okay, that makes things clearer. With all the equal intervals in the metrics I
> had misinterpreted that the transmission would also be uniform during the
> measurement intervals.
> However, when rereading Section 8.1 I do have to wonder if the non-cumulative
> feedback actually creates two issues. First, it appears to lose information for
> reordering that crosses the time when the FT timer fires, due to reset.
I don't understand how the sequence error counting "loses information" when 
reordered packets cross a measurement feedback boundary. I'm not sure what 
aspect of measurement you are "resetting", but I assume it is 
    "The accumulated statistics are then
     reset by the receiver for the next feedback interval."

Suppose I have two measurement intervals and I receive:

||  1  2  3  5  6 || 4  7  8  9 ...||

where || is the measurement feedback boundary.

Packet 4 arrives late enough from its original position to span the boundary.
The 3->5 sequence is one sequence error, and the 4->7 sequence is another error.
This example produces two sequence errors in different feedback intervals, 
but that's a typical measurement boundary problem. We can't get rid of 
measurement boundaries, and they affect many measurements.
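To make the counting concrete, here is a small sketch (illustrative only, not
the draft's normative counting; `classify_arrivals` and its exact rules are my
assumptions) in which a forward gap counts as a sequence error, a late arrival
is tallied as reordered, and only the per-interval counters reset at a
feedback boundary:

```python
def classify_arrivals(intervals):
    """Per-interval sequence statistics (hypothetical sketch).

    A forward gap (seq > prev + 1) counts as one sequence error; an
    arrival with seq <= prev is tallied as reordered.  The
    previous-sequence state carries across the feedback boundary, so
    nothing is "lost" there; only the per-interval counters reset.
    """
    prev = None
    stats = []
    for interval in intervals:
        errors = reordered = 0   # counters reset each feedback interval
        for seq in interval:
            if prev is not None:
                if seq > prev + 1:
                    errors += 1        # gap: e.g. 3 -> 5, or 4 -> 7
                elif seq <= prev:
                    reordered += 1     # late arrival: e.g. 6 -> 4
            prev = seq
        stats.append((errors, reordered))
    return stats

# ||  1 2 3 5 6 || 4 7 8 9 || from the example above:
print(classify_arrivals([[1, 2, 3, 5, 6], [4, 7, 8, 9]]))
```

Under these rules the example yields one sequence error in each interval
(3->5 and 4->7) plus one reordered arrival in the second, matching the two
errors described above.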

Note that a reordered packet contributes to IP-Layer Capacity,
by definition.  

Perhaps you had some other scenario in mind?

> In addition, if the feedback is not reliable it loses the information for that
> interval. 
That's right, and:
1. the sending rate does not increase or decrease without feedback
2. the feedback travels the reverse path, which the test does not congest with 
   test traffic
3. the running code has watchdog time-outs that *terminate the connection* if
   either the sender or receiver goes quiet 

In essence, the test method is not a reliable byte-stream transfer like TCP.
We can shut down the test traffic very quickly if something goes wrong and 
useful measurement is in question.

> And making feedback reliable could cause worse HOL issues for
> reacting to later feedback that is received prior to the lost one.
So the alternative to unreliable feedback can be worse? 
Good thing it's not planned.

> >
> >
> > >
> > > 2. The algorithm for adjusting rate is table driven but gives no guidance on
> > > how
> > > to construct the table and limitations on value changes in the table. In
> > > addition the algorithm discusses larger steps in the table without any
> > > reflection of what these step sizes may represent in offered load.
> >
> > [acm]
> > We can add (Len suggested the following text addition):
> > OLD
> > 8.1. Load Rate Adjustment Algorithm
> >
> > A table SHALL be pre-built defining all the offered load rates that
> > will be supported (R1 through Rn, in ascending order, corresponding
> > to indexed rows in the table). Each rate is defined as datagrams of...
> >
> > NEW
> > 8.1. Load Rate Adjustment Algorithm
> >
> > A table SHALL be pre-built defining all the offered load rates that
> > will be supported (R1 through Rn, in ascending order, corresponding
> > to indexed rows in the table). It is RECOMMENDED that rates begin with
> > 0.5 Mbps at index zero, use 1 Mbps at index one, and then continue in
> > 1 Mbps increments to 1 Gbps. Above 1 Gbps, and up to 10 Gbps, it is
> > RECOMMENDED that 100 Mbps increments be used. Above 10 Gbps,
> > increments of 1 Gbps are RECOMMENDED. Each rate is defined as...
> Is this what you actually used in your test implementation? 
Yes, except that the current table stops at 10Gbps. We haven't had the 
opportunity to test >10Gbps.
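For illustration, the RECOMMENDED table could be built as follows (a sketch:
`build_rate_table` is not from the draft, and the boundary handling -- the
1 Mbps segment ending at 1000 Mbps and the 100 Mbps segment starting at
1100 Mbps -- is my assumption):

```python
def build_rate_table(max_mbps=10_000):
    """Offered-load rate table in Mbps per the RECOMMENDED increments:
    0.5 Mbps at index zero, then 1 Mbps steps up to 1 Gbps, 100 Mbps
    steps up to 10 Gbps, and 1 Gbps steps above that (sketch)."""
    rates = [0.5]
    rates += range(1, 1001)                     # 1 Mbps .. 1 Gbps, 1 Mbps steps
    rates += range(1100, 10_001, 100)           # .. 10 Gbps, 100 Mbps steps
    rates += range(11_000, max_mbps + 1, 1000)  # above 10 Gbps, 1 Gbps steps
    return [r for r in rates if r <= max_mbps]

table = build_rate_table()
print(len(table), table[0], table[1000], table[-1])  # 1091 entries up to 10 Gbps
```

With the default 10 Gbps ceiling this gives 1091 rows, with 1 Gbps at index
1000, which is consistent with the step counts discussed below.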

> At my first glance
> this recommendation looks to suffer from rather severe step effects and also
> makes the response to losses behave strangely around the transitions. Wouldn't some
> type of logarithmic progression be more appropriate here for initial
> probing?
Len and I considered various algorithms for the search.

Logarithmic increase typically means more rate overshoot than linear increase.
Unfortunately, a large rate overshoot means that the queues will fill and need
a longer time to bleed off, meaning that rate reductions will take you far from
the "right neighborhood" again.

Our experience is that we avoid large overshoot with fast or slow linear increases.
It means we've taken some care to keep the network running. We haven't broken 
any test path yet, and we bail out quickly and completely if something goes wrong.

> If I have 1 Gbps line rate, there are 1000 steps in the table to this value.
> Even if I increase with the suggested 10 steps until first congestion is seen, it
> will take 100 steps, and with a 50 ms feedback interval that is 5 seconds before
> it is in the right ballpark.
Your math is correct.

Remember that the only assumption we made when building the table of sending rates
is that the maximum is *somewhere between 0.5 Mbps and 10 Gbps*. Our lab tests used 
unknown rates between 50 Mbps and 10 Gbps as the "ground truth" that we asked UDP- 
and TCP-based methods to measure correctly. Measurements on production networks 
encountered many different technologies. Some subscriber rates were 5 to 10 Mbps
upstream.

> And if I get one random loss at 10 Mbps, then it's 990
> steps. In such a situation the whole measurement period (10 s) would be over
> before one has reached actual capacity.
I'm sorry, that's not quite correct, assuming the delay range meets the criteria
below, which would be consistent with "one random loss".

The text says:
  If the feedback indicates that sequence number anomalies were detected OR 
  the delay range was above the upper threshold, the offered load rate is 
  decreased.  (by one step)

But when the next feedback message arrives with no loss, and the "congestion"
state has not been declared, the relevant text is:

  If the feedback indicates that no sequence number anomalies were detected AND 
  the delay range was below the lower threshold, the offered load rate is increased. 
  If congestion has not been confirmed up to this point, the offered load rate is 
  increased by more than one rate (e.g., Rx+10).

and we return to the high speed increases, because:

  Lastly, the method for inferring congestion is that there were sequence 
  number anomalies AND/OR the delay range was above the upper threshold for 
  *two* consecutive feedback intervals.

So, there is a single step back for the single random loss, but then immediately
back to Rx+10 increases.
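The decision logic quoted above can be sketched roughly as follows (an
illustrative reconstruction, not the normative algorithm; `adjust_rate`, the
state dict, and holding steady between the two delay thresholds are my
assumptions, and the threshold defaults are those discussed elsewhere in this
thread):

```python
def adjust_rate(idx, feedback, state, low_ms=30, high_ms=90):
    """One feedback interval of the load rate adjustment (sketch).

    idx indexes the pre-built rate table; feedback carries the
    receiver's per-interval 'seq_errors' and 'delay_range_ms'.
    Congestion is inferred after *two* consecutive impaired intervals,
    which ends the fast (+10) ramp-up.  Returns the new table index."""
    impaired = feedback["seq_errors"] > 0 or feedback["delay_range_ms"] > high_ms
    if impaired:
        state["impaired_streak"] += 1
        if state["impaired_streak"] >= 2:
            state["congestion_confirmed"] = True
        return max(idx - 1, 0)              # decrease by one step
    state["impaired_streak"] = 0
    if feedback["delay_range_ms"] < low_ms:
        # fast ramp (Rx+10) until congestion is confirmed, then one step
        return idx + (1 if state["congestion_confirmed"] else 10)
    return idx                              # between thresholds: hold

# The "one random loss" case from the discussion:
state = {"impaired_streak": 0, "congestion_confirmed": False}
idx = adjust_rate(100, {"seq_errors": 1, "delay_range_ms": 10}, state)  # -> 99
idx = adjust_rate(idx, {"seq_errors": 0, "delay_range_ms": 10}, state)  # -> 109
print(idx)  # a single step back, then immediately back to +10 increases
```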

> To me it appears that the probing (slow start) equivalent does need logarithmic
> increase to reach likely capacity quickly. Then how big the adjustment is
> actually depends on what extra delay one considers the target for the test.
Our delay variation values are low, but need to accommodate the relatively high 
delay variation of some access technologies. We learned this during our testing
on production networks.

> Having a step size of 1 Gbps when probing a 2.5 Gbps path would likely make it
> very hard to keep the delay in the intended interval, when it would fluctuate
> between 500 Mbps too much traffic and then 500 Mbps too little. Sure, with a
> sufficiently short FT it will likely work in this algorithm. However, I wonder
> about regulation stability here for different RTTs, FTs and buffer depth
> fluctuations.
I'm sorry, that's not quite correct; the text we proposed to add says:

   It is RECOMMENDED that rates begin with 0.5 Mbps at index zero, 
   use 1 Mbps at index one, and then continue in 1 Mbps increments to 1 Gbps. 
   Above 1 Gbps, and up to 10 Gbps, it is RECOMMENDED that 100 Mbps increments be used. 
   Above 10 Gbps, increments of 1 Gbps are RECOMMENDED.

Your example falls in the 1 Gbps to 10 Gbps range, where table increments are 
100 Mbps.

> From my perspective I think this is an indication that the load rate adjustment
> algorithm is not ready to be a standards track specification.
Given several corrections above, the authors ask that you reconsider your position.
Please read on.

> I would recommend that you actually take out the control algorithm and write a
> high level functional description of what needs to happen when measuring this
> capacity.
We worked for several weeks in December to make the current high-level description
an accurate one. The IESG review has resulted in some useful details being added,
which I have shared along the way.

> If I understand this correctly, the requirements on the measurement are the
> following.
> - Need to seek the available capacity so that several measurement periods are
> likely to be done at capacity
> - Must not create persistent congestion, as the capacity measurement should be
> based on traffic capacity that doesn't cause more standing queue than X, where X
> is some additional delay in ms compared to the minimal one-way delay. And X is
> actually something that is configurable for a measurement campaign, as capacity
> for a given one-way delay and delay variation can be highly relevant to
> know.
These are not identical to our requirements.  For example:
 - Both the metric and method consider a number of measurement intervals, 
and the Maximum IP-Layer Capacity is determined from one (or more) of the intervals.

> What else is needed?
> Are synchronized clocks needed or just relative delay changes necessary?
Just delay variation, and the safest is round-trip (RT) delay variation.

> >
> > -=-=-=-=-=-=-=-
> >
> > >
> > > 3. Third, the algorithm's reaction to any sequence number gaps is dependent on
> > > delay and how it is related to unspecified delay thresholds. Also, no text
> > > discusses how these thresholds should be configured for safe operation.
> >
> > [acm]
> > We can add some details in the paragraph below:
> > OLD
> > If the feedback indicates that sequence number anomalies were detected OR
> > the delay range was above the upper threshold, the offered load rate is
> > decreased.
> > Also, if congestion is now ...
> > NEW
> > If the feedback indicates that sequence number anomalies were detected OR
> > the delay range was above the upper threshold, the offered load rate is decreased.
> > The RECOMMENDED values are 0 for sequence number gaps and 30-90 ms for lower
> > and upper delay thresholds. Also, if congestion is now ...
> Ok, but the delay values, as I noted before, are highly dependent on what my goal with
> the capacity metric is. If I want to figure out the capacity for, say, XR or
> cloud gaming applications that maybe have much lower OWD variances and absolute
> values, then maybe my values are 10-25 ms.
We intend to measure the limit of the access technology, with a set of parameters
that work well for all technologies we have tested so far.  

Notice that I didn't type the word "application" above. Or "user experience".

Sure, there is sensitivity to the parameters chosen, and we supplied our 
well-tested defaults to maximize results comparability and technology coverage
(with no twiddling).

> How much exploration have you done of the control stability over a range
> of parameters? Do you have any material about that?
Yes. There are several parameter ranges we examined.

If we set the delay thresholds high enough, we see the RTT grow as the queues
fill to max and tail-drop finally restricts the rate. We can measure the 
extent of buffer bloat this way (if it is present). It's not our goal.

We have used lower thresholds of delay variation, which work fine on the 
PON 1Gbps access services.

In the collaborative testing of the Open Broadband Open Source project,
one participant contributed tests with a 5G system that exhibited systematic
low-level loss and reordering in his lab. For this unusual case, Len added
the features to set a loss threshold above zero, and to tolerate reordered
and duplicate packets with no penalty in rate adjustment.

We have tried a range of test durations (I=20, 30 for example). 

We have tried different steepnesses of the ramp-up slope. Rate += 10 steps 
works well, even when measuring rates separated by 3 orders of magnitude.

But for the co-authors, it was more important that the load adjustment search
produce the correct Maximum IP-Layer Capacity for each of the lab conditions 
we created (including challenging conditions with competing traffic, long delay
etc.), and the many access technologies we tested in production use 
(where again we encountered similar challenging conditions).

> >
> > -=-=-=-=-=-=-=-
> >
> > Please also note many requirements for safe operation in Section 10,
> > Security Considerations.
> >
> > >
> > > B) Section 8. Method of Measurement
> > >
> > > There are no specification of the measurement protocol here that provides
> > > sequence numbers, and the feedback channel as well as the control channel.
> >
> > [acm]
> > That is correct. The Scope does not include protocol development.
> >
> > > Is this intended to use TWAMP?
> >
> > [acm]
> > Maybe, but a lot of extensions would be involved.
> >
> > >
> > > From my perspective this document defines the metrics on standards track
> > > level. However, the method for actually running the measurements are not
> > > specified on a standards track level.
> >
> > [acm]
> > In IPPM work, the methods of measurement are described more broadly than
> > the metrics, as actions and operations the Src and Dst hosts perform to
> > send and receive, and calculate the results.
> >
> > IPPM Methods of Measurement have not included protocol requirements in
> > the past, in any of our Standards Track Metrics RFCs.  In fact, we developed
> > a measurement-specific criterion for moving our RFCs along the standards track
> > that has nothing to do with protocols or interoperability.
> > See BCP 176, aka RFC 6576:
> > IP Performance Metrics (IPPM) Standard Advancement Testing
> >
> > > No one can build implementation.
> >
> > [acm]
> > I'm sorry, but that is not correct.  Please see Section 8.4.
> Sorry, that was poorly formulated. I mean that you can't give this specification
> to a guy on an island without external communication and have them
> implement it such that it will work with someone else's implementation. 
But then you are asking for protocol-level interoperability, Magnus.
That is not our scope, or the scope of any IPPM Metric and Method RFCs.
The procedures of BCP 176 tell us when independent implementations produce 
equivalent results, which is IETF's definition of "works with" for metrics
and methods.

> You have clearly
> implemented a solution that works for some set of parameters. And I am asking how
> much of the reasonable parameter space you have tested.
Right. I answered this question qualitatively above, but the co-authors
claim that an equally important question is the breadth of access technologies
we have tested.

The tests conducted over 2+ years used the following production access types:

1. Fixed: DOCSIS 3.0 cable modem with "triple-play" capability and embedded WiFi and
Wired GigE switch (two manufacturers).
2. Mobile: LTE cellular phone with a Cat 12 modem (600 Mbps Downlink, 50 Mbps uplink).
3. Fixed: passive optical network (PON) "F", 1 Gbps service.
4. Fixed: PON "T", 1000 Mbps Service.
5. Fixed: VDSL service, at various rates <100 Mbps.
6. Fixed: ADSL, 1.5 Mbps.
7. Mobile: LTE-enabled router with ETH LAN to client host.
8. Fixed: DOCSIS 3.1 cable modem with "triple-play" capability and embedded WiFi and
Wired GigE switch (two other manufacturers).

> Based on this discussion I don't think I can build an implementation that
> fulfills the measurement goals, because I have questions about them. And I
> suspect it would take a substantial amount of experimentation to get it to
> work correctly over a broader range of input parameters.
Now we refer you to the references in the memo, particularly Appendix X 
of Y.1540:

   [Y.1540]   Y.1540, I. R., "Internet protocol data communication
              service - IP packet transfer and availability performance
              parameters", December 2019,

   [Y.Sup60]  Morton, A., Rapporteur, "Recommendation Y.Sup60, Interpreting
              ITU-T Y.1540 maximum IP-layer capacity measurements", June
              2020, <>.

and Liaisons, where many of the experimental results are summarized:

              12, I. S., "LS - Harmonization of IP Capacity and Latency
              Parameters: Revision of Draft Rec. Y.1540 on IP packet
              transfer performance parameters and New Annex A with Lab
              Evaluation Plan", May 2019,

              12, I. S., "LS on harmonization of IP Capacity and Latency
              Parameters: Consent of Draft Rec. Y.1540 on IP packet
              transfer performance parameters and New Annex A with Lab &
              Field Evaluation Plans", March 2019,

Also, see our slides from the Hackathons at IETF 105 and 106, and the 
IPPM WG sessions slides beginning with IETF-105, July 2019.
You might also look into the discussions on the mailing list.
Some other results are available to those with ITU-T TIES accounts.

The load adjustment algorithm itself was improved after experimentation,
adding the fast ramp-up with rate += 10 when feedback indicates no
impairments. The original/current algorithms appear in Y.1540 Annexes A 
and B, respectively. 

> >
> > > And if the section is
> > > intended to provide requirements on a protocol that performs these
> > > measurements
> > > I think several aspects are missing. There appear several ways forward here
> > > to
> > > resolve this; one is to split out the method of measurement and define it
> > > separately to standard tracks level using a particular protocol, another
> > > is to write it purely as requirements on a measurement protocols.
> >
> > [acm]
> > As stated above, connecting a method with a single protocol is not IPPM's way.
> That is fine. However, I find that the attempt to specify a specific load regulator
> in the method of measurement takes this specification beyond a general method
> of measurement. The high-level requirement appears to be to correctly find
> the capacity, and that requires that one load the path to the point where buffers are
> filled sufficiently to introduce extra delay, or where AQM starts dropping or
> marking some of the load. Thus, I am questioning whether the described algorithm will
> adequately solve that issue over a wider range of parameters.
> So do you have more information to show at least over which range it has been
> proven to do its work, and with what input parameters? 
Yes. See the ~10 references and replies above.

> I hope you understand that I
> expect this load control algorithm to get similarly scrutinized to congestion
> control algorithms that we standardize in the IETF.
Yes, although it is surprising at this point, we certainly understand your 
current position.

However, Rüdiger made a relevant point in our discussions (why our algorithm's 
role is different from Transport Area congestion control algorithms, 
and why it need not be subjected to the same scrutiny):

    This is a measurement method designed for infrequent and sensible maximum 
    capacity assessment, instantiated only in an OAM or diagnostic tool. 

    It is not a blueprint for a congestion control algorithm (CCA) in a bulk 
    transfer protocol that runs by default and is globally deployed by 
    commodity stacks.

We don't want to re-create any TCP CCA: they weren't designed for accurate 
measurement of maximum rate (as the referenced measurements show). 

It appears that the most recent (2018) standardized and widely used 
CCA is CUBIC (RFC 8312).  

The great TCP CCA Census (2019)

finds that BBR versions account for greater popularity on Alexa-250 sites
(25.2%) than CUBIC, and more than 40% of downstream traffic on the Internet
(slide 14). I found some references to BBR in ICCRG drafts, but no RFC.
I would guess that BBR has already provided CCA for more traffic than the 
test traffic complying to this memo ever will.

Our overall method works similarly to BBR: the received rate per RTT is the 
feedback to the sender. 

We added Applicability to the access portion, not the global Internet 
where standardized transport protocol CCAs must operate.

We are not specifying a transport CCA that must support many applications.
Measurement is the *only* application (for an IP-layer metric).

Early impressions have been formed on several erroneous assumptions 
regarding algorithm stability (operation above 1Gbps) and 
suitability for purpose (one random loss case).

Ad-hoc methods resulting in TCP-based under-estimates of Internet speed 
are the problem we attack here! Implementation of harmonized industry 
standards is the solution.

We believe in rough consensus and running code.

We also ask that you understand our position: tests with many different 
access technologies in production, and careful comparison of ad-hoc methods
claiming to make similar measurements in the lab and the field, are equally 
if not more important than further parameter investigations at this point. 
I have personally been running lab tests since September 2018 with various tools. 
Len released his first version of the code in Feb 2019,
and we immediately focused on tests with his utility instead of UDP packet 
blasters like iPerf, and Trex with my own Binary Search with Loss verification
algorithm that we use in device benchmarking (cross-over with BMWG).

> I would very much prefer to take out the load algorithm and place it in a
> separate document where it can have a tighter scope description and more
> discussion about whether it does its job.
> I hope this clarifies what my concerns are with this document in its
> current form.
Yes, and we have rather exhaustively argued to go ahead here, especially 
since a much less-frequently-used, testing-only algorithm is a different situation 
from specifying a CCA for global TCP deployment (the Transport Area's usual role).

We hope you can now appreciate the years of study, experimentation and 
running code that you apparently first encountered last Thursday, 
and will look into some more of the supporting background material.

Thanks again.

> Cheers
> Magnus