Re: [ippm] How should capacity measurement interact with shaping?

Matt Mathis <mattmathis@google.com> Sat, 21 September 2019 18:33 UTC

From: Matt Mathis <mattmathis@google.com>
Date: Sat, 21 Sep 2019 11:32:58 -0700
Message-ID: <CAH56bmDmFwpzmB3NeDoE3cE-er6jZzZg_p-St6fO5nu3Ls1fJQ@mail.gmail.com>
To: "MORTON, ALFRED C (AL)" <acm@research.att.com>
Cc: "Ruediger.Geib@telekom.de" <Ruediger.Geib@telekom.de>, "ippm@ietf.org" <ippm@ietf.org>, "CIAVATTONE, LEN" <lc9892@att.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/ippm/9HcThL7qFheLxqnOQSod9D4JmA0>
Subject: Re: [ippm] How should capacity measurement interact with shaping?
List-Id: IETF IP Performance Metrics Working Group <ippm.ietf.org>

Yes, exactly, I am sure this is a provider's feature.   I have a receiver
side pcap, and it is quite a bit more complicated than I thought:
- Zero losses in the entire trace.  It is dynamically shaped at a
bottleneck with a long queue that is pacing packets.
- The initial part is really straight (looks like a hard limit)
- The rate (and packet headway) smoothly wanders irregularly all over the
place in the latter part of the trace, from a low of about 1 Mb/s to peaks
close to the max rate.   My earlier data was from BBR max_rate, so the
fluctuating rate apparently has stable peaks.

By smooth: it looked like a spline fit, suggesting the instantaneous packet
headway was determined by a differential equation....
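The headway and rate analysis described above can be sketched roughly as follows. This is an illustration, not the tool in question: it assumes packet arrival timestamps (seconds) and sizes (bytes) have already been extracted from the pcap (e.g. with tshark), and the 0.25 s window is an arbitrary choice.

```python
# Sketch: per-packet headway and a windowed delivery-rate series from
# receiver-side packet timestamps and sizes exported from a pcap.
# Illustrative only; window length is an assumption.

def headways(timestamps):
    """Inter-arrival time (headway) between consecutive packets."""
    return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

def smoothed_rate(timestamps, sizes, window=0.25):
    """Delivery rate (bits/s) per fixed window of `window` seconds.
    The trailing partial window is dropped."""
    rates = []
    acc = 0                       # bytes accumulated in current window
    end = timestamps[0] + window  # end of current window
    for t, size in zip(timestamps, sizes):
        if t >= end:
            rates.append(acc * 8 / window)
            acc = 0
            end += window
        acc += size
    return rates
```

Plotting `headways` against time is one way to see whether the inter-arrival times wander smoothly (pacing) or jump discretely (hard clocking).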

This behavior is not an accident, but the result of a very sophisticated
controller.   And we have seen other bottlenecks like it elsewhere in the
US and Europe.

As people may know, I do believe that shaping is the most appropriate way
to deal with heavy hitters.  And that we (IPPM) really need a way to
characterize shaping.  I do care about the asymptotic rate, and how quickly
I fall out of the initial rate.
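For what it's worth, one back-of-envelope way to characterize a simple shaper from such a trace is to report the initial (peak) rate, the asymptotic (token) rate, and the bucket depth those two imply. The sketch below uses purely hypothetical numbers, not values from this trace.

```python
# Back-of-envelope token-bucket characterization (illustrative only).
# A flow can exceed the token rate only while the bucket drains, so the
# depth is roughly the excess over the token rate integrated over the
# overshoot interval.

def bucket_depth_bytes(peak_rate_bps, token_rate_bps, overshoot_s):
    """Implied bucket depth in bytes for a given overshoot duration."""
    return (peak_rate_bps - token_rate_bps) * overshoot_s / 8

# Hypothetical example: 100 Mb/s peak against an 80 Mb/s token rate,
# sustained for 2 seconds, implies a bucket of roughly 5 MB.
depth = bucket_depth_bytes(100e6, 80e6, 2.0)
```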

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
       however our response must be carefully measured:
            too strong would be hypocritical and risks spiraling out of
control;
            too weak risks being mistaken for tacit approval.


On Sat, Sep 21, 2019 at 8:34 AM MORTON, ALFRED C (AL) <acm@research.att.com>
wrote:

> Hi Matt,
>
>
>
> I had another thought about the 94 > 75 > 83 Mbps trace:
>
> This might well be a service provider’s feature, where
>
> they favor new flows with a high-rate for a short amount
>
> of time.  I remember some ISPs offering a “speed boost”
>
> to load web pages fast, but settling to a lower rate for
>
> longer transfers. The same strategy could help reduce
>
> the initial buffering time for video streams, perhaps with
>
> different time intervals and rates. This might be implemented
>
> by changing the token rate, or through some other means.
>
>
>
>
>
> *From:* Matt Mathis [mailto:mattmathis@google.com]
> *Sent:* Thursday, September 19, 2019 9:43 PM
> *To:* MORTON, ALFRED C (AL) <acm@research.att.com>
> *Cc:* Ruediger.Geib@telekom.de; ippm@ietf.org; CIAVATTONE, LEN <
> lc9892@att.com>
> *Subject:* Re: How should capacity measurement interact with shaping?
>
>
>
> I am actually more interested in the philosophical questions about how
> this should be reported, and what should the language be about
> non-stationary available capacity.   One intersecting issue: BBR converges
> on both the initial and final rate in under 2 seconds (this was a long
> path, so startup took more than a second).   Do users want a quick (and
> relatively cheap) test that takes 2 seconds or a longer test that is more
> likely to discover the token bucket?  How long?  If we want to call them
> different names, what should they be?
>
> *[acm] *
>
> So, if your trace is revealing a bimodal form of service rates,
>
> then it ought to be characterized with that in mind and
>
> allow for two modes of operation when reporting:
>
> 94 initial peak Capacity, 83 sustained Capacity
>
> *when this behavior is demonstrated and repeatable*.
>
>
>
> Thanks for more insights about BBR, too.
>
> Al
>
>
>
> On the pure technical issues: BBR is still quite a moving target.   I have
> a paper in draft that will shed some light on this.  It is due to be
> unembargoed sometime in October.
>
> BBRv1 (released slightly after the CACM paper you mention) measures the
> max_BW every 8 RTT.  BBRv2 measures the max_BW on a sliding schedule that
> loosely matches CUBIC.  (In both, min_RTT is measured every 10 seconds, in
> the absence of organic low RTT samples).   BBRv2 uses additional signals
> and does a much better job of avoiding overshoot at the startup.
>
>
>
> In any case, the best (most stable) BBR-based metric seems to be
> delta(snd.una)/elapsed_time, which is the progress as seen by upper
> layers.  If you look at short time slices (we happen to be using 0.25
> seconds) you see a mostly crisp square wave.  If you average from the
> beginning of the connection to now, the peak rate happens at the moment the
> bucket runs out of tokens, and falls towards the token rate after that.
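The delta(snd.una)/elapsed_time idea above can be sketched as below. The slice length and sample values are illustrative; snd.una is simply treated as a cumulative acked-byte counter sampled at fixed intervals.

```python
# Sketch: goodput as delta(snd.una)/elapsed_time, both per short slice
# and averaged from connection start. Under a token bucket, the running
# average peaks when the bucket empties, then decays toward the token
# rate. Sample interval (0.25 s) is an assumption.

def slice_rates(snd_una_samples, dt=0.25):
    """Per-slice goodput (bits/s) from cumulative snd.una samples."""
    return [(b - a) * 8 / dt for a, b in zip(snd_una_samples, snd_una_samples[1:])]

def running_average_rate(snd_una_samples, dt=0.25):
    """Goodput averaged from the start of the connection to each sample."""
    return [(s - snd_una_samples[0]) * 8 / (i * dt)
            for i, s in enumerate(snd_una_samples) if i > 0]
```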
>
>
> Thanks,
>
> --MM--
>
>
>
>
>
> On Thu, Sep 19, 2019 at 3:35 PM MORTON, ALFRED C (AL) <
> acm@research.att.com> wrote:
>
> Thanks Matt!  This is an interesting trace to consider,
>
> and an important discussion to share with the group.
>
>
>
> When I look at the equation for BBR:
>
>
> https://cacm.acm.org/magazines/2017/2/212428-bbr-congestion-based-congestion-control/fulltext
>
>
>
> both BBR and Maximum IP-layer Capacity Metric seek the
>
> Max over some time interval. The window seems smaller for
>
> BBR: 6 to 10 RTTs, whereas we’ve been using parameters that
>
> result in a rate measurement once a second and take the max
>
> of the 10 one-second measurements.
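As a concrete sketch of the parameterization described above (one rate sample per second, Max over ten samples): the sample values below are made up, and the "sustained" helper is my own illustration, not a definition from the draft.

```python
import statistics

# Sketch of the reporting style described above: one delivered-rate
# sample per second, report the max of the ten samples. The "sustained"
# figure (median of the later samples) is an illustrative addition.

def max_ip_capacity(rates_bps):
    """Maximum of the per-second delivered-rate measurements."""
    return max(rates_bps)

def sustained_rate(rates_bps):
    """Illustrative: median of the second half of the test."""
    return statistics.median(rates_bps[len(rates_bps) // 2:])

# Hypothetical 10-second test resembling a peak-then-settle trace:
samples = [94.5e6, 94.2e6, 93.8e6, 80.1e6, 75.3e6,
           83.0e6, 83.2e6, 82.9e6, 83.1e6, 83.0e6]
```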
>
>
>
> We’ve also evaluated several performance metrics when
>
> adjusting load, and that determines how high the sending
>
> rate will go (based on feedback from the receiver).
>
>
> https://tools.ietf.org/html/draft-morton-ippm-capcity-metric-method-00#section-4.3
>
>
>
> So, the MAX delivered rate for the 10-second test, as we
>
> can all see, is 94.5 Mbps. This rate was sustained for more
>
> than a trivial amount of time, too. But if you are concerned that this
>
> rate was somehow inflated by a large buffer and a large
>
> burst tolerance in the shaper – that’s where the additional
>
> metrics and slightly different sending rate control
>
> that we described in the draft (and the slides) might help.
>
>
> https://datatracker.ietf.org/meeting/105/materials/slides-105-ippm-metrics-and-methods-for-ip-capacity-00
>
>
>
> IOW, it might well be that Max IP Capacity, measured as we designed
>
> and parameterized it, measures 83 Mbps for this path
>
> (assuming the 94.5 is the result of big overshoot at sender, and the
>
> fluctuating performance afterward seems to support that).
>
>
>
> When I was looking for background on BBR, I saw a paper comparing
>
> BBR and CUBIC during drive tests.
>
> http://web.cs.wpi.edu/~claypool/papers/driving-bbr/
>
> One pair of plots seemed to indicate that BBR sent lots of Bytes
>
> early-on, and grew the RTT pretty high before settling down
>
> (Figure 5, a & b).
>
> This looks a bit like the case you described below,
>
> except 94.5 Mbps is a Received Rate – we don’t know
>
> what came out of the network, just what went in and filled
>
> a buffer before crashing down in the drive test.
>
>
>
> So, I think I did more investigation than justification
>
> for my answers, but I conclude that parameters like the
>
> individual measurement intervals and overall time interval
>
> from which the Max is drawn, plus the rate control algorithm
>
> itself, play a big role here.
>
>
>
> regards,
>
> Al
>
>
>
>
>
> *From:* Matt Mathis [mailto:mattmathis@google.com]
> *Sent:* Thursday, September 19, 2019 5:18 PM
> *To:* MORTON, ALFRED C (AL) <acm@research.att.com>;
> Ruediger.Geib@telekom.de
> *Cc:* ippm@ietf.org
> *Subject:* Fwd: How should capacity measurement interact with shaping?
>
>
>
> Ok, moving the thread to IPPM
>
>
>
> Some background, we (Measurement Lab) are testing a new transport (TCP)
> performance measurement tool, based on BBR-TCP.   I'm not ready to talk
> about results yet (well ok, it looks pretty good).    (BTW the BBR
> algorithm just happens to resemble the algorithm described
> in draft-morton-ippm-capcity-metric-method-00.)
>
>
>
> Anyhow we noticed some interesting performance features for a number of ISPs
> in the US and Europe and I wanted to get some input for how these cases
> should be treated.
>
>
>
> One data point: a single trace saw ~94.5 Mb/s for ~4 seconds,
> fluctuating performance around ~75 Mb/s for ~1 second, and then stable
> performance at ~83 Mb/s for the rest of the 10-second test.  If I were to
> guess, this is probably a policer (shaper?) with a 1 MB token bucket and
> a ~83 Mb/s token rate (these numbers are not corrected for header overheads,
> which actually matter with this tool).  What is weird about it is that
> different ingress interfaces to the ISP (peers or serving locations)
> exhibit different parameters.
>
>
>
> Now the IPPM measurement question:   Is the bulk transport capacity of
> this link ~94.5 Mb/s or ~83 Mb/s?   Justify your answer...
>
>
>
> Thanks,
>
> --MM--
>
>
>
> *Forwarded Conversation*
> *Subject: How should capacity measurement interact with shaping?*
> ------------------------
>
>
>
> From: *Matt Mathis* <mattmathis@google.com>
> Date: Thu, Aug 15, 2019 at 8:55 AM
> To: MORTON, ALFRED C (AL) <acm@research.att.com>
>
>
>
> We are seeing shapers with huge bucket sizes, perhaps as large as or
> larger than 100 MB.
>
>
>
> These are prohibitive to test by default, but can have a huge impact in
> some common situations.  E.g. downloading software updates.
>
>
>
> An unconditional pass is not good, because some buckets are small.  What
> counts as large enough to be ok, and what "derating" is ok?
>
>
> Thanks,
>
> --MM--
>
>
>
> ----------
> From: *MORTON, ALFRED C (AL)* <acm@research.att.com>
> Date: Mon, Aug 19, 2019 at 5:08 AM
> To: Matt Mathis <mattmathis@google.com>
> Cc: CIAVATTONE, LEN <lc9892@att.com>, Ruediger.Geib@telekom.de <
> Ruediger.Geib@telekom.de>
>
>
>
> Hi Matt, currently cruising between Crete and Malta,
>
> with about 7 days of vacation remaining – Adding my friend Len.
>
> You know Rüdiger. It appears I’ve forgotten how to typs in 2 weeks
>
> given the number of typos I’ve fixed so far...
>
>
>
> We’ve seen big buffers on a basic DOCSIS cable service (downlink >2 sec)
>
> but,
>
>   we have 1-way delay variation or RTT variation limits
>
>   when searching for the max rate, so that not many packets
>
>   queue in the buffer
>
>
>
>   we want the status messages that result in rate adjustment to return
>
>  in a reasonable amount of time (50ms + RTT)
>
>
>
>   we usually search for 10 seconds, but if we go back and test with
>
>   a fixed rate, we can see the buffer growing if the rate is too high.
>
>
>
>   There will eventually be a discussion on the thresholds we use
>
>   in the search // load rate control algorithm. The copy of
>
>   Y.1540 I sent you has a simple one; we’ve moved beyond that now
>
>   (see the slides I didn’t get to present at IETF).
>
>
>
>   There is value in having some of this discussion on IPPM-list,
>
>   so we get some **agenda time at IETF-106**
>
>
>
> We measure rate and performance, with some performance limits
>
> built-in.  Pass/Fail is another step, de-rating too (made sense
>
> with MBM “target_rate”).
>
>
>
> Al
>
>
>
> ----------
> From: <Ruediger.Geib@telekom.de>
> Date: Mon, Aug 26, 2019 at 12:05 AM
> To: <acm@research.att.com>
> Cc: <lc9892@att.com>, <mattmathis@google.com>
>
>
>
> Hi Al,
>
>
>
> thanks for keeping me involved. I don’t have a precise answer and doubt
> that there will be a single universal truth.
>
>
>
> If the aim is only to determine the IP bandwidth of an access, then we
> aren’t interested in filling a buffer. Buffering events may occur, some of
> which are useful and to be expected, whereas others are not desired:
>
>
>
>    - Sender shaping behavior may matter (is traffic at the source CBR or
>    is it bursty)
>    - Random collisions should be tolerated at the access whose bandwidth
>    is to be measured.
>    - Limiting packet drop due to buffer overflow is a design aim or an
>    important part of the algorithm, I think.
>    - Shared media might create bursts. I’m not an expert in the area, but
>    there’s an “is bandwidth available” check in some cases between a central
>    sender using a shared medium and the receivers connected. WiFi and maybe
>    other wireless equipment also buffers packets to optimize wireless
>    resource usage.
>    - It might be an idea to mark some flows by ECN, once there’s a guess
>    on a sending bitrate when to expect no or very little packet drop. Today,
>    this is experimental. CE marks by an ECN capable device should be expected
>    roughly once queuing starts.
>
>
>
> Practically, the set-up should be configurable with commodity hard- and
> software, and all metrics should be measurable at the receiver. The key is
> to distinguish burstiness and queuing events which are to be expected from
> (undesired) queue build-up. I hope that can be done with commodity hard- and
> software. I at least am not able to write down a simple metric distinguishing
> queues to be expected from (undesired) queue build-up causing congestion. The
> hard- and software to be used should be part of the solution, not part of the
> problem (bursty source traffic and timestamps with insufficient accuracy to
> detect queues are what I’d like to avoid).
>
>
>
> I’d suggest moving the discussion to the list.
>
>
>
> Regards,
>
>
>
> Rüdiger
>
>
>
> ----------
> From: *MORTON, ALFRED C (AL)* <acm@research.att.com>
> Date: Thu, Sep 19, 2019 at 7:01 AM
> To: Ruediger.Geib@telekom.de <Ruediger.Geib@telekom.de>
> Cc: CIAVATTONE, LEN <lc9892@att.com>, mattmathis@google.com <
> mattmathis@google.com>
>
>
>
> I’m catching-up with this thread again, but before I reply:
>
>
>
> *** Any objection to moving this discussion to IPPM-list ?? ***
>
>
>
> @Matt – this is a question to you at this point...
>
>
>
> thanks,
>
> Al
>