Re: [tsvwg] Traffic protection as a hard requirement for NQB

Bob Briscoe <> Tue, 10 September 2019 22:49 UTC

To: Sebastian Moeller <>
Cc: "Black, David" <>, "" <>
From: Bob Briscoe <>
Date: Tue, 10 Sep 2019 23:49:41 +0100
List-Id: Transport Area Working Group <>


On 10/09/2019 21:16, Sebastian Moeller wrote:
> Dear Bob,
>> On Sep 10, 2019, at 17:13, Bob Briscoe <> wrote:
>> Sebastian,
>> On 07/09/2019 16:48, Sebastian Moeller wrote:
>>> Dear Bob,
>>>> On Sep 5, 2019, at 20:23, Bob Briscoe <> wrote:
>>>> ==Incentive Alignment?==
>>>> To judge whether there's any tragedy of the commons (incentive alignment), I'll put up a straw-man NQB configuration that complies with the following requirement in the draft:
>>>>     a useful property of nodes that support separate queues for NQB and QB
>>>>     flows would be that for NQB flows, the NQB queue provides better
>>>>     performance (considering latency, loss and throughput) than the QB
>>>>     queue; and for QB flows, the QB queue provides better performance
>>>>     (considering latency, loss and throughput) than the NQB queue.
>>>> Background: NQB has become possible largely because Internet access link rates have typically become fast enough that the serialization delay of a packet can be sub-millisecond, and therefore a queue of a few packets introduces delay that is small relative to other non-optional sources of delay like propagation. In these cases we no longer need priority scheduling for low delay.
>>>> Config:
>>>> 	• Scheduler:
>>>> 		• WRR with weight 0.5 for NQB on a 120Mb/s link. That gives at least 60Mb/s for NQB flows.
>>>> 		• Scheduler quantum: 1500B.
>>>> 	• Buffering:
>>>> 		• The NQB buffer is fairly shallow (30 packets or 3ms at 120Mb/s).
>>>> 		• The QB buffer is deeper (say 200ms) with an AQM target delay of say 10ms.
>>>> Traffic: Let's introduce some example NQB traffic:
>>>> 	• 2 paced flows of VoIP datagrams, each avg 50B payload plus 58B of Eth/IP/UDP/RTP headers at 33pkt/s
>>>> 		• bit-rate: 29kb/s /flow
>>>> 		• serialization delay: 7.2us / pkt
>>>> 	• 2 streams of 1000B game sync datagrams at 30pkt/s =>
>>>> 		• bit-rate: 240kb/s /flow
>>>> 		• serialization delay: 67us / pkt
>>>> 	• plus occasional DNSSec datagrams, worst case 1500B
>>>> 		• serialization delay: 100us / pkt
>>>> 	• Perhaps 540kb/s in all, which is about 0.9% of the min NQB capacity.
>>>> Worst-case NQB queuing delay calculation for the above traffic model:
>>>> Each NQB flow paces out its own packets, but one might happen to arrive while a packet from any of the other NQB flows is already queued. Worst case n-1 = 4 other NQB packets queued, where n is the number of application flows. And if there's traffic in the QB queue, each NQB packet will have to wait for one quantum (100us) while the other queue is served. Worst-case NQB delay is then:
>>>> (67us * 2 + 7.2us + 100us) + (100us * 4) = 641us.
>>>> It's very unlikely that this worst case would arise, but it gives an indication of where the tail of the delay distribution will lie.
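To make the arithmetic concrete, here is a short Python check of the calculation above (my own sketch; the flow mix, quantum size and 120 Mb/s link rate are taken from the example config):

```python
# Check of the worst-case NQB delay arithmetic in the example above.
LINK_BPS = 120e6  # 120 Mb/s access link

def ser_us(size_bytes):
    """Serialization delay of one packet, in microseconds."""
    return size_bytes * 8 / LINK_BPS * 1e6

voip_us = ser_us(50 + 58)   # VoIP: 50B payload + 58B headers -> ~7.2 us
game_us = ser_us(1000)      # game sync datagram -> ~67 us
dns_us = ser_us(1500)       # worst-case DNSSec datagram -> 100 us
quantum_us = ser_us(1500)   # one 1500B WRR quantum -> 100 us

# Worst case: an arriving packet finds n-1 = 4 other NQB packets queued
# (2 game + 1 VoIP + 1 DNS), plus 4 quantum waits while the QB queue is
# served, matching (67*2 + 7.2 + 100) + (100*4) in the message.
worst_us = (2 * game_us + voip_us + dns_us) + 4 * quantum_us
print(f"worst-case NQB delay: {worst_us:.0f} us")  # -> 641 us
```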
>>> 	[SM] I just re-thought this section, and believe it to be a strawman argument.
>>> In my reality the NQB draft states:
>>> "7.  Relationship to L4S
>>>     The dual-queue mechanism described in this draft is intended to be compatible with [I-D.ietf-tsvwg-l4s-arch]."
>>> In turn [I-D.ietf-tsvwg-l4s-arch], drags in I-D.ietf-tsvwg-aqm-dualq-coupled which sizes the shared dualQ buffer to allow for 250ms at the egress bandwidth ("limit = MAX_LINK_RATE * 250 ms               % Dual buffer size"), and not 3 ms.
>>> In the example above it is ONLY the extremely shallow 3 ms buffer that "aligns" the incentives such that QB traffic gets unhappy in the NQB queue. So either I-D.ietf-tsvwg-aqm-dualq-coupled needs to change, or your argument above needs adapting to 250 ms. And even if we allowed 50% of that for the QB queue, that still leaves 125 ms. Since you are heavily involved in the other L4S drafts, this is a rather surprising lack of candidness about how the NQB queue is actually supposed to be operated.
>>> Feel free to demonstrate again how, with a 125 ms buffer (that is 0.125/((1500*8)/(120*1000^2)) = 1250 packets, or 1250*1500/1000^2 = 1.875 MB; a whole lot of flows are done with their business long before they have transmitted 1.8 MB), there are no incentives for QB flows to mark themselves as NQB, unless a stringent monitoring and enforcement regime at the bottleneck NQB queue weeds out misbehaving flows. But please plug in the numbers that the L4S family of drafts actually recommends, so it becomes relevant for this draft.
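As a quick check of the numbers in the parenthesis above (my own sketch, using the same 120 Mb/s link as in the example):

```python
# Check of the 125 ms buffer arithmetic quoted above.
LINK_BPS = 120 * 1000**2           # 120 Mb/s
half_buffer_s = 0.250 / 2          # half of the 250 ms dualQ buffer
pkt_serialization_s = (1500 * 8) / LINK_BPS

packets = half_buffer_s / pkt_serialization_s   # packets queued in 125 ms
megabytes = packets * 1500 / 1000**2
print(round(packets), round(megabytes, 3))      # -> 1250 1.875
```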
>> You're right that, when the NQB draft says it is compatible with l4s-arch, it means it is compatible with the structure of the architecture, not necessarily the specific recommended numbers in one of the example implementations in aqm-dualq-coupled.
> 	[SM] Well, as far as I can tell these are the only numbers around.
>> Indeed, in Appendix A of aqm-dualq-coupled, note b says that two separate buffers might be preferred to a shared 250ms buffer [and the Low Latency buffer would not need to be more than a few ms].
> 	[SM] Indeed, except I can not find the part in brackets in that draft.
[BB] That's the reason it's in brackets.
> Setting the Low Latency buffer to only a few milliseconds will then make the whole low latency queue susceptible to the attack scenario I described before; is that really an improvement (and does it help your cause)?
[BB] It is intended to align incentives. If someone is minded to attack 
a queue, they always have numerous approaches available in their 
armoury. Per-flow traffic protection is designed to address that.

You keep repeating the same arguments about vulnerability and keep 
ignoring the argument that vulnerability is not a reason for the IETF to 
mandate protection.

Every RFC is vulnerable to such attacks (including other Diffserv PHBs), 
yet none makes protection against them mandatory, because in the IETF 
MUST is standardization language, not general-importance language.

>> That said, here's a strawman (i.e. not tested, but for you to bash) for adding NQB support to the DualPI2 example in appendix A of aqm-dualq-coupled:
>> * Classify NQB packets as well as L4S ECN into the Low Latency queue
>> * Don't subject NQB packets to the L4S AQM (unless their ECN field is also L4S).
> 	[SM] You are talking about the ECN marking component only here?
[BB] Yup. There's a special case in the aqm-dualq-coupled draft for 
Not-ECT packets classified into the L queue in a 'no protection' case:

             the L AQM
             SHOULD apply drop using a drop probability appropriate to
             Classic congestion control and appropriate to the target
             delay in the L queue

In DOCSIS, if the operator disables protection, it implements this 
special case using tail drop in the shallow buffer I mentioned. 
Alternatively, for a DualPI2 implementation without QProt, its L4S AQM 
could drop Not-ECT NQB packets with the square of the probability output 
by the L4S AQM.
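To illustrate what that alternative might look like (a hypothetical sketch, not DualPI2 or DOCSIS code; the function name and structure are my own), Not-ECT NQB packets in the L queue are dropped with the square of the L4S AQM's output probability, mirroring the p vs p^2 coupling between L4S marking and Classic drop:

```python
import random

def l_queue_action(is_l4s_ect, p_l4s, rng=random.random):
    """Decide what to do with a packet in the Low Latency queue.

    is_l4s_ect: True if the packet's ECN field indicates L4S capability.
    p_l4s: current probability output by the L4S AQM.
    """
    if is_l4s_ect:
        # L4S packets are ECN-marked with probability p.
        return "mark" if rng() < p_l4s else "forward"
    # Not-ECT NQB packets: Classic-appropriate drop probability p**2,
    # analogous to the coupled DualQ's squared relationship.
    return "drop" if rng() < p_l4s ** 2 else "forward"

# Deterministic illustration with a fixed "random" draw:
print(l_queue_action(True, 0.3, rng=lambda: 0.2))    # -> mark
print(l_queue_action(False, 0.3, rng=lambda: 0.05))  # -> drop (0.05 < 0.09)
print(l_queue_action(False, 0.3, rng=lambda: 0.2))   # -> forward
```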

>> * During overload, subject all packets to the same drop prob (Classic, L4S and NQB)
> 	[SM] Is there any other option, unless one wants to effectively starve all other traffic "classes"?
[BB] Access links generally do not include anything to handle overload 
attacks on their current (FIFO) links. Some ISPs provide DDoS detection, 
redirection and scrubbing services in their core. Access links are 
usually rudimentary, simple things though.

One could apply per-flow protection. However, here we're talking about 
options if the implementer has chosen not to implement traffic protection.
>> * add a 3ms sojourn limit for packets marked NQB (but not L4S), after which they are dropped
> 	[SM] Great idea, looking forward to seeing this in the next dualQ draft!
[BB] I think you're misunderstanding the purpose of the appendix in the 
draft. It's an example for tutorial purposes.

I have no intention of adding NQB support to that tutorial example, 
which needs to focus on explaining the Coupled DualQ, not get cluttered 
up with extraneous details like optional extra DSCPs and corner cases.

But we are planning to add NQB support to the Linux reference 
implementation of DualPI2.

> Until then I will assume 250ms/2 as the likely default value.
[BB] That would be a very poor assumption. The shared buffer is 250ms to 
absorb Classic bursts, and the L4S queue just shares a part of that 
buffer that is typically tiny.

If an implementation used separate buffers, I would advise a physical 
buffer of 250ms at the link rate for Classic and perhaps 4ms at the link 
rate for L4S, but with the 3ms sojourn limit applied within that.
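Putting those figures together at the 120 Mb/s link rate from the earlier example (the buffer times are the ones advised above; the unit conversion is mine):

```python
# Separate-buffer sizing sketch at an assumed 120 Mb/s link rate.
LINK_BPS = 120e6
link_Bps = LINK_BPS / 8              # bytes per second

classic_buf_B = link_Bps * 250 / 1000  # 250 ms physical buffer for Classic
l4s_buf_B = link_Bps * 4 / 1000        # ~4 ms physical buffer for L4S
nqb_sojourn_limit_s = 0.003            # 3 ms sojourn limit enforced within it

print(classic_buf_B, l4s_buf_B)        # -> 3750000.0 60000.0
```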

> I also note that this adds another complication to dualQ, basically multiplexing a dropping regime into an ECN based system.
[BB] There would be an extra 'if' per packet (which is why we just used 
tail drop in a shallow buffer in DOCSIS).
>> * It might be necessary to tune down the L4S scheduling weight to bolster the incentive against mismarking, e.g. 3/4 not 15/16.
> 	[SM] You mean the part in the dualQ draft that says "if L4S traffic is over-aggressive or unresponsive, the scheduler weight for Classic traffic will at least be large enough to ensure it does not
>        starve."? I would not actually direct attention to this part of the dualQ draft, as for a system that claims to share capacity between L4S and standard-compliant flows, giving only 1/16th to the standard-compliant flows is a problem.
[BB] You've completely missed the point of the coupling and how it 
overrides the scheduler. Please read the explanation in the draft 
(search for the three places where '1/16' is mentioned). It's no wonder 
you're not impressed with the design if you don't understand how it works.

>> The alternative specified for DOCSIS is to use a 10ms low latency buffer but also to implement per-flow queue protection (which can be disabled by the operator).
> 	[SM] If we are talking about [DOCSIS-MULPIv3.1]  Cable Television Laboratories, Inc., "MAC and Upper Layer Protocols Interface Specification, CM-SP-MULPIv3.1-I18-190422", April 22, 2019,
>                < CM-SP-MULPIv3.1>.
> In Annex N I read:
> BUFFER_SIZE                // The size of the buffer for the LL service flow [B]
>                             //  (see Section C. A value of 10 ms multiplied
>                             // by the Maximum Sustained Rate (MAX RATE) is recommended
> Naively, I would say this is only 10 ms if MAX RATE equals 1, no? What am I missing?
[BB] That means the buffer size in bytes (indicated by '[B]') is 10ms 
multiplied by MAX_RATE given in the appropriate units (B/ms).
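A worked example of that units reading (the 100 Mb/s Maximum Sustained Rate is hypothetical; the formula is the one described above):

```python
# BUFFER_SIZE [B] = 10 ms * MAX_RATE, with MAX_RATE in matching units (B/ms).
max_rate_bps = 100e6                         # hypothetical 100 Mb/s MSR
max_rate_B_per_ms = max_rate_bps / 8 / 1000  # 12500 B/ms
buffer_size_B = 10 * max_rate_B_per_ms
print(buffer_size_B)                         # -> 125000.0
```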



> Best Regards
> 	Sebastian
>> HTH
>> Bob
>>> Best Regards
>>> 	Sebastian
>> -- 
>> ________________________________________________________________
>> Bob Briscoe                     

Bob Briscoe