Re: [tsvwg] Comments on draft-white-tsvwg-nqb-02

Greg White <> Fri, 01 November 2019 21:56 UTC

From: Greg White <>
To: Kyle Rose <>, "Bless, Roland (TM)" <>
CC: "" <>
Thread-Topic: [tsvwg] Comments on draft-white-tsvwg-nqb-02
Date: Fri, 1 Nov 2019 21:55:52 +0000
Subject: Re: [tsvwg] Comments on draft-white-tsvwg-nqb-02
List-Id: Transport Area Working Group <>


I’ve included a few inline responses (marked [GW]) to some of your comments/questions below that I didn’t already address in my previous email.


From: tsvwg <> on behalf of Greg White <>
Date: Wednesday, October 9, 2019 at 3:22 PM
To: Kyle Rose <>, "Bless, Roland (TM)" <>
Cc: "" <>
Subject: Re: [tsvwg] Comments on draft-white-tsvwg-nqb-02

Roland and Kyle,

Thanks for the review and comments, and apologies for the delay in responding.  Yes, the terms “non-queue-building” (NQB) and “queue-building” (QB) are not mathematically precise, but I don’t think they need to be.   This solution was developed with a particular environment and use cases in mind, but we believe that it solves a wider problem, and so should be described in more general terms.  Your questions are good ones to help us articulate what those terms should be.

The concept of enabling the dual-queue at a bottleneck link is to be able to provide significantly better quality of experience for latency sensitive applications without causing degradation of other applications, and without violating principles of network neutrality.  The bottleneck provides two queues, one that is essentially optimized for traditional Reno/Cubic congestion control (i.e. it has a buffer size that is on the order of the expected worst-case BDP, and it perhaps implements a traditional AQM like PIE, ARED, CoDel, etc.), and the other that is optimized for applications that are relatively isochronous or sparse (i.e. it has a much smaller buffer, and does not implement a packet-drop-based AQM).  The precise characteristics of these two queues will likely be dependent on the characteristics of the link that they serve.  In general, the NQB queue isn’t going to provide deterministic latency.  The NQB buffer isn’t a single packet deep, in some cases it may be on the order of 5-10 ms.  So, with a mix of uncoordinated senders, the latency will vary a bit.
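To make the two-queue idea above concrete, here is a toy sketch (class name and sizes are illustrative, not from the draft or the DOCSIS spec): a deep buffer sized near the worst-case BDP for queue-building traffic, and a much shallower buffer for NQB traffic, each with tail drop at its own limit.

```python
# Toy model of the dual-queue concept described above. All names and
# parameters are invented for illustration.

class DualQueue:
    def __init__(self, qb_limit_bytes, nqb_limit_bytes):
        self.qb = []              # deep queue for queue-building traffic
        self.nqb = []             # shallow queue for NQB traffic
        self.qb_limit = qb_limit_bytes
        self.nqb_limit = nqb_limit_bytes

    def enqueue(self, pkt_bytes, is_nqb):
        """Tail-drop enqueue into the queue selected by the NQB mark."""
        q, limit = (self.nqb, self.nqb_limit) if is_nqb else (self.qb, self.qb_limit)
        if sum(q) + pkt_bytes > limit:
            return False          # packet dropped at the tail
        q.append(pkt_bytes)
        return True

# Example sizing: a 100 Mbps link with a 50 ms worst-case RTT gives a
# BDP-sized QB buffer of ~625 kB; an NQB buffer sized for ~10 ms is ~125 kB.
dq = DualQueue(qb_limit_bytes=625_000, nqb_limit_bytes=125_000)
```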

In the case of a DOCSIS link, the NQB queue is sized at 10 ms, but it includes Queue Protection, which will aim to prevent the queue from getting to the tail drop limit by shedding load.
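As a rough illustration of what a latency-based queue protection function does (this is not the DOCSIS QP algorithm; the scoring rule and thresholds here are invented): flows accumulate a congestion score whenever they arrive to a deep queue, and once a flow's score exceeds a sanction threshold its packets are redirected out of the NQB queue.

```python
# Illustrative latency-based queue protection. NOT the DOCSIS algorithm;
# the scoring rule and both thresholds are invented for this sketch.

def protect(flow_scores, flow_id, queue_bytes, drain_rate_bps,
            delay_threshold_s=0.010):
    """Return True if this flow's packet may stay in the NQB queue."""
    predicted_delay = queue_bytes * 8 / drain_rate_bps
    if predicted_delay > delay_threshold_s:
        # Charge the arriving flow for the congestion it encounters.
        flow_scores[flow_id] = flow_scores.get(flow_id, 0) + predicted_delay
        if flow_scores[flow_id] > 0.050:   # arbitrary sanction threshold
            return False                   # redirect this flow to the QB queue
    return True
```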

The use case that motivated the NQB proposal was one in which L4S ECN marking is provided in the NQB queue as well, so compatibility between NQB traffic and ECT(1) traffic is an important characteristic that we wish to promote.  In other words, capacity-seeking flows that wish to utilize the low latency queue would need to implement L4S-style congestion response (and signal ECT(1) rather than NQB in the TOS byte).  So, I think it is correct to state that capacity-seeking flows (regardless of congestion control algorithm) are not recommended to be tagged as NQB.

In the case of BBR, Neal Cardwell has indicated that BBRv2 will support an L4S-style congestion response, and so would presumably be marked ECT(1) (not NQB).

For the scheduler, an equal weighted round robin would be an example (recommended?) approach in the absence of dualq-coupled-aqm.  When dualq-coupled-aqm is present (and thus the low latency queue is being shared by both NQB and ECT(1) traffic), a weighted round robin as described in the dualq-coupled-aqm draft is appropriate.    This is one reason to say that capacity-seeking flows should not be tagged as NQB.
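The equal weighted round robin mentioned above can be sketched in a few lines (weights and names illustrative):

```python
# Minimal weighted round robin across the two queues. Equal weights give
# the behavior suggested above for the case without dualq-coupled-aqm.

def wrr_schedule(nqb, qb, nqb_weight=1, qb_weight=1):
    """Drain both queues, yielding (queue_name, packet) in WRR order."""
    while nqb or qb:
        for _ in range(nqb_weight):
            if nqb:
                yield ("nqb", nqb.pop(0))
        for _ in range(qb_weight):
            if qb:
                yield ("qb", qb.pop(0))
```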

For high data rate (e.g.) CBR flows that are not capacity-seeking, I do agree that they could technically be candidates to mark as NQB, but I think there is some risk in doing so.  As the draft discusses, some links (e.g. WiFi) have limitations in that they by default only support priority queues, so the only way to get isolation between QB and NQB on these links entails also getting different priority.  If we limit the definition of NQB to traffic that is relatively low data rate (i.e. consistent with the definitions of the existing DSCPs that map to AC_VI in default WiFi equipment), the risk of mapping NQB to the AC_VI queue (for purposes of isolation) is reduced.  WiFi links are an important piece of the solution, so we can’t sweep this aside.
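For reference, the default WiFi behavior alluded to above maps the top three bits of the DSCP to an 802.11 user priority, and the user priority to an access category. A sketch of that default mapping (the 0x2A codepoint discussed in this thread lands in AC_VI under it):

```python
# Default DSCP -> 802.11 access category mapping used by much deployed
# WiFi equipment: user priority = top three bits of the DSCP.
AC_OF_UP = {0: "AC_BE", 1: "AC_BK", 2: "AC_BK", 3: "AC_BE",
            4: "AC_VI", 5: "AC_VI", 6: "AC_VO", 7: "AC_VO"}

def wifi_access_category(dscp):
    return AC_OF_UP[dscp >> 3]

# DSCP 42 (0x2A) >> 3 == 5, so it maps to AC_VI by default.
```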


From: Kyle Rose <>
Date: Monday, September 16, 2019 at 12:06 PM
To: "Bless, Roland (TM)" <>
Cc: "" <>, Greg White <>
Subject: Re: [tsvwg] Comments on draft-white-tsvwg-nqb-02

On Fri, Sep 13, 2019 at 11:49 AM Bless, Roland (TM) <> wrote:
·         I find the whole term "non-queue building flows" a bit confusing.
What the draft actually means are application-limited, mostly low data rate
flows (sparse flows in RFC8290). According to the draft, capacity seeking
flows seem to always be queue building.
I have had the same concern about the terminology. I'm wondering if "NQB" is a term of art that isn't intended to be interpreted literally. As you point out below, a tiny additional flow to a saturated link can cause a queue to start building, which makes me question whether "queue building" is really the right description at all. It seems the draft is trying to classify flows merely according to sender behavior when it's not clear how useful that will be as a tool in preventing queue buildup in bottlenecks of varying size and with varying traffic profiles.
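The point about a tiny flow on a saturated link is just arithmetic: once total arrivals exceed the link rate, the queue grows at the excess rate, however small the added flow is.

```python
# Once a link is saturated, any additional offered load becomes queue
# growth at exactly the excess rate (numbers below are illustrative).

def queue_growth_bytes_per_s(arrival_bps, service_bps):
    """Queue growth rate when arrivals exceed the service (link) rate."""
    return max(0.0, arrival_bps - service_bps) / 8

link = 100e6            # saturated 100 Mbps bottleneck
sparse = 0.001 * link   # a "sparse" flow at 0.1% of capacity
growth = queue_growth_bytes_per_s(link + sparse, link)
# The queue grows ~12.5 kB/s until some flow backs off.
```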

Comparing BBR with Reno, the latter will tend toward full queues while the former will oscillate around a small queue, and yet both are capacity-seeking. The draft classifies BBR flows as QB. Should they be? If so, what actually constitutes queue-building? BBR builds a small queue to probe for capacity but backs off quickly in response to increased latency. Does minuscule transient queue-building like this (with the intent to keep queues small) really qualify as QB? Why should such flows be stuck in the same queue with Reno flows, when from a latency and queue-building standpoint they are more like the flows described in the draft as NQB (and in some cases better, since they'll actually be able to back off, unlike fixed-bandwidth flows)?

·         It's not clear to me how the service rates between the NQB and QB queue
are distributed by the scheduler. From the description in the draft, I have
no clue what kind of scheduler would be a good fit for the desired behavior.
1.      "NQB traffic is not rate limited or rate policed." what happens if the load in the
NQB queue increases (e.g., not by queue building flows)? Will QB flows be starved?
I think that some scheduler will determine the rate share between NQB/QB queues...
1.      "it places unnecessary restrictions on the scheduling between the two queues"
so what is the requirement for your own approach then?
Good questions related to some of my main concerns related to the L4S, SCE, and FQ debate.

When a link is close to saturated, FQ policy allocates an equal share of available capacity to all non-sparse flows/buckets, preventing a subset of flows from starving the rest via aggressive capacity-seeking behavior. This clearly isn't ideal in every circumstance, but what's an application-oblivious alternative that is? I want someone to explain how a bottleneck is supposed to divide capacity on a saturated link between (say) file downloads and videoconferencing without knowing what either of those flows contain. An oracle might give priority to video and treat the file download as best-effort, but that's a policy choice as much as one in which the user is willing to sacrifice video quality by forcing the application to downgrade encoding bitrate so their download completes more quickly.
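The FQ allocation described above is max-min fairness: sparse flows get their full demand, and the remaining (non-sparse) flows split what is left equally. A small sketch:

```python
# Max-min fair allocation: the FQ behavior described above. Sparse flows
# receive their demand; non-sparse flows split the residual capacity equally.

def fq_shares(demands, capacity):
    """demands: {flow: offered rate}; returns {flow: allocated rate}."""
    alloc = {f: 0.0 for f in demands}
    remaining = dict(demands)
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        satisfied = {f: d for f, d in remaining.items() if d <= share}
        if not satisfied:
            for f in remaining:        # everyone left is capacity-seeking
                alloc[f] = share
            return alloc
        for f, d in satisfied.items():  # sparse flows get their full demand
            alloc[f] = d
            cap -= d
            del remaining[f]
    return alloc

# A 3 Mbps videoconference and two bulk downloads on a 10 Mbps link:
# the video flow gets 3, each download gets 3.5.
```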

Whether we like it or not, there is no knob-free QoS even with huge pipes. This draft must attempt to identify and address relevant policy considerations at bottlenecks.


From: "Bless, Roland (TM)" <>
Organization: Institute of Telematics, Karlsruhe Institute of Technology
Date: Friday, September 13, 2019 at 9:49 AM
To: "" <>, Greg White <>
Subject: Comments on draft-white-tsvwg-nqb-02


I'm quite late to the discussion and eventually took time to have a quick read
over the draft, but I think it is rather vague at times and there is quite some
work required to either get rid of implicit assumptions or to write them down.

First, some general comments:
·         I find the whole term "non-queue building flows" a bit confusing.
What the draft actually means are application-limited, mostly low data rate
flows (sparse flows in RFC8290). According to the draft, capacity seeking
flows seem to always be queue building.
This is not true, because it depends on the particular congestion control
and how the capacity seeking is done. For instance, TCP LoLa [1] flows are
not queue building, but are capacity seeking nevertheless, i.e., they will
saturate the bottleneck link. Strictly speaking, they are building a _limited_
standing queue.
·         The notion "queue building" is also somewhat fuzzy: do you mean flows that build
increasing and unlimited standing queues (i.e., until buffer is completely filled) or
also flows that are able to build limited standing queues? LoLa and BBR would fall
into the latter class, where BBR's standing queue depends on "the BDP".
·         Moreover, even sparse flows can build a queue: assume we have a sparse flow
arriving at a link that is already saturated by a single capacity seeking flow.
So even if the sparse flow is only sending 0.1% of the link capacity,
it will _cause_ a queue to build up (under the assumption that the other flow
does not back off immediately). The only difference is that the large flow has
a lot more data in the queue compared to the sparse flow. Similarly, if
there are a lot of sparse flows saturating a bottleneck, they will also create a growing
queue (until they reduce their sending rate).
·         It's not clear to me how the service rates between the NQB and QB queue
are distributed by the scheduler. From the description in the draft, I have
no clue what kind of scheduler would be a good fit for the desired behavior.
·         Would it be possible to let an NQB flow (or several) saturate the link? I think it would be a perfectly valid scenario to let high-data rate, low delay flows use the NQB queue.
"NQB traffic is not rate limited or rate policed." suggests that.
·         Can a low-latency high data rate flow be in the NQB queue, e.g., a TCP LoLa
flow? An NQB PHB would be nice for them, because they are neither suppressed
nor delay-wise adversely affected by the (loss-based) queue filling flows.
·         I also second Dave Black's comments on looking at the PHB definition
requirements. A PHB definition should define the behavioral characteristics
and more details, see and
While the PHB definition should be more abstract, I find it useful to provide at least
examples of the mechanisms that can be used for an implementation.

Some more text related comments:
1.          "Active Queue Management (AQM) mechanisms (such as PIE [RFC8033],
   DOCSIS-PIE [RFC8034], or CoDel [RFC8289]) can improve the quality of
   experience for latency sensitive applications, but there are
   practical limits to the amount of improvement that can be achieved
   without impacting the throughput of capacity-seeking applications."
I'm not sure what you are referring to here. My experience tells me that
this is also highly related to the congestion control variant. Let's assume
Cubic TCP for now: if only very few flows are present at the AQM bottleneck, the
utilization/throughput may be adversely affected, but once even a small number of flows is
present, a good AQM achieves good utilization due to its desynchronization feature.
[GW] addressed in upcoming version
2.       "These applications do not make use of
   network buffers, but nonetheless can be subjected to packet delay and
   delay variation as a result of sharing a network buffer with those
   that do make use of them."
These applications are actually using the buffer too, but their proportion
w.r.t. queued data is probably small compared to "queue building" flows.
[GW] addressed in upcoming version
3.       "Here the data rate is essentially limited by the Application itself."
That would be good to mention earlier. Is this a requirement for NQB flows?
As I wrote above: there are also elastic NQB flows possible, using all available
capacity. Furthermore, a lot of app-limited NQB flows could also saturate
a link, do you exclude this case?
4.       "but there are also application flows that may be in a gray area in between
(e.g. they are NQB on higher-speed links, but QB on lower-speed links)."
Is this the case I just described: enough sparse flows may saturate lower-speed
links already? or do you think of something else?
[GW] Here I was thinking of a case where an application is CBR at (e.g.) 200kbps.  On a lightly-used 10 Mbps bottleneck, that may be easily considered NQB, but on a 256 kbps bottleneck, it probably isn’t really NQB if there is much other traffic.
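That gray-area example reduces to a ratio. As a sketch (the 10% threshold below is an invented rule of thumb, not anything from the draft):

```python
# Invented rule of thumb for the gray area described above: the same CBR
# flow looks NQB or QB depending on its share of the bottleneck rate.

def is_plausibly_nqb(flow_bps, bottleneck_bps, threshold=0.1):
    """Treat a flow as NQB only if it uses a small fraction of the link."""
    return flow_bps / bottleneck_bps < threshold

# A 200 kbps CBR flow is 2% of a 10 Mbps bottleneck, but ~78% of a
# 256 kbps bottleneck.
```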
5.       "As an answer the last question" => As an answer _to_ the last question
6.       "Thus, a useful property of nodes that support separate queues for NQB and QB
   flows would be..."
It seems that supporting separate queues is a requirement for the PHB. If so,
please state that.
[GW] done
7.       " and for QB flows, the QB queue provides better performance
   (considering latency, loss and throughput) than the NQB queue."
I don't see how they achieve better latency, except in the case that
they experience less packet loss and thus suffer from less retransmissions,
but this may be quite situation dependent...
[GW] fixed.
8.       Should starvation of QB flows be impossible?
9.       "and reclassify those flows/packets to the QB queue."
This should probably be explicit about flow classification, because otherwise
per-microflow reordering may also adversely affect the e2e performance. So
probably there is a requirement to move all packets belonging to a flow
consistently to the QB queue.
[GW] With queue protection in place, one concern is that it could completely eliminate the disincentive for mismarking.  If the worst that would happen to my QB flow by mismarking it as NQB is that it gets exactly the same performance that it would have if I’d marked it correctly, then why not mark everything NQB and let the network sort it out?  At least for the QP algorithm that we’re using in DOCSIS, it is preferred that applications select the appropriate queue to begin with.   So, we would like to preserve at least some disincentive for mismarking.  The potential for packet reordering could be a disincentive.
10.   "This queue SHOULD support a latency-based queue protection mechanism"
as others already pointed out: this seems to be an important requirement and
therefore, you should either state how that is supposed to work or provide a corresponding reference.
[GW] Ok.  I didn’t really tackle this one yet.
11.   "NQB traffic is not rate limited or rate policed." what happens if the load in the
NQB queue increases (e.g., not by queue building flows)? Will QB flows be starved?
I think that some scheduler will determine the rate share between NQB/QB queues...
12.   "The node supporting the NQB PHB makes no guarantees on latency or
data rate for NQB marked flows"
The EF PHB is also not giving actual guarantees other than "low latency, low jitter,
low loss"...
[GW] I removed the comparison to EF
13.   "The choice of
   the 0x2A codepoint (0b101010) for NQB would conveniently allow a node
   to select these two codepoints using a single mask pattern of
Diffserv codepoints are unstructured and should NOT be handled like this.
RFC 2474 requires anyway that the "mapping of codepoints to PHBs
MUST be configurable"
[GW] Deleted.
14.   Section 6: a discussion is missing if intermediate domains/nodes do not
support the proposed NQB PHB: the low latency property may get lost
completely if it gets treated as default PHB etc. RFC 2474 states
"Packets received with an unrecognized codepoint SHOULD be forwarded
   as if they were marked for the Default behavior (see Sec. 4), and
   their codepoints should not be changed."
[GW] This is mentioned in the Security Considerations section.  LMK if you think it needs a more detailed discussion.
15.   "cable broadband services MUST be configured
   to provide a separate queue for NQB traffic that shares the service
   rate shaping configuration with the queue for QB traffic."
I have no clue what that means...
[GW] reworded.

16.   Section 9 contains some "negative examples" or why existing solutions
are not solving the problem. I think there are probably some hidden/implicit
requirements that should be rather extracted from this section and put
into the definition of this PHB. For example, you seem not to want
"that each non-sparse flow gets an equal fraction of link bandwidth"...
or that there is a difference between the "sparse flow" definition of
FQ and your definition of NQB flows (which are probably sparse, but
app-limited or latency-sensitive at least?). What about latency-sensitive
high data rate flows?
[GW] We will work on this.  I’ve added a requirements section which has an initial set, but it will need some improvement.
17.   "it places unnecessary restrictions on the scheduling between the two queues"
so what is the requirement for your own approach then?

That's all for now, regards

[GW] Thanks again for the review!


[1] Mario Hock, Felix Neumeister, Martina Zitterbart, Roland Bless: TCP
LoLa: Congestion Control for Low Latencies and High Throughput,
IEEE LCN 2017,