Re: [tsvwg] draft-ietf-tsvwg-nqb-15.txt - 5.1 Primary Requirements, Forwarding - Departure rate

Sebastian Moeller <moeller0@gmx.de> Tue, 11 April 2023 15:02 UTC

From: Sebastian Moeller <moeller0@gmx.de>
Date: Tue, 11 Apr 2023 17:01:50 +0200
Cc: tsvwg@ietf.org
To: Ruediger.Geib@telekom.de
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/onP7O5dcidHcgwNT7NlztCd7I6E>

Hi Ruediger,

> On Apr 11, 2023, at 12:43, <Ruediger.Geib@telekom.de> <Ruediger.Geib@telekom.de> wrote:
> 
> Hi Sebastian,
> 
> thanks. Your comments sound as if one can produce the behaviour desired for NQB with FQ-CoDel (maybe CAKE, too), namely
> - getting a rather low added delay for spurious traffic

	[SM] I think for fq_codel/cake we do have some published data:
Høiland-Jørgensen, Toke. ‘Analysing the Latency of Sparse Flows in the FQ-CoDel Queue Management Algorithm’. IEEE Communications Letters, 2018. https://doi.org/10.1109/LCOMM.2018.2871457
is relevant in that it allows you to deduce the number of sparse flows that benefit from fq_codel's sparse-flow boost mechanism (that number depends on the number of backlogged bulk flows); but even without that boost, flows can on average expect added delays on the order of one round-robin cycle across the active flows.
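
To make that round-robin intuition concrete, here is a back-of-the-envelope sketch (my own toy model and numbers, not taken from the paper):

# Back-of-the-envelope sketch (my own toy model, not from the paper):
# worst-case added delay for a sparse packet under DRR-style flow
# queueing is roughly one quantum per competing backlogged flow.
def round_robin_delay_ms(n_backlogged, quantum_bytes=1514, link_mbps=100.0):
    bytes_ahead = n_backlogged * quantum_bytes   # one quantum per competitor
    return bytes_ahead * 8 / (link_mbps * 1e6) * 1e3

for n in (1, 8, 32, 128):
    print(n, "bulk flows ->", round(round_robin_delay_ms(n), 2), "ms")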

> - getting a rather low added delay for well spaced flows

	[SM] Here:
Høiland-Jørgensen, Toke, Per Hurtig, and Anna Brunstrom. ‘The Good, the Bad and the WiFi: Modern AQMs in a Residential Setting’. Computer Networks 89 (4 October 2015): 90–106. https://doi.org/10.1016/j.comnet.2015.07.014.

Section "5.2. VoIP test" and Figure 3 seem relevant, albeit only testing a single low-rate well paced flow.


Mone, Prathamesh, Hrishikesh Athalye, Chetas Borse, Deepak D. Kshirsagar, and Sachin D. Patil. ‘Testbed Based Analysis of Linux Queue Disciplines over Internet Traffic Mix’. Simulation Modelling Practice and Theory 120 (1 November 2022): 102601. https://doi.org/10.1016/j.simpat.2022.102601.

This also has an interesting section on VoIP and gaming traffic (both paced but low-rate) under a set of different AQM/scheduler combinations.

What is missing is data showing how a well-paced greedy or fixed-but-high-rate flow would fare under either fq_codel or cake.

That said, Figure 5 of:
Høiland-Jørgensen, Toke, Dave Täht, and Jonathan Morton. ‘Piece of CAKE: A Comprehensive Queue Management Solution for Home Gateways’. In 2018 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), 37–42, 2018. https://doi.org/10.1109/LANMAN.2018.8475045.

shows that a fixed-rate flow (unsure how well-paced this was) exceeding its equitable capacity share will experience self-congestion. (It also shows how cake's built-in DSCP handling can be used to avoid that self-congestion by scheduling the fixed-rate flow in a higher-priority tin, but that requires the fixed-rate flow to fit fully into one of cake's higher-priority tins; this is a case of a priority scheduler built on top of rate-scheduling.)
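
Purely to illustrate what "self-congestion" means here (my own toy numbers, not taken from the CAKE paper): under flow queueing an unresponsive flow only gets its equitable share, and anything it sends above that accumulates in its own queue.

# Toy sketch (not cake's actual algorithm): an unresponsive fixed-rate
# flow under an FQ scheduler gets at most capacity divided by the number
# of backlogged flows; anything above that piles up as self-inflicted queue.
def self_queue_growth_kbit_per_s(flow_mbps, capacity_mbps, n_other_backlogged):
    fair_share_mbps = capacity_mbps / (n_other_backlogged + 1)
    excess_mbps = max(0.0, flow_mbps - fair_share_mbps)
    return excess_mbps * 1e3   # backlog growth in kbit per second

# e.g. a 20 Mbit/s fixed-rate flow on a 50 Mbit/s link with 4 bulk flows:
print(self_queue_growth_kbit_per_s(20, 50, 4), "kbit/s of self-queue growth")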


What is also missing is data for an NQB-compatible scheduler as a comparison...


> 
> If you are aware of published measurement results proving these points, please share a pointer.

	[SM] I tried to collect a few references above, all related, but none a perfect match.


> As to your good advice regarding well-spaced traffic (which doesn't build queues): I'd be interested in results for high-bandwidth test streams which send fairly spaced traffic. That holds for FQ-CoDel as well as for NQB.

	[SM] +1; I would like to see that as well... I think that looking at L4S testing might be relevant here, as TCP Prague defaults to pacing...
Also:
https://www.comsys.rwth-aachen.de/fileadmin/papers/2019/2019-kunze-ccwild-tnsm.pdf
states that some of the tested CDNs use pacing protocols like BBR (but does not confirm that for individual tests), and its Figure 10 shows how codel and especially fq_codel improve fairness between flows... but that is not as clear and well-defined a data set as I would like, so I would say it at best hints that fq_codel/cake do not give a throughput bonus for better pacing.

> 
> Just thinking... a small RTT creates somewhat smaller "spaced bursts" than a large RTT.

	[SM] I would guess a real pacing protocol will space packets in accordance with the sending rate and congestion window, so two flows of equal rate should have equal packet spacing independent of their respective RTTs... the longer-RTT flow will react more sluggishly, but during steady-state conditions that should not matter much, no?
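
As a minimal sketch of that point (an idealisation of my own, ignoring cwnd granularity and the burst allowances of real pacers):

# Idealised pacing sketch (my own simplification): the inter-packet gap
# of a paced flow is set by its sending rate, not by its RTT.
def inter_packet_gap_ms(rate_mbps, packet_bytes=1500):
    return packet_bytes * 8 / (rate_mbps * 1e6) * 1e3

# Two flows at 5 Mbit/s, one with 5 ms RTT and one with 100 ms RTT,
# both space packets ~2.4 ms apart; only their control loops react
# at different speeds.
print(inter_packet_gap_ms(5.0), "ms between packets")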


> That translates to small RTT flows "cause less congestion" at an FQ scheduler / DOCSIS traffic protection.

	[SM] I think what matters is the quality of pacing; e.g. a perfectly paced flow might evade DOCSIS queue protection, while trying to burst the same number of bytes and packets once per RTT as a back-to-back train is more likely to engage the queue protection (as it will introduce noticeable "lumping"). At least that is my understanding.
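
To illustrate the intuition (a toy leaky-bucket model of my own, NOT the actual queuing-score algorithm in draft-briscoe-docsis-q-protection):

# Toy per-flow "queuing score" (my own leaky-bucket stand-in, NOT the
# actual DOCSIS queue protection algorithm): the bucket fills with each
# arriving byte and drains at the flow's nominal rate. Perfect pacing
# keeps it near empty; the same bytes sent as one burst per RTT spike it.
def peak_bucket_bytes(arrival_times_s, packet_bytes, drain_rate_bps):
    level = peak = last_t = 0.0
    for t in arrival_times_s:
        level = max(0.0, level - (t - last_t) * drain_rate_bps / 8)
        level += packet_bytes
        peak, last_t = max(peak, level), t
    return peak

rate_bps, pkt, rtt, n = 2e6, 1250, 0.02, 20         # 2 Mbit/s flow, 20 ms RTT
paced  = [i * pkt * 8 / rate_bps for i in range(n)]    # evenly spaced packets
bursty = [(i // 4) * rtt for i in range(n)]            # 4-packet burst per RTT, same average rate
print("paced peak :", peak_bucket_bytes(paced, pkt, rate_bps), "bytes")
print("bursty peak:", peak_bucket_bytes(bursty, pkt, rate_bps), "bytes")

With these toy numbers the paced flow never holds more than one packet in the bucket, while the once-per-RTT burster peaks at a full RTT's worth of bytes despite sending at the same average rate.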


> That would add another category of testing, low RTT NQB flows competing with higher RTT NQB and QB flows. 

	[SM] +1; I would like to see such data as well, but I am not prepared to actually create that data (not just because I have no NQB-compliant scheduler set-up)

Regards
	Sebastian


> 
> Regards,
> 
> Ruediger
> 
> 
> 
> -----Ursprüngliche Nachricht-----
> Von: Sebastian Moeller <moeller0@gmx.de> 
> Gesendet: Donnerstag, 6. April 2023 16:05
> An: Geib, Rüdiger <Ruediger.Geib@telekom.de>
> Cc: tsvwg@ietf.org
> Betreff: Re: [tsvwg] draft-ietf-tsvwg-nqb-15.txt - 5.1 Primary Requirements, Forwarding - Departure rate
> 
> Hi Ruediger,
> 
> 
>> On Apr 6, 2023, at 11:52, <Ruediger.Geib@telekom.de> <Ruediger.Geib@telekom.de> wrote:
>> 
>> Hi Sebastian,
>> 
>> here we discuss NQB native, and afaik, a 50/50 WRR scheduling, if deployed by WRR (although Greg's latest response wasn't clear about that, it had a "whatever" clause). So L4S isn't the measure.
> 
> 	[SM] Fair enough. I will keep NQB over L4S out of this sub-thread, sorry for my confusion.
> 
> 
>> 
>> Traffic protection, at least the one that has been published, re-schedules packets, rather than flows.
> 
> 	[SM] Ah, I see: the NQB draft offers both the packet-by-packet remapping and the per-flow queueing-score approach from the DOCSIS queue protection design. 
> 
>> Also Greg's NQB draft argues by re-ordering as an incentive to mark correctly. 
> https://datatracker.ietf.org/doc/html/draft-briscoe-docsis-q-protection-06#name-rationale-for-reclassificat
> 
> 	[SM] This is IMHO one of the cases where documenting a bug or mis-design does not make it a feature. If this is truly decided packet by packet, then the packet suffering from this redirection is not in any way guaranteed to be from the flow that caused the NQB queue to overfill, so this is not a real incentive: the gain from mis-marking a violating flow as NQB is harvested by the mis-marker itself, while the harm is spread over all NQB users.
> 	I severely doubt that these kinds of "incentive" discussions are actually helping unless "incentive" can be based on some quantitative way to measure and compare gain and harm.
> 
> 
>> To me, this traffic protection sounds a bit like some kind of FQ 
>> mechanism,
> 
> 	[SM] For the DOCSIS queue protection that seems to be part of the design: approximate FQ without actually doing real FQ... the question I have is whether that is a good trade-off or not.
> 
> 
>> with the option to re-schedule packets rather than drop them (as compared to FQ-Codel).
> 
> 	[SM] I think the re-scheduling is the default option. But again, queue protection is based on a flow's contribution to queue build-up, not on its actual relative flow rate. I think that a well-paced flow will be able to gain considerable capacity above its equitable share without actually causing queue protection to engage.
> 
> 
>> To me, traffic protection + NQB would sum up to a hierarchical scheduler then (FQ followed by an NQB/QB scheduler - I didn't worry about detailed specs of the NQB-FQ/QB traffic part, but if you operate FQ by x ms <= target <= 5 ms for the NQB portion, using the NQB scheduler rate or a slightly smaller one as the resource to share for these flows, and move away offending packets into the QB scheduler, you seem to have largely made it - as I see in the text referenced above, QB "has a latency target of 10ms", if you want, that's 2 (schedulers) * target 5 ms each). The NQB scheduler following the NQB FQ traffic protection wouldn't see an additional queue then, as the arrivals are shaped. I'm not an FQ expert however and may have misperceptions.
> 
> 	[SM] I think with a real FQ scheduler, the kind of flows that qualify for NQB treatment actually do not need any special casing at all... sparse flows staying below their capacity share will be serviced with at most one round-robin cycle of delay (fq_codel/cake will actually give such flows a mild boost, as this tends to be generally helpful, e.g. for newly establishing flows), and flows exceeding their share will experience their own self-caused delay without harming other flows (much). In a sense a proper FQ scheduler does away with the need for an NQB PHB for low-rate unresponsive traffic. 
> 	But I think this is not the route that the NQB draft chose, and the proposed queue protection mechanism does not aim to equalize departure rates but to minimize the queueing score. However, these are not fully independent: any flow sending at a higher rate is likely to have a higher queueing score than sparser flows. My complaint in the context of NQB is still that if the first thing to mention for NQB eligibility is "rate <= 1 Mbps", then this rate criterion should actually be policed, and that is something the proposed queue protection does not do, nor does the alternative of putting packets that encounter a full NQB queue into the QB queue.
> 
> Regards
> 	Sebastian
> 
> 
>> 
>> Regards, Ruediger
>> 
>> PS: I agree, single flows are unlikely to congest single access lines and RTT is important, while bandwidth isn't any longer. In many urban regions.
>> 
>> -----Ursprüngliche Nachricht-----
>> Von: Sebastian Moeller <moeller0@gmx.de>
>> Gesendet: Donnerstag, 6. April 2023 10:15
>> An: Geib, Rüdiger <Ruediger.Geib@telekom.de>
>> Cc: Greg White <g.white@cablelabs.com>; tsvwg@ietf.org
>> Betreff: Re: [tsvwg] draft-ietf-tsvwg-nqb-15.txt - 5.1 Primary 
>> Requirements, Forwarding - Departure rate
>> 
>> Hi Ruediger,
>> 
>>> On Apr 6, 2023, at 09:30, <Ruediger.Geib@telekom.de> <Ruediger.Geib@telekom.de> wrote:
>>> 
>>> Hi Sebastian,
>>> 
>>> I haven't been involved in the measurements characterizing L4S. I'd hope (but I'm not sure) that the L4S queue coupling results in roughly equal congestion in both queues, once one is starting to be congested.
>> 
>> 	[SM] This only holds for responsive traffic in the LL-queue. The queue is designed not to drop packets, only to mark or, if queue protection is enabled, to move a full "flow" into the non-LL queue. For non-responsive traffic (the type NQB is intended for) there is no push-back, as far as I understand. Pete's measurements already demonstrated a number of cases where all that preserves some non-LL traffic throughput is the ~10% default weight for the non-LL queue. So no, the queue coupling is not robust and reliable (a fact the L4S designers implicitly conceded by putting in the weighted-scheduler back-stop, I would argue). Now, it is possible that alternative L4S AQMs could avoid this problem, but neither DualQ nor the related AQM set-up described in the DOCSIS specifications are such alternative implementations, and both seem to be easily exploitable.
>> 
>> 
>>> Also, if unresponsive traffic is present in the L queue (I'd wonder how a single first C flow would be able to kick in, while standard conformant L4S flows already saturate a link).
>> 
>> 	[SM] Well, DualQ specifies a weighted scheduler LL/non-LL 90/10 as back-stop. I assume that would trigger in such a situation. Also, if the non-responsive flow's rate exceeds 90% of the capacity, the optional queue protection might kick in, but with faster and faster access links it becomes IMHO more likely that a traffic source will end up being application-limited and hence might well stay below the critical 90% mark (note that these 90% are for the aggregate LL-queue traffic, so if there are multiple flows in the LL-queue they need to share that capacity).
>> 
>> 
>>> Otherwise, the C queue will be pretty much starved. That said - I'm not an L4S expert and rely upon the measurements executed during evaluation.
>> 
>> 	[SM] These measurements were IMHO performed (see https://github.com/heistp/l4s-tests), and confirmed that under reasonable (but non-optimal) conditions the L4S DualQ falls back to the configured weights of the back-stop scheduler. What was controversial is whether that condition would technically qualify as "starvation" and whether it was sufficiently likely to matter; however, none of the L4S proponents was willing to offer a stringent definition of what "starvation" actually means, let alone "sufficiently likely". At least that is my recollection*. 
>> 	In L4S's defense, it does offer methods/heuristics through which responsive traffic of similar RTTs will end up sharing roughly equitably between the two queues, albeit not robustly or reliably...
>> 
>> Regards
>> 	Sebastian
>> 
>> *) I was and am on record that L4S should never have been granted RFC status, so I am not unbiased on this topic.
>> 
>> 
>>> 
>>> Regards,
>>> 
>>> Ruediger
>>> 
>>> -----Ursprüngliche Nachricht-----
>>> Von: Sebastian Moeller <moeller0@gmx.de>
>>> Gesendet: Donnerstag, 6. April 2023 08:41
>>> An: Geib, Rüdiger <Ruediger.Geib@telekom.de>
>>> Cc: Greg White <g.white@cablelabs.com>; tsvwg@ietf.org
>>> Betreff: Re: [tsvwg] draft-ietf-tsvwg-nqb-15.txt - 5.1 Primary 
>>> Requirements, Forwarding - Departure rate
>>> 
>>> Hi Ruediger,
>>> 
>>>> On Apr 6, 2023, at 08:24, <Ruediger.Geib@telekom.de> <Ruediger.Geib@telekom.de> wrote:
>>>> 
>>>> Hi Greg,
>>>> 
>>>> draft NQB is on standards track. Please specify the weights to be set for NQB and QB scheduler by requirements:
>>>> - some generic text, not based on any implementation (which I think is roughly there). I'd appreciate text stating that QB/NQB share the same overall resources, are configured by the same priority, weight (or minimum departure rate) and also are equipped by the same priority and weight to access spare capacity. There's some text in the draft, but it is not very precise.
>>>> 
>>>> - to that a clarification related to your response: you write [GW] 
>>>> It can be set to whatever ratio the network operator chooses (e.g. 50/50) in the case that L4S support is disabled.
>>>> Please clarify "whatever" in the sense of standard track NQB: which range of NQB/QB WRR weight configurations is compliant with this draft standard? My perception was exactly 50/50, but "whatever" seems to allow for arbitrary configurations.
>>>> - I'd strongly suggest that you provide an example traceable for a fair share of readers, which from my perception is good practice of other RFC authors. You reference the DOCSIS L4S implementation, a WRR scheduler with 50/50 weights (and the same for access to spare capacity) seem good to me.
>>> 
>>> 	[SM] The DOCSIS scheduler defaults to 90% LL-queue (used for NQB) and 10% classic; this is supposed to be OK as L4S flows will scale down quickly under cross-pressure from the classic queue. However, unresponsive NQB traffic is not expected to scale back and hence will be able to gain ~90% of capacity if sufficiently well-paced to avoid the optional queue-protection functionality (which does not enforce maximal per-flow rates, but simply looks at the queueing caused by a flow, which can be minimized by high-precision pacing).
>>> 	I think the NQB draft should mention this explicitly in the "3.3. Relationship to L4S" section*. The point here is that just moving NQB traffic onto an L4S AQM relies fully on the sender of NQB traffic voluntarily following the recommended rate limits (which, as we discussed separately, are hard to get right: how should an endpoint using rate-limited traffic ever know what the "true" link capacity actually is to scale its permissible rate to? But that is a different kettle of fish). 
>>> 
>>> 
>>> Regards
>>> 	Sebastian
>>> 
>>> 
>>> *) With that I mean that a standards-compliant L4S AQM will not ensure that NQB and QB traffic share a link equitably, and that the imbalance can be as high as the scheduler weights for a DualQ AQM imply. Sure, there are other potential implementations of an L4S AQM, but e.g. for the one described in the DOCSIS specifications, with its default 100*230/256 = 89.84375% LL weight, it seems worth mentioning that NQB traffic can gain up to 90% of capacity, OR, if I happen to be incorrect in my interpretation, WHY the NQB class will not be able to gain an unfair advantage.
>>> 
>>> 
>>>> 
>>>> Regards,
>>>> 
>>>> Ruediger
>>>> 
>>>> 
>>>> -----Ursprüngliche Nachricht-----
>>>> Von: Greg White <g.white@cablelabs.com>
>>>> Gesendet: Donnerstag, 6. April 2023 01:57
>>>> An: Geib, Rüdiger <Ruediger.Geib@telekom.de>
>>>> Cc: tsvwg@ietf.org
>>>> Betreff: Re: [tsvwg] draft-ietf-tsvwg-nqb-15.txt - 5.1 Primary 
>>>> Requirements, Forwarding - Departure rate
>>>> 
>>>> Hi Ruediger,
>>>> See my responses below [GW].
>>>> -Greg
>>>> 
>>>> On 4/5/23, 6:12 AM, "Ruediger.Geib@telekom.de <mailto:Ruediger.Geib@telekom.de>" <Ruediger.Geib@telekom.de <mailto:Ruediger.Geib@telekom.de>> wrote:
>>>> Hi Greg,
>>>> 
>>>> Section "7.7.3.2 Inter-SF Scheduler" of CM-SP-MULPIv3.1-I24-221019 contains the following statement:
>>>> 
>>>> coupling .... the Classic Service Flow to the Low Latency Service Flow, it relies on the Inter-SF Scheduler to balance this. Weighted Round Robin (WRR) is a simple scheduler that achieves the desired results, and is recommended in [draft-ietf-tsvwg-aqm-dualq-coupled].
>>>> 
>>>> The above text covers L4S, not straight NQB.
>>>> - Please explain how this WRR scheduler is configured to support straight NQB/QB without L4S being configured.
>>>> [GW] The WRR in DOCSIS has a configurable weight.  It can be set to whatever ratio the network operator chooses (e.g. 50/50) in the case that L4S support is disabled.
>>>> 
>>>> - If there's no WRR scheduler, then please explain how an implementer ensures that NQB and QB fairly share the same resource, while each operate with separate queues. I'm especially interested in the part "no configurable service rate/weight etc." for the NQB queue. An example is sufficient, maybe one including a WRR scheduler, if applicable.
>>>> [GW] Aside from WRR mentioned above, perhaps a TS-FIFO could be used?  I have to admit that I've not thought about other scheduler options extensively.
>>>> 
>>>> - if WRR can't be used to realise separate NQB/QB queues for an implementation, please let me know, why this isn't possible.
>>>> 
>>>> Regards,
>>>> Ruediger
>>>> 
>>>> 
>>>> -----Ursprüngliche Nachricht-----
>>>> Von: Greg White <g.white@cablelabs.com 
>>>> <mailto:g.white@cablelabs.com>>
>>>> Gesendet: Freitag, 24. März 2023 21:24
>>>> An: Geib, Rüdiger <Ruediger.Geib@telekom.de 
>>>> <mailto:Ruediger.Geib@telekom.de>>
>>>> Cc: tsvwg@ietf.org <mailto:tsvwg@ietf.org>
>>>> Betreff: Re: [tsvwg] draft-ietf-tsvwg-nqb-15.txt - 5.1 Primary 
>>>> Requirements, Forwarding
>>>> 
>>>> 
>>>> Hi Ruediger,
>>>> 
>>>> 
>>>> FYI I've added an issue in the GitHub tracker to ensure this gets resolved.
>>>> https://github.com/gwhiteCL/NQBdraft/issues/32
>>>> <https://github.com/gwhiteCL/NQBdraft/issues/32>
>>>> 
>>>> 
>>>> 
>>>> 
>>>> I'll try to answer your question.
>>>> 
>>>> 
>>>> [RFC2598]: The EF PHB is defined as a forwarding treatment for a 
>>>> particular diffserv aggregate where the departure rate of the 
>>>> aggregate's packets from any diffserv node must equal or exceed a 
>>>> configurable rate. The EF traffic SHOULD receive this rate 
>>>> independent of the intensity of any other traffic attempting to transit the node.
>>>> It SHOULD average at least the configured rate when measured over 
>>>> any time interval equal to or longer than the time it takes to send 
>>>> an output link MTU sized packet at the configured rate. (Behavior at 
>>>> time scales shorter than a packet time at the configured rate is 
>>>> deliberately not specified.) The configured minimum rate MUST be 
>>>> settable by a network administrator (using whatever mechanism the 
>>>> node supports for non-volatile configuration).
>>>> 
>>>> 
>>>> [NQB]: ... the NQB PHB provides a shallow-buffered, best-effort service as a complement to a Default deep-buffered best-effort service. ... A node supporting the NQB PHB makes no guarantees on latency or data rate for NQB-marked flows, but instead aims to provide an upper-bound to queuing delay for as many such marked flows as it can and shed load when needed.
>>>> 
>>>> 
>>>> So, NQB PHB does not have a configurable departure rate, nor does it guarantee that NQB traffic will receive any particular departure rate, regardless of the presence of other traffic of any intensity.
>>>> 
>>>> 
>>>> <snip>
>>> 
>> 
>