[tsvwg] Re: WGLC for A NQB PHB for Differentiated Services (draft-ietf-tsvwg-nqb) to end 27th May 2024.
Sebastian Moeller <moeller0@gmx.de> Wed, 22 May 2024 07:39 UTC
From: Sebastian Moeller <moeller0@gmx.de>
In-Reply-To: <BEUP281MB3757D7993BDE5D40208F4C769CEB2@BEUP281MB3757.DEUP281.PROD.OUTLOOK.COM>
Date: Wed, 22 May 2024 09:38:55 +0200
Message-Id: <5BA3DC7B-0D22-4968-AE02-F8FF31818594@gmx.de>
References: <LV2PR01MB7622AB269736527A859CA8259FE92@LV2PR01MB7622.prod.exchangelabs.com> <EF9FC229-2CA8-4809-B554-9F67172F1A25@gmx.de> <3503D2F6-8EB1-4236-B1C8-0FA9459CF4AD@comcast.com> <B26CE814-2C54-4179-A8D5-5FF6D435C68D@gmx.de> <CB0CFF3D-B611-4F2B-9B93-0A77178BF839@CableLabs.com> <AE664F42-FBFB-467D-8D4A-09E05A7E21FF@gmx.de> <BEUP281MB3757D7993BDE5D40208F4C769CEB2@BEUP281MB3757.DEUP281.PROD.OUTLOOK.COM>
To: Ruediger.Geib@telekom.de
CC: Sebastian Moeller <moeller0=40gmx.de@dmarc.ietf.org>, Greg White <g.white=40CableLabs.com@dmarc.ietf.org>, "Livingood, Jason" <jason_livingood=40comcast.com@dmarc.ietf.org>, tsvwg@ietf.org
List-Id: Transport Area Working Group <tsvwg.ietf.org>
List-Archive: <https://mailarchive.ietf.org/arch/browse/tsvwg>
Hi Ruediger,

> On 22. May 2024, at 09:29, <Ruediger.Geib@telekom.de> wrote:
>
> Folks,
>
> NQB/L4S-Queue: I think 125 packets of 200 Byte per second, which are forwarded as received, cause an additional buffer delay of 0.008 ms for default traffic waiting to be forwarded on a 200 Mbit/s link (each single packet adds this amount). If there is a collision at all.
>
> If a 200 kbit/s flow and a Cubic flow share the same queue, the process is harder to describe. TCP creates bursty traffic. A single 1500 Byte packet creates 0.06 ms of queuing delay. Assuming the backbone link and DC interfaces are of Gbit/s bandwidth, any single 200 Byte packet may face up to one ms of additional delay if queued behind a burst of Cubic traffic. In the absence of congestion, there is likely no serious impact of the gaming flow on the Cubic flow if the gaming flow is priority queued. If both are queued in the default queue, the gaming traffic may start to experience additional queuing delay once the Cubic traffic starts to build a standing queue.

[SM3] Sure, that in a nutshell is the rationale for using priority scheduling, and I do not disagree with it. I disagree with the notion that L4S/NQB would magically do away with the consequences of giving priority to packets. It does not, if you look closely enough; hence I consider arguments like Greg's "That isn't true." incorrect and not helpful.

> The 200 kbit/s flow will not be reduced to a fair share of 199.8 kbit/s if priority queued and competing with default traffic.

[SM3] As I wrote initially:

>> And I question that; priority scheduling does not magically create more capacity or transmit opportunities. For every packet you treat to less delay, some other packet(s) will be treated to more delay. We can argue that the amount of this additional delay is insignificant, but that is a different claim than 'it does not exist', as you seem to be making above.
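For concreteness, the serialization-delay figures quoted above can be reproduced with a few lines (a back-of-the-envelope sketch only; the packet sizes and link rates are the ones used in the thread):

```python
# Serialization delay: the time a packet occupies the wire, which is also
# the extra wait it imposes on whatever is queued behind it.
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    return packet_bytes * 8 / link_bps * 1e3

# A 200-byte game packet on a 200 Mbit/s link: ~0.008 ms each.
d_game = serialization_delay_ms(200, 200e6)

# A 1500-byte Cubic packet on the same link: ~0.06 ms.
d_cubic = serialization_delay_ms(1500, 200e6)

# On a 1 Gbit/s backbone link a single 1500-byte packet costs only 0.012 ms,
# so reaching the "up to one ms" figure takes a burst of roughly 84 packets.
d_burst = serialization_delay_ms(1500 * 84, 1e9)
```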
>> I am fine with arguing that the effect is small enough not to bother about; I am not fine with claiming it does not exist... The former is decent pragmatism, the latter is in conflict with observable reality.

And I add that I did offer this in my post as a bridge to come to some level of agreement.

Regards
	Sebastian

> Regards,
>
> Ruediger
>
> -----Original Message-----
> From: Sebastian Moeller <moeller0=40gmx.de@dmarc.ietf.org>
> Sent: Wednesday, 22 May 2024 08:17
> To: Greg White <g.white=40CableLabs.com@dmarc.ietf.org>
> Cc: Sebastian Moeller <moeller0=40gmx.de@dmarc.ietf.org>; Livingood, Jason <jason_livingood=40comcast.com@dmarc.ietf.org>; tsvwg@ietf.org
> Subject: [tsvwg] Re: WGLC for A NQB PHB for Differentiated Services (draft-ietf-tsvwg-nqb) to end 27th May 2024.
>
> Hi Greg,
>
> see [SM2] below.
>
>> On 21. May 2024, at 23:35, Greg White <g.white=40CableLabs.com@dmarc.ietf.org> wrote:
>>
>> Sebastian,
>> See below [GW].
>>
>> On 5/21/24, 10:01 AM, "Sebastian Moeller" <moeller0=40gmx.de@dmarc.ietf.org> wrote:
>> Hi Jason,
>>> On 21. May 2024, at 15:57, Livingood, Jason <jason_livingood=40comcast.com@dmarc.ietf.org> wrote:
>>>
>>> On 5/21/24, 02:34, "Sebastian Moeller" wrote:
>>>
>>>> [SM] Could you be a bit more explicit: what exactly did you test, what was positive, and did you also look for side effects?
>>>
>>> While you directed this question to Michael, we have tested this as well in our field trials. We have, for example, tested single queue with AQM and looked at p99 latency under load and jitter as well as median & max bitrate for different application flows - then turned on NQB LL and observed the same for the classic and LL queues. What we have found generally is that the bitrates, working latency, and jitter for classic traffic with single queue AQM and dual queue LL are the same; classic queue applications do not experience degradation.
>>
>> [SM] Puzzled: how can the bitrate in the C-queue not change when there is also quantitative traffic in the L-queue?
>>
>> [GW] This shouldn't be puzzling. See below.
>
> [SM2] Well, it is not puzzling, but simply not true as you state below. For a link of capacity X with (for the sake of a simpler argument) the C-queue capacity at Y=X, if we add a flow of rate Z to the higher-priority L-queue, the capacity available to the C-queue will be X-Z... So if you cannot measure an effect of L-traffic on the C-queue bitrate, respectfully, you need to reconsider your measurement approach.
> You can IMHO rightfully argue that for low-rate L-traffic this is not a significant change in C-traffic capacity, but claiming there is NO change is simply incorrect. You could also try to argue that the same would happen if that low-rate flow were added to the C-queue, but that is also not quite correct, because the same traffic added to the C-queue would be subject to different congestion signalling (e.g. a non-zero probability of being dropped in a single-queue bottleneck). I would guess this effect to be really small, but really small != 0.
> In short, please be precise in what you claim.
>
>>> At the same time, the NQB flow experiences working latency that approaches idle latency, with consistently low latency.
>>
>> [SM] Well, we know that priority scheduling does work and will give sufficiently sparse traffic in the priority class shorter delays (and more potential throughput)... so that is not all that surprising.
>> Surprising is that, according to your argument, that improvement in service did not come at a cost for other traffic. And I question that: priority scheduling does not magically create more capacity or transmit opportunities; for every packet you treat to less delay, some other packet(s) will be treated to more delay.
>> We can argue that the amount of this additional delay is insignificant, but that is a different claim than 'it does not exist', as you seem to be making above.
>>
>> [GW] That isn't true.
>
> [SM2] A work-conserving scheduler between two queues essentially plays a zero-sum game in transmit-slot acquisition: any time an L-queue packet is sent, the C-queue is transiently stalled and hence the proto-sojourn time of the next C-queue packet is increased. Unless the head packet of the C-queue is dropped with 100% probability, that stall will result in an increase in real sojourn time, and hence delay, in the C-queue*. And as far as I understand it, the mechanism by which the C-queue increases the drop probability to adjust the C-queue bitrate (to make room for the L-traffic) is driven in good part by increases in sojourn time...
> I do think your claim "That isn't true" is itself incorrect. Again, you can argue that this effect is insignificant (as I say above) or small enough to be hard to measure, but claiming, as you apparently do, that this effect does not exist requires more than a generalised argument over long-term averages.
>
> *) And for e.g. TCP such an immediate drop can have drastic consequences for the achievable throughput... but I digress.
>
>> A simple thought experiment
>
> [SM2] Not sure a simple thought experiment is helpful, as you have simplified away the mechanistic issues that make your claim incorrect...
>
>> case is a 100 kbps gaming flow (200B @ 60Hz) sharing a 200 Mbps bottleneck with a Cubic TCP flow. Whether these two flows share a single FIFO or are separately queued in an NQB instance, the gaming flow consumes 100 kbps and the Cubic flow can consume the rest (199.9 Mbps).
>
> [SM2] Yes, but here is the kicker: in case both are in the C-queue, the C-queue bitrate is 200 Mbps; in case they are in different queues, the C-queue bitrate is 199.9 Mbps, and since 200 > 199.9, the claim that the C-queue bitrate stays the same is false.
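The zero-sum point above can be illustrated with a toy model (purely illustrative: a two-queue, work-conserving, strict-priority scheduler on the thread's 200 Mbit/s link, not any particular DOCSIS or DualQ implementation):

```python
# Toy strict-priority scheduler: whenever the link is free it serves the
# L-queue first, then the C-queue. We compare the departure time of one
# C-queue packet with and without a competing L-queue packet.

LINK_BPS = 200e6

def tx_time(nbytes: int) -> float:
    """Serialization time of one packet, in seconds."""
    return nbytes * 8 / LINK_BPS

def drain(l_queue, c_queue):
    """Return {packet_id: departure_time}, serving L strictly before C."""
    t, departures = 0.0, {}
    while l_queue or c_queue:
        pkt_id, nbytes = (l_queue or c_queue).pop(0)  # L-queue wins
        t += tx_time(nbytes)
        departures[pkt_id] = t
    return departures

# One 1500-byte C-packet with the link to itself:
alone = drain([], [("c1", 1500)])
# The same C-packet behind a 200-byte L-packet that holds priority:
shared = drain([("l1", 200)], [("c1", 1500)])

# The C-packet's extra sojourn time is exactly the L-packet's transmit
# slot (8 microseconds here): the delay did not vanish, it moved queues.
added_delay = shared["c1"] - alone["c1"]
```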
> (And the amount of significance depends on how much traffic we assume flows in the L-queue.)
>
>> There is no difference in capacity available to the Cubic flow.
>
> [SM2] Mmmh, instead of admitting that the C-queue bitrate gets reduced by the L-queue bitrate, you now look at an individual competing flow depending on whether that flow is in the L- or the C-queue. But that is not what Jason claimed, and also not what I found puzzling.
>
>> In the single FIFO case, with DOCSIS-PIE AQM, the Cubic flow might induce a packet loss every (say) 1.4 million packets, which will cause a single packet drop in the game flow once every 48 hours, so I suppose the single FIFO does reduce the gaming flow rate by 800 bpd (bits per day), and the Cubic flow could take advantage of that. The delay of the Cubic flow is controlled by the AQM, which has a target of 10 ms in both cases, so there is no difference in Cubic delay caused by using NQB.
>
> [SM2] Well, now you are arguing by significance, which IMHO is a valid argument, but not the one Jason made or that I reacted to.
>
>> [GW] Even if the NQB-marked flow was non-compliant and was running at 180 Mbps, the Cubic TCP flow would have essentially 20 Mbps either way. In this case the NQB flow would see 0.007% packet loss in the single FIFO option, so that does give the Cubic flow an additional 13 kbps; so I guess it is 20.013 Mbps vs 20 Mbps. So, yes, the bitrate is slightly different. As above, the delay of the Cubic flow is controlled by the AQM, so it remains the same whether single queue or dual queue.
>
> [SM2] See above; sure, over longer periods this will mostly average out, but mechanistically it is still true that in scheduling there is no free lunch, and for every packet treated to lower delay other packets will need to be treated to longer delay (or the ultimate delay: a drop), unless we allow for potentially infinite-sized queues...
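A quick back-of-the-envelope check of the rates traded in this exchange (a sketch only; the packet size, rates, and loss figure are the ones quoted in the thread):

```python
# Gaming flow from the example: 200-byte packets at 60 Hz.
game_bps = 200 * 8 * 60              # 96_000 bit/s, the "~100 kbps" quoted
bottleneck_bps = 200e6

# Whichever queue serves the game flow, its packets occupy transmit slots,
# so the Cubic flow is left with the remainder (~199.9 Mbit/s):
cubic_bps = bottleneck_bps - game_bps

# The non-compliant case: 0.007% loss on a 180 Mbit/s NQB-marked flow in
# the single-FIFO option hands the Cubic flow back roughly 12.6 kbit/s,
# the "13 kbps" figure above:
extra_bps = 180e6 * 0.007 / 100
```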
>
> I am still puzzled why any of this is apparently worth discussing. Jason's claim, as made, is demonstrably incorrect, so why are you arguing the claim is correct?
>
>> [GW] If the NQB-marked flow was even more non-compliant, and was running at greater than 180 Mbps, the NQB implementation could potentially provide some protection to the Cubic flow, depending on whether and what kind of Traffic Protection function is provided.
>
> [SM] Well, potentially we could do a lot of things... but in the experiments that Jason performed, I still do not believe that there was no effect of L-traffic on C-capacity, so either:
> a) the measurement system was not precise enough to measure this*, or
> b) the argument was misstated and did not actually mean C-queue bitrate.
>
> *) Which also tells me that likely nobody tried measuring what happens if non-compliant traffic enters the L-queue, which in turn makes the whole report of "works as designed" a lot weaker. There was little question in my mind whether a priority scheduler would work**, but a big question how this L4S+NQB experiment would fare with realistic internet traffic, which at times can be adversarial.
>
> Regards
> Sebastian
>
> **) After all, this is the time-tested workhorse of more traditional QoS systems.
>
>> Regards
>> Sebastian
>>
>>> JL