Re: [tsvwg] I-D Action: draft-ietf-tsvwg-aqm-dualq-coupled-24.txt

Sebastian Moeller <moeller0@gmx.de> Mon, 11 July 2022 14:21 UTC

To: Bob Briscoe <ietf@bobbriscoe.net>
Cc: tsvwg@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/TcOV6Ae4k4pvD1iv90zODs2FCFM>

Hi Bob,

See [SM2] inline below.

> On Jul 11, 2022, at 16:00, Bob Briscoe <ietf@bobbriscoe.net> wrote:
> 
> Sebastian,
> 
> As your email proceeded you encountered answers to your earlier questions, so I'll just jump to three questions at the end - see inline tagged [BB3]...
> 
> 
> On 10/07/2022 22:59, Sebastian Moeller wrote:
>> Hi Bob,
>> 
>> 
>>> On Jul 10, 2022, at 22:39, Bob Briscoe <ietf@bobbriscoe.net> wrote:
>>> 
>>> Sebastian,
>>> 
>>> On 09/07/2022 10:22, Sebastian Moeller wrote:
>>>> Hi Bob,
>>>> 
>>>>> On Jul 9, 2022, at 02:13, Bob Briscoe <ietf@bobbriscoe.net> wrote:
>>>>> 
>>>>> Sebastian,
>>>>> 
>>>>> On 08/07/2022 14:44, Sebastian Moeller wrote:
>>>>>> Dear list,
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Jul 7, 2022, at 23:22, Bob Briscoe <ietf@bobbriscoe.net> wrote:
>>>>>>> 
>>>>>>> tsvwg list,
>>>>>>> 
>>>>>>> We have just posted new revisions of the 3 main L4S drafts to address the Area Director's review comments (for which many thanks to Martin Duke).
>>>>>>> This email concerns the Coupled DualQ AQM.
>>>>>>> 
>>>>>>> A link to a diff since the previous rev can be found in the announcement below.
>>>>>>> Here is a brief summary in English:
>>>>>>> 
>>>>>>> ==Changes to Normative text==
>>>>>>> 
>>>>>>> None
>>>>>>> 
>>>>>>> ==Technical/Editorial Changes==
>>>>>>> 
>>>>>>> Appendix A. Example DualQ Coupled PI2 Algorithm
>>>>>>> A.1. Pass #1: Core Concepts
>>>>>>> An alternative to using sojourn time to measure queuing delay had been added a few revisions ago, to address concerns about marking a mix of bursty and smooth traffic.
>>>>>>> However, it had only been added in a note about something else, and it hadn't been referred to consistently wherever sojourn marking was discussed.
>>>>>>> This has now been corrected, and it has been moved to an earlier note.
>>>>>>> 
>>>>>>> ==Editorial Changes==
>>>>>>> 
>>>>>>> Minor clarifications and corrections and updated references.
>>>>>>> 
>>>>>> However, sojourn time is slow to detect bursts. For instance, if
>>>>>> a burst arrives at an empty queue, the sojourn time only fully
>>>>>> measures the burst's delay when its last packet is dequeued, even
>>>>>> though the queue has known the size of the burst since its last
>>>>>> packet was enqueued - so it could have signalled congestion
>>>>>> earlier. To remedy this, each head packet can be marked when it
>>>>>> is dequeued based on the expected delay of the tail packet behind
>>>>>> it, as explained below, rather than based on the head packet's
>>>>>> own delay due to the packets in front of it. [Heist21] identifies
>>>>>> a specific scenario where bursty traffic significantly hits
>>>>>> utilization of the L queue. If this effect proves to be more
>>>>>> widely applicable, it is believed that using the delay behind the
>>>>>> head would improve performance.
>>>>>> 
>>>>>> 
>>>>>> Is "it is believed that using the delay behind the
>>>>>> head would improve performance." really the level of confirmation appropriate for making recommendations? Would it not be more convincing to cite the paper/data that is the foundation of that belief?
>>>>>> 
>>>>>> Would it not be more prudent to actually demonstrate (empirically, with data) whether the proposed alternative method is:
>>>>>> a) an improvement under conditions of bursty traffic*
>>>>>> b) actually implementable (for variable-rate links, predicting the service time for the existing queue is a non-trivial problem)
>>>>>> 
>>>>>> instead of basing a recommendation on "belief"?
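
	[SM2] To make the discussion concrete: here is how I read the proposed "mark the head packet based on the expected delay of the tail packet behind it", as a minimal sketch of my understanding only. The names (backlog_bytes, serv_time_per_byte_ns, THRESHOLD_NS) are mine, not the draft's pseudocode, and the per-byte service-time estimate is simply assumed to exist:

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch: at dequeue, compute the expected delay of the packet at
     * the tail of the queue (the whole current backlog times the
     * estimated serialization time per byte) and mark the head packet
     * if that expected delay exceeds the threshold. This signals a
     * burst as soon as it has been enqueued, instead of waiting until
     * its last packet's sojourn time is measured at dequeue. */
    #define THRESHOLD_NS 1000000ULL  /* example 1 ms step threshold */

    static bool mark_head(uint64_t backlog_bytes,
                          uint64_t serv_time_per_byte_ns)
    {
            uint64_t expected_tail_delay_ns =
                    backlog_bytes * serv_time_per_byte_ns;
            return expected_tail_delay_ns > THRESHOLD_NS;
    }

My implementability concern (point b above) is precisely the serv_time_per_byte_ns estimate on variable-rate links.
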
>>>>> [BB] The word 'believed' was there in the previous draft.
>>>> 	[SM] Well, I did not argue about when this text was added, so I take this as a fact orthogonal to my comment.
>>>> 
>>>>> It only appears in the diff 'cos this block of text was moved. It's currently written as 'believed' because experiments are in progress.
>>>> 	[SM] Is that really relevant? Problematic text gets corrected even in already published RFCs, with an erratum if severe enough.
>>>> 
>>>>> Regarding 'implementable', it wasn't hard in software.
>>>> 	[SM] The point is that it is easy to have software implement something, but it is hard to do something that actually works robustly and reliably with variable-rate links. I have recently started looking at LTE and Starlink throughput and latency traces, and the variability is quite high, IMHO too high for any simple heuristic to capture well.
>>>> 
>>>> 
>>>>> Yes, probably a non-trivial problem for some platforms.
>>>> 	[SM] Glad we agree.
>>>> 
>>>>> Predicting the service time can be based on recent packets - it will always be wrong just after the rate changes.
>>>> 	[SM] Yes and no; there are obviously multiple layers of rate here: on the one hand the total (variable) capacity of the shared segment (in LTE and Starlink the air is obviously shared), and on the other hand a user's capacity share of that segment capacity; both can and do vary partly independently.
>>>> 
>>>> 
>>>>> However, I don't believe it's so important to get the rate 100% correct as it is to use the backlog behind the head packet
>>>> 	[SM] Well, with Starlink scaling over orders of magnitude within a few seconds (apparently with hard step changes potentially every 4 and mostly every 15 seconds), the backlog estimate will really be rather far away from 100% correct.
>>> [BB] In our implementation, we use a per-packet EWMA with gain = 1/2 to average serialization time per byte. So it picks up a rate change after 2 or 3 packets. Obviously we can decrease the gain if we find it doesn't need to be so responsive. But the point is, when I say 'recent', I mean in the last few hundreds of microseconds - not the last few seconds.
>> 	[SM] "after 2 or 3 packets" and "last few hundreds of microseconds" seem to imply rates well in excess of 12 Mbps, which for Starlink upload does not seem to be a given. My point however was that the rate changes pretty steeply and at relatively fixed intervals, and I assume it is going to be around those steps where the fancy predicted sojourn time of the most recently enqueued packet will have problems. But hey, I am open to being convinced by data: if this really reduces latency under load without too large a throughput cost, and will not create undesired latency spikes if the rate step goes in the "wrong" direction, I am all for adding this recommendation to an RFC. My argument is merely that recommending this approach in an internet draft before actually having confirmation that it improves things seems premature to me, nothing more, nothing less.
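
	[SM2] Also, to make sure we mean the same estimator: I read "per-packet EWMA with gain = 1/2" as roughly the following, a reconstruction under my own assumptions (including that the dequeue path can time each packet's transmission), not a claim about your actual code:

    #include <stdint.h>

    /* Per-packet EWMA of serialization time per byte with gain 1/2:
     * avg <- (avg + sample) / 2. After a step change in link rate the
     * old rate's weight is 1/4 after 2 packets and 1/8 after 3, which
     * is presumably what "picks up a rate change after 2 or 3 packets"
     * refers to. */
    struct serv_time_est {
            uint64_t avg_ns_per_byte;
    };

    static void update_on_tx(struct serv_time_est *est,
                             uint64_t pkt_bytes, uint64_t tx_time_ns)
    {
            if (pkt_bytes == 0)
                    return;
            uint64_t sample = tx_time_ns / pkt_bytes;  /* ns per byte */
            est->avg_ns_per_byte = (est->avg_ns_per_byte + sample) / 2;
    }

If that reading is right, then around a hard rate step the first 2 or 3 packets are still judged against a stale estimate, which is exactly where I expect Starlink-style step changes to bite.
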
>> 
>>>>> (rather than how long the queue in front of the head packet took to service = sojourn time). What's less certain is how important any of this is - i.e. what is the effect of real burstiness in real scenarios, rather than contrived ones. All ongoing.
>>>> 	[SM] Side note about bursty traffic: I just talked to a gamer playing Riot's Valorant (which uses 128 server ticks per second and sends around 15-20 Mbps from the server to the player for a match with 10-20 players). This will introduce quite massive bursts by necessity: the client can only properly update its game state once it receives all packets for a given tick, so pacing these out is undesirable; they will come as a pretty tight burst because that is what the application requires.
>>> [BB2] That's 13 packets every tick (assuming 1500 B packets and the worst-case end of your rate range). If the server sends each of those bursts at its own line rate without pacing, say 10 Gb/s, it's pretty pointless anyway, because few people have an access link that fast.
>> 	[SM] Why is that pointless? If you have urgent information to send, you are best off sending it as fast as you can, even if you know that each receiver will have to accept additional path-related delays. That said, I have no insight into what the server-side sending strategy actually is, just the reported observation that packets are received back to back. It is theoretically possible that the server maintains a throughput estimate for each client and paces out the packets accordingly, but I have my reservations whether a server operator would implement something that complicated just to avoid bursts.
>> 
>> 
>>> So the user's access link will pace out and therefore delay the last packets of each burst anyway.
>> 	[SM] Yeah, if running full-throttle is considered some form of pacing ;) But an L4S AQM on the ISP's side will have to deal with the packet rates as sent by the server; the pacing only happens after the AQM has done its work. An ingress-side AQM in the player's home network, however, would operate on "paced" data.
>> 
>> 
>>> The server might as well pace each burst to add a few hundred μs to the end of each burst.
>> 	[SM] Sure, nothing is stopping the server, and I personally have no way of capturing the packet stream before it hits the end-user's leaf network.
>> 
>> 
>>> For users with high-end capacity that will still be tiny compared to other delays, but it will avoid excessive queueing delay to other traffic for users with lower capacity.
>> 	[SM] True, but keep in mind that the game company is not necessarily paid for or judged by its capability to play nice with "other traffic". What I want to say is: they might lack the incentive here to work around L4S's designed-in burst intolerance...
> 
> [BB3] I do want to challenge the idea that burst-handling and measurement is an L4S-specific question. It's nothing to do with the way congestion is signalled.

	[SM2] It appears to be a feature of your reference implementation; I am happy to accept that it is not structurally related to the L4S design if you present data from an alternative L4S implementation that does not share this issue.


> I believe, for instance, that SCE used sojourn-based shallow threshold marking as well. The example/recommended target queue delay number that each team used might have been different, but the concern here is marking triggered by a shallow step threshold and sojourn-based measurement, both of which SCE used as well.

	[SM2] As I see it, SCE is not on the table/agenda of the TSVWG or the IETF, so let's focus on discussing the drafts and techniques that are close to being ratified, please.


>> 
>>> For instance, say Valorant currently reckons its highest-end users have 1Gb/s downstream. That will delay the 13th packet by about 156 μs.
>>> If it paces each burst at 250 Mb/s, that will delay the 13th packet by 624 μs; an extra 468 μs.
>>> 
>>> Nonetheless, for a lower-end user with say 100 Mb/s, server pacing won't delay the burst any more at all, 'cos their own access link will delay the 13th packet by 1.56 ms whether sent paced at 250Mb/s or at 10Gb/s line rate. However, server pacing at 250 Mb/s will limit the queue into their 100 Mb/s link to 8 packets or 960 μs.
>>> 
>>> This all depends what the game server designers think the current capacity range of their users is - just examples above.
>> 	[SM] Yes, but my point in bringing this up was that L4S seems targeted/advertised as a great fit for online reaction-time-gated gaming, yet it might actually require some potentially intrusive redesign. Have you (as team L4S) actually had discussions with game companies about their requirements? (I understand that game streaming/remote control has different properties and the video frames might be better suited to L4S's current properties, but even there the desire is to transmit all information required for the next frame ASAP.)
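
	[SM2] As a sanity check on the numbers in this sub-thread (all assuming 1500 B packets and taking the 20 Mb/s top of my reported rate range):

    #include <stdio.h>

    int main(void)
    {
            /* 20 Mb/s at 128 ticks/s is ~156 kbit per tick, i.e. ~13
             * full-size packets, matching the figure above. */
            double burst_bits = 13 * 1500 * 8;  /* 156,000 bits */

            printf("bits per tick:        %.0f\n", 20e6 / 128);
            printf("13 pkts at 1 Gb/s:    %.0f us\n", burst_bits / 1e9   * 1e6);
            printf("13 pkts at 250 Mb/s:  %.0f us\n", burst_bits / 250e6 * 1e6);
            printf("13 pkts at 100 Mb/s:  %.0f us\n", burst_bits / 100e6 * 1e6);
            return 0;
    }

    /* Prints 156250, then 156 us, 624 us and 1560 us, matching the
     * 156 us, 624 us and 1.56 ms quoted above. */

So we agree on the arithmetic; my question is about incentives and application requirements, not the numbers.
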
> 
> [BB3] Of course, yes. But it's a large market, not just a few hyper-giants, so it is less easy to capture the whole range of views.

	[SM2] That answer really is neither here nor there, but that is the fault of an imprecisely asked question, so that is on me.

>>>> I just note that "gaming" is apparently one of the use cases L4S is supposed to be marketed for, so I might be missing something here. If you have data showing L4S "killing it" with games like Valorant, please do not hesitate to share. Or if you consider that a "contrived scenario", please elaborate on why.
>>> [BB2] I have no experience of Valorant. But, from what I know about gaming, I would say 13-packet sync messages is uncommonly high.
>> 	[SM] I was surprised as well about the high average rates, but these are measured, so I assume they are correct. I would guess however that higher tick rates and larger group sizes of active players might be a trend well into the future (partly enabled by increasing access rates), so what is uncommonly high today might be the new normal in a few years (then again, Valorant might be an outlier). The question is: is L4S, with its designed-in burst intolerance, really a good fit for an application class that desires sending groups of packets essentially synchronously (the client probably requires all packets to display the correct/complete world state)?
> 
> [BB3] I think your Valorant example is the high end of a range that is moving upwards as access rates move upwards.

	[SM2] Oh sure, let's not get hung up on the specific example; I think we agree that the trend is upward across the field, though.

> Low latency /applications/ have inherent burst intolerance. L4S follows that - it is not the cause.

	[SM2] That is an opinion I do not share; some latency-sensitive use cases have a limited set of information they need to transmit ASAP, and so are not a good fit for L4S's pacing requirement. For example, real-time control loops do not differentiate between queueing delay in the network and queueing delay in the sender's queues; they care about total end-to-end latency, so L4S's focus on shrinking network queueing delay only helps if it does not require a similar amount of queueing at the sender (or delayed data production by the application). I guess if an application is fine with sending a single packet every XX milliseconds it might fit well, but the more packets there are in a bunch, the less well it fits L4S's specific flavor of "low latency".


Regards
	Sebastian


> 
> 
> Bob
> 
> 
>> 
>> Regards
>> 	Sebastian
>> 
>> 
>>> But I'm not going to say 'unnecessarily' high, 'cos one has to assume they have their reasons.
>>> 
>>> 
>>> Bob
>>> 
>>>> 
>>>> 
>>>>>> Regards
>>>>>> 	Sebastian
>>>>>> 
>>>>>> 
>>>>>> *) By "punishing" earlier packets for later bursts, the probability is high of hitting the wrong flow with a marking, while the offender might squeeze through unmarked (so a higher-than-deserved probability of avoiding a mark). This might in effect not actually make bursty transmission a bad strategy, as its cost might be borne by others. With plain sojourn time, the tail packet of a burst will see the full latency cost of the previous packets in the burst.
>>>>> [BB] That's the whole point of using the backlog at dequeue. I've explained it all in the write-up I'm doing. I'm not going to rewrite it all here - needs example diagrams etc. Can I pls ask you to wait for it to be published (update to tech report on arxiv - work in progress).
>>>> 	[SM] So the traditional, and IMHO correct, way forward is first to do the research, get it peer-reviewed, and then derive a recommendation for a standard or RFC. I would ask politely to remove these sections from the draft until the "belief" is supported by data and analysis that have survived a discussion with experts. A non-peer-reviewed "paper" on arxiv.org, however, is barely better than a "belief"*; sorry if that reads as impolite.
>>>> 
>>>> Regards
>>>> 	Sebastian
>>>> 
>>>> 
>>>> *) I would argue that it is even worse: arXiv does not seem to offer ways to actually add commentary to a paper, so there is neither peer review nor a way to comment upon the content, all the while implying the reputation of a published paper.
>>>> 
>>>> 
>>>>> Bob
>>>>> 
>>>>>>> Bob
>>>>>>> for the co-authors.
>>>>>>> 
>>>>>>> 
>>>>>>> On 07/07/2022 20:59, internet-drafts@ietf.org wrote:
>>>>>>>> A New Internet-Draft is available from the on-line Internet-Drafts directories.
>>>>>>>> This draft is a work item of the Transport Area Working Group WG of the IETF.
>>>>>>>> 
>>>>>>>> Title : DualQ Coupled AQMs for Low Latency, Low Loss and Scalable Throughput (L4S)
>>>>>>>> Authors : Koen De Schepper
>>>>>>>> Bob Briscoe
>>>>>>>> Greg White
>>>>>>>> Filename : draft-ietf-tsvwg-aqm-dualq-coupled-24.txt
>>>>>>>> Pages : 65
>>>>>>>> Date : 2022-07-07
>>>>>>>> 
>>>>>>>> Abstract:
>>>>>>>> This specification defines a framework for coupling the Active Queue
>>>>>>>> Management (AQM) algorithms in two queues intended for flows with
>>>>>>>> different responses to congestion. This provides a way for the
>>>>>>>> Internet to transition from the scaling problems of standard TCP
>>>>>>>> Reno-friendly ('Classic') congestion controls to the family of
>>>>>>>> 'Scalable' congestion controls. These are designed for consistently
>>>>>>>> very Low queuing Latency, very Low congestion Loss and Scaling of
>>>>>>>> per-flow throughput (L4S) by using Explicit Congestion Notification
>>>>>>>> (ECN) in a modified way. Until the Coupled DualQ, these L4S senders
>>>>>>>> could only be deployed where a clean-slate environment could be
>>>>>>>> arranged, such as in private data centres. The coupling acts like a
>>>>>>>> semi-permeable membrane: isolating the sub-millisecond average
>>>>>>>> queuing delay and zero congestion loss of L4S from Classic latency
>>>>>>>> and loss; but pooling the capacity between any combination of
>>>>>>>> Scalable and Classic flows with roughly equivalent throughput per
>>>>>>>> flow. The DualQ achieves this indirectly, without having to inspect
>>>>>>>> transport layer flow identifiers and without compromising the
>>>>>>>> performance of the Classic traffic, relative to a single queue. The
>>>>>>>> DualQ design has low complexity and requires no configuration for the
>>>>>>>> public Internet.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> The IETF datatracker status page for this draft is:
>>>>>>>> 
>>>>>>>> https://datatracker.ietf.org/doc/draft-ietf-tsvwg-aqm-dualq-coupled/
>>>>>>>> 
>>>>>>>> 
>>>>>>>> There is also an HTML version available at:
>>>>>>>> 
>>>>>>>> https://www.ietf.org/archive/id/draft-ietf-tsvwg-aqm-dualq-coupled-24.html
>>>>>>>> 
>>>>>>>> 
>>>>>>>> A diff from the previous version is available at:
>>>>>>>> 
>>>>>>>> https://www.ietf.org/rfcdiff?url2=draft-ietf-tsvwg-aqm-dualq-coupled-24
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Internet-Drafts are also available by rsync at rsync.ietf.org::internet-drafts
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>> -- 
>>>>>>> ________________________________________________________________
>>>>>>> Bob Briscoe
>>>>>>> http://bobbriscoe.net/
>>>>> -- 
>>>>> ________________________________________________________________
>>>>> Bob Briscoe http://bobbriscoe.net/
>>> -- 
>>> ________________________________________________________________
>>> Bob Briscoe http://bobbriscoe.net/
> 
> -- 
> ________________________________________________________________
> Bob Briscoe http://bobbriscoe.net/