Re: [tsvwg] new tests of L4S RTT fairness and intra-flow latency

Pete Heist <pete@heistp.net> Tue, 17 November 2020 19:14 UTC

Message-ID: <0e41b2f1603ae77fce0ef461504eaf6c2fda92db.camel@heistp.net>
From: Pete Heist <pete@heistp.net>
To: "De Schepper, Koen (Nokia - BE/Antwerp)" <koen.de_schepper@nokia-bell-labs.com>
Cc: tsvwg IETF list <tsvwg@ietf.org>
Date: Tue, 17 Nov 2020 20:13:55 +0100
In-Reply-To: <AM8PR07MB74762C5309642C1B28F488ADB9E30@AM8PR07MB7476.eurprd07.prod.outlook.com>
References: <d2edb18dd3cbfecce0f70b3345e4ea70a0be57b9.camel@heistp.net> <AF7A15D8-28DA-4DE5-96AB-BE9B6A468C3D@gmx.de> <MN2PR19MB4045BC0869B633F8EB11155583E40@MN2PR19MB4045.namprd19.prod.outlook.com> <c321dc8ee45d2ecf72080f2900522835cf3753f8.camel@heistp.net> <AM8PR07MB74762C5309642C1B28F488ADB9E30@AM8PR07MB7476.eurprd07.prod.outlook.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/H3hOCISNXskFXtcKzt6O8GzQtMk>
Subject: Re: [tsvwg] new tests of L4S RTT fairness and intra-flow latency

Hi Koen, some responses inline...

On Mon, 2020-11-16 at 20:43 +0000, De Schepper, Koen (Nokia -
BE/Antwerp) wrote:
> Hi Pete,
> 
> Thanks for the tests. They do confirm our test results from many
> years ago now, and are the reason why related Prague requirements are
> in the L4S-ID draft. I guess they are a good introduction for people
> recently joining the discussions and a reminder for the rest that it
> is important to take those conditions into account when designing and
> configuring both the end-host and the network mechanisms. 
> 
> May I also remind everyone that the Linux Prague code is research-driven, and
> focused only on de-risking the bigger safety challenges. Most of the
> improvements (except low latency) were not prioritized up to now, as
> we are sure some are low hanging fruit and others with the right
> incentives will want to spend time on the harder ones. Most of the
> commercial parties with an expected interest in and benefit from an
> L4S CC have been reasonably quiet, I hope because they are confident that
> the network drafts and the mandatory Prague requirements and proposed
> improvements are feasible. I guess some are also less keen on sharing
> their efforts with competition and others might be waiting for real
> deployments before getting really into action. If there are concerns
> from their side, of course we would like to hear those too. The only
> concern I heard up to now is that all this takes too long before they
> can start the experiment. They want asap real world deployment...
> 
> > > 1) Starting with an easier one, can the reported intolerance to
> > > bursts
> > > (https://github.com/heistp/l4s-tests/#burst-intolerance) be fixed
> > > by starting marking in the L queue later? Doing so may also
> > > improve RTT fairness for Prague flows.
> 
> Indeed, the solution is to make sure that your L4S threshold
> configuration is in line with your network capabilities. If you know
> that certain network elements aggregate and burst packets in 4ms
> chunks, then it makes no sense at all to mark packets in that node at
> 1ms, nor in subsequent network nodes, with similar bottleneck
> capacities. We tried to address this in Section 6.3 of the L4S
> Architecture draft:
> https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-08#page-20. Is
> this sufficient to cover this issue?

It's good that this was covered in the draft. One source of burstiness
unrelated to the link layer that could also be addressed is cross-flow
traffic. I'm not an expert on this topic, but I noticed a mention of it
in the description of Figure 3 in this paper:

http://buffer-workshop.stanford.edu/papers/paper34.pdf

It would be good to know how much burstiness this source contributes.
To me, however, it seems that only a minority of Internet applications
would need marking near 1ms, and in the cases where that is needed, the
application could use DSCP to supply a hint that it's willing to accept
a potentially large throughput reduction for such a low delay.
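To put rough numbers on the burst-intolerance point above, here's a toy model (entirely my own illustration, not from the draft or our tests): if a node aggregates traffic into bursts, packets queue up waiting for the burst boundary, and a step marking threshold shorter than the burst interval will mark most of them even when the sender paces perfectly.

```python
# Toy model of a step marking threshold vs. link-layer burstiness.
# A node collects packets over one burst interval and releases them
# together; arrivals are uniform over the interval, so a packet's
# sojourn time is its wait until the release. All values illustrative.

def marked_fraction(burst_ms: float, threshold_ms: float) -> float:
    """Fraction of packets whose sojourn time exceeds a step marking
    threshold under the uniform-arrival, release-at-boundary model."""
    if threshold_ms >= burst_ms:
        return 0.0
    return (burst_ms - threshold_ms) / burst_ms

if __name__ == "__main__":
    # A 4 ms aggregation interval against various thresholds:
    for threshold in (1.0, 2.0, 4.0):
        frac = marked_fraction(4.0, threshold)
        print(f"threshold {threshold} ms -> {frac:.0%} of packets marked")
```

Under this (admittedly crude) model, a 1ms threshold marks three
quarters of the packets in a 4ms burst, which is the motivation for
aligning the threshold with the network's burst characteristics.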

> > > 2) For the reported network bias, can this be fixed in the
> > > DualPI2 and Prague implementations? One chart which we didn't
> > > reference in our writeup illustrates the consistent bias for
> > > Prague vs CUBIC.
> 
> An important design goal of L4S is that it also supports network
> nodes that don't identify individual flows. To compensate for an
> individual flow's RTT, you need to know it first. To solve this in
> the network anyway, you would need a per-packet indication of the RTT
> (cf. past proposals like RCP and XCP). If we had space to put this
> info in the header, we wouldn't be discussing this single available
> ECN bit, right? Therefore the end-system is the right place to handle
> this, which is why the RTT-independence requirement is important. The
> current implementation has a non-Prague-compliant default (not RTT-
> independent), which we clearly need to fix. It would be good to agree
> on a reasonable compromise of a default f(RTT) correction function.
> We also implemented a gradual correction over time, so a new (short)
> flow is not delayed and stays responsive; only if it lasts for a
> while does it gradually converge to the RTT-independent rate. This
> convergence could be in the range of the convergence time of
> typical-RTT flows.
> 
> Feel free to experiment with the available options.

Although I won't have time myself for more experiments with this, our
test source is published, so someone else could add more parameter
variations to the batch.

Importantly, the behavior using the default parameters should work well
across a wide range of RTTs without tuning, which I think others
pointed out as well.

> > > * To not consider a 16:1 throughput imbalance between L4S and
> > > non-L4S flows a safety problem. We've seen 11:1 to 18:1 in our
> > > recent tests.
> > > Although we're not sure of the worst case, what we're seeing now
> > > is outside of my comfort zone, personally.
> 
> Actually, I'm also not happy with this strong RTT dependency. You
> will probably see similar unfairness if you run Reno under these
> conditions on a CoDel AQM with a 5ms target. Cubic compensates
> performance for the longer-RTT flows; Prague (with the right
> settings) compensates, as mandated, for the smaller RTTs. Clearly a
> production congestion control is likely to do both, or fall back to
> Classic Cubic behavior whenever the RTT is too long for interactive
> services.
> 
> So the exact out-of-our-comfort-zone situation exists today already,
> but there are solutions available, just waiting to be integrated.

Actually, the 16:1 referred to in this case wasn't from RTT dependence,
but from L4S and non-L4S flows sharing the same RFC3168 queue. That
ratio can be significantly higher in some cases (for example with PIE
using the default parameters plus ECN enabled), but it's around this
ratio for Codel queues with the default parameters.

I haven't seen much discussion around it yet (admitting that I'm a
little behind!), but we did post a newer result showing how tunneled
traffic causes L4S and non-L4S flows to wind up in the same queue:

https://github.com/heistp/l4s-tests/#unsafety-in-tunnels-through-rfc3168-bottlenecks

> > > 3) The issues that arise due to the redefinition of CE may be the
> > > hardest. This includes the domination of L4S over non-L4S flows
> > > in the same RFC3168 queue (see the issue with tunnels reported
> > > above for a common path to that), 
> > > and the intra-flow latency spikes for L4S flows (   
> > > https://github.com/heistp/l4s-tests/#intra-flow-latency-spikes).
> 
> This is fortunately showing the flaws of CoDel, and on top of that
> the failure of the CoDel implementation to respond with drop when ECN
> flows get into overload. Even more problematic: for Cubic on CoDel
> (deployed today), your results show it is also bad (more than 2
> seconds of latency for longer than 10 seconds when throughput reduces
> from 50Mbps to 1Mbps on an 80ms base RTT path). When compared under
> the same conditions with Cubic on PI2, you can definitely see how it
> can be improved using an appropriate AQM (a very short peak of, I
> guess, less than 100ms extra latency). Hopefully these CoDel
> instances can be fixed and can support the ECN threshold for L4S
> instead (and maybe use PI2 instead of CoDel for Classic). On the
> other hand, these extremes would also be easy cases for both Classic
> and L4S CCs to detect and fall back on a delay-based CC when
> latencies are that bad (falling back to Classic ECN or Classic drop
> would not even help to avoid the CoDel problem). But I doubt the
> Linux Cubic developers would be happy to extend Cubic to avoid CoDel
> flaws??

Yes, we're aware that the transient behavior of Codel sometimes leaves
something to be desired. The magnitude of the spikes is related to the
starting bandwidth, ending bandwidth, and RTT. Although CUBIC's spikes
can also be long in extreme cases, for a more common 50Mbps to 10Mbps
reduction the response is much more reasonable for CUBIC (maybe a
400ms spike that decays sharply and ends in 1.5s):

http://sce.dnsmgr.net/results/l4s-2020-11-11T120000-final/l4s-s2-codel-rate-step/l4s-s2-codel-rate-step-ns-clean-cubic-fq_codel-50Mbit-10mbit-80ms_tcp_delivery_with_rtt.svg

than for Prague (400ms spike with much slower decay, lasting 8s):

http://sce.dnsmgr.net/results/l4s-2020-11-11T120000-final/l4s-s2-codel-rate-step/l4s-s2-codel-rate-step-ns-clean-prague-fq_codel-50Mbit-10mbit-80ms_tcp_delivery_with_rtt.svg

The source of this, mainly for other readers, is the difference
between the multiplicative decrease (MD) signal of today's RFC3168 CE
and the additive decrease (AD) signal expected by L4S transports after
having redefined the CE codepoint, with no reliable way of
differentiating between the two. The present situation is that there
are many deployed fq_codel instances, all signaling by default with
RFC3168 ECN
(https://github.com/heistp/l4s-tests/#deployments-of-fq_codel), and
it's likely to remain that way for a long time. So, we do have to
consider these cases.
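To make the difference concrete for other readers, here is a minimal sketch (my own illustration, not the actual Prague or Linux code) of the two responses to a CE mark: a Classic RFC3168 sender applies one multiplicative decrease per marked RTT, while a DCTCP/Prague-style sender scales its reduction by an EWMA of the marking fraction, which is a far weaker response when a Classic AQM marks at the Classic rate.

```python
# Sketch of Classic (MD) vs. scalable (DCTCP/Prague-style) responses
# to a CE mark. Constants are illustrative, not from any implementation.

def classic_ce_response(cwnd: float) -> float:
    """RFC3168-style response: one multiplicative decrease per RTT
    containing a CE mark (Reno halves; CUBIC uses roughly 0.7)."""
    return cwnd * 0.5

def scalable_ce_response(cwnd: float, alpha: float) -> float:
    """DCTCP/Prague-style response: reduce in proportion to alpha,
    the EWMA of the fraction of marked packets (0 <= alpha <= 1)."""
    return cwnd * (1.0 - alpha / 2.0)

if __name__ == "__main__":
    cwnd = 100.0
    print("classic response:", classic_ce_response(cwnd))
    # With a low steady marking fraction, the scalable sender barely
    # backs off on a Classic-style mark -- hence the shared-queue bias:
    print("scalable, alpha=0.1:", scalable_ce_response(cwnd, 0.1))
```

Only when alpha reaches 1 (every packet marked) does the scalable
response match the Classic halving, which is why a shared RFC3168
queue ends up so lopsided.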

As an aside, overload protection was added to the COBALT AQM using the
BLUE algorithm in part because of this aspect of Codel. It engages at
around 400ms of queue delay, which may have improved the above cases.
However, it was designed for unresponsive flows and extreme
situations, rather than as ordinary congestion control for
well-behaved flows.
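For readers unfamiliar with BLUE, a rough sketch of the mechanism as I understand it (the constants below are placeholders for illustration, not COBALT's actual values): a drop probability ramps up while the queue stays over the delay threshold, and ramps back down when the queue drains.

```python
# Minimal BLUE-style overload controller. Increment/decrement values
# are placeholders for illustration; COBALT's actual constants differ.

class Blue:
    def __init__(self, increment: float = 0.0025,
                 decrement: float = 0.00025) -> None:
        self.p = 0.0  # dropping probability
        self.increment = increment
        self.decrement = decrement

    def on_sustained_overload(self) -> None:
        """Queue delay has stayed above the threshold: push harder."""
        self.p = min(1.0, self.p + self.increment)

    def on_queue_idle(self) -> None:
        """Queue drained: relax the pressure."""
        self.p = max(0.0, self.p - self.decrement)

if __name__ == "__main__":
    blue = Blue()
    for _ in range(10):
        blue.on_sustained_overload()
    print(f"drop probability after sustained overload: {blue.p:.4f}")
```

The point is that BLUE reacts to persistent overload rather than to
per-packet sojourn times, which is why it serves as a backstop for
unresponsive flows rather than as a primary congestion signal.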

> > > Although I won't be able to support a WGLC until I see tested
> > > code that addresses the issues, I do want to support the WG along
> > > its present path to a conclusion...
> 
> Try switching on the RTT-independent settings of Prague, and I assume
> it is a matter of just coding a fallback to Cubic if your detected
> RTT is too big. So as far as I'm concerned, we do have running code! 
> 
> I'll try to get the defaults right. I assume f(RTT)=max(25ms, RTT)
> and a gradual EWMA convergence reaching the 25ms behavior after about
> 10 seconds would be good defaults? If people have suggestions or
> preferences for other values, then let me know. If people have time
> and want to implement the Cubic fallback (or even better longer RTT-
> independence), then let me know too.

I noticed that there were some recent changes, so we'll see what
there's time for.
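For concreteness, my reading of those proposed defaults as a sketch (the per-round gain is an arbitrary illustrative constant, not from the Prague code): the flow starts out pacing against its real RTT, and an EWMA pulls the virtual RTT toward f(RTT) = max(25ms, RTT), so short flows stay responsive while long-lived short-RTT flows converge to the 25ms behavior.

```python
# Sketch of a gradual f(RTT) = max(25 ms, RTT) correction via EWMA.
# The gain per congestion round is illustrative, not a Prague value.

TARGET_MS = 25.0

def virtual_rtt_trace(real_rtt_ms: float, duration_s: float,
                      gain: float = 0.05) -> list:
    """Virtual RTT per congestion round: starts at the real RTT and
    converges via EWMA toward max(TARGET_MS, real_rtt_ms)."""
    target = max(TARGET_MS, real_rtt_ms)
    virtual = real_rtt_ms
    trace = [virtual]
    elapsed_s = 0.0
    while elapsed_s < duration_s:
        virtual += gain * (target - virtual)  # one EWMA step per RTT
        trace.append(virtual)
        elapsed_s += real_rtt_ms / 1000.0
    return trace

if __name__ == "__main__":
    trace = virtual_rtt_trace(real_rtt_ms=5.0, duration_s=10.0)
    print(f"virtual RTT: starts at {trace[0]:.1f} ms, "
          f"ends at {trace[-1]:.1f} ms after 10 s")
```

Note that a long-RTT flow (say 80ms) is unaffected, since f(RTT) is
already its own RTT.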

Regards,
Pete :)

> 
> Regards,
> Koen.
> 
> 
> -----Original Message-----
> From: tsvwg <tsvwg-bounces@ietf.org> On Behalf Of Pete Heist
> Sent: Sunday, November 15, 2020 12:42 PM
> To: Black, David <David.Black@dell.com>; Sebastian Moeller <    
> moeller0@gmx.de>
> Cc: tsvwg IETF list <tsvwg@ietf.org>
> Subject: Re: [tsvwg] new tests of L4S RTT fairness and intra-flow
> latency
> 
> Hi guys, and thanks for the replies.
> 
> Firstly, there was a new test result added for tunnels yesterday, but
> I'll put that in a separate thread so it isn't buried.
> 
> Beyond that, I think it would be productive to try to figure out what
> might be done to solve the reported issues, limiting that exploration
> for now to the confines of the L4S design. We'd like to support
> forward progress while facing the issues head on if possible. I'll
> take a shot here as tersely as I can:
> 
> 1) Starting with an easier one, can the reported intolerance to
> bursts
> (https://github.com/heistp/l4s-tests/#burst-intolerance) be fixed by
> starting marking in the L queue later? Doing so may also improve RTT
> fairness for Prague flows.
> 
> 2) For the reported network bias, can this be fixed in the DualPI2
> and Prague implementations? One chart which we didn't reference in
> our writeup illustrates the consistent bias for Prague vs CUBIC.
> Essentially, Prague wins until there is an RTT imbalance of
> 20ms/80ms:
>  
>     
> http://sce.dnsmgr.net/results/l4s-2020-11-11T120000-final/s1-charts/l4s_network_bias.svg
> 
> 3) The issues that arise due to the redefinition of CE may be the
> hardest. This includes the domination of L4S over non-L4S flows in
> the same RFC3168 queue (see the issue with tunnels reported above for
> a common path to that), and the intra-flow latency spikes for L4S
> flows
> (https://github.com/heistp/l4s-tests/#intra-flow-latency-spikes). 
> 
> AFAIK, the proposed solutions in the L4S architecture are:
> 
> * RFC3168 bottleneck detection in the endpoints. However, we went
> through a round of testing earlier this year that showed accurate
> bottleneck detection is likely to be difficult with jitter and cross-
> flow traffic. It's disabled at present in the reference
> implementation.
> 
> * L4S operational guidance for network operators. So far I haven't
> had time to review this, but I may have some feedback at least in the
> area of testing. I suspect the effectiveness of guidance will be
> influenced by human factors.
> 
> * To not consider a 16:1 throughput imbalance between L4S and non-L4S
> flows a safety problem. We've seen 11:1 to 18:1 in our recent tests.
> Although we're not sure of the worst case, what we're seeing now is
> outside of my comfort zone, personally.
> 
> 
> For me, the class of problems in #3 are my area of greatest concern,
> as there are many more RFC3168 bottlenecks deployed today than just a
> few years ago (    
> https://github.com/heistp/l4s-tests/#deployments-of-fq_codel). #2 is
> also important to me, as I trust we're not trying to introduce a so-
> called "fast lane" for certain traffic.
> 
> Although I won't be able to support a WGLC until I see tested code
> that addresses the issues, I do want to support the WG along its
> present path to a conclusion...
> 
> Pete
> 
> On Sun, 2020-11-15 at 01:26 +0000, Black, David wrote:
> > Hi Sebastian,
> > 
> > Some comments as an individual, not a WG chair.
> > 
> > First, I think Pete has pretty clearly established that TCP Prague
> > is 
> > research, and hence (IMHO) TCP Prague ought to be headed for ICCRG
> > as 
> > its primary forum rather than TSVWG.
> > 
> > To date, the 'Prague L4S Requirements' (Appendix A of draft-ietf-
> > tsvwg-ecn-l4s-id) have been strongly associated with TCP Prague.
> > That 
> > association ought to be teased apart so that the resulting L4S 
> > scalable congestion control requirements provide a reasonable
> > design 
> > space that can include a number of other congestion control designs
> > - 
> > in addition to what's been discussed, e.g., SCReAM, it would be
> > useful 
> > to better understand what it would take for implementations of 
> > protocols such as DCTCP and BBR (e.g., for QUIC) to meet those 
> > requirements.
> > 
> > My overall take on the requirements is that in 20/20 hindsight,
> > some 
> > of them were overly optimistic, and hence need to be backed
> > off/toned 
> > down/broadened to encompass what is reasonable in "running code"
> > well 
> > beyond TCP Prague. That sort of collision between interesting ideas
> > and network realities is not an unheard-of scenario in IETF, so I 
> > hesitate to view the need for changes to these requirements as 
> > evidence that the original ideas were inherently defective, as I've
> > seen far more dramatic changes, e.g., some number of years ago, the
> > first design of iSCSI login was elegant ... and resulted in 
> > implementations that did not interoperate, resulting in a complete 
> > redesign.
> > 
> > Thanks, --David
> > 
> > > -----Original Message-----
> > > From: tsvwg <tsvwg-bounces@ietf.org> On Behalf Of Sebastian
> > > Moeller
> > > Sent: Thursday, November 12, 2020 6:11 PM
> > > To: Pete Heist
> > > Cc: tsvwg IETF list
> > > Subject: Re: [tsvwg] new tests of L4S RTT fairness and intra-flow
> > > latency
> > > 
> > > 
> > > [EXTERNAL EMAIL]
> > > 
> > > Hi Pete,
> > > 
> > > great data. IMHO, especially the fact that short-RTT Prague will
> > > severely outcompete long-RTT Prague, far more than a traditional
> > > dumb FIFO will, strongly supports the following hypotheses I have
> > > been posting before:
> > > 
> > > a) Dualpi2/TCP Prague are no way ready for deployment, but are 
> > > basically still in the toy stage
> > > 
> > > b) all that L4S will effectively do, be it by intent or simply
> > > by mis-design, is build a "fast lane" for short-RTT, low-hop-count
> > > traffic. Given all the hype (still!) in the L4S internet drafts
> > > about how this is the future of the internet... Also, a fast lane
> > > that requires active cooperation from the leaf ISP (if they keep
> > > their bottlenecks at FIFO, I see no compelling reason for TCP
> > > Prague at all), which will require SLAs between CDNs and ISPs.
> > > 
> > > c) too little, too late. Really only modest gains in RTT
> > > (compared 
> > > to best of class AQMs like fq_codel or cake) and noticeable 
> > > regressions in sharing behavior both between different CCs and
> > > flows 
> > > of different RTTs (and that compared to dumb queue "management"
> > > with 
> > > a simple FIFO).
> > > 
> > > 
> > > 
> > > Also quite sobering that it AGAIN was not team L4S bringing real
> > > data to the table to support their claims and promises. Then
> > > again, looking at the RTT fairness for Prague versus Prague under
> > > DualQ, I can understand why team L4S stayed mum, and rather
> > > argued for allowing unfettered self-mutilation of L4S-compliant
> > > transport protocols in regard to RTT fairness:
> > > 
> > > "So there is no need  to mandate that an L4S implementer does no 
> > > harm to themselves, which window-based congestion controls tend
> > > to 
> > > do at higher RTT.
> > > Of course, this doesn't preclude implementers reducing or 
> > > eliminating RTT bias for larger than typical RTTs, but it removes
> > > any requirement to do so."
> > > 
> > > After years of advertising increased RTT independence (in spite
> > > of the data showing that the proposed combination of DualQ and
> > > TCP Prague actually increases RTT bias), team L4S, at the last
> > > minute no less, decides to do a 180 and just change the
> > > requirements to allow for the rather unhealthy behavior
> > > demonstrated for L4S...
> > > With the current enhanced RTT bias, who in their right mind is
> > > going to use TCP Prague, which currently is the only transport
> > > that at least attempted to tackle all the "requirements" L4S
> > > poses for transports that want to use the ECT(1) express way?
> > > Then again, since none of the network elements are designed and
> > > required to actually check and enforce these requirements,
> > > calling these "requirements" is a bit of a stretch anyway; maybe
> > > they should be changed from MUST requirements to COULD musings...
> > > 
> > > 
> > > Best Regards
> > >         Sebastian
> > > 
> > > 
> > > > On Nov 12, 2020, at 21:31, Pete Heist <pete@heistp.net> wrote:
> > > > 
> > > > Hi,
> > > > 
> > > > We have posted some new tests of L4S here:
> > > > 
> > > > https://github.com/heistp/l4s-tests
> > > > 
> > > > These tests cover two-flow fairness in several qdiscs with 
> > > > different path RTTs per-flow, and transient behavior in
> > > > fq_codel 
> > > > queues.
> > > > 
> > > > The results raise some concerns about a bias in favor of L4S
> > > > flows 
> > > > in
> > > > DualPI2 queues, throughput imbalances between flows with
> > > > different 
> > > > path RTTs, and intra-flow latency spikes upon rate reductions
> > > > in 
> > > > fq_codel.
> > > > The repo above contains a walk-through of the key findings, and
> > > > links to more results in the Appendix.
> > > > 
> > > > Regards,
> > > > Pete