Re: [tsvwg] SCReAM (RFC8298) with CoDel-ECN and L4S

Sebastian Moeller <moeller0@gmx.de> Wed, 18 March 2020 08:19 UTC

From: Sebastian Moeller <moeller0@gmx.de>
In-Reply-To: <e8630484-46af-4130-e603-fc05e8767871@bobbriscoe.net>
Date: Wed, 18 Mar 2020 09:18:20 +0100
Cc: Ingemar Johansson S <ingemar.s.johansson@ericsson.com>, "iccrg@irtf.org" <iccrg@irtf.org>, Ingemar Johansson S <ingemar.s.johansson=40ericsson.com@dmarc.ietf.org>, "tsvwg@ietf.org" <tsvwg@ietf.org>
Message-Id: <54B2DDDD-7302-4FE9-B1E6-F1F9A83E65DB@gmx.de>
References: <HE1PR07MB44251B019947CDB6602B30B2C2FF0@HE1PR07MB4425.eurprd07.prod.outlook.com><A2300F8D-5F87-461E-AD94-8D7B22A6CDF3@gmx.de> <HE1PR07MB4425B105AFF56D1566164900C2FF0@HE1PR07MB4425.eurprd07.prod.outlook.com> <9e5ea80f-d709-e204-f08d-93d3479668aa@bobbriscoe.net> <90A501D0-56A1-4685-800F-10F002FD8FCD@gmx.de> <e8630484-46af-4130-e603-fc05e8767871@bobbriscoe.net>
To: Bob Briscoe <in@bobbriscoe.net>
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/xS8En3TerhqbC4E_289Y0jK9pGQ>
Subject: Re: [tsvwg] SCReAM (RFC8298) with CoDel-ECN and L4S

Hi Bob,


> On Mar 17, 2020, at 19:45, Bob Briscoe <in@bobbriscoe.net> wrote:
> 
> Sebastian,
> 
> On 17/03/2020 14:54, Sebastian Moeller wrote:
>> Hi Bob,
>> 
>> 
>> 
>>> On Mar 17, 2020, at 02:12, Bob Briscoe <in@bobbriscoe.net>
>>>  wrote:
[...]
>>>>>> 
>>>>> 	[SM] So, in this simulations of a 20ms path, SCReAM over L4S gives ~10
>>>>> times less queueing delay, but also only ~2 less bandwidth compared to SCReAM
>>>>> over codel. You describe this as "L4S reduces the delay considerably more" and
>>>>> "L4S gives a somewhat lower media rate". I wonder how many end-users would
>>>>> tradeoff these 25ms in queueing delay against the decrease in video quality from
>>>>> halving the bitrate?
>>>>> 
>>> [BB] This does seem more harsh than I would have imagined - so a useful data point; thx Ingemar. Nonetheless, this is the end-system taking bandwidth away from itself in order to give itself lower latency in the presence of varying network capacity. Nothing to stop other applications making different tradeoffs.
>>> 
>>> It is worth thinking about the complexity of the policy control and signalling system needed for Ingemar's video to express the tradeoffs it is making here;... if it had to get an FQ scheduler to allow these tradeoffs instead. The scheduler would not necessarily have to make all the tradeoffs itself - for instance the app could underutilize it's 'fair' share, but it would need to be able to take more than its 'fair' share during periods of lower capacity.
>>> 
>> 	[SM] I fail to see it that way; in Codel, for example, it is the burst tolerance that allows exactly that kind of bandwidth trading with oneself (send more now, make up for it a bit later by sending less). In fq_codel it is the FQ component that decides when a flow/bucket/tin is eligible to send something, and inside each flow/bucket/tin is a codel instance that decides what/when to mark/drop. But as Ingemar's example shows, his application gets significantly more bandwidth with Codel than with L4S; it would be interesting to see a plot of video quality (measured at the receiver). Anything longer than a "burst" will get into tricky accounting territory if one wants to actually enforce long-term adherence to a set bandwidth share.
> 
> [BB] These views are from one side of a long-standing philosophical debate, in which neither side holds a monopoly on the truth.

	[SM] We are not discussing a philosophical point here, but the sheer practicality that long-term average fairness, which you seem to propose as a desirable property above, will drag in far more severe flow-accounting issues than the more traditional FQ-plus-burst-tolerant-AQM solutions do. You are not even willing to concede that per-flow state tracking is an obviously good idea (e.g. to monitor whether a flow's behavior honors certain requirements), so why are you proposing your "anything goes" idea as a method to implement long-term fairness?
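	To make concrete the kind of state I mean, here is a rough Python sketch of per-flow long-term byte accounting (all names and numbers are mine and purely illustrative, not any existing implementation):

    # Toy per-flow accounting: track each flow's long-term share of the link.
    # Anything beyond short bursts needs exactly this kind of persistent state.
    import time
    from collections import defaultdict

    class FlowLedger:
        def __init__(self, link_rate_bps):
            self.link_rate_bps = link_rate_bps
            self.bytes_sent = defaultdict(int)   # per-flow state the AQM must keep
            self.start = time.monotonic()

        def account(self, flow_id, pkt_len):
            self.bytes_sent[flow_id] += pkt_len

        def over_share(self, flow_id):
            # A flow is "over" if its long-term rate exceeds an equal split.
            elapsed = max(time.monotonic() - self.start, 1e-3)
            fair_bps = self.link_rate_bps / max(len(self.bytes_sent), 1)
            return self.bytes_sent[flow_id] * 8 / elapsed > fair_bps

	Even this toy version leaves open when flows are forgotten, over which horizon "long-term" is measured, and what "fair" means across classes; that is the accounting territory I am pointing at.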


> 
> Focusing on the "significantly more bandwidth" side without also saying "significantly less latency" misses Ingemar's point. The app could have chosen either but it chose what it chose.
> 
> If an app chooses quality that leads to a worse quality score, that probably means that the weighting of the factors in the quality scoring is flawed.

	[SM] Isn't that shooting the messenger? I will assume that this is supposed to be a jest.

> Shouldn't the quality score reflect what is important for the application?

	[SM] In the case of a capacity-seeking real-time video streaming application, the point is that most users will expect the application to deliver the best audio-visual quality that the latency/bandwidth budget allows. If an application decides to deliver sub-par quality, users will switch applications (unless said application has other redeeming qualities)... And video quality is not rated by what is convenient for encoders/decoders, but by psychophysics experiments with "eyeballs", so no, the quality score does not and should not depend on what is important for the application (unless that happens to be the best video quality possible under the given constraints). Conversely, applications are compared on what quality they deliver at what cost.
	Bob, it is arguments like the above that make me feel you are trying to debate me and to "win" points, instead of having a discussion. We can keep doing that, but it will keep leading to situations like this, where you end up "painting yourself into a corner".

> 
> The ideas you've expressed here such as:
> 	• the network decides your maximum burst tolerance

	[SM] That is a hard reality and not a mere idea; the existing network puts up quite a number of constraints that are hard to negotiate with... L4S will simply replace one burst buffer with two probably shallower ones, but it is still the network that controls this. If you overload the network, something will drop, so yes, the network limits the "maximum burst tolerance".

> 	• the network decides that applications can have no more flexibility than a burst tolerance

	[SM] But can they? I would say that in a strict FQ system, applications are free to do whatever they want until they reach their instantaneous share limit, just the same as without FQ; the only thing that changes is where the "hard" limit is.
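	To illustrate what I mean by an instantaneous share limit, here is a bare-bones deficit-round-robin sketch in Python (a caricature of what an FQ scheduler such as fq_codel's does, not its actual code; the quantum value is just illustrative):

    # Toy deficit round robin: each backlogged flow may send up to its quantum
    # per round; the "hard" limit is simply where the scheduler stops serving it.
    from collections import deque

    QUANTUM = 1514  # bytes per flow per round (illustrative)

    def drr_round(flows):
        """flows: dict mapping flow_id -> deque of packet lengths in bytes."""
        sent = []
        deficit = {fid: QUANTUM for fid in flows}
        for fid, queue in flows.items():
            while queue and queue[0] <= deficit[fid]:
                pkt = queue.popleft()
                deficit[fid] -= pkt
                sent.append((fid, pkt))
            # leftover deficit is discarded here; real DRR carries it over
        return sent

    # Flow "a" bursts, flow "b" is sparse; within one round each may use at
    # most QUANTUM bytes, i.e. its instantaneous share.
    print(drr_round({"a": deque([700, 700, 700]), "b": deque([200])}))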

> 	• the network decides how much users should prefer bandwidth over latency, and 

	[SM] Sure, but the dual queue coupled AQM's 1ms target for the LL-queue and 15ms target for the non-LL queue seem to be exactly such network decisions. Except that in the limit the LL-queue will give a paced sender both lower queuing delay and bandwidth superiority over the non-LL queue, giving flows even less choice.
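	For reference, the coupling in the dual queue coupled AQM boils down to roughly the following (a simplified Python sketch of my reading of draft-ietf-tsvwg-aqm-dualq-coupled, with the ~15ms/~1ms targets mentioned above driving the base probability; the coupling factor of 2 is the draft's default as I understand it, none of this is normative):

    # Simplified DualQ coupling: a base probability p_base is driven by the
    # Classic queue's delay against its target; Classic traffic is dropped or
    # marked with p_base squared, while L4S traffic is CE-marked with the
    # coupled probability K * p_base, or with its own shallow-threshold ramp,
    # whichever is larger.
    K = 2.0  # coupling factor

    def classic_prob(p_base):
        return min(p_base ** 2, 1.0)

    def l4s_prob(p_base, p_l4s_native):
        return min(max(K * p_base, p_l4s_native), 1.0)

    # Example: a 10% base probability -> 1% Classic drop, 20% L4S marking.
    print(classic_prob(0.10), l4s_prob(0.10, 0.0))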

> 	• the network decides one objective video quality metric 

	[SM] SCReAM's goal could be summarized as trying to model the instantaneous network conditions so as to always send the best achievable video quality. By tracking the network's congestion with high fidelity it can select the best video quality with acceptable side-effects. But note how tracking the network's congestion still means that the network decides the video quality...
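	As a caricature of that tracking (emphatically not RFC 8298's actual algorithm, just a toy to show that the sender's chosen media rate ends up being a function of the network's congestion signals):

    # Toy congestion-tracking rate adaptation: however clever the sender, the
    # media bitrate ends up being whatever the network's feedback allows.
    def update_media_rate(rate_bps, ce_fraction,
                          min_bps=150_000, max_bps=3_000_000):
        if ce_fraction > 0.0:
            # back off in proportion to the fraction of CE-marked feedback
            rate_bps *= 1.0 - 0.5 * ce_fraction
        else:
            # gently probe upwards while the network stays silent
            rate_bps *= 1.05
        return max(min_bps, min(rate_bps, max_bps))

    rate = 1_000_000
    for ce in (0.0, 0.0, 0.2, 0.0, 0.5):  # made-up per-RTT CE-mark fractions
        rate = update_media_rate(rate, ce)
        print(round(rate))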

> ...are from the Bell-head world. I'm not saying Bell-head is wrong. I'm saying it's pointless trying to insist that a Bell-head idea is more correct than a Net-head idea. They are different philosophies.

	[SM] I fail to see a philosophical discussion here, sorry. 

> 
> I prefer to shift the debate

	[SM] I would prefer to have a discussion instead of a debate.

> to how to design a Net-head solution with a configurable degree of Bell-headedness. That is what the combination of the DualQ plus per-flow Queue Protection is. If you wind the queue protection up tight, you emulate per-flow scheduling. If you loosen it off (or disable it, or don't deploy it), you get application freedom. 

	[SM] Or anarchy... But wait, "anything goes" is what we have right now, and people are not all content with the status quo...

> 
> This is also why it's important to enable both DualQ and FQ. Neither are objectively correct. Let the market decide. 

	[SM] The problem with DualQ is that it fails to properly define what it guarantees, per-class fairness (as I understood your arguments) or per-flow fairness (as I understood Koen's arguments), and then fails hard to deliver any guarantee stronger than that the non-LL queue will get at least 1:15 of the available bandwidth (see the back-of-the-envelope numbers after the next paragraph). So if the market shall decide, first start by giving the market the information required to judge and compare different offers, which means giving a simple description of what DualQ aims to achieve that allows one to easily predict how DualQ will share bandwidth between flows of different classes.
	And if you want the market to make a meaningful decision, the market needs alternative offers. So you could help by making DualQ/L4S not artificially incompatible with alternative 1/p-type congestion controllers like SCE ;) so that there will be a real open contest. "Let the market decide" rings a tad hollow given your repeated appeals to stop working on alternatives and rally behind the L4S effort...
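	To spell out what that floor means in practice (reading "1:15" as a non-LL : LL ratio and taking the figure at face value; the link rates are arbitrary examples):

    # Back-of-the-envelope: a worst-case 1:15 (non-LL : LL) split on a few
    # example link rates.
    def classic_floor_mbps(link_mbps, classic=1, ll=15):
        return link_mbps * classic / (classic + ll)

    for link in (20, 50, 100, 1000):
        print(f"{link} Mbit/s link -> non-LL queue floor "
              f"{classic_floor_mbps(link):.1f} Mbit/s")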


> 
>>> In the list of SCE issues,
>>> 
>>> https://github.com/heistp/sce-l4s-bakeoff/blob/master/README.md#list-of-sce-issues
>>> 
>>> it says "Another argument is the perception that FQ can do harm to periodic bursty flows, however we have not yet shown this to be the case with hard evidence". I don't recognize that argument, but I would if it had said "...can do harm to flows needing variable throughput or smooth throughput in the presence of variable available capacity". If Ingemar had run a TCP flow in parallel here, I think that would go some way towards the hard evidence sought here?
>>> 
>> 	[SM] Only if we assume that the video stream would be more important to the link's user; what if the TCP flow's speedy completion would actually be more important to the user?
>> 	That is, to be blunt, where I believe you fail to see the forest for the trees: the AQM has no chance of being able to optimally split bandwidth between eligible flows without additional information to base its optimization upon. So either you supply that information (say, explicit DSCP marking within a well-managed DSCP domain) or you need to accept that all you can do is aim for good enough and "do no harm".
>> 
> 
> [BB] Outside the forest that your mind is in, there is a wider set of forests where:
> * in your forest, FQ attempts to optimally split bandwidth

	[SM] No, as I stated further below in the email you cited, FQ does not produce an OPTIMAL split of bandwidth between active flows, but, as I argue, most/all AQM nodes lack the information to objectively optimize anything anyhow. The charm of FQ is that it is at the same time highly unlikely to create PESSIMAL bandwidth splits and that it is relatively easy to predict and reason about.
	So, I mentioned this non-optimality quite explicitly; why then do you think that I believe that "FQ attempts to optimally split bandwidth"?


> * in other forests, the DualQ Coupled AQM doesn't even attempt to optimally split bandwidth, depending instead on "the application knows that best" (collectively) 

	[SM] Except we already know that applications do not know best; e.g. gamers that download game content via Steam complain about a choppy experience while using Twitch streaming (either in or out) at the same time. CDNs are known to often optimize for expedited delivery (with non-standard means like IW >> 10), which often is the desired behaviour, but often is not and leads to reduced bandwidth/increased latency for the flows the user values more highly. IMHO it is evident that "anything goes" is not a viable route for many users; we have tried that and found it lacking.

> * and in yet other forests (DualQ with configurable per-flow queue protection and policing), there can be various points on a spectrum between the two

	[SM] Both of these additions* will not fix the brokenness of DualQ's core design though... let's admit it, DualQ is broken BY design, as its description is supposed to appease the FQ-leaning crowd while at the same time not actually giving any FQ-like guarantees, since you are adamantly opposed to FQ on philosophical grounds. This inner conflict is simply too much for DualQ to straddle.

*) I note that the L4S drafts even decline to require the implementation of these two features, so in all likelihood they will not even be available...


> 
> L4S signalling was designed to support all these forests.
> But according to your ironic insult, one of the architects of this ecosystem of forests only sees trees. There are more forests than the one you are in.

	[SM] You considered that to be an insult? In that case I would like to apologize; this was not intended to offend. Then again, you are returning in kind, so fair is fair.

> 
>> IMHO FQ strikes a decent balance here: it might rarely be optimal, but it is equally rarely pessimal (while any unequal sharing mechanism will result in both optimal and pessimal sharing, depending on whom you ask). In addition, FQ allows much simpler prediction of what to expect from a link under saturating load.
>> Of course this is just an opinion, and everybody here is entitled to their own opinion on this matter...
>> 
> 
> [BB] It is neither appropriate nor necessary for the IETF to decide to assign a codepoint in the IP header that only works well in your forest (FQ).

	[SM] You are mostly missing my arguments, aren't you? My argument for why L4S should not currently be elevated to experimental RFC status is not that it does not use my preferred hobby-horse technique under the hood. No, my argument is that L4S components such as the dual queue coupled AQM, as currently implemented, fail to meet the L4S project's own goals and requirements. I guess we agree that the decision to "assign a codepoint in the IP header" should be made in a prudent way, as there is little margin for error, and, as I argue, L4S is not at the point where the trade-offs can be realistically predicted.
	But since you keep dragging FQ/SCE into the debate, I also bring these up as examples of solutions that avoid a number of side-effects that L4S suffers from.
	


> The whole point of L4S was to broaden the ways extremely low latency could be provided (more forests): from FQ to DualQ with QProt+DualQ in between.

	[SM] That IMHO is a red herring, as there are zero applications** that will work with a median queueing delay of 1ms but not with the 5ms that default fq_codel already delivers in today's internet. And none of the really latency-critical applications one could think of (surgery by a remote-controlled robot, combustion control of an engine from a remote controller) are really suitable for deployment over the best-effort internet.
	I am not saying fq_codel/cake/FQ/... are the be-all and end-all of all low-latency issues, or that reducing median latency below 5ms and/or reducing latency variance/jitter are not worthy goals, but the considerable side-effects of the L4S approach need to be carefully weighed against the rather modest latency gains it delivers over the state of the art.
	Note how I am not talking about the high-temporal-fidelity 1/p congestion control here: I fully agree that this will be quite a good thing to deploy. I am just not convinced that the current L4S proposals are a good enough solution.
	And to close the circle, I believe that SCReAM is an interesting example of what can be achieved with high-temporal-fidelity 1/p congestion control, but it says next to nothing about the low-latency component of L4S (there were no competing flows, and SCReAM opted to stay well below link capacity).



Best Regards
	Sebastian


**) Last time I asked, I did not get a real answer as to what will additionally be made possible by that fivefold reduction in queueing delay.


> 
> 
> 
> Bob
> 
>> 
>> Best Regards
>> 	Sebastian
>> 
>> 
>>> 
>>> Bob
>>> 
>>> 
>>>>> Could you repeat the Codel test with interval set to 20 and target to 1ms,
>>>>> please?
>>>>> 
>>>>> If that improves things considerably it would argue for embedding the current
>>>>> best RTT estimate into SCReAM packets, so an AQM could tailor its signaling
>>>>> better to individual flow properties (and yes, that will require a flow-aware
>>>>> AQM).
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> it is fair to say that these simple simulations should of course be seen as just a
>>>>>> 
>>>>> snapshot.
>>>>> 
>>>>> 	[SM] Fair enough.
>>>>> 
>>>>> 
>>>>>> We hope to present some more simulations with 5G access, and not just
>>>>>> 
>>>>> simple bottlenecks with one flow, after the summer.
>>>>> 
>>>>> 	[Looking] forward to that.
>>>>> 
>>>>> 
>>>>>> Meanwhile, the SCReAM code on github is freely available for anyone who
>>>>>> 
>>>>> wish to make more experiments.
>>>>> 
>>>>>> /Ingemar
>>>>>> ================================
>>>>>> Ingemar Johansson  M.Sc.
>>>>>> Master Researcher
>>>>>> 
>>>>>> Ericsson Research
>>>>>> RESEARCHER
>>>>>> GFTL ER NAP NCM Netw Proto & E2E Perf
>>>>>> Labratoriegränd 11
>>>>>> 971 28, Luleå, Sweden
>>>>>> Phone +46-1071 43042
>>>>>> SMS/MMS +46-73 078 3289
>>>>>> 
>>>>>> ingemar.s.johansson@ericsson.com
>>>>>> www.ericsson.com
>>>>>> 
>>>>>> 
>>>>>>   Reality, is the only thing… That’s real!
>>>>>>       James Halliday, Ready Player One
>>>>>> =================================
>>>>>> 
>>> -- 
>>> ________________________________________________________________
>>> Bob Briscoe                               
>>> http://bobbriscoe.net/
>>> 
>>> 
>>> 
> 
> -- 
> ________________________________________________________________
> Bob Briscoe                               
> http://bobbriscoe.net/