Re: [tsvwg] What is "Scalable Congestion Control" in L4S?

Sebastian Moeller <moeller0@gmx.de> Tue, 16 April 2024 14:02 UTC

From: Sebastian Moeller <moeller0@gmx.de>
In-Reply-To: <8e66998698044919b0b5abfaa47ae2fc@huawei.com>
Date: Tue, 16 Apr 2024 16:02:10 +0200
Cc: Sebastian Moeller <moeller0=40gmx.de@dmarc.ietf.org>, "tsvwg@ietf.org" <tsvwg@ietf.org>
Message-Id: <0997E246-CEA5-4FFB-9025-F94B48B4B489@gmx.de>
References: <30f6c4b411034046814d6a90956f9949@huawei.com> <BD28D463-9D61-4E91-88B3-78875F6CA45E@gmx.de> <8e66998698044919b0b5abfaa47ae2fc@huawei.com>
To: Vasilenko Eduard <vasilenko.eduard=40huawei.com@dmarc.ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/YkokSmhjWp5S3WR4E-GGMVQVIk8>
Subject: Re: [tsvwg] What is "Scalable Congestion Control" in L4S?

Hi Ed,


> On 16. Apr 2024, at 15:23, Vasilenko Eduard <vasilenko.eduard=40huawei.com@dmarc.ietf.org> wrote:
> 
> Hi Sebastian,
> Thanks for your comments.
> I thought about it some more (and read the original 1997 article where the square-root proportionality was stated) and concluded that the concept is fundamentally wrong.
> 
> The root cause for the "Scalable Congestion Control" definition is the reaction to the drop (claimed to be proportional to something different) - discussed in RFC 9332 section 2.1.
> The 1st derivative for "Scalable Congestion Control" would be to say that "it should not build the queue in the bottleneck link, because if the queue is growing then the RTT loop would be longer which would decrease the rate of congestion signals over such a loop".
> The current "Scalable Congestion Control" definition is the 2nd order derivative about the same root cause.
> 
> The problem with all these square-root and non-square-root academic approximations is that the congestion signal was assumed to travel the full RTT (including the case where the RTT is bloated by the bottleneck queue).
> Look at "The macroscopic behavior of the TCP Congestion Avoidance algorithm" - they integrate everything under the Reno curve.
> Then look at "PI2: A Linearized AQM for both Classic and Scalable TCP" - they use the full integral from the previous paper to estimate the "congestion signal" frequency.
> Effectively, they assume that the congestion signal delay spans the whole BDP (all information in transit). The BDP is called the "window" in these documents.
> But in practice, an AQM marks (or drops) packets at the head of the queue (at transmission time, or instead of transmission), not at the tail. How many packets are waiting in the queue does not matter for the feedback speed.
> This includes the situation where the queue is huge compared to the minimal RTT - the famous "bufferbloat" problem.
> Hence, the time needed to deliver this congestion signal would not change - it would be 1) the path left after the bottleneck to the destination and 2) back from the destination to the source (assuming no bottleneck in the opposite direction).
> This actual AQM behavior breaks the macroscopic assumptions and makes these analytics irrelevant.
> 
> Funny enough, all CCAs are "Scalable" under the current definition of "Scalable", because of the typical AQM behavior (drop from the head, with no dependency on the "window size" that is mostly accumulated in the bottleneck queue).

[SM] My take is that it still takes a full RTT (including the filled queue) before the effect of the previous signal (drop/mark) becomes visible at the AQM decision point (be that the traditional, unfortunate tail, or the more recent head)... And it is at that point that we need to notice a reduction/improvement in sojourn time, or we will keep signalling.
This assumes that the length of the queue did not change significantly in the interim, but for a loaded link that seems a decent approximation; after all, a tail-drop queue will stay at around full state as long as the ingress rate exceeds the egress rate...
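To put toy numbers on that feedback loop (the numbers are mine, purely illustrative):

```python
# Toy numbers (purely illustrative) for the feedback loop above: the
# effect of a mark is visible at the AQM decision point only one full
# RTT later, and with a standing queue that RTT includes the queueing
# delay itself.
base_rtt = 0.02      # seconds, path RTT without queueing
queue_delay = 0.08   # seconds, standing queue at the bottleneck
effective_rtt = base_rtt + queue_delay
print(f"feedback loop: {effective_rtt * 1000:.0f} ms, "
      f"{queue_delay / effective_rtt:.0%} of it queueing delay")
```

With a bloated queue, most of that loop is queueing delay, which is exactly what the AQM has to wait out before its own signal shows any effect.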
That said, I will not joust for L4S or for 'scalable' as I do not consider L4S to be a good solution... ;)

Regards
	Sebastian

> 
> CCAs' aggressiveness and non-fair link sharing are probably related to something else.
> Eduard
> -----Original Message-----
> From: Sebastian Moeller <moeller0=40gmx.de@dmarc.ietf.org> 
> Sent: Tuesday, April 16, 2024 13:30
> To: Vasilenko Eduard <vasilenko.eduard@huawei.com>
> Cc: tsvwg@ietf.org
> Subject: Re: [tsvwg] What is "Scalable Congestion Control" in L4S?
> 
> Hi Ed,
> 
> I stumbled over the same thing previously, but the subtle issue is that the formal definition is about the marking rate in marks/second, while section 2.1 looks at the marking probability in marks/packet over a time window; at a constant marking rate, the resulting marking probability will decrease with increasing packet rate. This is also true if the marking probability is measured in marks/byte. However, I fail to see a clear method to deduce the relevant time window over which to calculate the marking probability.
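[SM] A back-of-the-envelope sketch of that distinction (numbers purely illustrative): holding the marking rate constant in marks/second, the per-packet marking probability shrinks as the packet rate grows.

```python
# A constant marking *rate* (marks per second) turns into a falling
# marking *probability* (marks per packet) as the packet rate scales.
mark_rate_hz = 500.0                           # marks/s, held constant
for pkt_rate in (10_000, 100_000, 1_000_000):  # packets/s
    p = mark_rate_hz / pkt_rate                # marks per packet over the window
    print(f"{pkt_rate:>9} pkt/s -> p = {p:.4f}")
```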
> 
> 
>> On 16. Apr 2024, at 10:25, Vasilenko Eduard <vasilenko.eduard=40huawei.com@dmarc.ietf.org> wrote:
>> 
>> Hi all,
>> 
>> Both RFCs (9332, 9330) give a formal definition:
>> "Scalable Congestion Control: A congestion control where the average time from one congestion signal to the next (the recovery time) remains invariant as flow rate scales, all other factors being equal."
>> It is just the rate of the congestion signal, a simple matter.
> 
> [SM] Yes, this is marking rate in Hz.
> 
>> 
>> RFC 9332 section 2.1 gives the impression that Scalable Congestion Control has more fundamental differences:
>> "the steady-state cwnd of Reno is inversely proportional to the square root of p" (drop probability). But "A supporting paper [https://dl.acm.org/doi/10.1145/2999572.2999578] includes the derivation of the equivalent rate equation for DCTCP, for which cwnd is inversely proportional to p
> 
> [SM] But here this is marking probability which will depend on the actual data rate of the flow...
> 
>> (not the square root), where in this case p is the ECN-marking probability.
> 
> [SM] And that got me confused previously, as the marking rate and the marking probability for a given data rate are proportional, so I read p as a different way of saying marking rate.
> 
>> DCTCP is not the only congestion control that behaves like this, so the term 'Scalable' will be used for all similar congestion control behaviours". Then in section 1.2 we see the BBR in the list of "Scalable CCs".
>> 
>> 1. The formal definition of "Scalable CC" looks wrong. At least it contradicts section 2.1.
> 
> [SM] Let's say that either description might be served well with explicitly describing the rate versus probability issue.
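[SM] For what it's worth, the two descriptions can be lined up numerically (the constants below are arbitrary, just for illustration): with Reno's rate ~ 1/sqrt(p), the signal rate falls as the flow rate grows, so the recovery time grows; with DCTCP's rate ~ 1/p, the signal rate, and hence the recovery time, stays invariant.

```python
# Reno:  rate ~ 1/sqrt(p)  =>  p ~ 1/rate^2  =>  signal rate = p*rate ~ 1/rate
# DCTCP: rate ~ 1/p        =>  p ~ 1/rate    =>  signal rate = p*rate = const
# The constants (1e8, 1e2) are arbitrary scaling factors for the sketch.
for rate in (1e4, 1e5, 1e6):          # flow rate in packets/s
    p_reno = 1e8 / rate ** 2          # marking probability under Reno's law
    p_dctcp = 1e2 / rate              # marking probability under DCTCP's law
    t_reno = 1.0 / (p_reno * rate)    # mean time between congestion signals
    t_dctcp = 1.0 / (p_dctcp * rate)
    print(f"rate={rate:9.0f} pkt/s  "
          f"reno recovery={t_reno:.4f} s  dctcp recovery={t_dctcp:.4f} s")
```

Reno's recovery time grows linearly with the flow rate, DCTCP's does not, which is all the formal definition is claiming.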
> 
>> 2. It is difficult to believe that BBR and CUBIC/Reno have such different reactions to overload signals, because they both play fairly (starting from BBRv2) in one queue, as demonstrated in many tests.
> 
> [SM] But they do differ... Traditional Reno will halve its congestion window in response to a dropped packet (or, if RFC 3168 ECN is in use, also in response to a CE-marked packet), while BBR will not do this... (Older versions of BBR completely ignore marks and also try to ignore drops; newer versions use a scalable response but still ignore drops up to a certain threshold.) But these differences are not that relevant to BBR's sharing behaviour, as BBR determines its equitable capacity share via its probing mechanism and hence comes up with a decent response under conditions similar to Reno's, just based on different principles.
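[SM] As a toy sketch of that difference in response (illustrative formulas, not the actual implementations; alpha stands for DCTCP's moving average of the marked fraction):

```python
# Illustrative per-signal responses (not the actual implementations):
# Reno halves cwnd on any congestion signal; DCTCP scales its reduction
# by alpha, its moving average of the fraction of marked packets.
def reno_response(cwnd: float) -> float:
    return cwnd / 2.0                    # multiplicative decrease by 1/2

def dctcp_response(cwnd: float, alpha: float) -> float:
    return cwnd * (1.0 - alpha / 2.0)    # gentle when few packets are marked

print(reno_response(100.0))              # 50.0
print(dctcp_response(100.0, 0.1))        # 95.0
```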
> 
>> It is probably impossible for such different sessions to share the load fairly if one session is reacting to p but the other is reacting to the square root of p (p is the probability of a congestion signal).
> 
> [SM] That is a true point, and that is why L4S requires a strict separation between the different response types and specific AQMs for each traffic type that take this into account.
> 
> Regards
> Sebastian
> 
>> 
>> Best Regards
>> Eduard Vasilenko
>> Senior Architect
>> Network Algorithm Laboratory
>> Tel: +7(985) 910-1105
>> 
>