[tsvwg] https://datatracker.ietf.org/doc/html/draft-livingood-low-latency-deployment-01#NewThinking
Sebastian Moeller <moeller0@gmx.de> Sun, 19 March 2023 10:10 UTC
To: tsvwg IETF list <tsvwg@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/muBlLl7E8Up114VaNdRvPvSbPfw>
Dear list,

[SM] I started to look at https://datatracker.ietf.org/doc/html/draft-livingood-low-latency-deployment-01. Now, this is on the Informational track and hence hard to argue with. However, it seems to contain imprecise descriptions that I would prefer not to find in any RFC, independent of the document track:

https://datatracker.ietf.org/doc/html/draft-livingood-low-latency-deployment-01#NewThinking

"The Introduction says "Furthermore, unlike with bandwidth priority on a highly/fully utilized link, low latency using these new approaches is not a zero sum game - everyone can potentially have lower latency at no one else's expense." But this bears a bit more discussion to understand more fully."

[SM] This seems to misrepresent the mechanism L4S is built on. The DualQ Coupled AQM, in a sense the reference L4S AQM, is described as a "conditional priority scheduler" and configured with a non-LL:LL queue rate share of 1:10 (current Linux implementation, IIRC) to 1:16 (RFC 9332). This priority scheduler is combined with a heuristic that aims to make it rare for that priority scheduling to actually become visible. But "rare" is not never, and the resource distribution problem of selecting a packet for the next transmission is a zero-sum game, L4S or not.

[SM] Here is Wikipedia's explanation of a zero-sum game: "Zero-sum game is a mathematical representation in game theory and economic theory of a situation which involves two sides, where the result is an advantage for one side and an equivalent loss for the other.[1] In other words, player one's gain is equivalent to player two's loss, therefore the net improvement in benefit of the game is zero."

[SM] For any given transmit opportunity you can pick a packet from either queue (or introduce a stall, but L4S does not do that); this is very much a zero-sum game by the definition above.
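[SM] A toy sketch makes the zero-sum nature explicit (the credit scheme and queue names are my own illustration, loosely modeled on the 1:16 weight of RFC 9332, not the reference implementation): every transmit slot granted to one queue is a slot denied to the other.

```python
from collections import deque

def schedule(ll_queue, classic_queue, slots, ll_weight=16):
    """Weighted conditional priority between two queues: send up to
    ll_weight LL packets, then one classic packet, per cycle.  The
    number of packets sent never exceeds the fixed number of transmit
    slots, so any extra slot for one queue comes out of the other's
    share."""
    sent_ll = sent_classic = credit = 0
    for _ in range(slots):
        if ll_queue and (credit < ll_weight or not classic_queue):
            ll_queue.popleft()
            sent_ll += 1
            credit += 1
        elif classic_queue:
            classic_queue.popleft()
            sent_classic += 1
            credit = 0  # the classic queue got its 1-in-17 turn
        else:
            break  # both queues empty; no artificial stalls
    return sent_ll, sent_classic
```

With both queues backlogged, 17 slots split 16:1; with the LL queue empty, all 17 slots go to the classic queue. The allocation is strictly redistributive.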
[SM] Let's move on:

https://datatracker.ietf.org/doc/html/draft-livingood-low-latency-deployment-01#NewThinking

"L4S does *not* provide low latency in the same way as previous technologies like DiffServ (QoS). That prior QoS approach used packet prioritization, where it was possible to assign a higher relative priority to certain application traffic, such as Voice over IP (VoIP) telephony."

[SM] Again, given the right conditions, L4S will do exactly what is claimed here it would not do, e.g. when a short-RTT L4S flow shares a link with a long-RTT conventional-TCP flow.

https://datatracker.ietf.org/doc/html/draft-livingood-low-latency-deployment-01#NewThinking

"This approach could provide consistent and relatively low latency by assigning high priority to a partition of the capacity of a link, and then policing the rate of packets using that partition. For example, on a 10 Mbps link, a high QoS priority could be assigned to VoIP with a dedicated capacity of 1 Mbps of the 10 Mbps link capacity. The other 9 Mbps would be available to lower QoS priority, such as best effort general Internet traffic that was not VoIP."

[SM] You realize that priority schedulers nowadays offer things like "rate borrowing", where unused capacity of a higher-priority class can be used by lower classes if the higher class does not use up its allotment? So in this example the traditional QoS priority hierarchy offers exactly the same performance as the L4S approach: the single VoIP stream sees minimal delay, the remaining traffic gets ~9.9 Mbps, and there are no transmission stalls.

https://datatracker.ietf.org/doc/html/draft-livingood-low-latency-deployment-01#NewThinking

"But even when QoS was used in this manner, the latency may have been relatively good but it was not ultra low latency of the sort that low latency networking (NQB and L4S) can deliver.
As well, that QoS approach is to some extent predicated on an idea that network capacity is very limited and that links are often highly utilized. But in today's Internet, it is increasingly the case that there is an abundance of capacity to end users, which makes QoS approaches ineffective in delivering ever-lower latency."

[SM] This example is counter-intuitive and potentially misleading. A VoIP flow of typically ~100 Kbps of well-paced packets, scheduled under a 1:9 Mbps weighted priority as the sole member of its priority class, will see pretty much only the delay of the currently transmitted packet; and if the link technology supports pre-emption, it might not even see that delay.

[SM] This technically is "as low as queueing delay can go", so piping that VoIP flow through an L4S scheduler/AQM instead can offer no advantage at all; once delay is minimal it is minimal, and hence "ultra-low latency". (And I add that this VoIP flow is also unlikely to respond to L4S-style marking, making it risky to sort it into the LL queue.)

https://datatracker.ietf.org/doc/html/draft-livingood-low-latency-deployment-01#NewThinking

"The result, as noted in the prior section, has been the role of dual queue networking. With these approaches, the new low latency packet processing queue is introduced on one or more links on the end-to-end path. The internal L4S queuing may still use a sort of internal prioritization, but this is not QoS in the typical sense because this is happening on an extremely short timescale - sub-round trip time (so microseconds or a few milliseconds)."

[SM] This does not seem to be a logical argument. Prioritization really just means deciding "what to do next" and hence is not defined as requiring a minimal timescale; just because L4S has short queues does not mean it uses anything other than prioritization when giving L4S traffic a higher probability of shorter sojourn times.
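[SM] To put numbers on the 10 Mbps example (the 1500-byte packet size is my illustrative assumption): with rate borrowing the best-effort class inherits whatever the 1 Mbps VoIP class leaves unused, and the VoIP flow's worst-case wait is one in-flight packet's serialization time.

```python
def best_effort_share_mbps(link_mbps, voip_guarantee_mbps, voip_demand_mbps):
    """Strict priority with rate borrowing: VoIP is policed to its
    guarantee; unused VoIP capacity falls through to best effort."""
    voip = min(voip_demand_mbps, voip_guarantee_mbps)
    return link_mbps - voip

def serialization_ms(packet_bytes, link_mbps):
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / (link_mbps * 1e6) * 1e3

# A ~100 Kbps VoIP flow leaves ~9.9 Mbps for best effort, and waits at
# most for one 1500-byte best-effort packet already being transmitted.
best_effort = best_effort_share_mbps(10, 1, 0.1)   # ~9.9 Mbps
worst_wait = serialization_ms(1500, 10)            # 1.2 ms
```

At 10 Mbps a 1500-byte packet takes 1.2 ms to serialize, so the sole high-priority VoIP flow sees at most that much queueing delay, which is already "ultra-low" by any reasonable definition.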
https://datatracker.ietf.org/doc/html/draft-livingood-low-latency-deployment-01#NewThinking

"A more important and impactful force at play is the rapid congestion signals that are exchanged that will cause a sender to dynamically yeild to other traffic (as if the other traffic had no QoS priority, which it does not) - which can be thought of as back pressure to signal the sender to backoff prior to packetloss occuring."

[SM] This seems to be at least irrelevant for the VoIP example from the same section. Also, in road traffic, if I yield to other traffic at an intersection I stop immediately, but an L4S flow will still require a full RTT worth of time before it can react and the changed load hits the bottleneck. And yes, that non-LL traffic has 1:10 or 1:16 priority; L4S just does a decent job of not engaging that part of its design under some (typical?) conditions, as long as users do not start mixing flows with large differences in RTT, and there is not too much under-responsive but paced traffic in the LL queue (think 90 parallel VoIP flows of 100 Kbps each in the 10 Mbps link example above).

[SM] Nitpick: yield instead of yeild.

Again, this is targeted as Informational and intended to document Comcast's recommendations, which might well be built on what the text describes, yet it would be helpful to make sure that the technology is described in objective language that is free from bias.

P.S.: The L4S RFCs clearly state that L4S is built upon conditional priority scheduling between its two queues, so it seems rather surprising to claim that it does not. It is not hard to describe its mechanism in this draft in a way that is correct and still shows how this design has the potential for higher utility than a strawman fixed-priority assignment, but that will require an example where L4S actually delivers over the traditional hierarchical prioritization method.
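[SM] For completeness, the arithmetic behind that last parenthetical (flow count and rates as in the example above; the simple subtraction is my own illustration and ignores the scheduler's weighted minimum for the classic queue): 90 unresponsive 100 Kbps flows aggregate to 9 Mbps in the LL queue, leaving at most 1 Mbps for everything in the classic queue of that 10 Mbps link, and no amount of CE marking will shrink load that does not respond to it.

```python
def classic_residual_mbps(link_mbps, ll_flows, per_flow_kbps):
    """Capacity left for the classic queue once an unresponsive LL
    aggregate occupies the rest of the link (simple arithmetic)."""
    ll_aggregate = ll_flows * per_flow_kbps / 1000.0
    return max(link_mbps - ll_aggregate, 0.0)

residual = classic_residual_mbps(10, 90, 100)  # 1.0 Mbps left over
```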