Re: [tsvwg] L4S and 3DupACK CE behavior (from the RFC 4301 thread)

Dave Taht <> Tue, 21 January 2020 17:26 UTC

From: Dave Taht <>
To: "Black, David" <>
Cc: "" <>
References: <>
Date: Tue, 21 Jan 2020 09:26:37 -0800
In-Reply-To: <> (David Black's message of "Mon, 6 Jan 2020 17:35:01 +0000")
List-Id: Transport Area Working Group <>

"Black, David" <> writes:

> (posting as an individual, not WG chair)

Do we have any data on 3DupACK's actual usage... and RACK's uptake?

(I actually had an opportunity to explain 3DupACK in a
 funny talk about congestion control I gave last week, but
 didn't. Nothing to see here, unless you need a laugh: )

> Following on to the long thread about CE marks for non-L4S traffic
> using the L4S queue, I have a few remarks and a test scenario to
> suggest: 
> Much as we’d like to banish 3DupACK, that’s not going to happen
> anytime soon, hence L4S has to deal with the fact that CE-marked
> traffic for flows that use 3DupACK will be present in the L4S queue of
> the dual-Q AQM. E.g., it’s a fine architectural vision to appeal to
> ubiquitous use of RACK and pacing to make this problem go away in the
> longer term, but that doesn’t address the current engineering problem
> with existing transport protocol implementations.
> We know that if the dual-Q AQM default/base queue is backed up at an
> L4S node, then for a non-L4S flow, arbitrary reordering of CE-marked
> packets wrt the rest of the flow is possible because the CE packets
> use the L4S queue but the rest of the non-L4S flow does not.
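To make the mechanism concrete for anyone following along, here's a toy
sketch (my own names and structure, not from any draft or from any real
qdisc) of the classification rule that causes this: a dual-Q node can't
tell a CE mark applied to classic traffic apart from "was ECT(1)", so a
classic flow's CE-marked packets land in the shallow L4S queue and jump
ahead of its unmarked packets sitting in the deep classic queue:

```python
# Toy model of dual-queue classification: ECT(1) and CE go to the
# L4S (shallow) queue, everything else to the classic (deep) queue.
# Illustrative only -- not how any real implementation is structured.
from collections import deque

ECT0, ECT1, CE, NOT_ECT = "ECT(0)", "ECT(1)", "CE", "Not-ECT"

class DualQ:
    def __init__(self):
        self.l4s = deque()      # shallow, low-latency queue
        self.classic = deque()  # deep, conventional queue

    def enqueue(self, pkt):
        # Key point: CE cannot be distinguished from "was ECT(1)",
        # so CE-marked classic traffic lands in the L4S queue too.
        if pkt["ecn"] in (ECT1, CE):
            self.l4s.append(pkt)
        else:
            self.classic.append(pkt)

    def dequeue(self):
        # L4S queue gets priority (strict priority here, for brevity).
        return self.l4s.popleft() if self.l4s else self.classic.popleft()

# Flow [A] is classic (ECT(0)); packet 3 was CE-marked upstream.
q = DualQ()
for seq, ecn in [(1, ECT0), (2, ECT0), (3, CE), (4, ECT0)]:
    q.enqueue({"seq": seq, "ecn": ecn})
print([q.dequeue()["seq"] for _ in range(4)])  # [3, 1, 2, 4]
```

The CE-marked packet exits ahead of the three classic packets queued
before it, which is exactly the reordering being discussed.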
> A test scenario to investigate this involves a couple of nodes, [1]
> RFC-3168 AQM and [2] L4S dual-Q AQM in that order, and two unrelated
> non-L4S flows:
> * Flow [A]: passes through both nodes, bottleneck for flow [A] is at
>   RFC-3168 AQM node [1], CE marks are being generated resulting in the
>   usual TCP “sawtooth” pattern for flow [A] (pattern may not be
>   continuous).
> * Flow [B]: passes only through L4S dual-Q AQM node [2], not RFC-3168
>   AQM node [1], bottleneck for flow [B] is at node [2], so flow [B]
>   builds a deep queue in the coarse grain (conventional) queue at node
>   [2], which generates CE marks, resulting in the usual TCP
>   “sawtooth” pattern for flow [B] (pattern may not be continuous).
> In this test scenario, each flow has only one bottleneck, but the
> bottleneck for flow [B] at node [2] causes CE-marked packets for flow
> [A] to be reordered at that node (in addition to likely applying some
> additional CE marks to flow [A]). I think this overall scenario is
> plausible and realistic, e.g., node [1] could be the egress from a
> home network for the Flow [A] TCP sender, and node [2] could be at the
> boundary between a backbone network and an access network for both TCP
> receivers – both nodes are places where the bandwidth drops between
> ingress and egress. I hope everyone agrees that in this scenario, Flow
> [A] will experience re-ordering, but that's only the beginning of the
> discussion.
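Whether that reordering bites depends on what the flow [A] receiver does
with it. A toy receiver model (mine, deliberately oversimplified) shows
the failure mode: each out-of-order arrival generates a duplicate ACK,
and once the CE packet jumps far enough ahead of the queued classic
packets, the dupACK count hits the 3DupACK threshold and the sender does
a spurious fast retransmit:

```python
# Toy receiver: count duplicate ACKs caused by flow [A]'s CE-marked
# packet jumping the classic queue. At 3 dupACKs a conventional TCP
# sender would fire a (spurious) fast retransmit. Illustrative only.
def dupacks(arrival_order):
    expected, buffered, dups = 1, set(), 0
    for seq in arrival_order:
        if seq == expected:
            expected += 1
            while expected in buffered:  # drain the reorder buffer
                buffered.remove(expected)
                expected += 1
        else:
            buffered.add(seq)
            dups += 1  # out-of-order segment -> duplicate ACK
    return dups

# One CE packet jumping ahead of three queued classic packets:
print(dupacks([4, 1, 2, 3, 5]))        # 1 dupACK, below the threshold
# Three CE packets jumping ahead:
print(dupacks([4, 5, 6, 1, 2, 3, 7]))  # 3 dupACKs -> fast retransmit
```

So the depth of the classic queue at node [2], and how many of flow
[A]'s packets arrive CE-marked per RTT, decide whether this stays
harmless or triggers spurious retransmits.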

While I understand where you are going with this, I'd merely like to
see if this kind of re-ordering introduces any bugs with existing code
on well-known platforms.

> An important open question (IMHO) is whether that re-ordering matters
> in practice. A reason why the answer is less than obvious is that flow
> [A] is already carrying CE marks from node [1], and the flow [A] TCP
> sender only reacts once per RTT to the combination of 3DupACK and CE,
> so at least some of the 3DupACK-based congestion responses for flow
> [A] get combined with CE-based congestion responses because both
> indications of congestion occur within the same RTT reaction period.
> The reason to invest the time to run this sort of test is to
> understand whether or not “some of the 3DupACK-based congestion
> responses” in the above paragraph is in practice “all” or close enough
> to “all” to not matter.
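To sharpen what "combined within the same RTT" means before anyone runs
real tests, here's a back-of-the-envelope model (my own simplification,
with made-up per-RTT probabilities, not measurements): a sender reacts
at most once per RTT whichever indication arrives, so any 3DupACK
trigger landing in an RTT that also saw a CE trigger is absorbed into a
single response. Counting the absorbed fraction is the "all or close
enough to all" question in miniature:

```python
# Toy per-RTT congestion-response model: a NewReno-ish sender reacts
# at most once per RTT, whether the trigger is 3DupACK, a CE echo, or
# both. Count how many 3DupACK triggers are absorbed by an RTT that
# also carries a CE trigger. Probabilities are invented, not measured.
import random

def absorbed_fraction(n_rtts=10000, p_ce=0.3, p_dup=0.1, seed=1):
    rng = random.Random(seed)
    dup_total = dup_absorbed = 0
    for _ in range(n_rtts):
        ce = rng.random() < p_ce    # CE-based indication this RTT
        dup = rng.random() < p_dup  # 3DupACK indication this RTT
        if dup:
            dup_total += 1
            if ce:
                # Both indications in one RTT -> a single response:
                # the 3DupACK reaction is "combined" with the CE one.
                dup_absorbed += 1
    return dup_absorbed / dup_total if dup_total else 0.0

print(round(absorbed_fraction(), 2))  # roughly p_ce under independence
```

With independent triggers the absorbed fraction is just the CE-marking
probability per RTT, i.e. nowhere near "all"; real tests would tell us
how far the correlation between the two indications moves that number.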
> Thanks, --David