Re: [tsvwg] Another tunnel/VPN scenario (was RE: Reasons for WGLC/RFC asap)

Sebastian Moeller <> Fri, 20 November 2020 07:56 UTC

From: Sebastian Moeller <>
Date: Fri, 20 Nov 2020 08:56:49 +0100
Cc: tsvwg IETF list <>
To: Ingemar Johansson S <>, "Black, David" <>, Pete Heist <>
Subject: Re: [tsvwg] Another tunnel/VPN scenario (was RE: Reasons for WGLC/RFC asap)
List-Id: Transport Area Working Group <>


Encrypted tunnels not revealing the individual component flows in their payload is a feature of encryption and not a failure of flow isolation... Arguably, an encrypted tunnel that disguises itself as a single flow should not allow propagation of ECN codepoints between the inner and outer layers at all, but then that is in the hands of the tunnel operator, not the AQM node.
	It is quite interesting, though, how tunneling is brought up as an argument against the SCE proposal (only CE is guaranteed to be passed between layers*), yet the very moment L4S shows issues with tunneling, this is interpreted as someone else's problem. This constant application of double standards alone should be reason enough to reject the L4S drafts....

Best Regards

*) One of the original sins in regard to ECN and tunnels seems to have been not simply requiring complete, unconditional copying of the inner ECN bits to the outer ECN bits on encapsulation and of the outer bits to the inner on decapsulation, letting the end-points deal with any accidental fall-out (to be consistent, a tunnel should either do no ECN propagation in any direction, or exactly the propagation described). For RFC 3168, I can fully understand why that route was not chosen, but years later, for RFC 6040, that decision is much harder to rationalize.
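The asterisked point can be put in a few lines of code: below is RFC 6040's decapsulation table (Figure 4 of that RFC) alongside the unconditional full-copy alternative the footnote argues for. The table encoding and function names are illustrative only, not taken from any implementation.

```python
# Sketch of RFC 6040 decapsulation (Figure 4 of the RFC) versus the
# "unconditional copy" alternative described in the footnote above.
# Names and the dictionary encoding are illustrative.

DROP = "drop"  # RFC 6040: outer CE on a Not-ECT inner packet is dropped

# RFC6040_DECAP[inner][outer] -> codepoint forwarded after decapsulation
RFC6040_DECAP = {
    "Not-ECT": {"Not-ECT": "Not-ECT", "ECT(0)": "Not-ECT",
                "ECT(1)": "Not-ECT", "CE": DROP},
    "ECT(0)":  {"Not-ECT": "ECT(0)", "ECT(0)": "ECT(0)",
                "ECT(1)": "ECT(1)", "CE": "CE"},
    "ECT(1)":  {"Not-ECT": "ECT(1)", "ECT(0)": "ECT(1)",
                "ECT(1)": "ECT(1)", "CE": "CE"},
    "CE":      {"Not-ECT": "CE", "ECT(0)": "CE",
                "ECT(1)": "CE", "CE": "CE"},
}

def decap_rfc6040(inner: str, outer: str) -> str:
    return RFC6040_DECAP[inner][outer]

def decap_full_copy(inner: str, outer: str) -> str:
    # The footnote's alternative: forward the outer codepoint
    # unconditionally and let the end-points deal with the fall-out.
    return outer

# An outer CE mark reaches an ECN-capable inner packet either way:
assert decap_rfc6040("ECT(0)", "CE") == decap_full_copy("ECT(0)", "CE") == "CE"
# But a CE mark on a Not-ECT inner packet is dropped by RFC 6040;
# only CE, and only towards ECN-capable inner packets, is guaranteed
# to be passed between the layers:
assert decap_rfc6040("Not-ECT", "CE") == DROP
```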

On 20 November 2020 07:04:56 CET, Ingemar Johansson S <> wrote:
Hi David, Pete
I am trying to work out for myself what this scenario shows, and somehow I see it more as a flow-isolation problem that makes FQ non-functional rather than as an L4S problem?
There is of course a possibility that VPNs do not implement RFC 6040 properly. I guess for software VPNs a fix is only an update cycle away; hardware/firmware VPNs can of course be a different story, but I guess that, similar to the discussion on home gateways and ECN a few months ago, they can be upgradeable too?
From: tsvwg <> On Behalf Of Black, David
Sent: 19 November 2020 22:20
To: Pete Heist <>
Cc: tsvwg IETF list <>
Subject: [tsvwg] Another tunnel/VPN scenario (was RE: Reasons for WGLC/RFC asap)
[posting as an individual]
> I'll leave it to the WG to come up with examples of what types of tunnels and traffic scenarios could lead to this,
> but one example is a user who has a privacy VPN on their PC, and fq_codel on their home gateway.
> Let's say one flow connects to an L4S capable server, and another flow to a non-L4S, conventional server.
> The L4S flow will dominate the non-L4S one (whether it's ECN capable or not), probably causing some level
> of poor service, perhaps for a video stream, download, or whatever.
It’s more than home gateways – there will be increasing use of VPNs with public or shared WiFi to block snooping by other WiFi devices and/or the access point infrastructure.  In that case, the WiFi access point and nodes between the access point and the VPN gateway can only look at the outer IP header applied by the VPN.  If the VPN preserves packet boundaries and complies with RFC 6040, then ECT(1) in the inner header will show up in the outer header, but not all VPNs do both of those.
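David's point about RFC 6040-compliant encapsulation fits in one small function: in the RFC's normal mode, the outer header copies the inner codepoint, except that an inner CE becomes outer ECT(0). The spelled-out codepoint names are for illustration only.

```python
def encap_rfc6040_normal(inner: str) -> str:
    """RFC 6040 normal-mode encapsulation: copy the inner ECN
    codepoint to the outer header, except that inner CE becomes
    outer ECT(0), so an existing mark cannot be erased in transit."""
    return "ECT(0)" if inner == "CE" else inner

# The L4S identifier survives encapsulation, so an L4S-aware AQM on
# the path can still classify the tunnel packet:
assert encap_rfc6040_normal("ECT(1)") == "ECT(1)"
# A VPN in RFC 6040 "compatibility mode" (or one predating the RFC)
# instead sets the outer header to Not-ECT, hiding ECT(1) entirely.
```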
Thanks, --David
From: tsvwg <> On Behalf Of Pete Heist
Sent: Thursday, November 19, 2020 3:03 PM
To: Gorry Fairhurst
Cc: tsvwg IETF list
Subject: Re: [tsvwg] Reasons for WGLC/RFC asap

On Thu, 2020-11-19 at 16:34 +0000, Gorry Fairhurst wrote:
On 19/11/2020 16:22, Pete Heist wrote:
Hi Koen,
Rather than thinking of this as advantages and disadvantages to waiting, I see it as an engineering process. It was decided earlier this year that the L4S proposal has enough support to continue, so we're on that path now. Part of that decision, as I understood it, also recognized that there are valid safety concerns around compatibility with existing AQMs, and some solution needs to be devised.
RFC3168 bottleneck detection was added to TCP Prague, but it appears to be difficult to do reliably when there is jitter or cross-flow traffic, and it has since been disabled in the reference implementation. The l4s-ops draft was started, but isn't complete yet and may need WG adoption as part of a LC. We can then decide how effective the proposed mitigations are against the risks and prevalence.
To start a WGLC now would circumvent that earlier recognition that a safety case needs to be made. Meanwhile, since testing showed that tunnels through RFC3168 FQ AQMs are a straightforward path to unsafe flow interaction, along with other issues relative to the goals, it doesn't seem like the engineering process is done just yet.
By the way, I liked your data - and it helped me a lot to look at this, thanks very much for doing this.

I'm glad, as I think we're at our best when we're doing engineering and producing data. I wish it were easier to do!
It would help me if you clarified what you mean by "unsafe" - to me, "safety" relates to traffic that is unresponsive to drop, as in CBR traffic, etc. I've not understood how CE-marked traffic can erode safety, but maybe I missed something?

Sure, so the existing RFC3168 CE signal in use on the Internet today indicates an MD (multiplicative decrease), whereas the redefined CE signal in L4S indicates an AD (additive decrease). Two congestion controls responding to CE in a different way, or one that responds to CE with an AD and one that responds only to drop (i.e. all standard congestion controls that advertise Not-ECT), will not interact safely in the same RFC3168 signaling queue. We're probably on the same page here already, but I'll refer to section 5 of RFC8257.
That is one of the reasons why ECT(1) is used in L4S to place L4S flows in the L queue - to keep them separate from conventional flows in the C queue. As long as flows have advertised their capability correctly, that works.
However, existing RFC3168 queues have no knowledge of L4S, and therefore will not know that ECT(1) means that traffic needs to be segregated and signaled in a different way. They will signal a Prague flow, which sets ECT(1), with CE, expecting the flow to respond with an MD rather than an AD. Meanwhile, they'll signal an RFC3168 or non-ECN flow with either CE or drop, and in either case the flow will respond with an MD, causing conventional flows to yield to Prague flows to varying degrees depending on the AQM in use.
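The differing responses can be put in toy numbers. The halving is the standard RFC 3168/5681 reaction; the alpha-scaled reduction follows the DCTCP pattern that Prague builds on, with an assumed alpha value purely for illustration:

```python
def classic_md(cwnd: float) -> float:
    """Classic RFC 3168 response to CE: one multiplicative
    decrease (halving, per RFC 5681) per RTT with a mark."""
    return cwnd / 2.0

def scalable_response(cwnd: float, alpha: float) -> float:
    """DCTCP-style response that Prague builds on: the reduction is
    scaled by alpha, an EWMA of the fraction of marked packets.
    Sparse marking keeps alpha small and the back-off shallow."""
    return cwnd * (1.0 - alpha / 2.0)

cwnd = 100.0
print(classic_md(cwnd))               # 50.0
print(scalable_response(cwnd, 0.05))  # 97.5
```

In a shared RFC3168 queue both flow types see the same CE marks, so the flow taking the shallow reduction keeps the capacity the halving flow gives up.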
Here's an example of CUBIC and Prague when they end up in the same fq_codel queue: <>
Here's a more extreme example of Reno and Prague sharing a single PIE queue with ECN enabled (less common): <>
In the example with PIE, Reno appears to be driven at or close to minimum cwnd. In the fq_codel example, the steady state throughput of Prague:CUBIC is around 19:1. We've seen a range in the Codel case from around 12:1 to 20:1. In my opinion, we could use the word "unsafe" here in both cases.
I'm not sure why "tunnels" have crept in here. There have always been side-effects with classification (and hence scheduling), but I don't see new issues relating to "tunnels" with ECN.

Tunnels are relevant because they provide an easy practical path to the unsafe flow interaction described above. The widely used fq_codel qdisc has ECN enabled by default. Fortunately, because it has flow-fair queueing, Prague flows and conventional flows are usually placed in a separate queue (hash collisions aside), causing Prague to only affect itself with additional delay (TCP RTT). However, a tunnel's encapsulated packets all share the same fq_codel queue because they all have the same 5-tuple, so there is unsafe interaction between the tunnel's flows. Here we use Wireguard through fq_codel: <>
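The queue-collapse effect described above can be sketched in a few lines, using a stand-in hash and made-up addresses rather than fq_codel's actual classifier:

```python
NUM_QUEUES = 1024  # fq_codel's default flow count

def queue_for(five_tuple) -> int:
    """Map a (src, dst, proto, sport, dport) 5-tuple to a queue
    index, as a stand-in for fq_codel's flow classifier."""
    return hash(five_tuple) % NUM_QUEUES

def encapsulate(inner_tuple, tunnel_tuple):
    """A tunnel hides the inner header: the AQM can only classify
    on the tunnel's outer 5-tuple."""
    return tunnel_tuple

prague_flow = ("10.0.0.2", "203.0.113.5", "tcp", 40001, 443)
cubic_flow  = ("10.0.0.2", "198.51.100.7", "tcp", 40002, 443)
tunnel      = ("10.0.0.2", "192.0.2.9", "udp", 51820, 51820)

# Untunneled, the two flows usually land in different queues and
# cannot starve each other (hash collisions aside). Tunneled, they
# always share one queue - and one CE-marking AQM state:
assert (queue_for(encapsulate(prague_flow, tunnel))
        == queue_for(encapsulate(cubic_flow, tunnel)))
```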
I'll leave it to the WG to come up with examples of what types of tunnels and traffic scenarios could lead to this, but one example is a user who has a privacy VPN on their PC, and fq_codel on their home gateway. Let's say one flow connects to an L4S capable server, and another flow to a non-L4S, conventional server. The L4S flow will dominate the non-L4S one (whether it's ECN capable or not), probably causing some level of poor service, perhaps for a video stream, download, or whatever.
I'm not commenting on when the Chairs think a WGLC will provide useful information, we'll say that in due course.

Ok, I trust that we'll engage enough disinterested people into congestion control who will add their input.
Thanks Gorry for looking this over. :)
Best wishes,


On Wed, 2020-11-18 at 10:31 +0000, De Schepper, Koen (Nokia - BE/Antwerp) wrote:
Hi all,
To continue on the discussions in the meeting, a recap and some extra thoughts. Did I miss some arguments?
Benefits of going to WGLC/RFC asap:
- There is NOW a big need for solutions that can support Low Latency for new interactive applications.
- The big L4S benefits were a good reason to justify the extra network effort to finally implement ECN in general and AQMs in network equipment.
- Timing is optimal now: implementations in NW equipment are coming and deployment can start now.
- Deployment of L4S support will include deployment of Classic ECN too! So even the skeptics among us, who consider that the experiment can fail due to CCs not performing to expectations, will fall back to having Classic ECN support.
- The current drafts are about the network part, and have been ready and stable for a very long time now.
- The only dependency on CCs in the drafts is the mandatory Prague requirements (the only required input/review from future CC developers: are they feasible for you?).
- We have a good baseline for a CC (upstreaming to Linux is blocked by the non-RFC status).
- Larger-scale (outside the lab) experiments are blocked by the non-RFC status.
- It will create the required traction within the CC community to come up with improvements (if needed at all for the applications that would benefit from it; applications that don’t benefit from it yet can/will not use it).
- NW operators have benefits now (Classic ECN and good AQMs) and in the future can offer their customers a better Low Latency experience for the popular interactive applications.
- When more L4S CCs are developed, the real independent evaluation of those can start.
Disadvantages of waiting for WGLC/RFC:
- We’ll get stuck in analysis paralysis (aren’t we already?).
- Trust in L4S will vanish.
- There are no signs that we can expect more traction in CC development; expectations of continuous delays will not attract people to work on it, as there will be plenty of time before deployments materialize.
- Product development of L4S will stall and die due to uncertainty about whether L4S will finally materialize.
- Product development of Classic ECN will stall and die due to uncertainty about how L4S will finally materialize.
What are the advantages of waiting? Do they overcome these disadvantages?