Re: [tsvwg] Follow-up to your DSCP and ECN codepoint comments at tsvwg interim

Bob Briscoe <> Mon, 16 March 2020 10:32 UTC

To: Sebastian Moeller <>
Cc: tsvwg IETF list <>, Steven Blake <>
From: Bob Briscoe <>
Date: Mon, 16 Mar 2020 10:32:29 +0000


There are two very important misconceptions in this email that other 
people might be labouring under, so sorry for dredging up this week-old 
thread... inline...

On 08/03/2020 19:52, Sebastian Moeller wrote:
> Hi Bob,
> more below in-line.
>> On Mar 8, 2020, at 19:27, Bob Briscoe <> wrote:
>> Sebastian,
>> On 08/03/2020 15:34, Sebastian Moeller wrote:
>>> Hi Bob,
>>> More below.
>>> On March 8, 2020 3:23:39 PM GMT+01:00, Bob Briscoe <> wrote:
>>>> Sebastian,
>>>> On 07/03/2020 19:16, Sebastian Moeller wrote:
>> especially not for applications that might not be possible any other way (both low latency and capacity-seeking high bandwidth has traditionally been considered impossible).
> 	[SM] The bigger point is IMHO, is greedy and low-latency actually a desirable combination of features?

[BB] Not to have grokked this basic requirement is a serious disconnect.

Short answer:
Application-limited does not preclude needing to be capacity-seeking. 
"Application-limited" is decided when the application is written, 
whereas "capacity-seeking" (a.k.a. "greedy") copes with potential 
conditions encountered at run-time. So when a flow needs to be 
capacity-seeking, you don't want the onset of queuing to ruin the low 
latency.

Longer answer:
Presumably you're not disagreeing that being able to have high 
throughput at the same time as very low latency would enable many new 
applications (natural video-based interaction, video-based remote 
control, cloud-based AR/VR, online gaming, remote desktop, etc).

Even if the throughput of a video flow is usually application-limited, 
the higher the throughput, the more likely it will sometimes encounter 
scenarios where it is limited by available capacity. For instance,
a) when a path with a lower capacity bottleneck is used (e.g. I 
communicate with someone who has a lower capacity access link, or I move 
to a lower capacity access), or
b) because enough other traffic has arrived at the existing bottleneck.

Then you want the congestion control to become capacity-seeking (not 
unresponsive), without losing the low latency.

Every time L4S is presented or written about, the same high level 
requirement of high throughput and low latency is repeated. What did you 
think the whole L4S activity was about?

[I've snipped the next part of the conversation, because it was 
opinion-based and going nowhere.]

> 	[SM] But here is the rub: the L4S architecture makes it unduly hard to separate low-latency queueing from 1/p-type congestion control. As I fully agree these are ORTHOGONAL to each other, artificially coupling them, e.g. by making ECT(1) both denote a 1/p-type response to CE AND request admission to the LL queue, is exactly the wrong thing to do. Glad that you agree.

[BB] Another rudimentary disconnect here...

Queuing delay is not caused by the queue, it's caused by the 
congestion-controllers that induce the queue.

1/p congestion controllers are what keep the sawteeth small, so that the 
ECN-marking threshold in the queue can be shallow without causing 
underutilization.
The CoDel config of 'target' (min queue depth) was recommended to be set 
quite low despite leading to underutilization for higher RTTs. However, 
if the congestion controllers are not scalable (1/p), even if they keep 
the sawteeth fairly small today, they will grow as link rate scales. 
Then we'll be back in the same hole in a few years' time.

Unless we sort this out properly, the problem will always remain that we 
have to stay friendly to original Reno flows still likely to be 
running over the Internet. The L4S queue is a transition mechanism that
a) creates a clean slate isolated from that slow-death problem, and
b) gives immediate benefit today, which encourages adoption (as it 
patently already has).

Again, every time L4S is presented or written about, this explanation is 
given up-front.

[Again I've snipped some opinion-based conversation, going nowhere...]

> 	[SM] That seems all irrelevant to L4S though, and especially why guarding L4S use of ECT(1) with a bespoke DSCP would hinder EXPERIMENTAL roll-out of L4S in any meaningful way. As far as I can tell, you seem to think about how to deploy L4S as a standards RFC, but how about first making sure there is a viable path to it becoming a standard?

[BB] If an experiment cannot transition to wider usage, potential 
adopters will notice and not even start. The limits to the experiment 
are best put in place by the experimenters, not by adding inherent 
limits to the technology.


> Best Regards
> 	Sebastian
>> Bob
>> {Note 1}: E.g. initially some might only classify ECT1 into the L queue for business customers, or as a product for a higher tier of customers, or solely for the operator's own services (if allowed in their jurisdiction). Other ISPs, like you say, will want to use it as a differentiator for their whole regular service (see draft-ietf-l4s-arch).
>> {Note 2}: Of course, certain ISPs might pervert the ECN signal, but I think that's less likely, 'cos they can only access traffic in their own network, which inherently hits the e2e congestion control of their own customers. If we think that's a possibility, L4S senders could cover the least significant bit of the ECN field with integrity protection to raise the bar against network interference, by tying it to the integrity failure of each whole packet.
>>> And especially for the scope of the experimental deployment? The experiment is required to make sure that L4S can be deployed in a safe and backward-compatible fashion and that it can deliver its promises under real-world conditions.
>>> The experiment is NOT about how to deploy something in a fashion that offers the path of least resistance/the highest adoption rate. As a rule of thumb, I would assume the IETF to be interested in the technical aspects and not the marketing side....
>>> Regards
>>>          Sebastian
>>>> Bob
>> -- 
>> ________________________________________________________________
>> Bob Briscoe                     

Bob Briscoe