Re: [tsvwg] New Version Notification for draft-heist-tsvwg-ecn-deployment-observations-02.txt

Bob Briscoe <> Mon, 22 March 2021 17:51 UTC

To: Sebastian Moeller <>
Cc: Pete Heist <>, tsvwg IETF list <>
References: <> <> <> <> <>
From: Bob Briscoe <>
Message-ID: <>
Date: Mon, 22 Mar 2021 17:51:45 +0000


On 09/03/2021 12:37, Sebastian Moeller wrote:
>> On 08/03/2021 22:19, Pete Heist wrote:
> [...]
>>> IMO, fq_codel isn't ideal in this situation. When there actually is
>>> congestion to manage, flow-fairness means that one user can dominate
>>> the queue with many flows, which is exactly one of the problems we'd
>>> like to avoid. Host fairness could improve on this, or better yet,
>>> member fairness using the member database, as members can have more
>>> than one IP/MAC. But, fq_codel is what's available now, and generally
>>> usable enough in this case to get started with AQM.
> [...]
>> On Mar 9, 2021, at 01:25, Bob Briscoe <> wrote:
> [...]
>> PS. I agree it's very strange for an ISP to share capacity between its members by how many flows they are using.
> [...]
> I just want to note two things here:
> a) even without a flow queueing AQM on the bottleneck, sharding works to get a higher share of the capacity, since TCP tends to be more or less fair to itself
> b) most ISPs put a lid on games like that by simply also enforcing (policing or shaping) each user's aggregate capacity to the numbers mentioned in the contract...
> So fq_codel, while by no means ideal here, does not really make things worse than the status quo; it is just that sharding can make pure flow fairness regress to less equitable sharing between classes other than flows.

[BB] You seem to be using "the status quo" to mean TCP sharing out the 
capacity between users. But that has never been the status quo: since 
even before TCP congestion control was invented in 1988, per-customer 
scheduling has been the norm for public ISPs {Note 1}, irrespective of 
access technology, whether DSL, DOCSIS cable, PON, mobile, satellite, 
etc. They all share downstream capacity between customers at a node that 
is either at the head of the bottleneck access link, or logically 
limited to match the capacity of the customer's bottleneck access link.
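A toy calculation (mine, for illustration only — the member names, flow 
counts and capacity figure are invented, not from the draft or the 
thread) makes the difference concrete: under idealized per-flow fairness 
a member's share grows with its flow count, whereas a per-customer 
scheduler is indifferent to it:

```python
# Hypothetical illustration: per-flow fairness vs per-customer fairness
# at a shared bottleneck. All names and numbers are made up.

def flow_fair_shares(flows_per_customer, capacity):
    """Idealized per-flow fairness (the fq_codel steady state): each flow
    gets an equal share, so a customer's share scales with its flow count."""
    total_flows = sum(flows_per_customer.values())
    return {c: capacity * n / total_flows
            for c, n in flows_per_customer.items()}

def customer_fair_shares(flows_per_customer, capacity):
    """A per-customer scheduler: each customer gets an equal share,
    no matter how many flows it opens."""
    n_customers = len(flows_per_customer)
    return {c: capacity / n_customers for c in flows_per_customer}

flows = {"member_a": 16, "member_b": 2, "member_c": 2}  # member_a shards
cap = 100.0  # Mbit/s, arbitrary

print(flow_fair_shares(flows, cap))      # member_a gets 80 Mbit/s
print(customer_fair_shares(flows, cap))  # each member gets ~33.3 Mbit/s
```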

You might be able to turn a few debating somersaults and claim that the 
problem is the absence of a per-customer scheduler, not the presence of 
an FQ scheduler where this scheduler would normally sit. If that allows 
you to sleep at night, you're welcome to live in your Alice Through the 
Looking-Glass world.

{Note 1}: I can't immediately find evidence of this from the late 1980s. 
All I can find is Appx B of TR-059 from the DSL Forum, but that was only 
in 2003, when QoS was added after Diffserv was standardized. The closest 
I can get to something older is the second half of RFC 970, written in 
1985, which was an early step towards per-customer scheduling. You can 
skip over its first half, which tries to solve an early form of 
bufferbloat that predated TCP congestion control. Alternatively, I'm 
sure many people on this list can confirm that per-customer scheduling 
has 'always' been the norm.

> For a cooperative use case, something like a per-member QFQ instance that equitably shares capacity between members with an fq_codel inside each of the QFQ classes seems like a better fit, no?

Well, once you've got the per-member scheduling, an AQM in each member's 
FIFO would suffice. But you can have your FQ-CoDel in there instead if 
you must.
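A rough sketch of that structure (entirely hypothetical — the class 
name, quantum and queue-depth threshold are invented for illustration, 
not anything from the thread): a deficit-round-robin scheduler over one 
FIFO per member, with a crude tail-drop threshold standing in for the 
per-member AQM:

```python
# Hypothetical sketch: per-member deficit round robin with one FIFO per
# member and a toy threshold "AQM". All constants are invented.

from collections import deque

AQM_LIMIT = 5    # drop when a member's FIFO exceeds this depth (toy AQM)
QUANTUM = 1500   # bytes of credit added per round, roughly one MTU

class MemberScheduler:
    def __init__(self):
        self.queues = {}   # member -> deque of packet sizes (bytes)
        self.deficit = {}  # member -> accumulated credit (bytes)

    def enqueue(self, member, size):
        q = self.queues.setdefault(member, deque())
        self.deficit.setdefault(member, 0)
        if len(q) >= AQM_LIMIT:
            return False   # "AQM" drop: this member's queue is too long
        q.append(size)
        return True

    def dequeue_round(self):
        """One DRR round: serve each backlogged member up to its quantum,
        so members get roughly equal bytes regardless of flow count."""
        sent = []
        for member, q in self.queues.items():
            if not q:
                continue
            self.deficit[member] += QUANTUM
            while q and q[0] <= self.deficit[member]:
                size = q.popleft()
                self.deficit[member] -= size
                sent.append((member, size))
            if not q:
                self.deficit[member] = 0  # idle members keep no credit
        return sent
```

A real deployment would of course use proper AQM (CoDel, PIE) per member 
FIFO rather than a depth threshold, but the scheduling skeleton is the 
same: fairness is enforced between members, not between flows.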

When a node is serving a few thousand members, you don't want to 
unnecessarily have a thousand or so queues per member as well - at least 
not if you can achieve similar performance without them.

For instance, BT's network architect set me the task of simplifying BT's 
QoS architecture, because even 5 queues per customer was considered to 
be the main cause of system complexity. See Section 3.1.2 of this 


> Best Regards
> 	Sebastian

Bob Briscoe