Re: [tsvwg] Comments on L4S drafts

"Holland, Jake" <> Wed, 10 July 2019 13:56 UTC

From: "Holland, Jake" <>
To: Bob Briscoe <>
CC: "" <>, "" <>
Thread-Topic: Comments on L4S drafts
Date: Wed, 10 Jul 2019 13:55:18 +0000

Hi Bob,

Responses to a few points inline.  And sorry for the slow response
on this message; it's been a busy time.

From: Bob Briscoe <>
Date: 2019-06-19 at 07:11

[BB] Understood. I was concerned that I was demolishing your idea in public, and I was trying to thank you for being willing to put up a strawman. 

[JH] My pleasure :)  I just wanted to raise the point in hopes
it would help avoid heading down a wrong path there.

Indeed, FQ itself screwed up the work on background transport protocols, and many other plans for novel applications of unequal throughput (I'll start a separate thread on that).

[JH] That's interesting, and I look forward to hearing more about it.
But I'd be surprised if this isn't addressable in some other, perhaps
better, ways... Between RFC 8622 and a sender's ability to just back off
more aggressively or persistently under congestion signals, this doesn't
seem as intractable as the fairness problem that FQ solves, so my first
instinct is that this is a good trade.  But you seem to have thought
more about it, so I'm interested to hear the counterexamples.

Don't worry. Classic ECN fall-back is on the ToDo list. I just didn't want to do it unless we have to, cos I prefer simplicity.

[JH] My concern is that this seems like a sender-side safety problem
that might not be discovered right away, lacking extensive traffic,
but that makes it unsafe to do otherwise reasonable things (like PIE
with ECN) that might seem like a pretty good idea.  If this gets
rolled out without the safety valves, it seems like the kind of
thing that would blow up later, with the problem being traffic from
systems that haven't been touched in years, under the right kind of
pressure, which might not be all that unlikely.

And thus it seems worth raising as problematic ossification that's
worth avoiding if possible, rather than making existing deployed code
obsolete without properly deprecating it.

3. One more meta-point: the sales-y language makes the drafts hard to
read for me
[BB] if there are any you want changed, pls call them out.

Thanks for inviting the critique, I haven't been quite sure how to
approach this.  I hope this is received in the spirit it's offered: as an
attempt to improve the document text to make it easier for future readers,
especially potential future implementors.

I'm going to have to do this in sections, because it seems to me there's
quite a large density of advocacy and unquantified hype-increasing
value judgements for an RFC, and that it's spread pretty widely.  If you
want this kind of review for the whole thing, it may take a while, but
hopefully I can give a general idea that could be applied to other
sections as well.  But if needed, I'll try to raise what I can elsewhere
too, my thinly-stretched time permitting.

Also: I recognize that there's some editorial discretion here that can
reasonably differ, and I don't insist that all the instances I'll try to
raise be stripped completely, but it seems to me there's a lot of
text--maybe as much as half or more of the paragraphs in these docs--
with issues like the ones I'll mention here.  My experience reading
these docs was characterized by frequently having to push down the
skepticism I get when someone's trying to sell me something, and
re-focus on the tech.  So this section is based on the assumption that
when an implementor is trying to read an RFC, there's negative value in
exposition with even a little tendency toward hype.

I'll start with just the abstract for l4s-arch:

Overall, I think this can be summarized a lot more concisely, and would
read better if the benefits were outlined more as goals, and less as
speculative claims, and if a lot of the exposition were cut, or moved to
the introduction where it's necessary.

Here's a straw-man suggestion for your consideration, please feel free
to use it or adjust as needed:

This document describes an architecture for "Low Latency, Low
Loss, Scalable throughput" (L4S), a new Internet service targeted
to replace best-effort transport service eventually, via
incremental deployment.

L4S-capable senders rely on congestion signaling to eliminate
queuing delay and loss while maintaining high link utilization
during sustained transfers.  L4S-capable bottlenecks rely on
sender response and packet classification to maintain a low queue
occupancy and provide preferential forwarding for L4S traffic.

Bottleneck link capacity is shared with non-L4S traffic, providing
low loss and low latency to L4S traffic, but with inter-class
fairness roughly equal to inter-flow TCP competition.  This provides
improved fairness relative to Diffserv solutions that use traffic
priority to provide low latency.

But to illustrate the nature of the issues to which I'm referring,
and in case you don't like that text, I'll also flag the points that
gave me trouble in the original:

   This document describes the L4S architecture for the provision of a
   new Internet service that could eventually replace best efforts for

- "could eventually replace" is a speculative claim, better expressed
as a goal IMO.

   all traffic: Low Latency, Low Loss, Scalable throughput (L4S).  It is
   becoming common for _all_ (or most) applications being run by a user

- "_all_ (or most)" means the same thing as "most", and framing it with
"all" seems to have no purpose beyond hype?
- underlining is prohibited punctuation (RFC 7322, section 3.2)

   at any one time to require low latency.  However, the only solution

- "require" seems a hype-adding exaggeration, where something
like "benefit from" is closer to fair.

   the IETF can offer for ultra-low queuing delay is Diffserv, which

- "only solution the IETF can offer" is a mistake, assuming this doc
becomes an IETF-offered solution.  Something about "previous
low-latency solutions rely on Diffserv" seems closer to correct.

   only favours a minority of packets at the expense of others.  In
   extensive testing the new L4S service keeps average queuing delay

- "extensive" is an unquantified value judgement that looks like hype

   under a millisecond for _all_ applications even under very heavy

- underlining prohibited
- "all" is misleading, regarding the overload states that fail over to
classic queue

   load, without sacrificing utilization; and it keeps congestion loss
   to zero.  It is becoming widely recognized that adding more access

- zero is also misleading in overloaded states.
- "widely recognized" is a weird basis for this claim, and also an
unquantified hype-adding value judgement

   capacity gives diminishing returns, because latency is becoming the
   critical problem.  Even with a high capacity broadband access, the

- "latency is the critical problem" is a context-sensitive claim, and
adds hype.

   reduced latency of L4S remarkably and consistently improves

- "remarkably" is an unquantified value judgement, and also makes a
good case study for the general claim of sales-y language I'm
making:  searching for "remarkable" or "remarkably" finds that
they're used in only 4 RFCs so far, all of which are referring to
historical occurrences that exceeded expectations (regarding SMTP
in 1869 and 5598, and Jon Postel's contributions in 5540 and 2555).
Using this term for an expectation of performance in an undeployed
system feels like over-the-top hype to me, even if it might come
true eventually, and even if it exceeded expectations in testing.
- "consistently" likewise is an unquantified value judgement

   performance under load for applications such as interactive video,

- "improves performance" is a context-sensitive claim (it would only
get parity depending on the metrics and conditions).

   conversational video, voice, Web, gaming, instant messaging, remote
   desktop and cloud-based apps (even when all being used at once over
   the same access link).  The insight is that the root cause of queuing

- "the insight is that the root cause" is expository and not concise.
(this one isn't about hype, just editorially the point seems misplaced
in the abstract.)

   delay is in TCP, not in the queue.  By fixing the sending TCP (and
   other transports) queuing latency becomes so much better than today
   that operators will want to deploy the network part of L4S to enable

- "so much better that operators will want" is a highly speculative
hype-y claim, and context-specific.  This has been historically quite
hard to predict, and strongly relies on an absence of unexpected issues
that may not be discoverable in test environments.  This also seems
over the top.

   new products and services.  Further, the network part is simple to
   deploy - incrementally with zero-config.  Both parts, sender and

- the "Further, the..." sentence is expository and redundant

   network, ensure coexistence with other legacy traffic.  At the same

- "legacy" is a bit presumptuous

   time L4S solves the long-recognized problem with the future
   scalability of TCP throughput.

   This document describes the L4S architecture, briefly describing the
   different components and how the work together to provide the

- nit: "the"->"they"

   aforementioned enhanced Internet service.

- In closing, I'll also note that at 1964 characters, this abstract is
more than 5 standard deviations above the mean length for RFC
abstracts in the last 15 years (~521+/-267), and would set a new record
(beating RFC 8148 at 1898), so it seems useful to cut it back somehow.
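(If it helps, that "5 standard deviations" figure is just a standard
score computed from the sample mean and standard deviation quoted
above; the numbers are the ones from my measurement, the helper name
is mine.)

```python
def z_score(value, mean, std):
    """Standard score: how many standard deviations `value`
    sits from the mean."""
    return (value - mean) / std

# Abstract length 1964 chars vs. sample mean ~521, std ~267:
z = z_score(1964, 521, 267)
# z comes out a bit over 5.4, i.e. more than 5 standard
# deviations above the mean.
```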

I hope that's helpful, and thanks again for inviting critique on this.

I'll see whether this comment is considered helpful and whether it
provides enough information to generalize before moving on to other
sections, but hopefully these examples demonstrate the overall nature
of the issues I was having trouble with.

Also worth mentioning: I'm of course only one voice, and this is about
consensus.  If others agree or disagree, it would be good to know,
along with whatever caveats, before either of us puts a lot of work
into attempting a big editorial overhaul, independently of whether the
technical considerations lately under discussion end up having any
impact.

Best regards,
Jake