Re: [tsvwg] Comments on L4S drafts

"Holland, Jake" <jholland@akamai.com> Wed, 10 July 2019 13:56 UTC

From: "Holland, Jake" <jholland@akamai.com>
To: Bob Briscoe <ietf@bobbriscoe.net>
CC: "tsvwg@ietf.org" <tsvwg@ietf.org>, "ecn-sane@lists.bufferbloat.net" <ecn-sane@lists.bufferbloat.net>
Date: Wed, 10 Jul 2019 13:55:18 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/wjEpSmefPe6AyLYIdau49w6ZDis>
Subject: Re: [tsvwg] Comments on L4S drafts

Hi Bob,

Responses to a few points are inline.  And sorry for the slow response
to this message; it's been a busy time.


From: Bob Briscoe <ietf@bobbriscoe.net>
Date: 2019-06-19 at 07:11

[BB] Understood. I was concerned that I was demolishing your idea in public, and I was trying to thank you for being willing to put up a strawman. 

[JH] My pleasure :)  I just wanted to raise the point in hopes
it would help avoid heading down a wrong path there.


Indeed, FQ itself screwed up the work on background transport protocols, and many other plans for novel applications of unequal throughput (I'll start a separate thread on that).

[JH] That's interesting, and I look forward to hearing more about it.
But I'd be surprised if this isn't addressable in some other, perhaps
better ways... Between RFC 8622 and a sender's ability to just back off
more aggressively or persistently under congestion signals, this doesn't
seem as intractable as the fairness problem that FQ solves, so my first
instinct is that this is a good trade.  But you seem to have thought
more about it, so I'm interested to hear the counterexamples.
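
To make that concrete, here's the kind of sender response I have in
mind, as a rough sketch (purely illustrative: the class, constants, and
increase/decrease choices are invented for this email, not taken from
any draft or implementation):

    # Hypothetical "scavenger"-style congestion response: yield capacity
    # by reacting to each congestion signal more sharply, and recovering
    # more slowly, than a Reno-style flow would.
    SCAVENGER_BETA = 0.25    # back off harder than Reno's halving...
    SCAVENGER_HOLD_RTTS = 4  # ...and wait several RTTs before growing again

    class ScavengerCwnd:
        def __init__(self, cwnd=10.0, min_cwnd=2.0):
            self.cwnd = cwnd
            self.min_cwnd = min_cwnd
            self.hold = 0

        def on_congestion_signal(self):
            # Loss or ECN CE mark: multiplicative decrease, deeper than Reno's.
            self.cwnd = max(self.min_cwnd, self.cwnd * SCAVENGER_BETA)
            self.hold = SCAVENGER_HOLD_RTTS

        def on_rtt_without_congestion(self):
            # Persist at the reduced rate for a few RTTs before probing
            # again, and probe more gently than one segment per RTT.
            if self.hold > 0:
                self.hold -= 1
            else:
                self.cwnd += 0.5

The point being that a sender like this cedes capacity to ordinary
flows at any shared bottleneck, with or without help from the network.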


Don't worry. Classic ECN fall-back is on the ToDo list. I just didn't want to do it unless we have to, cos I prefer simplicity.

[JH] My concern is that this seems like a sender-side safety problem
that might not be discovered right away for lack of widespread traffic,
but that makes it unsafe to do otherwise reasonable things (like PIE
with ECN) that might seem like a pretty good idea.  If this gets rolled
out without the safety valves, it seems like the kind of thing that
would blow up later, with the problem being traffic from systems that
haven't been touched in years coming under the right kind of pressure,
which doesn't seem all that unlikely.

And thus it seems worth raising as a problematic ossification to avoid
if possible, rather than making existing deployed code obsolete without
properly deprecating it.
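
For what it's worth, the sort of sender-side safety valve I'm picturing
is roughly the sketch below.  This is just to illustrate the shape of
the thing; the detection heuristic, thresholds, and decrease factors
are invented for this email and aren't from the drafts or from any
implementation I know of:

    # Hypothetical fallback: guess whether the bottleneck AQM is a
    # classic ECN queue (e.g. PIE or RED with ECN) rather than an
    # L4S-style immediate marker, and if so revert to a Reno-like
    # response to CE marks.
    CLASSIC_QDELAY = 0.005   # sustained >5 ms standing queue alongside marks
    DETECTION_ROUNDS = 10    # rounds of evidence before falling back

    class ClassicEcnFallback:
        def __init__(self):
            self.suspect_rounds = 0
            self.classic_mode = False

        def on_round(self, ce_fraction, srtt, min_rtt):
            """Called once per RTT with the CE-marked fraction and RTT estimates."""
            queuing_delay = srtt - min_rtt
            # An L4S marker keeps queuing delay near zero even while it
            # marks; CE marks that arrive together with a standing queue
            # suggest a classic (deep-target) AQM at the bottleneck.
            if ce_fraction > 0 and queuing_delay > CLASSIC_QDELAY:
                self.suspect_rounds += 1
            else:
                self.suspect_rounds = 0
            if self.suspect_rounds >= DETECTION_ROUNDS:
                self.classic_mode = True

        def decrease_factor(self):
            # Reno-like halving when we think we're behind a classic ECN
            # AQM, otherwise a shallower DCTCP/L4S-style response.
            return 0.5 if self.classic_mode else 0.85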


3. One more meta-point: the sales-y language makes the drafts hard to
read for me
[BB] if there are any you want changed, pls call them out.

Thanks for inviting the critique; I haven't been quite sure how to
approach this.  I hope this is received in the spirit it's offered: as an
attempt to improve the document text to make it easier for future readers,
especially potential future implementors.

I'm going to have to do this in sections, because it seems to me there's
quite a large density of advocacy and unquantified hype-increasing
value judgements for an RFC, and that it's spread pretty widely.  If you
want this kind of review for the whole thing, it may take a while, but
hopefully I can give a general idea that could be applied to other
sections as well.  But if needed, I'll try to raise what I can elsewhere
too, my thinly-stretched time permitting.

Also: I recognize that there's some editorial discretion here that can
reasonably differ, and I don't insist that all the instances I'll try to
raise be stripped completely, but it seems to me there's a lot of
text--maybe as much as half or more of the paragraphs in these docs--
with issues like the ones I'll mention here.  My experience reading
these docs was characterized by frequently having to push down the
skepticism I get when someone's trying to sell me something, and
re-focus on the tech.  So this section is based on the assumption that
when an implementor is trying to read an RFC, there's negative value in
exposition with even a little tendency toward hype.

I'll start with just the abstract for l4s-arch:

Overall, I think this can be summarized a lot more concisely, and would
read better if the benefits were outlined more as goals, and less as
speculative claims, and if a lot of the exposition were cut, or moved
to the introduction where it's needed.

Here's a straw-man suggestion for your consideration, please feel free
to use it or adjust as needed:

<suggested_abstract_text>
This document describes an architecture for "Low Latency, Low
Loss, Scalable throughput" (L4S), a new Internet service targeted
to replace best-effort transport service eventually, via
incremental deployment.

L4S-capable senders rely on congestion signaling to eliminate
queuing delay and loss while maintaining high link utilization
during sustained transfers.  L4S-capable bottlenecks rely on
sender response and packet classification to maintain a low queue
occupancy and provide preferential forwarding for L4S traffic.

Bottleneck link capacity is shared with non-L4S traffic, providing
low loss and low latency to L4S traffic, but with inter-class
fairness roughly equal to inter-flow TCP competition.  This provides
improved fairness relative to Diffserv solutions that use traffic
priority to provide low latency.
</suggested_abstract_text>


But to illustrate the nature of the issues to which I'm referring,
and in case you don't like that text, I'll also flag the points that
gave me trouble in the original:

   This document describes the L4S architecture for the provision of a
   new Internet service that could eventually replace best efforts for

- "could eventually replace" is a speculative claim, better expressed
as a goal IMO.

   all traffic: Low Latency, Low Loss, Scalable throughput (L4S).  It is
   becoming common for _all_ (or most) applications being run by a user

- "_all_ (or most)" means the same thing as "most", and framing it with
"all" seems to have no purpose beyond hype?
- underlining is prohibited punctuation (RFC 7322, section 3.2)

   at any one time to require low latency.  However, the only solution

- "require" seems a hype-adding exaggeration, where something
like "benefit from" is closer to fair.

   the IETF can offer for ultra-low queuing delay is Diffserv, which

- "only solution the IETF can offer" is a mistake, assuming this doc
becomes an IETF-offered solution.  Something about "previous
low-latency solutions rely on Diffserv" seems closer to correct.

   only favours a minority of packets at the expense of others.  In
   extensive testing the new L4S service keeps average queuing delay

- "extensive" is an unquantified value judgement that looks like hype

   under a millisecond for _all_ applications even under very heavy

- underlining prohibited
- "all" is misleading, regarding the overload states that fail over to
classic queue

   load, without sacrificing utilization; and it keeps congestion loss
   to zero.  It is becoming widely recognized that adding more access

- zero is also misleading in overloaded states.
- "widely recognized" is a weird basis for this claim, and also an
unquantified hype-adding value judgement

   capacity gives diminishing returns, because latency is becoming the
   critical problem.  Even with a high capacity broadband access, the

- "latency is the critical problem" is a context-sensitive claim, and
adds hype.

   reduced latency of L4S remarkably and consistently improves

- "remarkably" is an unquantified value judgement, and also makes a
good case study for the general claim of sales-y language I'm
making:  searching for "remarkable" or "remarkably" finds that
they're used in only 4 RFCs so far, all of which are referring to
historical occurrences that exceeded expectations (regarding SMTP
in 1869 and 5598, and Jon Postel's contributions in 5540 and 2555).
Using this term for an expectation of performance in an undeployed
system feels like over-the-top hype to me, even if it might come
true eventually, and even if it exceeded expectations in testing.
- "consistently" likewise is an unquantified value judgement

   performance under load for applications such as interactive video,

- "improves performance" is a context-sensitive claim (it would only
get parity depending on the metrics and conditions).

   conversational video, voice, Web, gaming, instant messaging, remote
   desktop and cloud-based apps (even when all being used at once over
   the same access link).  The insight is that the root cause of queuing

- "the insight is that the root cause" is expository and not concise.
(this one isn't about hype, just editorially the point seems misplaced
in the abstract.)

   delay is in TCP, not in the queue.  By fixing the sending TCP (and
   other transports) queuing latency becomes so much better than today
   that operators will want to deploy the network part of L4S to enable

- "so much better that operators will want" is a highly speculative
hype-y claim, and context-specific.  This has been historically quite
hard to predict, and strongly relies on an absence of unexpected issues
that may not be discoverable in test environments.  This also seems
over the top.

   new products and services.  Further, the network part is simple to
   deploy - incrementally with zero-config.  Both parts, sender and

- the "Further, the..." sentence is expository and redundant

   network, ensure coexistence with other legacy traffic.  At the same

- "legacy" is a bit presumptuous

   time L4S solves the long-recognized problem with the future
   scalability of TCP throughput.

   This document describes the L4S architecture, briefly describing the
   different components and how the work together to provide the

- nit: "the"->"they"

   aforementioned enhanced Internet service.


- In closing, I'll also note that at 1964 characters, this abstract is
more than 5 standard deviations above the mean length for RFC abstracts
in the last 15 years (~521 +/- 267), and would set a new record
(beating RFC 8148 at 1898), so it seems useful to cut it back somehow,
regardless.
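
(If it helps, here's the arithmetic behind "more than 5 standard
deviations", using the mean and standard deviation quoted above:)

    # z-scores for the quoted abstract lengths, given mean ~521 and
    # standard deviation ~267 characters for recent RFC abstracts.
    mean, stddev = 521, 267
    for name, chars in [("l4s-arch abstract", 1964), ("RFC 8148", 1898)]:
        print(f"{name}: {chars} chars, z = {(chars - mean) / stddev:.1f}")
    # l4s-arch abstract: 1964 chars, z = 5.4
    # RFC 8148: 1898 chars, z = 5.2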

I hope that's helpful, and thanks again for inviting critique on this
point.

I'll see whether this comment is considered helpful and whether it
provides enough information to generalize before moving on to other
sections, but hopefully these examples demonstrate the overall nature
of the issues I was having trouble with.

Also worth mentioning: I'm of course only one voice, and this is about
consensus.  If others agree or disagree, it would be good to know,
along with whatever caveats, before either of us puts a lot of work
into attempting a big editorial overhaul, independently of whether the
technical considerations lately under discussion end up having any
bearing.

Best regards,
Jake