Re: [tsvwg] FW: New Version Notification for draft-white-tsvwg-l4sops-00.txt

Sebastian Moeller <moeller0@gmx.de> Tue, 04 August 2020 10:30 UTC

From: Sebastian Moeller <moeller0@gmx.de>
In-Reply-To: <166EFA24-6CFE-47BF-B970-9FB1AB2AC393@cablelabs.com>
Date: Tue, 04 Aug 2020 12:30:24 +0200
Cc: "tsvwg@ietf.org" <tsvwg@ietf.org>
Message-Id: <5A4B4031-F73A-42D5-B5FE-915B676C83D4@gmx.de>
References: <159610640877.23292.15712739866659063100@ietfa.amsl.com> <EDC0072E-EE8F-4734-80AA-9C09867C4661@cablelabs.com> <723D5F7C-7B0F-48B7-AE9B-BF89F4206D0C@gmx.de> <166EFA24-6CFE-47BF-B970-9FB1AB2AC393@cablelabs.com>
To: Greg White <g.white@CableLabs.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/FaxVwa6T5IcoKZ7Xp-5FHe_AGe0>
Subject: Re: [tsvwg] FW: New Version Notification for draft-white-tsvwg-l4sops-00.txt

Hi Greg,

more below in-line.

> On Aug 4, 2020, at 01:35, Greg White <g.white@CableLabs.com> wrote:
> 
> Apologies in advance for the mangled URLs in the quoted text below.  My IT department has implemented a system that mangles URLs on inbound email.
>  
> From: Sebastian Moeller <moeller0@gmx.de>
> Date: Thursday, July 30, 2020 at 3:43 PM
> To: Greg White <g.white@CableLabs.com>
> Cc: "tsvwg@ietf.org" <tsvwg@ietf.org>
> Subject: Re: [tsvwg] FW: New Version Notification for draft-white-tsvwg-l4sops-00.txt
>  
> Hi Greg,
>  
> my comments for sections 1 & 2 below (more will follow):
>  
> "1.  Introduction
>  
>    In the majority of network paths, including paths where the
>    bottleneck link utilizes packet drops (either due to buffer overrun
>    or active queue management) in response to congestion, as well as
>    paths that implement a 'flow-queuing' scheduler such as fq_codel or
>    Cobalt, and those that implement dual-Q-coupled AQM, L4S traffic
>    coexists well with classic congestion controlled traffic."
>  
> [SM] This description ignores the birthday paradox issue that Pete Heist demonstrated. A misbehaving L4S flow will cause less havoc there, but it is not without side-effects on stochastic flow-queueing systems; we might as well acknowledge that, no?
> [GW] IIRC that issue can occur in fq_codel, but is much less likely in Cobalt (due to set-associative hashing).  Is that correct?

	[SM2] As Jonathan wrote, it is less likely in cake, but still possible. Also, my hypothesis is that fq_codel is more widely deployed than cake.
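
	For concreteness, a small (purely illustrative) Python sketch of the birthday-paradox collision probability when flows are hashed uniformly into a fixed number of queues; 1024 is fq_codel's default queue count, and the flow counts are example values, not measurements:

# Probability that at least two of n flows hash into the same queue,
# assuming a uniform hash over k queues (birthday paradox).
def collision_probability(n_flows, n_queues):
    p_no_collision = 1.0
    for i in range(n_flows):
        p_no_collision *= (n_queues - i) / n_queues
    return 1.0 - p_no_collision

# fq_codel defaults to 1024 queues; even a few dozen concurrent flows give
# a non-negligible chance that an L4S flow ends up sharing a queue (and
# hence a CE-marking AQM instance) with a classic flow.
for n in (10, 30, 100):
    print(n, round(collision_probability(n, 1024), 3))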


>  
> "   On network paths where the bottleneck link implements a shared-queue
>    (FIFO) with an Active Queue Management algorithm that provides
>    Explicit Congestion Notification signaling according to RFC3168, it
>    has been demonstrated that when a set of long-running flows
>    comprising both "Classic" congestion controlled flows and L4S-
>    compliant congestion controlled flows compete for bandwidth, the
>    classic congestion controlled flows may achieve lower throughput when
>    compared to the L4S congestion controlled flows.  This 'unfairness'
>    between the two classes appears to be more pronounced on longer RTT
>    paths (e.g. 50ms and above) and/or at higher link rates (e.g. 50 Mbps
>    and above)."
>  
> This is rather cautious; I would use more drastic terms. The observed unfairness approaches starvation of the non-L4S flows, and this text makes it sound like a minor, almost theoretical concern.
>  
> [GW] We can definitely wordsmith the text, but we need to be sure that we are characterizing the situation accurately and not trying to portray things in drastic terms (or glossy ones) unless it is warranted.  

	[SM2] Well, the thing is Pete Heist's testing (see https://github.com/heistp/sce-l4s-ect1#typical-internet-jitter) demonstrated that L4S with rfc3168 detection, which as far as I can tell is still required for an L4S-compatible transport, is quite bad at achieving fairness, even with RED.
Sidenote: calling 50ms "longer" RTT paths supports my claim that L4S is really ONLY ever designed and, more critically, TESTED for low-hop-count, low-RTT paths.


> My belief is that it isn’t warranted to use drastic terms here.  

	[SM2] I accept your belief, but I think we should base operator recommendations on hard data where possible; that way our respective beliefs do not muddy the waters.


> I think it is a fairly widely held view that single queue RFC3168 bottlenecks are uncommon at best and the majority of the ones that do exist are likely to be RED AQMs.  

	[SM2] Data, please, or we are back at Bob's line of reasoning: that because he is bad at measuring something, that something might not even exist.

> The testing with RED AQMs showed approximate fairness in most cases (though more testing would be welcome).

	[SM2] What is your definition of "approximate fairness"? Can we agree on calling imbalances within 2 orders of base-2 magnitude (i.e. a factor of 4) approximately fair, and everything beyond 3 orders of base-2 magnitude (a factor of 8) starvation? If you disagree with my numbers, please propose your own set, but let's get away from the idea that approximate fairness and starvation are concepts that cannot be defined numerically.
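
	Purely to show that such a definition can be applied mechanically (the cut-offs are the ones proposed above, not anything the WG has agreed to), a small Python sketch:

import math

# Classify the rate imbalance between two competing flows: within 2 base-2
# orders of magnitude (factor 4) counts as approximately fair, beyond 3
# base-2 orders (factor 8) counts as starvation.
def classify_imbalance(rate_a, rate_b):
    ratio = max(rate_a, rate_b) / min(rate_a, rate_b)
    orders = math.log2(ratio)
    if orders <= 2:
        return "approximately fair"
    if orders > 3:
        return "starvation"
    return "unfair, but short of starvation"

print(classify_imbalance(45, 15))  # factor 3  -> approximately fair
print(classify_imbalance(56, 4))   # factor 14 -> starvation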


>  We’ll need to get a sense from the WG as to how to characterize the situation, given what we know (and what we don’t know). 

	[SM2] With all due respect, this WG has not been great at giving feedback on simple questions like "what is starvation" in the past; as much as I would like to be wrong, I do not see this changing now.


>  
> "  The root cause of this unfairness is that RFC3168 does not
>    differentiate between packets marked ECT0 (used by classic senders)
>    and those marked ECT1 (used by L4S senders), and provides an
>    identical congestion signal (CE marks) to both classes, whereas the
>    two classes respond differently to that congestion signal."
>  
> How about keeping causality intact and framing this as a consequence of L4S redefining what CE means? There are reasons for doing that, but this text leaves it unclear who caused this problem (or rather, the text implicates rfc3168, which IMHO violates temporal causality).
>  
> [GW] Point taken.  I’ll try to rearrange that.

	[SM2] Thanks!
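
	As a side note for readers following along, the mechanism behind the imbalance described above can be sketched in a few lines. This is a deliberate over-simplification (Reno-style halving vs. a DCTCP/Prague-style alpha response), not the exact behaviour of any particular implementation:

# Simplified per-round reactions to a CE mark from the same RFC3168 queue.
def classic_response(cwnd):
    # Classic (Reno-style) multiplicative decrease: a CE mark halves cwnd.
    return cwnd / 2

def l4s_response(cwnd, alpha):
    # DCTCP/Prague-style response: back off in proportion to the smoothed
    # marking fraction alpha (0..1), so sparse marking barely slows it down.
    return cwnd * (1 - alpha / 2)

print(classic_response(100))    # -> 50.0
print(l4s_response(100, 0.05))  # -> 97.5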

>  
> "The result is that the
>    classic senders respond to the CE marks provided by the bottleneck by
>    yielding capacity to the L4S flows.  While this has not been
>    demonstrated to cause starvation of the classic flows, the resulting
>    rate imbalance can be a cause of concern."
>  
> Mmmh, https://sce.dnsmgr.net/results/ect1-2020-04-23-final/l4s-s2-twoflow/l4s-s2-twoflow-ns-cubic-vs-prague-codel1q_20ms_-160ms_tcp_delivery_with_rtt.svg
> pretty much demonstrated starvation and that is with TCP-Prague with rfc3168 detection. 
>  
>  
> [GW] Ok, I will update the text to be more clear.   It seems that it can be demonstrated in contrived setups that it is possible to cause starvation.  That condition was a single queue CoDel (do these exist in the wild?) with heavily modified settings. CoDel with default settings did not show unfairness.   

	[SM2] Okay, now please prove that these conditions do not actually exist in the real world and you are scot-free... And no, changing the target value is not a contrived set-up; for example, have a look at the PIE RFCs, where it is mentioned that adjusting the latency target is recommended for short-RTT paths, and the same logic applies to codel/fq_codel/cake. This gets ironic, given that L4S itself is really only tailored for similarly short RTT paths...
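
	For what it's worth, the "scale the AQM to the path RTT" practice can be written down in a couple of lines. This is just the rule of thumb from the CoDel RFC (RFC 8289: interval on the order of a typical path RTT, target roughly 5-10% of interval), with assumed example numbers rather than anything measured:

# Rule-of-thumb CoDel parameters for a path with a known typical RTT.
def codel_params_for_rtt(typical_rtt_ms):
    interval_ms = typical_rtt_ms              # interval ~ typical RTT
    target_ms = max(1.0, 0.05 * interval_ms)  # target ~ 5% of interval
    return target_ms, interval_ms

# The defaults (target 5 ms, interval 100 ms) assume ~100 ms Internet RTTs;
# an operator serving consistently short-RTT paths might instead configure:
print(codel_params_for_rtt(20))  # -> (1.0, 20)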

>  
>  
> "2.  Per-Flow Fairness
>  
>    There are a number of factors that influence the relative rates
>    achieved by a set of congestion controlled flows sharing a queue in a
>    bottleneck link.
>  
>    TODO: discuss startup & convergence times, short flows, RTT-
>    unfairness, differences in deployed CC algorithms, etc.
>  
>    TODO: also mention that flow sharding is commonplace, so per-flow
>    fairness does not imply per-application fairness"
>  
>  
> This obviously needs fleshing out before one can meaningfully comment, but regarding the last section, you realize that the application is not the relevant classification unit for intermediate nodes? I would assume that for an ISP a (potentially weighted) per-end-host fairness would be the exact tool to avoid sharding as a work-around... This is obviously not a new idea or comment, so if you opt for elaborating on fairness, please also include known remedies. Or better, avoid that discussion here altogether.
>  
> [GW] Thanks. Yes, per-end-host fairness could be useful in some situations.  AFAIK most residential ISPs implement some form of per-customer fairness, which could be argued is more appropriate in that context. I guess the point is that per-flow fairness isn’t always (or even usually) the most important metric for most users/systems.  It seems important to discuss this, seeing as per-flow unfairness is the main concern that has been raised, but I realize this section could be challenging to write in a way that all will agree with.

	[SM2] Okay, I see why you want to discuss that here. Please keep in mind that while per-flow fairness is not the ultimate solution, it clearly is further along that path than L4S's "anything goes". Specifically, everything that works around the stricter FQ scheduling will also wreak havoc on "anything goes" scheduling, so IMHO the argument that FQ is not perfect is beside the point, as FQ appears to be better than or equal to "anything goes" in all relevant criteria (mostly better), with the sole exception of computational complexity. I really believe that this document could do without opening that can of worms...
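
	To illustrate why the aggregation level matters when we talk about "fairness" at all, here is a hypothetical example (the rates and the host grouping are made up): Jain's fairness index over the same traffic looks very different per flow vs. per host.

# Jain's fairness index over a set of rates: 1.0 means perfectly even shares.
def jain_index(rates):
    return sum(rates) ** 2 / (len(rates) * sum(r * r for r in rates))

# Hypothetical: host A shards one download over 4 flows, host B uses 1 flow.
flow_rates = [20, 20, 20, 20, 20]   # Mbit/s: flows A1-A4 plus B1
host_rates = [80, 20]               # the same traffic, aggregated per host

print(round(jain_index(flow_rates), 2))  # 1.0  -> "perfectly fair" per flow
print(round(jain_index(host_rates), 2))  # 0.74 -> host A gets 4x host B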

Best Regards
	Sebastian

>  
> Best Regards
>   Sebastian
>  
>  
> > On Jul 30, 2020, at 13:05, Greg White  wrote:
> > 
> > TSVWG members-
> > 
> > I've posted a rough draft of Operational Guidance for L4S deployment.  This is not much more than an outline at this point, and is almost certainly incomplete even at that, so please read it with that in mind. 
> > 
> > -Greg
> > 
> > 
> > From: "internet-drafts@ietf.org" 
> > Date: Thursday, July 30, 2020 at 4:53 AM
> > To: Greg White 
> > Subject: New Version Notification for draft-white-tsvwg-l4sops-00.txt
> > 
> > A new version of I-D, draft-white-tsvwg-l4sops-00.txt
> > has been successfully submitted by Greg White and posted to the
> > IETF repository.
> > 
> > Name:          draft-white-tsvwg-l4sops
> > Revision:      00
> > Title:         Operational Guidance for Deployment of L4S in the Internet
> > Document date: 2020-07-30
> > Group:         Individual Submission
> > Pages:         7
> > URL:            https://www.ietf.org/internet-drafts/draft-white-tsvwg-l4sops-00.txt
> > Status:         https://datatracker.ietf.org/doc/draft-white-tsvwg-l4sops/
> > Htmlized:       https://tools.ietf.org/html/draft-white-tsvwg-l4sops-00
> > Htmlized:       https://datatracker.ietf.org/doc/html/draft-white-tsvwg-l4sops
> > 
> > 
> > Abstract:
> >   This is an early, work-in-progress draft - a start at getting some of
> >   the ideas from the mailing list and email exchanges on paper.
> > 
> >   This draft is intended to provide guidance to operators of end-
> >   systems, operators of networks, and researchers in order to ensure
> >   reasonable fairness between L4S and Classic flows sharing a single-
> >   queue RFC3168 bottleneck link.  This draft identifies opportunities to
> >   prevent and/or detect and resolve fairness problems in such networks.
> > 
> > 
> > 
> > 
> > Please note that it may take a couple of minutes from the time of submission
> > until the htmlized version and diff are available at tools.ietf.org.
> > 
> > The IETF Secretariat
> > 
> > 
> > 
>