Re: [tsvwg] L4S vs SCE

Pete Heist <> Fri, 22 November 2019 04:54 UTC

From: Pete Heist <>
Date: Fri, 22 Nov 2019 12:54:32 +0800
To: Greg White <>
List-Id: Transport Area Working Group <>

Yes, we look forward to providing more results later, in the form of flent tests. If there is anything more detailed that needs testing, we’ll enhance flent as a first option.

The CNQ implementation is only a week or so old, and LFQ is not implemented yet. The crude discrete time simulations for both are referenced in the respective drafts.
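For readers unfamiliar with what such a discrete-time simulation looks like, here is a heavily simplified sketch of the general idea. This is not the simulator referenced in the drafts; the queue model, parameters, and step marking function are all invented for illustration.

```python
# Illustrative discrete-time queue simulation (NOT the simulator from the
# CNQ/LFQ drafts; all parameters here are made up for demonstration).

def simulate(arrivals_per_tick, service_per_tick, sce_threshold, ticks):
    """Run a single FIFO queue in discrete time and count SCE marks.

    Packets that depart while the backlog still exceeds sce_threshold
    are SCE-marked, modeling a simple step marking function.
    """
    backlog = 0
    departed = 0
    sce_marked = 0
    for _ in range(ticks):
        backlog += arrivals_per_tick
        for _ in range(min(service_per_tick, backlog)):
            backlog -= 1
            departed += 1
            if backlog > sce_threshold:
                sce_marked += 1
    return departed, sce_marked

# Overloaded queue: arrivals exceed service, so the backlog grows and an
# increasing share of departures carry an SCE mark.
departed, marked = simulate(arrivals_per_tick=3, service_per_tick=2,
                            sce_threshold=5, ticks=100)
```

Even a toy model like this is enough to compare marking behavior across parameter choices before committing to an implementation.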

A clarification for my earlier statement regarding CNQ is below.

> On Nov 22, 2019, at 12:02 PM, Greg White <> wrote:
> Pete,
> I've seen several statements being made to the effect that SCE can possibly work in a single queue or dual queue bottleneck, but I've not seen any data to suggest that it is true.
> If the SCE team can come up with such an approach, and provide test results (preferably to a similar extent to those provided by the L4S team, not just the simple flent scenarios), it would be interesting to see.
> -Greg
> On 11/21/19, 8:58 PM, "Pete Heist" <> wrote:
>> On Nov 21, 2019, at 7:14 AM, Greg White <> wrote:
>> Where I think SCE fails is that it is not available to any link that doesn't implement FQ, whether that is by choice or by necessity.  I don't believe that the IETF should use the last IP codepoint for a signaling mechanism that can *only* work in FQ.
>    The thoughts are appreciated. What I think needs to be clarified is that SCE doesn’t necessarily require full FQ using the traditional many queues approach. It only requires some level of help from the network, when fairness between SCE and non-SCE flows is required. Options include:
>    - Changing the SCE marking ramp, trading off some of the advantages of SCE for improved fairness. This was first described at IETF 105 in Montreal.
>    - Using CNQ, which is implemented but we didn’t have time to present today. That provides a minimal level of improved fairness that can actually favor SCE flows early in their lifetime.

It should more accurately say:

“That provides a minimal level of improved fairness that prevents SCE flows from being starved to minimum cwnd. It also prioritizes sparse flows, including initial handshakes & request-response transactions.”
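To illustrate what that kind of minimal two-queue help can look like, here is a hedged sketch of a sparse-flow scheduler. It is not the actual CNQ algorithm; the class, its sparse-flow test, and the strict-priority choice are simplifications invented for illustration.

```python
# Illustrative two-queue "sparse flow" scheduler in the spirit of CNQ
# (NOT the actual CNQ algorithm; details here are simplified guesses).
from collections import deque

class TwoQueueSparse:
    def __init__(self):
        self.sparse = deque()   # packets from flows with nothing queued
        self.bulk = deque()     # packets from flows with existing backlog
        self.queued_flows = {}  # flow id -> packets currently in queue

    def enqueue(self, flow_id, packet):
        # A flow with no packets in the queue is treated as sparse, so
        # handshakes and request/response exchanges bypass the bulk backlog.
        if self.queued_flows.get(flow_id, 0) == 0:
            self.sparse.append((flow_id, packet))
        else:
            self.bulk.append((flow_id, packet))
        self.queued_flows[flow_id] = self.queued_flows.get(flow_id, 0) + 1

    def dequeue(self):
        # The sparse queue has strict priority over the bulk queue.
        q = self.sparse if self.sparse else self.bulk
        if not q:
            return None
        flow_id, packet = q.popleft()
        self.queued_flows[flow_id] -= 1
        return flow_id, packet
```

For example, if a bulk flow has two packets queued and a new flow then sends one packet, the new flow’s packet is dequeued ahead of the bulk flow’s second packet despite arriving later.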

Experimentally, we have seen SCE flows outcompete non-SCE flows early in their lifetime with CNQ, but that isn’t due to help from CNQ itself; rather, it comes from the new marking strategy using CoDel, which was prototyped in CNQ. We’ll port this development to Cake in due course.
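A sojourn-time-based marking strategy of that general shape can be sketched as follows. This is not the code prototyped in CNQ; the thresholds and the linear ramp between them are assumptions for illustration, loosely modeled on CoDel-style delay measurement.

```python
# Illustrative SCE marking ramp keyed on per-packet queue sojourn time
# (NOT the actual marking strategy prototyped in CNQ; the thresholds
# and the linear ramp are assumptions).
import random

SCE_RAMP_LOW_MS = 1.0    # below this sojourn time, never SCE-mark
SCE_RAMP_HIGH_MS = 10.0  # at or above this, always SCE-mark

def sce_mark_probability(sojourn_ms):
    """Linear marking probability between the two thresholds."""
    if sojourn_ms <= SCE_RAMP_LOW_MS:
        return 0.0
    if sojourn_ms >= SCE_RAMP_HIGH_MS:
        return 1.0
    return (sojourn_ms - SCE_RAMP_LOW_MS) / (SCE_RAMP_HIGH_MS - SCE_RAMP_LOW_MS)

def maybe_mark_sce(sojourn_ms, rng=random.random):
    """Decide whether to SCE-mark one departing packet."""
    return rng() < sce_mark_probability(sojourn_ms)
```

Shifting the ramp endpoints later weakens SCE’s early congestion signal but improves fairness against non-SCE flows, which is the trade-off described in the quoted text above.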

>    - Using LFQ, which like CNQ, is in draft form with a crude discrete time simulation, but doesn’t yet have an implementation. This provides closer to full FQ but with a lighter weight implementation.
>    This is still an active area of research with many options available to us, and we feel it’s a tractable problem. We just want to make clear that “SCE requires FQ” isn’t very accurate, and needs more clarification as to the current and future options available.