Re: [tsvwg] L4S status: #17 Interaction w/ FQ AQMs

Greg White <> Mon, 11 November 2019 21:51 UTC

From: Greg White <>
To: Pete Heist <>
CC: "" <>
Date: Mon, 11 Nov 2019 21:51:35 +0000

....actually, I think I misread your comment (and plot*). The latency spike is _not_ for the TCP Cubic flow, but rather for the TCP Prague flow.

So, this is a case of fq immediately halving the Prague flow's available capacity (and hence its effective BDP) upon the arrival of a second flow, while not providing sufficient congestion signaling for Prague to reduce its cwnd quickly. The cause here is the sluggish ramp-up of the marking probability in the CoDel AQM, which is a mismatch for the Prague congestion response.


*In my defense, the flent plots aren't the friendliest for color-deficient viewers.
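For intuition about why CoDel's signaling ramps up slowly, the RFC 8289 control law spaces successive marks interval/sqrt(count) apart. The sketch below is a simplified model with an assumed default 100 ms interval, not the Linux qdisc code:

```python
import math

def codel_drop_times(interval=0.1, horizon=2.0):
    """Times (in seconds) at which CoDel would mark/drop once the queue
    has persistently exceeded target, per the interval/sqrt(count)
    control law of RFC 8289. Simplified model with an assumed 100 ms
    interval; not the Linux qdisc code."""
    times, t, count = [], interval, 1
    while t < horizon:
        times.append(round(t, 4))
        count += 1
        t += interval / math.sqrt(count)
    return times

# Marks delivered in the first half second after the queue builds:
print(len([t for t in codel_drop_times() if t <= 0.5]))  # -> 9
```

A handful of signals in half a second is ample for a classic halving response, but a scalable (DCTCP-style) sender like TCP Prague reduces in proportion to the fraction of marked packets per RTT, so a trickle of marks among thousands of packets translates to a very slow cwnd reduction.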

On 11/11/19, 2:32 PM, "tsvwg on behalf of Greg White" < on behalf of> wrote:

    Hi Pete,
    Thanks for posting the updated results.
    In the Issue 17 related scenario (Scenario 6), you observed that there was a 5-second-long latency spike for the TCP cubic flow (but for none of the other flows) when the cubic flow starts up.  
    Since this latency spike is only present for the cubic flow, it is clearly not in the shared FIFO, but in the fq_codel queue. I'm having trouble understanding how TCP Prague could be causing queuing delay that *only* affects a cubic flow in a different queue.  Could you explain how this is possible?  It seems like it may be an error in your result.
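For reference, the flow isolation being relied on in that reasoning can be sketched as a hash of the 5-tuple into per-queue CoDel state. This is a toy model with made-up endpoints, not the Linux fq_codel source:

```python
import hashlib

def flow_queue(pkt, n_queues=1024):
    """Toy model of fq_codel's flow isolation (not the Linux source):
    hash the 5-tuple so each flow gets its own queue, with its own
    CoDel state and its own sojourn-time measurements."""
    key = "|".join(str(pkt[k]) for k in ("src", "dst", "sport", "dport", "proto"))
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % n_queues

# Hypothetical endpoints, for illustration only:
cubic  = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40000, "dport": 5201, "proto": 6}
prague = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 40002, "dport": 5201, "proto": 6}
print(flow_queue(cubic), flow_queue(prague))
```

Because each flow's sojourn time is measured in its own queue (barring hash collisions), delay built by one flow should not appear in another flow's queue.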
    On 11/11/19, 12:57 PM, "Pete Heist" <> wrote:
        Hi, thanks for working on the issues we posted in our “bakeoff” tests on Sep. 12. We have updated our repo with a re-run of the tests using the L4S code as of the tag 'testing/5-11-2019', and those results are at the same location:
        We’ve put together a list of changes we saw in the results:
        The changes are:
        - The previously reported intermittent TCP Prague utilization issues appear to be fixed.
        - TCP Prague vs Cubic now does converge to fairness, but appears to have fairly long convergence times at 80ms RTT (example: Convergence times in other scenarios are similarly long.
        - In scenarios 2, 5 and 6, the previously reported L4S interaction with CoDel seems to be partially, but not completely fixed. By reversing the flow start order (prague vs cubic instead of cubic vs prague, second flow delayed by 10 seconds), we can see that while the TCP RTT spikes no longer occur at flow start, they are still present when a second flow is introduced after slow-start exit (example:
        - In scenario 3 (single queue AQM at 10ms RTT), the previously reported unfairness between TCP Prague and Cubic grew larger, from ~4.1x (40.55/9.81Mbit) to ~7.7x (44.25/5.78Mbit). This trend appears to be consistent at other RTT delays in the same scenario (0ms and 80ms).
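As a quick check of the throughput figures above, and to restate them as Jain's fairness index (a standard metric, not used in the original message):

```python
def jain_index(*xs):
    """Jain's fairness index: 1.0 is perfectly fair, 1/n maximally unfair."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

# Scenario 3 throughputs quoted above (Mbit/s), Prague vs Cubic:
for prague, cubic in [(40.55, 9.81), (44.25, 5.78)]:
    print(f"ratio {prague / cubic:.1f}x  Jain {jain_index(prague, cubic):.2f}")
```

This confirms the ~4.1x and ~7.7x ratios, and shows the Jain index dropping from about 0.73 to 0.63 between the two runs.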
        I will update the issues in trac with these results, and if there are any questions let us know…
        > On Nov 6, 2019, at 5:23 PM, Greg White <> wrote:
        > Good timing :-)
        > We've just wrapped up our findings on Issue 17, and have posted them here (along with some comments on Issue 16 as well):
        > (Note, the ns3 repo is not public yet, but will be shortly.   We'll update that page with links within a day or two.)
        > In summary, we reached the following conclusions:
        > - the main result of concern was due to a bug in initializing the value of TCP Prague alpha, which has been fixed and demonstrated to resolve the latency impact that was spanning multiple seconds
        > - the remaining short duration latency spike in the FIFO queue is seen in *all* congestion control variants tested, including BBRv1, NewReno, and Cubic, and is not specific to Prague
        > - if the CoDel queue is upgraded to perform Immediate AQM on L4S flows, the latency spike can be largely avoided.
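The "Immediate AQM" upgrade mentioned in the last bullet can be sketched roughly as a step marker applied only to L4S traffic; the 1 ms threshold below is an assumed value for illustration, not taken from this thread:

```python
L4S_STEP_S = 0.001  # assumed 1 ms shallow threshold (an assumption, not from this thread)

def immediate_aqm_mark(sojourn_s, ect1):
    """Sketch of the 'Immediate AQM' idea for L4S traffic: CE-mark a
    scalable (ECT(1)) packet as soon as its sojourn time exceeds a
    shallow step threshold, rather than waiting out CoDel's
    interval/sqrt(count) ramp. Classic traffic is left to CoDel."""
    return ect1 and sojourn_s > L4S_STEP_S

print(immediate_aqm_mark(0.0005, True))   # below threshold -> False
print(immediate_aqm_mark(0.003, True))    # above threshold -> True
print(immediate_aqm_mark(0.003, False))   # classic traffic -> False (CoDel handles it)
```

Marking on a shallow sojourn threshold gives a scalable sender a congestion signal within roughly one queue-build time, which is why it can largely avoid the latency spike.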
        > We invite a thorough review of the work, but believe that this closes issue #17.
        > Best Regards,
        > Greg White, Tom Henderson, Olivier Tilmans
        > On 11/6/19, 12:59 AM, "Sebastian Moeller" <> wrote:
        >    Hi Greg,
        >> On Sep 11, 2019, at 19:16, Greg White <> wrote:
        >> I'm planning on doing testing as well, but it will be more than a day or two to get it done.  Rough timeframe would be 2-3 weeks from now.
        >    	Since I cannot hide my impatience any longer, did anything come out of this yet?
        >    Best Regards
        >    	Sebastian
        >> -Greg
        >> On 9/11/19, 1:52 AM, "Pete Heist" <> wrote:
        >>> On Sep 9, 2019, at 9:01 PM, Wesley Eddy <> wrote:
        >>> Since this thread seems to have dwindled, I just wanted to summarize that it looks to me like we have agreement that testing as described is needed.
        >>> I updated the issue tracker with a comment saying as much and pointing back to this thread in the archive for reference.
        >>> Is anyone planning to perform this testing in a rough timeframe they might want to share?
        >>   Hi Wesley, I’ll share results from relevant testing in the next day or two...
        >>   Regards,
        >>   Pete