Re: [quicwg/base-drafts] When the sender is using standard Reno congestion control, ack every ~2 packets (#1428)

ianswett <> Wed, 12 September 2018 14:21 UTC

The data was from experiments in the wild looking at YouTube and Search metrics.  The results weren't awful, but there were small regressions in some metrics, which appeared to indicate Cubic wasn't achieving as much bandwidth with 1/4 RTT decimation enabled.  One possible issue with the experiments is that QUIC's Cubic implementation currently compares bytes_in_flight to CWND to determine whether it's app-limited, and if it's app-limited it doesn't increase the CWND.  I wouldn't expect less frequent ACKs to make that more likely, but if they did, it could explain the behavior.
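For illustration, the app-limited check described above might look like the sketch below. This is a hypothetical rendering, not QUIC's actual code; the names (`on_ack`, `bytes_in_flight`, `cwnd`) and the Reno-style growth step are assumptions made for brevity.

```python
MSS = 1460  # assumed max segment size, bytes

def on_ack(bytes_in_flight: int, cwnd: int, acked_bytes: int) -> int:
    """Return the new CWND after an ACK, skipping growth when app-limited."""
    if bytes_in_flight < cwnd:
        # The flow isn't filling its window, so treat it as
        # application-limited and leave CWND unchanged.
        return cwnd
    # Otherwise grow the window (Reno-style linear growth shown for brevity).
    return cwnd + MSS * acked_bytes // cwnd
```

The concern in the experiments is the first branch: if less frequent ACKs somehow made `bytes_in_flight < cwnd` true more often at ACK arrival, CWND growth would stall.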

In traces, it was clear that this experiment was further increasing the gaps between ACKs compared to acking every 2 packets, at least for some users.  It's very possible this is a WiFi artifact: sending more ACKs may have caused the WiFi access point to release ACKs a bit more often.  The other potential issue is that QUIC paces at 1.25*CWND/RTT in congestion avoidance for Cubic, because that's what the Linux kernel does.  This pretty much guarantees that a flow will be CWND-limited when a new ACK arrives.  We discussed lowering this to 1*CWND/RTT, but never ran an experiment with that.
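As a concrete sketch, the pacing rate mentioned above comes out to the formula below. Only the 1.25 gain is taken from the discussion (matching the Linux kernel's congestion-avoidance pacing gain); the function name and framing are illustrative assumptions.

```python
def pacing_rate_bytes_per_sec(cwnd_bytes: int, smoothed_rtt_sec: float,
                              gain: float = 1.25) -> float:
    """Rate (bytes/sec) at which to pace packets onto the wire.

    With gain > 1.0, the sender can transmit a full CWND in less than
    one RTT, which is why the flow tends to be CWND-limited by the time
    the next ACK arrives.
    """
    return gain * cwnd_bytes / smoothed_rtt_sec
```

Lowering `gain` to 1.0, as discussed, would pace exactly one CWND per RTT instead.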

That brings up another point.  If one doesn't have pacing enabled (which I wouldn't advise, but I know pacing isn't always enabled at this point), then being ACK-clocked should smooth out traffic somewhat.

The 10 packet cap was a random number I came up with, and there was no fundamental reasoning behind it.  I have no evidence it's valuable or necessary.

I'd like to be in a situation where multiple implementations have data to share on this, in case my results stem from an implementation bug, but we're not there yet.  So my goal is to provide some knobs to allow experimentation, and we need to decide what those should be.

Related to this is the explicit max ack delay, which feeds into the TLP and RTO timeouts.
