Re: [quicwg/base-drafts] Changing the Default QUIC ACK Policy (#3529)

Gorry Fairhurst <> Fri, 29 May 2020 09:40 UTC

Date: Fri, 29 May 2020 02:40:43 -0700
From: Gorry Fairhurst <>
Reply-To: quicwg/base-drafts <>
To: quicwg/base-drafts <>
Cc: Subscribed <>
Message-ID: <quicwg/base-drafts/issues/3529/>
In-Reply-To: <quicwg/base-drafts/issues/>
References: <quicwg/base-drafts/issues/>
Subject: Re: [quicwg/base-drafts] Changing the Default QUIC ACK Policy (#3529)
List-Id: Notification list for GitHub issues related to the QUIC WG <>

I think the new ACK text is improving the specification, but I'd like to 
be sure we have thought about this decision. Let me try to carefully 
respond on-list here, to see where we agree/disagree:

On 29/05/2020 07:41, Jana Iyengar wrote:
> @gorryfair <> : I understand your 
> position. While I agree that ack thinning happens with TCP, it is not 
> what I expect to find on a common network path. And as far as I know, 
> the endpoints still implement the RFC 5681 recommendation of acking 
> every other packet.
I disagree; I think we shouldn't perpetuate the myth that a sender 
receives a TCP ACK for every other packet. Even in the 1990s, TCP 
implementations often generated stretch ACKs covering 3 segments. 
Since then, limited byte counting (Appropriate Byte Counting, RFC 
3465) was introduced and significantly improved the sender's response 
to stretch ACKs, and that has been good for everyone.
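
As a minimal sketch of why byte counting matters here (illustrative 
Python; the function name and the 2*SMSS cap are my choices, loosely 
following the Appropriate Byte Counting idea of RFC 3465, not taken 
from any particular stack):

```python
SMSS = 1460  # sender maximum segment size, in bytes (illustrative)

def slow_start_increase(cwnd: int, bytes_acked: int, limit: int = 2 * SMSS) -> int:
    """Grow cwnd by the bytes newly acknowledged, capped per ACK.

    Counting bytes rather than ACKs means a stretch ACK covering
    several segments still grows cwnd by up to the cap, instead of
    the single-segment increase that naive per-ACK counting gives.
    """
    return cwnd + min(bytes_acked, limit)

# A stretch ACK covering 4 segments grows cwnd by 2*SMSS (the cap),
# where naive per-ACK counting would have added only one SMSS.
cwnd = slow_start_increase(10 * SMSS, 4 * SMSS)
```

With per-ACK counting, the same stretch ACK would have grown cwnd by a 
single segment, so a thinned ACK stream would slow the sender down - 
which is exactly the problem byte counting addressed.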

Senders still rely on byte counting in TCP to handle stretch ACKs, and 
this need is not decreasing: new network cards reduce per-packet receive 
processing using Large Receive Offload (LRO) or Generic Receive Offload 
(GRO).

> What you are arguing is that /with ack thinners in the network/ TCP's 
> behavior on /those/ network paths is different.
I am saying many TCP senders actually do see stretch ACKs today. There 
are papers that have shown a significant presence of stretch ACKs, e.g., 
(H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: 
findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 
45(3):20–27, 2015.). A TCP stretch ACK of 3 or 4 was common in their 
datasets (about 10% of cases). In some networks, the proportion of 
stretch ACKs will be much higher, since ACK thinning is now widely 
deployed in WiFi drivers as well as cable, satellite and other access 
technologies - often reducing the ACK size/rate by a factor of two.

> However, I do not believe we understand the performance effects of 
> these middleboxes, that is, how they might reduce connection 
> throughput when the asymmetry is not a problem.
I am intrigued and would like to know more, in case I missed something. 
Typical algorithms I am aware of track a queue at the "bottleneck" and 
then make a decision based upon the contents of the queue. If there is 
no asymmetry, there isn't normally a queue and the algorithm won't hunt 
to discard an "ACK".
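
For concreteness, a toy version of the kind of queue-based thinner I 
have in mind (entirely hypothetical code, not taken from any real 
driver): it only acts on a queue that has actually built up, and it 
must be able to see which packets are pure ACKs and read their 
cumulative ACK numbers - which is exactly what it cannot do for QUIC:

```python
from collections import deque

def thin_acks(queue: deque) -> deque:
    """Keep only the newest pure ACK per flow in a bottleneck queue.

    Each queued packet is modelled as (flow_id, is_pure_ack, ack_no).
    A later cumulative ACK supersedes earlier ones, so older pure
    ACKs for the same flow can be dropped without losing information.
    Data packets are always kept.
    """
    newest = {}  # flow_id -> highest cumulative ack_no in the queue
    for flow_id, is_ack, ack_no in queue:
        if is_ack:
            newest[flow_id] = max(ack_no, newest.get(flow_id, 0))
    kept = deque()
    for flow_id, is_ack, ack_no in queue:
        if is_ack and ack_no != newest[flow_id]:
            continue  # superseded by a later ACK queued behind it
        kept.append((flow_id, is_ack, ack_no))
    return kept

# Three queued pure ACKs for flow 1 collapse to the newest one;
# the data packet is untouched.
q = deque([(1, True, 100), (1, False, 0), (1, True, 200), (1, True, 300)])
```

Note this only ever fires on a queue that exists; on a symmetric path 
with an empty queue there is nothing for it to do.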

> Importantly, I don't think we should be specifying as default what 
> some middleboxes might be doing without fully understanding the 
> consequences of doing this.
But, maybe, there is some truth in being wary: encrypted flows can also 
build return-path queues (as does QUIC traffic). Without an ability to 
interpret the transport data (which nobody, including me, wants for 
QUIC), a bottleneck router is forced either to use another rule to drop 
packets from the queue (such as dropping the smallest packets) or to 
allow the queue to grow, limiting forward-direction throughput.

I would say neither of these outcomes is great, because the router is 
trying to fix the transport's asymmetry. However, if QUIC causes this 
problem, I suggest some such method will proliferate as QUIC traffic 
increases.

> Please bear in mind that Chrome's ack policy of 1:10 works with BBR, 
> not with Cubic or Reno.
I didn't find anything in the QUIC spec that was a major concern. It 
looked to me like QUIC had learned from TCP how to handle stretch ACKs.

After that, we changed quicly to use 1:10 with Reno, and things worked 
well. Didn't @kazuho <> also use Reno?

> I do not believe there is a magic number at the moment,
Sure; the design of TCP recognised that an ACK Ratio (AR) of 1:1 
generated too much traffic. AR 1:2 for TCP results in a Forward:Return 
traffic ratio of ~1.5%, and QUIC increases this to ~3%.

However, as network link speeds increased, this proved too low for many 
network links, and so TCP ACK thinning came into being. This often 
reduces the TCP AR to around 1:4 (<1%). If QUIC were to use an AR of 
1:10 it would be ~0.5%; of course, if QUIC specified a default AR of 
1:4, that would also help in many cases.
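
The arithmetic behind those percentages, assuming full-size 1500-byte 
data packets, ~40-byte TCP pure ACKs, and ~90-byte QUIC ACK packets 
(all sizes illustrative, not from any spec):

```python
def return_overhead(ack_size: int, data_size: int, ack_ratio: int) -> float:
    """Return-path ACK bytes as a fraction of forward-path data bytes,
    for one ACK every `ack_ratio` data packets."""
    return ack_size / (ack_ratio * data_size)

tcp_1_2   = return_overhead(40, 1500, 2)    # ~1.3%
tcp_1_4   = return_overhead(40, 1500, 4)    # ~0.7%
quic_1_2  = return_overhead(90, 1500, 2)    # ~3%
quic_1_10 = return_overhead(90, 1500, 10)   # ~0.6%
```

With these (assumed) packet sizes the numbers come out close to the 
ratios quoted above; smaller data packets or larger ACK frames shift 
them accordingly.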

> and as @kazuho <> noted, even 1:10 is not a 
> small enough ratio under some conditions.
I totally agree, an endpoint might wish to change the AR. However, I 
don't see many **paths** where subnetwork link asymmetry drives that use 
case (spending ~10% of transmission bursts on ~1% of traffic seems like 
something that most links will be designed/dimensioned to work with). On 
the other hand, endpoint application/stack considerations (such as a 
different CC, e.g. the BBR case you note) will likely benefit from other 
ratios/behaviours. I agree this motivates your ID, and that a transport 
parameter can be of great value to synchronise the endpoint behaviours.

However, that was not my comment: the endpoints know little-to-nothing 
about layer 2 congestion... and if we wish to discourage mitigations 
such as ACK thinning or small-packet discard (which I really would like 
to disincentivise), then we shouldn't make the QUIC default worse than 
TCP's!

> Given this, and given that we know 1:2 is used by TCP endpoints, it 
> makes the most sense to specify that as the suggested guidance.
That is a myth with respect to the sender, which is why I would like 
this discussed. Even in 2010, RFC 5690 seemed to ignore the presence of 
ACK thinning; let's not do this again.

> The new text proposed in #3706 
> <> describes the 
> tradeoff involved, and the rationale for using 1:2. I am strongly 
> against doing anything substantially different without /overwhelming 
> information/.
What do we agree upon?


P.S. I'll certainly review #3706 
<> and will do this 
without bias, whatever the outcome is.
