Re: [tcpm] Why draft-gomez-tcpm-ack-rate-request?

Jonathan Morton <> Wed, 23 March 2022 13:06 UTC


> On 23 Mar, 2022, at 2:00 pm, Gorry Fairhurst <> wrote:
> Thanks again for raising questions about ACK traffic - I'm quite passionate that we look again at ACK traffic volume and rate.
> I've just reread and I'm still quite skeptical this is the right solution - look forward to the talk.
> Why is this so important to use a TCP option to signal, rather than just specify a better method? 
> 1. An option increases the probability that someone might block/change the option on path.

This sounds like a counterproductive argument to me.  I believe we should be trying to *reduce* the amount of ossification inherent to the interaction of IETF protocols with overzealous middleboxes.  If this option is blocked en route, the transport will fall back to the typical delayed-ack behaviour, so there should be little if any harm.

> 2. Any receiver that sees this option anyway has to decide how to process ACKs - and might also need to deal with offload.
> - The use-case for IoT might not need an option:
> For receivers that see one segment/RTT, ACKing each segment immediately seems reasonable. Sending 2 segments would anyway have this effect.
> When would you not wish to send fewer ACKs?

There are cases where the sender wants to receive more frequent acks in order to infer things about the network path, particularly for advanced congestion control algorithms.  IoT senders would also benefit from the ability to operate efficiently with single-MSS congestion windows, which is most easily achieved by explicitly signalling this intent to the receiver.  Without ARR, the only mechanism for this is the PSH flag, which is not defined with the required on-the-wire meaning, but instead indicates desired behaviour for the receiving socket API.
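To make the intended fallback concrete, here is a sketch of what a receiver's decision might look like. The function name and the ARR value semantics ("ack every Nth segment") are illustrative assumptions, not taken from the draft:

```python
def segments_before_ack(arr_value, default_delack=2):
    """Decide how many full-sized segments to accumulate before
    sending an ack.  arr_value is the rate the peer requested via
    a (hypothetical) ARR-style option, or None if the option was
    absent or stripped by a middlebox on path."""
    if arr_value is None:
        # Option blocked or unsupported: fall back to the usual
        # delayed-ack behaviour of one ack per two segments.
        return default_delack
    # arr_value == 1 lets a single-MSS IoT sender get an immediate
    # ack for every segment; larger values thin the ack stream.
    return max(1, arr_value)
```

The point of the sketch is that the blocked-option case degrades to today's behaviour, which is why blocking en route causes little harm.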

> 3. The use cases for asymmetry or processinbg load might not need an option:
> I don't understand the motive here, in QUIC some have been using one ACK every ~10 packets 
> in some implementations (wuth whatever caveat to do something differenet for the first ~100 packets). 
> I'd like to argue this is enough "control" for most cases, and not so much "ACK traffic" in many cases where that matters. 

There's about a 25:1 ratio between the size of a data-segment packet and an ack packet in IPv4 - and it's closer for IPv6.  This makes it relatively easy for modern traffic patterns to cause congestion in the smaller direction of an asymmetric-capacity link, which in turn limits the practically available capacity in the other direction - especially under today's delayed-ack specifications, which effectively require one ack for every two data segments.
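The arithmetic behind that ratio, assuming 1500-byte data packets and roughly 60-byte pure acks (IPv4 + TCP headers plus typical options; exact sizes vary with the options in use):

```python
DATA = 1500    # full-sized data packet, bytes
ACK_V4 = 60    # pure ack: 20 IP + 20 TCP + ~20 bytes of options
ACK_V6 = 80    # IPv6 base header is 40 bytes, so acks are larger

ratio_v4 = DATA / ACK_V4   # ~25:1 for IPv4
ratio_v6 = DATA / ACK_V6   # closer for IPv6, as noted above

# With one delayed ack per two data segments, a 50 Mbit/s download
# generates about 1 Mbit/s of acks on the return path:
ack_rate = 50e6 * ACK_V4 / (2 * DATA)
```

On a link with, say, 50 Mbit/s down and 2 Mbit/s up, that 1 Mbit/s of acks is already half the upstream - which is the asymmetry problem in a nutshell.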

Existing solutions to this problem are not at all elegant.  One relies on a side-effect of current NIC hardware releasing batches of received packets, all of which may be acked as a unit.  Another relies on some middlebox actively dropping acks when it decides that too many are passing through it - which is inefficient on several levels simultaneously.  ARR puts some control over this back in the hands of the transport endpoints, and may even result in a reduction of power consumption and shared-medium contention.
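The middlebox approach amounts to ack filtering: because TCP acks are cumulative, a run of queued pure acks for the same flow can be collapsed to the most recent one. A minimal sketch of the idea (pure-ack detection, flow classification, and SACK handling are all elided; dropping SACK-bearing acks would lose information):

```python
def filter_acks(queue):
    """Collapse runs of consecutive pure acks to the latest one.
    Each queue entry is ('ack', acknum) or ('data', payload)."""
    out = []
    for pkt in queue:
        if pkt[0] == 'ack' and out and out[-1][0] == 'ack':
            # A later cumulative ack supersedes the earlier one,
            # unless ack numbers went backwards (reordering).
            if pkt[1] >= out[-1][1]:
                out[-1] = pkt
                continue
        out.append(pkt)
    return out
```

The inefficiency Morton alludes to is visible even here: every ack must first be generated, transmitted, and queued before the middlebox discards it, whereas an endpoint honouring an ARR-style request would simply not generate it.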

 - Jonathan Morton