Re: [tcpm] Proceeding CUBIC draft - thoughts and late follow-up

Markku Kojo <kojo@cs.helsinki.fi> Tue, 12 July 2022 02:54 UTC

Date: Tue, 12 Jul 2022 05:54:21 +0300
From: Markku Kojo <kojo@cs.helsinki.fi>
To: Yoshifumi Nishida <nsd.ietf@gmail.com>
cc: Vidhi Goel <vidhi_goel=40apple.com@dmarc.ietf.org>, Vidhi Goel <vidhi_goel@apple.com>, "tcpm@ietf.org Extensions" <tcpm@ietf.org>, tcpm-chairs <tcpm-chairs@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/tcpm/r8scanTb-Cp9q6nyPNvqFUDofE4>

Hi Yoshi,

On Mon, 4 Jul 2022, Yoshifumi Nishida wrote:

> Hi Markku,
> 
> In addition to the one Vidhi mentioned, I think there are other points we might want to pay attention to in
> RFC9002. 

Please see my reply to Vidhi for the one she mentioned.

> I think I listed some of them below (I think there are some more, but I guess it's not important for the topic)

Sure, RFC 9002 is slightly more aggressive in some cases, but either the 
differences are minor or there is a good reason and justification for 
RFC 9002 to behave that way: QUIC is in many respects a different 
protocol and does not suffer from the same shortcomings as TCP. See below.

> * RFC9002 uses ACK-byte counting for cwnd increase. It is similar to RFC3465, which is an experimental RFC;
> however, unlike RFC3465, it doesn't have the L factor. This may affect the cwnd growth especially in the
> slow-start phase.

Essential parts of RFC 3465 have been adopted in RFC 5681.

Sure, using the L factor with TCP makes it more conservative than RFC 
9002. Not using the L factor in RFC 9002 still follows the principles 
of [Jacob88] by not increasing cwnd by more than a doubling per RTT. 
Moreover, RFC 9002 does not suffer from the ACK ambiguity problem the 
way TCP does. Therefore, the L factor is not a necessity for RFC 9002 
like it is for TCP's slow-start-based loss recovery (see RFC 3465, 
Sec 2.3).
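
To make the difference concrete, here is a minimal sketch (my own
illustration, not pseudocode from either RFC) of a per-ACK slow-start
increase in bytes, with and without the L limit; the L = 2*SMSS value
is the one RFC 3465 suggests:

  MSS = 1460               # sender maximum segment size, in bytes
  L = 2 * MSS              # RFC 3465's suggested limit for slow start

  def ss_increase_with_L(cwnd, newly_acked):
      # RFC 3465-style ABC: cwnd grows by the acked bytes, capped at L
      # per ACK, bounding growth even when one ACK covers many segments.
      return cwnd + min(newly_acked, L)

  def ss_increase_no_L(cwnd, newly_acked):
      # RFC 9002-style byte counting: the full acked amount is credited;
      # aggregate growth per RTT is still at most a doubling.
      return cwnd + newly_acked

  stretch_ack = 4 * MSS    # a single ACK covering four segments
  print(ss_increase_with_L(10 * MSS, stretch_ack))  # grows by 2*MSS
  print(ss_increase_no_L(10 * MSS, stretch_ack))    # grows by 4*MSS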

AFAIK, RFC 5681 uses the same L factor for the initial slow start as it 
uses for slow-start-based loss recovery because there was not enough 
experimental evidence for using a larger L factor in the initial slow 
start at the time RFC 5681 was published. There has already been debate 
on this, and we may well change it for the initial slow start of TCP as 
well.

[Jacob88] V. Jacobson, "Congestion Avoidance and Control", SIGCOMM '88.

> * RFC9002 specifies two packets for the restart window while RFC5681 specifies one packet. This might mean
> RFC9002 can recover cwnd after an RTO more than twice as fast as Reno.

This gives RFC 9002 a relatively small benefit, not a recovery of cwnd 
that is twice as fast. Assume cwnd=64 when the RTO expires. With a loss 
window of 1 it takes 6 RTTs to reach cwnd=32 and an additional 32 RTTs 
to recover cwnd=64, while it takes 5+32 RTTs to recover cwnd=64 with a 
loss window of 2. That is, the larger restart window saves only one RTT 
out of 38.
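
The arithmetic is easy to check with a small Python sketch (standard
slow start up to ssthresh = old_cwnd/2, then one packet per RTT in
congestion avoidance, counting one transmission round per RTT):

  def rounds_to_recover(loss_window, old_cwnd):
      ssthresh = old_cwnd // 2
      cwnd, rounds = loss_window, 0
      while cwnd < old_cwnd:
          rounds += 1
          # slow start doubles per RTT up to ssthresh, then CA adds 1/RTT
          cwnd = min(cwnd * 2, ssthresh) if cwnd < ssthresh else cwnd + 1
      return rounds + 1  # include the round sent at old_cwnd itself

  print(rounds_to_recover(1, 64))  # -> 38 RTTs with a loss window of 1
  print(rounds_to_recover(2, 64))  # -> 37 RTTs with a loss window of 2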

What concerns me about the two-packet minimum cwnd in RFC 9002 is that 
it suffers from exactly the same problem with ECN that we corrected in 
the CUBIC draft, which deserves an RFC 9002 erratum.

> * RFC9002's recovery state ends when one of the packets sent in the recovery state has been acknowledged. OTOH,
> RFC5681 requires staying in the recovery state until all packets lost before recovery are acked. This would mean
> RFC9002 increases cwnd faster than Reno after packet losses have been detected.


RFC 5681 fast recovery ends when the first new ACK arrives, i.e., it 
ends in one RTT. NewReno and RFC 6675 stay in fast recovery until all 
packets lost before recovery are acked. There is no difference between 
any of these in a typical recovery of one packet once per CA cycle.

If there are several losses in a single window of data, TCP and QUIC 
may differ slightly. However, there is a good reason for QUIC to 
specify that the "recovery state" ends in one RTT: QUIC does not 
necessarily retransmit the lost data, so it is not possible to define 
the recovery state in the same way as with TCP. Furthermore, QUIC has 
more SACK ranges and a better way to detect losses, so QUIC essentially 
applies efficient SACK-assisted recovery that often takes one or only a 
few RTTs. Therefore, the difference in cwnd increase after packet 
losses have been detected is mostly non-existent or negligible, and is 
mainly due to more efficient loss detection and recovery.
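
For illustration, the two exit conditions can be sketched roughly as
follows (my own paraphrase of RFC 9002, Sec 7.3.2 and the NewReno/RFC
6675 "recover" check, not actual pseudocode from either document):

  def quic_recovery_done(acked_pkt_sent_time, recovery_start_time):
      # RFC 9002-style exit: recovery ends once an ACK arrives for any
      # packet sent after recovery started, i.e. after about one RTT.
      return acked_pkt_sent_time > recovery_start_time

  def tcp_recovery_done(cumulative_ack, recover):
      # NewReno/RFC 6675-style exit: recovery ends only when the
      # cumulative ACK covers "recover", the highest sequence number
      # outstanding when recovery started, i.e. when all losses in
      # that window have been repaired.
      return cumulative_ack >= recover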

> BTW, my intention is not to say RFC9002 is overly aggressive. I just meant that comparing with Reno precisely
> might not be a very good idea for modern networks.

Agreed, it is not overly aggressive.

I am not saying that CUBIC should be precisely as aggressive as Reno, 
even though its design goal has been to be equally aggressive in the 
Reno-friendly region.

Instead, with issue 1, for example, a CUBIC flow competing with a Reno 
flow will incorrectly, and against its design goal, opt out of roughly 
every second cwnd decrease that the Reno flow executes. Doesn't that 
result in a systematic and significant difference in performance, as 
the measurement results that I have pointed to also confirm?
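
As a rough toy model (my own, with a fixed loss pattern rather than a
real shared bottleneck, so it only illustrates the direction of the
effect): if one AIMD flow applies only every second multiplicative
decrease that the other applies, its average window settles at roughly
twice the other's:

  def avg_window(decrease_every_nth, events=400, beta=0.5):
      # Congestion event every 50 rounds; additive increase of one
      # packet per round between events.
      cwnd, total, rounds = 100.0, 0.0, 0
      for event in range(events):
          for _ in range(50):
              total += cwnd
              rounds += 1
              cwnd += 1.0
          if event % decrease_every_nth == 0:
              cwnd *= beta          # multiplicative decrease
      return total / rounds

  print(avg_window(1))  # decreases on every event: average ~75
  print(avg_window(2))  # skips every other decrease: average ~150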

Thanks,

/Markku

> Thanks,
> --
> Yoshi 
> 
> 
> On Mon, Jul 4, 2022 at 9:29 AM Markku Kojo <kojo@cs.helsinki.fi> wrote:
>       Hi Vidhi,
>
>       On Sun, 3 Jul 2022, Vidhi Goel wrote:
>        
>       >       Could you elaborate a bit why you think there is a major change in congestion control for any congestion event?
>       >
>       >
>       > In Appendix B.6, ssthresh is set using previous cwnd instead of bytes_in_flight. I think this is a major deviation from 5681.
>       >
>       > ssthresh = congestion_window * kLossReductionFactor
>       >
>       > We fixed this in Cubic bis draft to use bytes_in_flight.
>
>       We have already had this discussion and I have provided a relatively
>       lengthy explanation and perspective on the topic here:
>
>         https://mailarchive.ietf.org/arch/msg/tcpm/mGXKgVeSyLsZcNIsoo_-6i92FH8/
>
>       But for your convenience, I'll try to quickly summarize this here.
>
>       I don't see how this would result in a major deviation in the CC
>       behavior of the two algos (RFC 5681 and RFC 9002) (or CUBIC and RFC
>       9002). Definitely not for "any congestion event".
>
>       When cwnd is fully utilized these two approaches result in exactly the
>       same behavior because cwnd = FlightSize.
>
>       For a flow-control-limited or application-data-limited case, RFC 9002,
>       Sec 7.8 specifies:
>
>         When bytes in flight is smaller than the congestion window and
>         sending is not pacing limited, the congestion window is
>         underutilized.  This can happen due to insufficient application data
>         or flow control limits.  When this occurs, the congestion window
>         SHOULD NOT be increased in either slow start or congestion avoidance.
>
>       For many rwnd or app-data-limited cases this results in the same
>       behaviour as using FlightSize because cwnd is not increased above
>       FlightSize. In certain scenarios using FlightSize will result in too
>       small a cwnd and thereby suboptimal performance. This is a very
>       well-known problem with the simple way of using FlightSize as
>       specified in RFC 5681, but we have RFC 7661 as Experimental, which
>       provides (AFAIK) the best known way to solve the problem of simply
>       using FlightSize in these scenarios. The CUBIC draft recommends RFC
>       7661, and RFC 9002 also points to it as a potential alternative.
>
>       And I agree with Gorry that it would be useful to upgrade RFC 7661 to
>       PS and thereby have these scenarios appropriately solved both for TCP
>       and for QUIC (rfc7661bis could recommend the upgraded algo to QUIC
>       and possibly other CCs as well).
>
>       Hence, I don't see how RFC 9002 would be in any notable conflict with
>       RFC 5681.
>
>       Am I missing something?
>
>       Thanks,
>
>       /Markku
>
>       > Vidhi
>       >
>       >       On Jul 3, 2022, at 5:05 PM, Markku Kojo <kojo=40cs.helsinki.fi@dmarc.ietf.org> wrote:
>       >
>       >
>       >
>       >       On Mon, 20 Jun 2022, Vidhi Goel wrote:
>       >
>       >             If we are talking about RFC 9002 New Reno implementations, then
>       >             that already modifies RFC 5681 and doesn’t comply with RFC 5033.
>       >             Since it has a major change from 5681 for any congestion event, I
>       >             wouldn’t call it closely following New Reno.
>       >
>       >
>       >       Could you elaborate a bit why you think there is a major change in
>       >       congestion control for any congestion event? To my understanding RFC
>       >       9002 is very clear in that cwnd (and ssthresh) is halved, which is
>       >       essentially the Reno CC that RFC 5681, RFC 6582 and RFC 6675 all
>       >       follow. Sure, RFC 9002 differs from RFC 5681 in the way a loss is
>       >       detected and recovered, but that is not congestion control. There is
>       >       also a difference in how the Fast Recovery period ends, but
>       >       effectively that differs only slightly from TCP Fast Recovery with
>       >       SACK enabled (QUIC is essentially SACK-enabled). In the quite usual
>       >       case of a typical CA cycle where a single packet is lost, this
>       >       results in exactly the same CC behavior. And often when multiple
>       >       packets are lost in a single window of data, SACK allows recovery in
>       >       one RTT (or a few RTTs), in which case the difference is minor.
>       >
>       >       Am I missing something?
>       >
>       >             Also, in another email, you said that you didn’t follow
>       >             discussions on the QUIC WG for RFC 9002, so how do you know
>       >             whether QUIC implementations are using New Reno or CUBIC
>       >             congestion control?
>       >
>       >             It would be good to stay consistent in our replies; if you
>       >             agree RFC 9002 is already non-compliant with RFC 5033, then why
>       >             use it as a reference to cite Reno implementations!
>       >
>       >
>       >       I am not insisting anything about which CC QUIC implementations are
>       >       using. RFC 9002 says:
>       >
>       >       "If a sender uses a different controller than that specified in this
>       >        document, the chosen controller MUST conform to the congestion
>       >        control guidelines specified in Section 3.1 of [RFC8085]."
>       >
>       >       And RFC 8085 requires that UDP-based bulk-transfer applications
>       >       comply with the congestion control principles (i.e., RFC 2914).
>       >       Therefore, it is even more important to ensure that the CUBIC draft
>       >       is published without any notable issues and that it follows the
>       >       congestion control principles. At the time RFC 9002 was published,
>       >       the issues with CUBIC were unknown, so I think it was natural that
>       >       CUBIC is mentioned as an alternative CC and many implementations
>       >       have adopted it.
>       >
>       >       BR,
>       >
>       >       /Markku
>       >
>       >             Vidhi
>       >
>       >
>       >                   On Jun 20, 2022, at 5:06 PM, Markku Kojo
>       >                   <kojo=40cs.helsinki.fi@dmarc.ietf.org> wrote:
>       >
>       >                   Hi Lars,
>       >
>       >                   On Sun, 19 Jun 2022, Lars Eggert wrote:
>       >
>       >                         Hi,
>       >
>       >                         sorry for misunderstanding/misrepresenting your issues.
>       >
>       >                               On Jun 6, 2022, at 13:29, Markku Kojo
>       >                               <kojo@cs.helsinki.fi> wrote:
>       >
>       >                               These issues are significant and some number of
>       >                               people have also said they should not be left
>       >                               unaddressed. Almost all of them are related to
>       >                               the behaviour of CUBIC in the TCP-friendly
>       >                               region where it is intended and required to
>       >                               fairly compete with the current stds track
>       >                               congestion control mechanisms. The evaluation
>       >                               whether CUBIC competes fairly *cannot* be
>       >                               achieved without measuring the impact of CUBIC
>       >                               on the other traffic competing with it over a
>       >                               shared bottleneck link. This does not happen by
>       >                               deploying but requires specifically planned
>       >                               measurements.
>       >
>       >                         So whether CUBIC competes fairly with Reno in certain
>       >                         regions is a completely academic question in 2022.
>       >                         There is almost no Reno traffic anymore on the
>       >                         Internet or in data centers.
>       >
>       >                   To my understanding we have quite a bit of QUIC traffic,
>       >                   for which RFC 9002 has just been published, and it follows
>       >                   Reno CC quite closely with some exceptions. We also have
>       >                   some SCTP traffic that follows Reno CC very closely, and
>       >                   numerous proprietary UDP-based protocols that RFC 8085
>       >                   requires to follow the congestion control algos as
>       >                   described in RFC 2914 and RFC 5681. So, are you saying RFC
>       >                   2914, RFC 8085 and RFC 9002 are just academic exercises?
>       >
>       >                   Moreover, my answer to why we see so little Reno CC
>       >                   traffic is very simple: people deployed CUBIC, which is
>       >                   more aggressive than Reno CC, so it is an inherent outcome
>       >                   that hardly anyone is willing to run Reno CC when others
>       >                   are running a more aggressive CC algo that leaves little
>       >                   room for competing Reno CC flows.
>       >
>       >                         I agree that in an ideal world, the ubiquitous
>       >                         deployment of CUBIC should have been accompanied by
>       >                         A/B testing, including an investigation into the
>       >                         impact on competing non-CUBIC traffic.
>       >
>       >                         But that didn’t happen, and we find ourselves in the
>       >                         situation we’re in. What is gained by not recognizing
>       >                         CUBIC as a standard?
>       >
>       >                   First, if the CUBIC draft is published as it currently is,
>       >                   that would give an IETF stamp and 'official' start to "a
>       >                   spiral of increasingly aggressive TCP implementations"
>       >                   that RFC 2914 appropriately warns about. In the little
>       >                   time I have had to follow the L4S discussions in tsvwg,
>       >                   people already insisted on comparing L4S performance to
>       >                   CUBIC instead of Reno CC. The fact is that we don't know
>       >                   how much more aggressive CUBIC is than Reno CC in its
>       >                   TCP-friendly region. However, if I recall correctly it was
>       >                   considered Ok that L4S is somewhat more aggressive than
>       >                   CUBIC. So, the spiral has already started within the IETF
>       >                   as well as in the wild (the Internet).
>       >
>       >                   Second, recognizing CUBIC as a standard as it is currently
>       >                   written would ensure that all the issues that have been
>       >                   raised get ignored and forgotten forever.
>       >
>       >                   Third, you did not indicate which issue you are referring
>       >                   to. Some of the issues have nothing to do with fair
>       >                   competition against Reno CC in certain regions. E.g.,
>       >                   issue 2 also causes self-inflicted problems for the flow
>       >                   itself, as Neal indicated based on some traces he had
>       >                   seen. And there is a simple, effective and safe fix for
>       >                   it, as I have proposed.
>       >
>       >                   As I have tried to say, I do not care too much what the
>       >                   status of CUBIC will be when it gets published, as long as
>       >                   we do not hide the obvious issues it has and we have a
>       >                   clear plan to ensure that all issues that have not been
>       >                   resolved by the time of publication will have a clear path
>       >                   and incentive to get fixed. IMO that can best be achieved
>       >                   by publishing it as Experimental and documenting all
>       >                   unresolved issues in the draft. That approach would give
>       >                   all proponents the incentive to do whatever is needed
>       >                   (measurements, algo fixes/tuning) to solve the remaining
>       >                   issues and get it to stds track.
>       >
>       >                   But let me ask a different question: what is gained, and
>       >                   how does the community benefit from a std that is based on
>       >                   a flawed design and does not behave as intended?
>       >
>       >                   Congestion control specifications are considered to have a
>       >                   significant operational impact on the Internet, similar to
>       >                   security mechanisms. Would you in the IESG support
>       >                   publication of a security mechanism that is shown not to
>       >                   operate as intended?
>       >
>       >                   Could we now finally focus on solving each of the
>       >                   remaining issues and discussing the way forward separately
>       >                   for each of them? Issue 3 a) has pretty much been solved
>       >                   already (thanks Neal); some text tweaking may still be
>       >                   needed.
>       >
>       >                   Thanks,
>       >
>       >                   /Markku
>       >
>       >                         Thanks,
>       >                         Lars
>       >
>       >                         --
>       >                         Sent from a mobile device; please excuse typos.