Re: [tcpm] 2nd WGLC for draft-ietf-tcpm-rfc8312bis

Markku Kojo <> Tue, 22 March 2022 17:13 UTC

Date: Tue, 22 Mar 2022 19:13:09 +0200
From: Markku Kojo <>
To: Vidhi Goel <>
cc: Yoshifumi Nishida <>, " Extensions" <>, tcpm-chairs <>


On Fri, 18 Feb 2022, Vidhi Goel wrote:

>             The other thing is that if we set a high bar for a PS doc related to CC, we should
>             apply
>             the same bar to all related docs.For example, I am personally wondering how much we
>             have
>             analyzed and performed experiments when we published RFC9002. (BTW, my intention is
>             NOT to
>             accuse a specific doc, there should be some more examples.) 
>       Yes, IMO the bar should be the same. RFC 9002 was first targetted as experimental but was later
>       changed to PS. I did not have time to follow the progress much at all but in my opinion it would
>       probably have been better to publish it also as experimental first. On the other hand, it intends
>       to implement NewReno as close as possible, and therefore not to change the congestion control
>       principles that we have. I cannot tell how much experimental effort was put to verify the
>       behaviour of those features that are new in QUIC and/or differ from stds track TCP but hopefully
>       good enough. (Though there seems to be one issue related to resetting the exponential backoff of
>       PTO, which seems to require a more accurate specification to ensure unambiguous interpretation
>       and correct outcome, but that's an off topic here).
> RFC 9002 deviates from Reno as it uses `cwnd` during multiplicative decrease on a congestion event
> - This is definitely a strong
> deviation from RFC 5681 which clearly mentions not to use cwnd.

Let me try to explain why using cwnd *without any constraints*, as the 
rfc8312bis text earlier did, is the wrong thing to do, give some 
perspective on why FlightSize was selected for use in the stds track TCP 
congestion control RFCs, and explain why QUIC is not in conflict with 
RFC 5681:

1) Using cwnd when calculating the MD value allows arbitrarily large 
result values for cwnd, because cwnd may increase way above rwnd when the 
sender is rwnd-limited. Using FlightSize instead of cwnd prevents this 
from happening. That's why RFC 5681 explicitly warns about this.

2) When the sender is application-data limited, the same problem may 
occur. Using FlightSize instead of cwnd in calculating the MD value also 
prevents this problem (but creates another problem by resulting in too 
low a cwnd after MD with certain application send patterns, as discussed; 
see more about this below).
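
To make problem 1) concrete, here is a quick sketch of my own (the 
numbers are made up, nothing here is from any RFC beyond the equation 
itself) of the RFC 5681, Sec 3.1 multiplicative decrease, 
ssthresh = max(FlightSize / 2, 2*SMSS), applied to an rwnd-limited 
sender:

```python
SMSS = 1460  # sender maximum segment size, in bytes

def ssthresh_rfc5681(flight_size: int) -> int:
    # RFC 5681, Section 3.1, equation (4):
    #   ssthresh = max(FlightSize / 2, 2 * SMSS)
    return max(flight_size // 2, 2 * SMSS)

# Made-up rwnd-limited scenario: cwnd has drifted far above the
# amount of data the sender could actually put into the network.
cwnd = 100 * SMSS         # inflated while the sender was rwnd-limited
flight_size = 20 * SMSS   # what was actually outstanding in the network

# MD based on cwnd carries the inflated value forward ...
ssthresh_from_cwnd = max(cwnd // 2, 2 * SMSS)         # 50 segments
# ... while MD based on FlightSize reflects the real network load.
ssthresh_from_flight = ssthresh_rfc5681(flight_size)  # 10 segments
```

A cwnd-based MD here halves a number the network never carried, so the 
sender restarts at 50 segments into a path that only ever saw 20.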

To my understanding, FlightSize was selected at the time RFC 2581 was 
published (and later TCP specs have continued to use it for consistency) 
because it resolved both problems above in a very simple way. 
Moreover, the other problem of too low cwnd caused by the use of 
FlightSize was already well known at the time RFC 2581 got published, 
and a bit later RFC 2861 was published in the belief that it resolves 
that problem. Therefore, it was considered the right thing to publish 
RFC 2581 using FlightSize as a stds track spec, because we followed the 
general guideline "be conservative in what you send" that all CC 
decisions should follow also today; that is, it is more advisable to 
select conservative CC behaviour rather than allow potentially 
problematic, too aggressive behaviour in a stds track RFC (in 
particular, if we are even a bit unsure). So this decision was not seen 
as a problem for application-limited use because RFC 2861 was already 
envisioned to solve the problem of too low cwnd.

Unfortunately, RFC 2861 turned out not to be a viable solution and it 
took a while before an alternative was introduced and later published as 
RFC 7661. Meanwhile, it is no surprise that many stacks abandoned the 
use of FlightSize: a viable solution for application-limited senders was 
not available and such application-limited traffic patterns were so 
common, creating suboptimal performance. However, AFAIK all stacks used 
other means to successfully avoid problem 1) above and many did try 
various ways to avoid problem 2) above. And they still do.

As for QUIC, it does not introduce FlightSize at all, but it carefully 
tries to avoid both problems above (see Sec 7.8 of RFC 9002):

  When bytes in flight is smaller than the congestion window and
  sending is not pacing limited, the congestion window is
  underutilized.  This can happen due to insufficient application data
  or flow control limits.  When this occurs, the congestion window
  SHOULD NOT be increased in either slow start or congestion avoidance.
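
In rough code terms (a sketch of my own; the function and parameter 
names are illustrative, not taken from RFC 9002 or any implementation), 
the Sec 7.8 rule gates the window increase like this:

```python
def cwnd_after_ack(cwnd: int, bytes_in_flight: int, acked_bytes: int,
                   pacing_limited: bool, ssthresh: int,
                   max_datagram_size: int = 1200) -> int:
    # RFC 9002, Sec 7.8: when bytes in flight is smaller than cwnd and
    # sending is not pacing limited, the window is underutilized and
    # SHOULD NOT be increased.
    if bytes_in_flight < cwnd and not pacing_limited:
        return cwnd
    if cwnd < ssthresh:
        # slow start: grow by the number of bytes acknowledged
        return cwnd + acked_bytes
    # congestion avoidance: roughly one datagram per window acknowledged
    return cwnd + max_datagram_size * acked_bytes // cwnd
```

The point is that the guard comes first: an application-limited or 
flow-control-limited sender never inflates cwnd in the first place, so 
there is no inflated value for the MD step to halve later.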

In addition, QUIC appropriately points to RFC 7661 as another potential 
solution. Hence, I don't see how RFC 9002 would be in conflict with RFC 
5681.

So, the strict use of FlightSize in calculating the MD value is not the 
key here; the key is making sure that the above problems are properly 
addressed in any alternative CC. Use of FlightSize is much preferred for 
consistency in the RFC series, as was discussed in the FlightSize issue. 
After all, CC algorithms in RFCs describe the behavior to comply with, 
not the way to code it in the stacks. Stack implementers are, of course, 
free to adapt their code as they wish as long as the behavior remains 
the same as described in the RFC at hand. Linux, for example, has 
implemented NewReno for decades in a way that differs from the algo 
described in RFC 2582/3782/6582, but the behaviour is essentially the 
same.

The most recent text in Sec 4.6 of the draft after the discussions is 
now much better and correct compared to the version that was last 
called earlier. The only remaining issue there is that, AFAIK, the last 
sentence of the 2nd para does not quite correctly capture the 
discussions we had nor what the stacks actually do:

  "Some implementations of CUBIC currently use _cwnd_ instead
   of _flight_size_ when calculating a new _ssthresh_ using Figure 5."

To my understanding, those stacks prevent in one way or another the cwnd 
from increasing beyond rwnd. And most (all?) of those stacks also 
somehow limit/disallow cwnd from increasing when cwnd is underutilized. 
Different stacks do the latter very differently, so it would have been 
very difficult to reach consensus and appropriate text for a PS document 
based on what the stacks do.
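
In other words (again a sketch of my own with illustrative names; the 
draft specifies no such code and beta_cubic = 0.7 is taken from Sec 4.6), 
the guard those stacks effectively apply for problem 1) looks like:

```python
def cubic_ssthresh_guarded(cwnd: int, rwnd: int) -> int:
    # Guard for problem 1): such stacks keep cwnd from growing beyond
    # rwnd, so the input to the MD step can never be arbitrarily
    # inflated by an rwnd-limited sender.
    effective_cwnd = min(cwnd, rwnd)
    # CUBIC multiplicative decrease with beta_cubic = 0.7
    # (integer arithmetic used only to keep the sketch exact)
    return effective_cwnd * 7 // 10
```

The guard for problem 2), not growing cwnd while it is underutilized, is 
precisely the part the stacks implement very differently, which is why 
consensus text based on stack behaviour would have been so hard.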

I would appreciate it if the draft text in the above sentence could be 
further worked on to correctly capture that also the stacks that use 
cwnd in calculating the MD value actually (try to) prevent problems 1) 
and 2) above (to make sure nobody interprets the text such that it 
would be all OK to just use cwnd instead of FlightSize).

And, I'd like to conclude that I agree with Gorry that the best way 
forward with solving the problems of application-limited senders in the 
RFC series is to collect data/experience on using RFC 7661 and advance 
it to PS. If I recall correctly, all who participated in the discussions 
considered RFC 7661 to provide a working solution.



> Thanks,
> Vidhi
>       On Feb 18, 2022, at 2:23 AM, Markku Kojo <> wrote:
> Hi Yoshi,
> On Fri, 18 Feb 2022, Yoshifumi Nishida wrote:
>       Hi Markku,
>       Thanks for the comments. Yes, these are valid and important concerns. However, I have
>       mixed feelings about them.
>       I don't want to open a can of worms, but here's are some of my personally thoughts...
>       One thing is that "long and wide deployment experience" might not be a very strong claim,
>       but then, I am wondering what would be the strong one.
>       One may bring some solid experiment results, but the others might argue it uses too
>       convenient configurations, or the scale is too small or too short.
> Well, that has always been and IMO still is for the wg to decide when the evidence through experiments
> is good enough to proceed with publishing the doc. E.g., when RFC 5562 was considered for publication
> by the wg, the wg requested for more experiments before making the decision and Sally provided what was
> requested. And it was just about adding ECN capability to SYNACK but here we are discussing much more
> significant changes.
> And more importantly, here we are mainly discussing whether there is any experimental evidence at all;
> for example, changing MD (from 0.5 to 0.7 or to whatever) must be considered separately when in slow
> start and when in congestion avoidance as I have pointed out. This requires tests that specifically
> target at studying the slow start overshoot phase and the following recovery phase. AFAIK, there are no
> such experimental results, no analysis nor evaluation presented to support the change when the sender
> is in slow start. So, we are not discussing the level of evidence but whether it exists at all, which
> is very different.
>       The other thing is that if we set a high bar for a PS doc related to CC, we should apply
>       the same bar to all related docs.For example, I am personally wondering how much we have
>       analyzed and performed experiments when we published RFC9002. (BTW, my intention is NOT to
>       accuse a specific doc, there should be some more examples.) 
> Yes, IMO the bar should be the same. RFC 9002 was first targetted as experimental but was later changed
> to PS. I did not have time to follow the progress much at all but in my opinion it would probably have
> been better to publish it also as experimental first. On the other hand, it intends to implement
> NewReno as close as possible, and therefore not to change the congestion control principles that we
> have. I cannot tell how much experimental effort was put to verify the behaviour of those features that
> are new in QUIC and/or differ from stds track TCP but hopefully good enough. (Though there seems to be
> one issue related to resetting the exponential backoff of PTO, which seems to require a more accurate
> specification to ensure unambiguous interpretation and correct outcome, but that's an off topic here).
>       I am not sure how we can maintain this kind of fairness between the docs at the moment.
> I think it is the responsibility for the entire tsv area to do it, and eventually the TSV ADs and IESG.
> The process to follow is clear and robust, and when followed it has shown to be effectual in producing
> good quality documents. AFAIK, we have not decided to change it, maybe just many of us are unaware of
> it or have forgotten its existence?
> Traditionally tsv area has been very careful in considering new CC proposals because congestion control
> is so essential for the stability and fairness of the Internet and I hope we are not sliping too much
> from there. AFAIK, that's one reason why the process was documented.
> Thanks,
> /Markku
>       Thanks,
>       --
>       Yoshi 
>       On Thu, Feb 17, 2022 at 6:41 AM Markku Kojo <> wrote:
>            Hi Yoshi,
>            On Tue, 15 Feb 2022, Yoshifumi Nishida wrote:
>            > Hi Markku,
>            >
>            > Thanks for the comments. I think these are very valid points. 
>            > However, I would like to check several things as a co-chair and a doc
>            shepherd before we
>            > discuss the points you've raised.
>            >
>            > In my understanding (please correct me if I'm wrong), when this draft was
>            adopted as an WG
>            > item, I think the goal of the doc was some minor updates from RFC8312 which
>            include more
>            > clarifications, minor changes and bug fixes. 
>            > However, if we try to address your concerns, I think we'll need to invent a
>            new version of
>            > CUBIC something like CUBIC++ or NewCUBIC in the end. 
>            > I won't deny the value of such doc, but, this seems not to be what we agreed
>            on
>            > beforehand.  
>            > if we proceed in this direction, I think we will need to check the WG
>            consensus whether
>            > this should be a new goal for the doc.
>            >
>            > So, I would like to check if this is what you intend for the doc or you
>            think we can
>            > address your points while aligning with the original goal.
>            > Also, if someone has opinions on this, please share.
>            I think it is important that we remember the status of RFC 8312 and the
>            decades long process that has been followed in tsv area for new
>            TCP congestion control algorithms that have been proposed and submitted
>            to IETF. In order to ensure that new cc algos are safe and fair, the
>            process that has been followed for all current stds track TCP cc algos
>            has required that the cc algo is first accepted and published as
>            experimental RFC and only once enough supportive experimental evidence
>            has been gathered the doc has become a candidate to be forwaded to stds
>            track. We have even agreed on a relatively strict evaluation process to
>            follow when cc algos are brought to the IETF to be published as
>            experimental:
>            RFC 8312 was published as "Informational" and if I recall correctly the
>            idea was "just to publish what's out there" for the benefit of the
>            community. RFC 8312 was never really evaluated, particularly not in the
>            way new cc algos are supposed to be as per the agreed process.
>            I do not recall what/how exactly was agreed when rfc8312bis was launched
>            but I would be very interested to hear the justification why this doc
>            does not need to follow the process mentioned above but we would like to
>            propose IETF to publish a non-evaluated Informational doc to be published
>            "with minor updates", i.e., without actual evaluation, as a stds track
>            RFC? If the target really remains as PS then the bar should be even
>            higher than what is described for experimental docs in the above process
>            document, i.e, what we have followod for experimental to be moved to stds
>            track.
>            The only justification that I have heard has beed "because CUBIC has long
>            and wide deployment experience" and "the Internet has not smelted or that
>            "we should have noticed if there were problems". We must, however,
>            understand that in order to have noticeable bad impact CUBIC should cause
>            some sort of congestion collapse. Congestion collapse, however, is not an
>            issue with CUBIC nor with any other CC algo that applies an RTO mechanisms
>            together with correctly implemented Karn's algo that retains the
>            backed-off RTO until an Ack is received for a new (not rexmitted) data
>            packet. The issue is fairness to competing traffic. This cannot be
>            observed by deploying and measuring the performance and behaviour of CUBIC
>            alone. CUBIC being more aggressive than current stds track TCP CC would
>            just gives good performance results that one running CUBIC would be happy
>            with. One must evaluate CUBIC's impact on the competing (Reno CC) traffic
>            in range of environments which requires carefully designed active
>            measurements with thoroughly-analyzed results (as required by the above
>            process document, RFC 5033 and RFC 2914). What we seem to be missing is
>            this evidence on CUBIC's impact and that is something the IETF must focus
>            on, not just that whether CUBIC can achieve better performance than other
>            existing CCs. The latter has been shown in many publications and is the
>            majos focus in  many scientific papers proposing new algos.
>            I appreciate a lot that CUBIC has been implemented/developped and
>            deployed for long and I wonder whether those deploying CUBIC have
>            unpublished results the wg could review before taking the decicion?
>            I suggest everyone to read carefully RFC 2914 Sec 3.2 and particularly
>            what it says about more aggressive (than RFC 5681) congestion control
>            algorithms:
>              Some of these may fail to implement
>              the TCP congestion avoidance mechanisms correctly because of poor
>              implementation [RFC2525].  Others may deliberately be implemented
>              with congestion avoidance algorithms that are more aggressive in
>              their use of bandwidth than other TCP implementations; this would
>              allow a vendor to claim to have a "faster TCP".  The logical
>              consequence of such implementations would be a spiral of increasingly
>              aggressive TCP implementations, or increasingly aggressive transport
>              protocols, leading back to the point where there is effectively no
>              congestion avoidance and the Internet is chronically congested.
>            And:
>              It is convenient to divide flows into three classes: (1) TCP-
>              compatible flows, (2) unresponsive flows, i.e., flows that do not
>              slow down when congestion occurs, and (3) flows that are responsive
>              but are not TCP-compatible.  The last two classes contain more
>              aggressive flows that pose significant threats to Internet
>              performance,
>            As I have tried to point out there are several features with CUBIC where
>            it is likely to be (or to me it seems it obviously is) more aggressive
>            than what is reguired to be TCP-compatible. I'm not aware of evidince
>            presented to tcpm (or IETF/IRTF) which shows opposite (and I happy to be
>            educated what I have missed).
>            You may take my comments to be a part of the expert review phase
>            performed by the IRTF/ICCRG for CUBIC. I'm not requesting to modify this
>            doc to CUBIC++ (or something) but it seems to be that this would be
>            necessary if this doc intends to become published as PS. For experimental,
>            I think it would need some addtioinal updates and record the areas
>            uncertainty and where more experimentation (clearly) is required.
>            Thanks,
>            /Markku
>            > Thanks,
>            > --
>            > Yoshi
>            >
>            >
>            > On Fri, Feb 11, 2022 at 9:34 PM Markku Kojo <> wrote:
>            >       Hi Yoshi, all,
>            >
>            >       It seems to me that many issues that I raised have been solved,
>            thanks.
>            >       However, there are still a number of important issues that have not
>            been
>            >       addressed adequately. At least the following:
>            >
>            >       #135 on W_max. Yoshi's observation was correct that this is not
>            resolved:
>            >       the co-authors and original developpers of CUBIC (@lisongxu and
>            >       @sangtaeha) agreed in their last message that Wmax needs different
>            >       treatment for slow start and congestion avoidance and plan
>            comprehensive
>            >       (new) evaluation of it. This is obviously an open issue but the issue
>            >       was closed?
>            >
>            >       #85 (& #86 with basically same issue and these two were combined) This
>            >       (#85) is about ECN but the major issue is on using the same MD
>            >       factor in slow start and in congestion avoidance when using
>            >       loss-based CC. This (#85) remained closed even though I provided a
>            >       thorough explanation why it is wrong and against the original theory
>            and
>            >       design by Van Jacobson, against the congestion control principles (RFC
>            >       2914) and two co-authors agreed on this in their same last message to
>            >       #135 when they agreed on Wmax needing rework. This is an important
>            issue
>            >       that the wg should consider very carefully because it is not only
>            >       updating RFC 5681 but also in conflict with RFC 2914. How can
>            >       tcpm (and IETF) suggest and allow one CC algo to not follow congestion
>            >       control principles as set in RFC 2914 while requiring all other CCs to
>            >       follow RFC 2914 guidelines?
>            >       The current draft does not provide any justification for using the
>            same
>            >       MD factor in slow start as in congestion avoidance. Nor am I
>            >       awere of any experimental data that would support this change.
>            >       The fact thet CUBIC has been long deployed does not alone provide any
>            >       supporting evidence because CUBIC is likely to give good performance
>            as
>            >       it is overagressive and thereby unfair to competing traffic and users
>            >       tend to be happy when measuring the performance of the sending CUBIC
>            >       only, not the competing traffic that is badly impacted. HyStart++ is
>            >       suggested as mitigation to the problem but it cannot; HyStart++ is
>            only
>            >       applicable during initial slow start, not during slow start after RTO!
>            >       That is, the "SHOULD use HYStart++" text in Sec 4.10 is impossible
>            >       to implement as I have pointed out in my comments earlier. Using a
>            proper
>            >       MD factor in slow start is even more important if loss is detectected
>            >       during a RTO recovery because the sender is likely to face heavy
>            >       congestion in such a case and it is very bad if the sender continues
>            >       sending with overaggressive rate, stealing the capacity from and
>            causing
>            >       harm to coexisting flows. In addition, as I have explained, HyStart++
>            >       does not remove the problem even for the initial slow start as it is
>            not
>            >       shown to work always. Instead, the results with the HyStart++ draft
>            show
>            >       that it reduces 50% of rexmits and only 36% RTOs, meaning that there
>            is
>            >       likely to be a notable percentage of cases when a sender is still
>            >       in slow start when first loss is detected (i.e., HyStart++ had no
>            effect)
>            >       and a significant number of cases where a CUBIC sender is
>            >       overaggressive continuing with a 1-40% larger cwnd than what is the
>            >       available capacity. Note also that any delay-based heuristics like
>            >       HyStart++ are known to work poorly in various wireless environmens
>            where
>            >       link delay tends to vary a lot. We may come up with some other MD
>            factor
>            >       that 0.5 when in slow start and HyStart++ is in use, but that is
>            >       experimental, if not research, and definitely not ready for stds
>            track.
>            >
>            >       #114, #132, and #143 w.r.t flightsize vs. cwnd. The current text
>            >             does not quite correcly reflect what stacks that use cwnd do.
>            >             I'll comment in #143 separately.
>            >
>            >       #96 & #98: The text added does not address the problems raised which
>            are
>            >       also evidenced in the paper pointed by Bob in #96. Even though CUBIC
>            has
>            >       been modified a bit after the paper was published, it does not
>            >       automatically mean that the problem has been shown resolved:
>            experimental
>            >       evidence is required but not provided. The fact that CUBIC does not
>            >       change MD factor for fast convergence is the root of the problem
>            >       evidenced in the paper and remains so in the algo specified in this
>            >       draft. This is also a significant problem when competing with Reno CC
>            >       because CUBIC behaves much more aggressive than Reno CC when there is
>            >       sudden congestion and all competing flows must converge fast down to a
>            >       small fraction of the current cwnd to be fair to each other. This
>            again
>            >       cannot be evidenced not to be a problem by long deployment experience
>            >       unless experimental data that measures the impact on competing traffic
>            is
>            >       presented to back the claims. Adjusting just Wmax for fast convergence
>            is
>            >       not enough and is even likely to be ineffective because there tend to
>            be
>            >       several losses when sudden congestion is hit, and particularly if
>            NewReno
>            >       is in use the sender stays several RTTs in Fast Recovery being
>            >       overaggressive and then possibly continues at the same rate in CA
>            which
>            >       is unlikely to reach evan close to Wmax before a new loss hits
>            >       the sender again. That is, lower Wmax and lower additive increase
>            factor
>            >       do not compensate the use of larger MD factor when sudden congestion
>            is
>            >       encountered.
>            >
>            >       #93 & #94 & (#89) Sec 5.3 still does not address any difficult
>            >       environments, in particular buffer-bloated paths (nor does Sec 5.4).
>            >       We need evidence (results) that show CUBIC is fair towards other CCs
>            >       (Reno) also in such environments. Note that CUBIC's decision to leave
>            >       Reno-friendly region is based on the size of cwnd which tends to be
>            >       incorrect with buffer-bloated bottlenecks because with huge buffers
>            the
>            >       cwnd can be many times larger than what is actually needed to fully
>            >       utilize the available network bit-rate. Therefore, Reno CC has no
>            problem
>            >       in fully utilizing such bottleneck links and CUBIC must stay in
>            >       Reno-friendly region longer but it leaves it too early because the
>            same C
>            >       is used as with non-bloated environments. We lack experiments showing
>            >       CUBIC follows the congestion control principles and is fair to current
>            >       standard TCP CC; to my understanding no experiments with
>            buffer-bloated
>            >       bottlenecks are cited to back up the claims even though buffer bloat
>            is
>            >       very well known to be a common (difficult) environment in today's
>            >       Internet.
>            >
>            >       #90 The current text on applying undo (a response to detected false
>            >       fastrexmit) does not provide correct result if someone implements it.
>            >       I have explained the problems there in github but seem to have not
>            >       replied to latest comments by Neal. I'll reply and try to explain
>            more.
>            >       Again one major problem here is that the draft suggest a new response
>            >       algo for false fast rexmits but does not provide any experimental data
>            to
>            >       support it. Long deployment experience has been suggested as
>            >       justification but again without any carefully evaluated experimental
>            >       data and evidence there is no meat. The issue is important to solve
>            but is
>            >       not specific to CUBIC. Instead, it is general problem for all TCP CC
>            >       variants. IMHO, this is not ready for standards track but deserves a
>            draft
>            >       of its own so that it can be carefully evaluated and discussed ion the
>            >       list. AFAIK there has been no discussion on this on the tcpm list, so
>            >       those probably interested and having experience are likely to be
>            unaware
>            >       that this is part of CUBIC draft.
>            >
>            >       #88 The problem with correctness of the AIMD model and setting alpha
>            for
>            >       CUBIC requires further consideration. Bob provided an analysis that
>            >       leaves things still open. It seems that I never had time to review and
>            >       comment the analysis and clarify why the model does not work. I'll do
>            that
>            >       separately as it is important to ensure CUBIC behaves fairly as
>            intended
>            >       for the Reno-friendly region.
>            >
>            >       Best regards,
>            >
>            >       /Markku
>            >
>            >       On Mon, 31 Jan 2022, Yoshifumi Nishida wrote:
>            >
>            >       > Hello,
>            >       >
>            >       > After some discussions among chairs, we decided to run the 2nd WGLC
>            on
>            >       draft-ietf-tcpm-rfc8312bis in
>            >       > consideration of the importance of the draft. 
>            >       > We'll be grateful if you could send your feedback to the ML. The
>            WGLC runs
>            >       until *Feb 11*.
>            >       >
>            >       > If interested, you can review the in-depth past discussions at
>            >       > the following URL.
>            >       >
>            >       >
>            >       > Thank you so much!
>            >       > --
>            >       > tcpm co-chairs
>            >       >
>            >       >
>            >       > On Wed, Jan 26, 2022 at 2:50 AM Lars Eggert <> wrote:
>            >       >       Hi,
>            >       >
>            >       >       This -06 version rolls in all the changes requested
>            >       >       during (and after) the WGLC.
>            >       >
>            >       >       I'll leave it up to the chairs to decide whether another
>            >       >       WGLC is warranted or whether the document can progress
>            >       >       as-is.
>            >       >
>            >       >       Thanks,
>            >       >       Lars
>            >       >
>            >       >
>            >       >       > On 2022-1-26, at 11:12, wrote:
>            >       >       >
>            >       >       >
>            >       >       > A New Internet-Draft is available from the on-line
>            >       >       > Internet-Drafts directories. This draft is a work item
>            >       >       > of the TCP Maintenance and Minor Extensions WG of the
>            >       >       > IETF.
>            >       >       >
>            >       >       >        Title           : CUBIC for Fast and Long-Distance Networks
>            >       >       >        Authors         : Lisong Xu
>            >       >       >                          Sangtae Ha
>            >       >       >                          Injong Rhee
>            >       >       >                          Vidhi Goel
>            >       >       >                          Lars Eggert
>            >       >       >        Filename        : draft-ietf-tcpm-rfc8312bis-06.txt
>            >       >       >        Pages           : 35
>            >       >       >        Date            : 2022-01-26
>            >       >       >
>            >       >       > Abstract:
>            >       >       >   CUBIC is a standard TCP congestion control algorithm
>            >       >       >   that uses a cubic function instead of a linear
>            >       >       >   congestion window increase function to improve
>            >       >       >   scalability and stability over fast and long-distance
>            >       >       >   networks.  CUBIC has been adopted as the default TCP
>            >       >       >   congestion control algorithm by the Linux, Windows,
>            >       >       >   and Apple stacks.
>            >       >       >
>            >       >       >   This document updates the specification of CUBIC to
>            >       >       >   include algorithmic improvements based on these
>            >       >       >   implementations and recent academic work.  Based on
>            >       >       >   the extensive deployment experience with CUBIC, it
>            >       >       >   also moves the specification to the Standards Track,
>            >       >       >   obsoleting RFC 8312.  This also requires updating
>            >       >       >   RFC 5681 to allow for CUBIC's occasionally more
>            >       >       >   aggressive sending behavior.
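For anyone joining the thread here: the cubic increase function the abstract refers to is W_cubic(t) = C*(t-K)^3 + W_max, with K the time at which the curve returns to the pre-loss maximum. A minimal sketch using the constants from the draft (C = 0.4, beta_cubic = 0.7); names are mine, units segments and seconds:

```python
# Sketch of CUBIC's window growth function; illustrative only.

C = 0.4           # scaling constant from the draft
BETA_CUBIC = 0.7  # multiplicative decrease factor

def cubic_k(w_max):
    """Time K at which the cubic curve returns to w_max after a loss."""
    return (w_max * (1 - BETA_CUBIC) / C) ** (1 / 3)

def w_cubic(t, w_max):
    """Target congestion window t seconds after the last decrease."""
    k = cubic_k(w_max)
    return C * (t - k) ** 3 + w_max

# At t = 0 the curve starts at beta_cubic * w_max (the post-loss
# window); it grows concavely toward w_max, reaches it at t = K, and
# then probes convexly beyond it.
```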
>            >       >       >
>            >       >       >
>            >       >       > The IETF datatracker status page for this draft is:
>            >       >       >
>            >       >       >
>            >       >       > There is also an HTML version available at:
>            >       >       >
>            >       >       >
>            >       >       > A diff from the previous version is available at:
>            >       >       >
>            >       >       >
>            >       >       >
>            >       >       > Internet-Drafts are also available by rsync at
>            >
>            >       >       >
>            >       >       >
>            >       >       > _______________________________________________
>            >       >       > tcpm mailing list
>            >       >       >
>            >       >       >
>            >       >
>            >       >
>            >       >
>            >       >
>            >
>            >
>            >
> _______________________________________________
> tcpm mailing list