Re: WGLC comments: draft-ietf-quic-recovery-29

Jan Rüth <> Wed, 01 July 2020 12:03 UTC

From: Jan Rüth <>
To: Gorry Fairhurst <>
CC: "" <>
Date: Wed, 01 Jul 2020 12:03:40 +0000


To add to Gorry’s review:

> In B.5,
>           // Congestion avoidance.
>           congestion_window += max_datagram_size * acked_packet.size
>               / congestion_window
> - is this calculation correct? I was thinking of what might happen when the PMTU is large and the sender generates a sequence of small packets… would this result in overestimating cwnd?

Moreover, acked_packet is not defined there.

Only acked_packets (with an s at the end) is defined. And does >>size<< denote the number of packets (count might be a better name in that case) or their cumulative size in bytes?

Also, the part that Gorry quoted is performed within the loop over acked packets, which I guess would not lead to a linear increase.

I believe that if it refers to the actually acked bytes and the statement is outside of the loop, everything is fine, and the packet size or a large PMTU does not affect this.
(Also, the line length of the RFC made me look five times until I saw the “divided by cwnd”.)
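To make the point about acked bytes concrete, here is a minimal Python sketch (not the draft’s pseudocode; the window size and packet lists are made-up examples): whether the increase is applied per acked packet inside the loop or once with the cumulative acked bytes, the growth depends on how many bytes were acked, not on how many packets carried them.

```python
MAX_DATAGRAM_SIZE = 1200  # assumed datagram size for this example

def cwnd_after_ack_in_loop(cwnd, acked_sizes):
    # Apply the draft's increase once per acked packet, inside the loop.
    for size in acked_sizes:
        cwnd += MAX_DATAGRAM_SIZE * size / cwnd
    return cwnd

def cwnd_after_ack_once(cwnd, acked_sizes):
    # Apply the increase once, using the cumulative acked bytes.
    cwnd += MAX_DATAGRAM_SIZE * sum(acked_sizes) / cwnd
    return cwnd

# One full window's worth of acked bytes, as small vs. full-size packets:
small = [300] * 40   # 12000 bytes in 40 small packets
large = [1200] * 10  # 12000 bytes in 10 full-size packets
cwnd0 = 12000.0
```

The per-packet variant ends slightly lower than the single update, because cwnd grows between iterations, but in both variants the window grows by roughly one max_datagram_size per window of acked data, independent of the individual packet sizes.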

> /Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (max_datagram_size), limited to the larger of 14720 or twice the maximum datagram size./
> - I would like to revisit this. We talked in Montreal, and at that time I understood the equivalence to TCP for the case where a large MSS was supported by the path, as per RFC 6928. I have since revisited this topic and would like to suggest that the present IETF advice for TCP is in fact wrong for the large initial MSS case, and that this draft should not perpetuate that mistake for QUIC. The issue comes when IW is initialised for a path with a very large PMTU, but that PMTU is not in fact supported by the path.

Do networks that support large MTUs usually also have more queue memory, or why is this linked to the PMTU at all?
Apart from this, I agree with Gorry here.
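For reference, the limit Gorry quotes has the same shape as the RFC 6928 formula for TCP; a quick sketch (assuming the constants from the sentence quoted above) shows how a large assumed PMTU inflates the initial window:

```python
def initial_window(max_datagram_size):
    # IW = min(10 * max_datagram_size, max(2 * max_datagram_size, 14720)),
    # per the draft text quoted above (same shape as RFC 6928 for TCP).
    return min(10 * max_datagram_size, max(2 * max_datagram_size, 14720))

print(initial_window(1200))  # 12000 bytes for a typical Internet path
print(initial_window(9000))  # 18000 bytes if a jumbo-frame PMTU is assumed
```

So an endpoint that assumes a 9000-byte PMTU starts with an 18000-byte window carried in just two datagrams; if the path does not actually support that MTU, that is a fairly large initial burst to lose, which is exactly the concern raised above.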