Re: [quicwg/base-drafts] Desirable behavior when it takes time to derive the traffic keys for the next PN space (#3821)

Kazuho Oku <notifications@github.com> Tue, 14 July 2020 18:31 UTC

The discussion about the RTT estimate is interesting. I'm not sure about the Initial vs. Handshake case, but when the client spends X milliseconds (e.g., X=100) verifying the certificate chain while buffering the 0.5-RTT data it receives, the server would obtain an RTT sample inflated by `X - max_ack_delay` milliseconds (e.g., 75 ms with the default max_ack_delay of 25 ms). That's not great.
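
To make the arithmetic concrete, here is a rough sketch using the sender-side adjustment rule from the recovery draft, `adjusted_rtt = latest_rtt - min(ack_delay, max_ack_delay)`. The 20 ms network RTT is an assumed number purely for illustration; X = 100 ms and max_ack_delay = 25 ms follow the example above.

```python
# Rough sketch of the RTT-sample arithmetic described above. The client's
# ACK of the buffered 0.5-RTT data is held back by X ms (cert verification),
# so the measured RTT includes X, but the sender only subtracts at most
# max_ack_delay when adjusting the sample.

true_rtt = 20            # ms, assumed network RTT (illustrative only)
cert_verify_delay = 100  # ms, X: time spent verifying the certificate chain
max_ack_delay = 25       # ms, default max_ack_delay

# Measured RTT as seen by the server: the ACK was delayed by X on the client.
latest_rtt = true_rtt + cert_verify_delay

# Sender-side adjustment: the subtracted delay is capped at max_ack_delay.
adjusted_rtt = latest_rtt - min(cert_verify_delay, max_ack_delay)

print(adjusted_rtt)  # 95 ms: inflated by X - max_ack_delay = 75 ms over the true 20 ms
```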

One way of resolving the issue would be to not cap ack_delay at max_ack_delay for ApplicationData packets that were sent during the handshake. The rationale for not arming PTO for 1-RTT packets during the handshake is that the peer might not immediately have the keys necessary to process those packets; the sender assumes the peer buffers such packets until it does. Given that assumption, it does not really make sense to cap the ack_delay.
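
A minimal sketch of what that exception could look like at the point where the sender adjusts a new RTT sample. The function name and the `sent_during_handshake` flag are hypothetical names used for illustration, not taken from the recovery draft's pseudocode or any implementation.

```python
# Sketch of the proposed exception: skip the max_ack_delay cap when the
# newly acknowledged ApplicationData packet was sent while the handshake
# was still in progress (i.e., the peer may have buffered it until it had
# the 1-RTT keys).

def adjust_rtt_sample(latest_rtt, ack_delay, max_ack_delay, sent_during_handshake):
    """Return the RTT sample adjusted for the peer's reported ack_delay."""
    if sent_during_handshake:
        # The sender already assumes the peer may buffer these packets,
        # so accept the full reported ack_delay.
        effective_delay = ack_delay
    else:
        # Normal case: the peer committed to acknowledging within
        # max_ack_delay, so larger reported delays are capped.
        effective_delay = min(ack_delay, max_ack_delay)
    # The real algorithm also keeps the sample from dropping below min_rtt;
    # this sketch just keeps it non-negative.
    return max(latest_rtt - effective_delay, 0)
```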

I think doing something like that would be fine, though I would not push for recommending such a strategy.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/quicwg/base-drafts/issues/3821#issuecomment-658341794