Re: [quicwg/base-drafts] Request to Retire Locally Issued CIDs (#2769)

MikkelFJ <> Fri, 07 June 2019 04:58 UTC

From: MikkelFJ <>
Reply-To: quicwg/base-drafts <>
To: quicwg/base-drafts <>
Cc: Subscribed <>
Message-ID: <quicwg/base-drafts/pull/2769/>
In-Reply-To: <quicwg/base-drafts/pull/>
References: <quicwg/base-drafts/pull/>
Subject: Re: [quicwg/base-drafts] Request to Retire Locally Issued CIDs (#2769)
List-Id: Notification list for GitHub issues related to the QUIC WG <>

In line with @MikeBishop's comments and what I said earlier:

Implicit retirement via "Prior To" has the issues mentioned with distributed ACK processing. But the problem is in fact worse: for almost the same reason that the sender wants to retire CIDs, the receiver may want to migrate internally to another host, or to process different parts of a connection on different servers. Forced expiration can be detrimental to such deployments. Therefore I think Mike is right in tying this to RETIRE_CID.

Perhaps the right model is to use RETIRE_CID as the sync point, regardless of whether retirement is voluntary or motivated by a "Prior To" flush. Mike suggests the sender should defer RETIRE_CID until it is safe, but I think the receiver should make that call: the receiver might have a 10-minute periodic load-balancer flush or something completely different, while the sender can only estimate based on RTT. The sender also wants to forget the CID ASAP, whereas the receiver can track CIDs algorithmically.
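To make the sync-point idea concrete, here is a minimal sketch (class and field names are hypothetical, not from any draft) in which both a voluntary retirement and a non-zero "Prior To" in NEW_CONNECTION_ID funnel through the same path: stop using the CID locally and queue a RETIRE_CID frame for the issuer.

```python
class CidTracker:
    """Tracks peer-issued CIDs; all retirement goes through one sync point."""

    def __init__(self):
        self.active = {}             # sequence number -> CID bytes
        self.pending_retire = set()  # sequence numbers queued for RETIRE_CID

    def on_new_connection_id(self, seq, cid, retire_prior_to):
        self.active[seq] = cid
        # A non-zero "Prior To" flush takes the same path as a
        # voluntary retirement: queue RETIRE_CID for each affected CID.
        for s in list(self.active):
            if s < retire_prior_to:
                self.retire(s)

    def retire(self, seq):
        # Single sync point: stop using the CID locally and queue the
        # RETIRE_CID frame that tells the issuer it may reclaim it.
        if seq in self.active:
            del self.active[seq]
            self.pending_retire.add(seq)

    def frames_to_send(self):
        frames = [("RETIRE_CONNECTION_ID", s) for s in sorted(self.pending_retire)]
        self.pending_retire.clear()
        return frames
```

The point of the single path is that the issuer only ever has to reason about one event, the arrival of RETIRE_CID, when deciding that a CID is gone.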

The recommendation could be to invalidate CIDs from RETIRE_CID frames no earlier than 3 PTO, and possibly later if it suits the deployment. Receiving a non-zero "Prior To" field SHOULD cause the affected CIDs to be flushed and retired with RETIRE_CID as soon as practical. The issuer of "Prior To" MAY close the connection with the error CONNECTION_ID_TIMEOUT, but deployments SHOULD avoid doing that aggressively unless it is critical to infrastructure, since clients may not be able or willing to upgrade quickly.
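The delayed-invalidation rule above could look roughly like this sketch (the class, the fixed PTO constant, and the `linger` knob are all hypothetical): after seeing RETIRE_CID, the issuer keeps the CID routable for max(3 × PTO, deployment linger) so in-flight packets still match, then forgets it on its own schedule.

```python
PTO = 0.25  # assumed probe-timeout estimate in seconds (hypothetical value)

class RetiredCidStore:
    """Keeps a retired CID routable for max(3 * PTO, a deployment-chosen
    linger) after RETIRE_CID arrives, then lets it be swept."""

    def __init__(self, linger=0.0):
        self.hold = max(3 * PTO, linger)
        self.expiry = {}  # CID bytes -> time after which it may be dropped

    def on_retire_cid(self, cid, now):
        # Start the hold period at the moment RETIRE_CID is received.
        self.expiry[cid] = now + self.hold

    def is_routable(self, cid, now):
        # Still accept packets on the CID until the hold period lapses.
        return self.expiry.get(cid, 0.0) > now

    def sweep(self, now):
        # Deployment-driven cleanup, e.g. a periodic load-balancer flush.
        self.expiry = {c: t for c, t in self.expiry.items() if t > now}
```

Because the hold is `max(3 * PTO, linger)`, a deployment with a slow flush cycle can simply set a large `linger` instead of trying to coordinate with the sender's RTT estimate.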

Note that there can be race conditions where both endpoints want to retire CIDs because they are internally migrating. This can lead to problems if a CID is tied to the peer's CID indirectly via source IP. We chose asymmetrical path migration because of complexities like this. I am starting to think we could see some nasty corner cases here.
