Re: [Sidrops] 6486bis: Failed Fetches

Tim Bruijnzeels <> Mon, 07 September 2020 11:17 UTC

From: Tim Bruijnzeels <>
Date: Mon, 07 Sep 2020 13:17:25 +0200
To: Stephen Kent <>


> On 5 Sep 2020, at 20:08, Stephen Kent <> wrote:
> Tim,
>> ...
>> In short: given that I expect we will never come to a consensus other than total reject-all-if-one-fails, I can live with this if we must - but don't let it come as a surprise.
> Works for me.
>>> ...
>>> Allowing RPs to ignore object types they don't understand prevents a CA from being able to convey the notion that a new object type is important (to that CA). I don't think this is a good strategy. It means that RP behavior will be ambiguous relative to new object types.
>> So far none of the objects have seemed to need this flag.
> If the goal is to define a very general scheme, then a flag may be necessary to accommodate objects not yet defined, since we cannot yet know whether a uniform response for all such objects will be appropriate. I agree that this need not be a binary flag, as in cert extensions. You suggested a three-value flag below, for example.
>> ...
>>> If we want to have a consistent and flexible approach to accommodating new objects I suggest the strategy I mentioned earlier. Define an additional SIA URI that points to a pub point (and manifest) where we can introduce the next version of the signed object format, one that includes a critical flag, analogous to X.509v3 extensions. This allows each CA to decide which object types have to be processed  by an RP in order for the whole pub point to be accepted vs. rejected. Note that this will require modifying a lot of RFCs, but it is a flexible, extensible approach to this issue.
>> I agree that it's flexible and extensible. I had not thought of this approach.
>> But it is a lot of work, not just in RFCs but also in code. It also raises questions about how and when old PPs without the new objects can be deprecated. You can give operators more time to upgrade, but at some point the plug will probably be pulled? Maintaining multiple PPs indefinitely seems rather wasteful.
>> I would like to hear what others have to say. I have the feeling that ASPA is getting close, and I would really not like to see it delayed because of this.
> It will take a while to complete another revision of the manifest doc, if we pursue additional changes, and even longer before an RFC is approved and published. So ASPA will not be accommodated quickly. Also, as I noted in my reply to Jay, a quick read of the ASPA doc didn't indicate how these new objects are validated using the RPKI.

The text is still a bit short and could probably use some review, but I believe the direction is clear. Informally: it's an RPKI signed object containing a "customer" ASN, which MUST be covered by the EE certificate. The EE certificate MUST of course be valid.

In my perception ASPA is quite close to testing (no, not in the production repositories, of course).
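To make the validation rule above concrete, here is a toy sketch in Python. The names and types are purely illustrative (the real profile lives in the ASPA draft, and the EE certificate check is the usual RFC 6488 signed-object validation); the point is just the "customer ASN MUST be covered by the EE certificate" rule:

```python
# Illustrative sketch only - hypothetical names, not the actual ASPA profile.
from dataclasses import dataclass, field

@dataclass
class EeCert:
    as_resources: set[int]   # ASNs covered by the EE certificate's AS resources
    valid: bool              # outcome of the usual signed-object path validation

@dataclass
class AspaObject:
    customer_asn: int
    provider_asns: list[int] = field(default_factory=list)
    ee_cert: EeCert = None

def validate_aspa(obj: AspaObject) -> bool:
    """Accept only if the EE cert is valid AND it covers the customer ASN."""
    if obj.ee_cert is None or not obj.ee_cert.valid:
        return False
    return obj.customer_asn in obj.ee_cert.as_resources
```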

>> If we do go down this road, then I think we should also look at the manifest object itself and let it convey which object (types) are critical (and while we are at it, we could specify types instead of relying on filename extensions). That way future object types could perhaps be introduced more easily. This obviously needs more discussion, but it could even allow for semantics like: 1) new object type - please test, don't use; 2) new object type - use if you can; 3) new object type - critical, fail if you don't understand it.
> One could combine the new SIA URI and a revised manifest, in which the manifest contains the per-object flag, rather than redefining the basic object format to accommodate the flag. That would reduce the number of RFCs that need to change. Good idea.
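For concreteness, a toy sketch of how an RP might dispatch on such a three-value per-entry flag (Python, purely illustrative - no such flag exists in the current manifest format, and all names are made up):

```python
# Hypothetical three-value per-object flag - nothing like this is in RFC 6486.
from enum import Enum

class ObjectFlag(Enum):
    TEST = 1      # new object type: parse for feedback, do not use results
    OPTIONAL = 2  # use if the RP understands the type, otherwise skip it
    CRITICAL = 3  # RP MUST understand the type, or fail the publication point

KNOWN_TYPES = {"cer", "crl", "mft", "roa"}  # types this RP implements

def process_entry(obj_type: str, flag: ObjectFlag) -> str:
    """Decide what to do with one manifest entry."""
    if obj_type in KNOWN_TYPES:
        return "validate"
    if flag is ObjectFlag.CRITICAL:
        return "reject-pub-point"  # one unknown critical object fails the PP
    if flag is ObjectFlag.OPTIONAL:
        return "skip"
    return "test-only"             # TEST: parse it, but ignore it for output
```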

Upon reflection I realised that even introducing the new SIA in the issuing CA certificate will lead to issues. RPs would reject the CA certificate, and as a result the whole PP of the parent CA. This means that the SIA cannot be deployed without leaving a significant number of RP installations behind. E.g. if I run a delegated CA under an RIR that wants to adopt ASPA, and I get a new CA certificate with the additional SIA from my parent, then thousands of other CAs (for the RIPE NCC, >12k) will also be rejected.

So the only option then is to wait until a significant number of updated RP tools are available, then at some point take the plunge and let CAs deploy - and tell operators who suddenly find their VRP count dropping to near zero that they should (have) upgrade(d).

This poses a serious problem for future RFCs - I just don't see how they can achieve an incremental deployment that avoids the issues above.

I understand your desire to keep 6486bis focused - and I sympathise. However, I do believe that it needs to specify some plan. Perhaps an SIA can be reserved? Perhaps it can be published in conjunction with ASPA - timelines permitting - postponing the pain? Perhaps checking the presence and hashes of unknown object types, but not validating them, can be reconsidered? (I heard your objections; I'm just listing it again in this context.)

I am quite open to other suggestions. But if there can be no plan that facilitates future RFC deployments, then I believe that at the very least the document should include a section discussing this issue (perhaps in security considerations, or a separate section altogether).

> Your example bothers me a bit - it seems to argue for CA-directed processing flags, perhaps to accommodate experimentation with new object types. This sounds like adopting elements of the IRR DB model, which didn't seem to be so great, IMHO.

Not where I meant to go, but this can be a future discussion.


> Separately, I think we need to make GBR mandatory for all pub points. If the intent is to cause RPs to contact a CA/pub point maintainer when errors are encountered, then we need to be confident that RPs know who to contact and how.
> Steve