Re: [Sidrops] 6486bis: Failed Fetches

Martin Hoffmann <> Wed, 26 August 2020 10:25 UTC

Date: Wed, 26 Aug 2020 12:25:39 +0200
From: Martin Hoffmann <>
To: Stephen Kent <>, SIDR Operations WG <>
Organization: Open Netlabs

Hi Steve,

looks like you forgot to include the mailing list, so I’ll keep the
quote in full.

Stephen Kent wrote:
> Martin,
> > Stephen Kent wrote:  
> >>    
> >> Are you positing the case where the cache contains an expired ROA
> >> for a CA instance, and a fetch that would have replaced the
> >> expired ROA fails?  
> > No, I am talking about a case where the ROA can be fetched
> > successfully and matches the manifest hash, but its EE certificate
> > is expired. Section 6.4 says that "if [files] fail the validity
> > tests specified in [RFC6488]", the fetch has failed. Thus, the
> > expired EE certificate in the ROA fails the complete fetch of all
> > objects associated with the CA.  
> Thanks for the clarification.
> > So, not replacing an expired ROA in a publication point makes the
> > entire CA not update anymore. I.e., any other objects that now
> > expire cannot be replaced until that ROA is also replaced.  
> "not update anymore" is not how I would state the result. This fetch 
> will fail. Because a failed fetch will be reported to the RP
> operations staff, hopefully they will contact the cognizant entity
> for the pub point in question, causing the error to be fixed. Then a
> subsequent fetch can succeed.

That seems like an overly optimistic approach to the issue. Assume the
problem is created by a bug or, worse, a design oversight in the CA
software. The turnaround from discovering the issue to deploying a fix
can easily be weeks with some vendors. During all that time, not only
can no ROAs be updated and child CA certificates may slowly expire, but
the entire CA’s data will not be available at all to any newly deployed
relying parties. With containerised deployment, this is quite a serious
problem.

As a consequence, this approach will make the routing system less
secure for, I’d like to argue, no actual gain.
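To make the failure mode concrete, here is a minimal sketch of the strict
rule (all names and the validity flag are illustrative, not taken from any
real validator): a single object failing its validity checks rejects the
entire fetch for the publication point.

```python
# Hypothetical sketch of the strict 6486bis rule; object names and the
# ee_cert_valid flag are illustrative, not from any real validator.

def fetch_strict(objects):
    """Return all objects from a publication point, or nothing at all
    if any single object fails validation (e.g. an expired EE cert)."""
    for obj in objects:
        if not obj["ee_cert_valid"]:
            return []  # one bad object fails the entire fetch
    return list(objects)

publication_point = [
    {"name": "good.roa", "ee_cert_valid": True},
    {"name": "stale.roa", "ee_cert_valid": False},  # expired EE certificate
    {"name": "child.cer", "ee_cert_valid": True},   # discarded as well
]
```

Under this rule the perfectly valid child certificate is discarded along
with the stale ROA, which is exactly the cascading expiry described above.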

> > You could argue “Don’t do that, then” but this approach doesn’t make
> > the RPKI more robust but rather makes it break more easily on simple
> > oversights.  
> My sense of the WG discussion was that the majority of  folks chose
> to prioritize correctness over robustness, and I made numerous
> changes to the text to reflect that.

I disagree with the blanket assessment that this approach makes the RPKI
more correct. To switch to the example I should have used in the first
place: Ignoring a broken GBR object when producing a list of VRPs does
not make the list less correct. In fact, the opposite is true: Rejecting
the CA, or updates to the CA, because of a broken GBR makes this list
less correct, since perfectly valid VRPs are dropped along with the
broken object.
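As an illustration (again with made-up record and field names), a
validator that skips an individually broken GBR still produces exactly
the same VRP list as if that GBR were absent:

```python
# Hypothetical sketch: per-object skipping when producing VRPs.
# Object records and field names are illustrative.

def vrps_lenient(objects):
    """Produce VRPs from all valid ROAs, skipping broken objects of
    other types instead of rejecting the whole CA."""
    return [obj["vrp"] for obj in objects
            if obj["type"] == "roa" and obj["valid"]]

ca_objects = [
    {"type": "roa", "valid": True,
     "vrp": ("AS64512", "192.0.2.0/24", 24)},
    {"type": "gbr", "valid": False},  # broken GBR, irrelevant to VRPs
]
```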

> >> If a manifest points to objects that are not CRLs, certs, ROAs,
> >> etc., then it is in error.  
> > How do you introduce new object types in this case? There will
> > always be relying parties that run old software that doesn’t know
> > of them. You rather have to assume that objects of unknown types
> > are signed objects with unknown content. If you do that, the
> > current draft stipulates that you have to read, parse, and validate
> > them -- and then throw away the content.
> >
> > This still means that all object types added to the RPKI must be
> > signed objects. Whether that is okay or not, I don’t quite know.  
> Yes, all objects in the RPKI are always signed!
> But, your first question raises a valid point, i.e., we do not have a 
> generic description of how new objects types are to be introduced in
> a graceful fashion (wrt RP processing). I am not sure that one
> can/should amend 6486bis to address this topic.

You absolutely have to deal with this issue in 6486bis in its current
strict form. Any introduction of a new object type will permanently
break CAs that use these objects when they are validated by relying
party software that is not aware of the type. I don’t think this is
acceptable, as it effectively blocks the introduction of new types
pretty much forever.

> Instead I believe it
> makes sense for any new object proposed for inclusion in the RPKI
> repository system to address this question as part of its
> documentation; it's not clear that a uniform approach is appropriate,
> i.e., one size may not fit all. 6486 can be updated to reflect the
> processing approach proposed for any new objects.

It seems to me that the best approach is to simply ignore unknown
objects. We could argue whether they can be ignored completely or
whether one should at least check their manifest hash. Personally, I
think completely ignoring is the better approach as I don’t see any
benefit in rejecting a CA because someone swapped out an object I don’t
care about.
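A sketch of that behaviour (the file-suffix dispatch is purely
illustrative; real validators key on the entries listed in the
manifest):

```python
# Hypothetical dispatch on object type; unknown types are ignored
# rather than failing the whole fetch for the CA.

KNOWN_TYPES = {"cer", "crl", "mft", "roa", "gbr"}

def process(filename):
    """Return how a lenient validator would treat a manifest entry."""
    suffix = filename.rsplit(".", 1)[-1]
    if suffix in KNOWN_TYPES:
        return "validate"
    return "ignore"  # unknown type: skip it, do not reject the CA
```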

Ultimately, I feel we’ve swung the pendulum way too far to the other
side. The RPKI isn’t a single data set that needs to be synchronized in
full but it consists of multiple data sets that can be treated as
independent: currently these are VRPs, router keys, and GBRs. If I use
the RPKI for route origin validation, I don’t need to synchronize the
router keys or GBRs. Why does it improve route origin validation if
available and correctly signed data is skipped because of issues with
irrelevant data?

> >> But, your question seems to be what processing
> >> has to be performed on the files contained in an apparently valid
> >> manifest, right? Section 6.4 and RFC 6488 defines the tests to be
> >> performed, and 6.4 explicitly cites 6488. What additional info do
> >> you feel is needed here?  
> > I would like the document to explicitly state how to deal with
> > object types appearing on a manifest that a relying party does not
> > know. If nothing else then to make the document more helpful for
> > implementers.  
> At this time the types of objects that may legitimately appear in a 
> manifest is small and well-defined. Any other objects would result in
> a fetch failure.
> >>> But I’m not even sure it provides any benefit. If, say, I am
> >>> validating a resource tagged association (RTA, [0]), I don’t care
> >>> about the ROAs at all. Does the RTA become invalid because a CA
> >>> somewhere in the validation chain had an expired ROA?  
> >> I have not examined the RTA ID, and it's an expired draft, so ...  
> > RTA validates signed objects distributed via alternative means using
> > the PKI published as part of the RPKI. I.e., one of the CA
> > certificates published via the RPKI has issued the EE certificate
> > used in that signed object.
> >
> > In order to validate that object, I do not need to look at any ROAs
> > or GBRs, only certificates, CRLs, and manifests.  
> According to George, the RTA carries its own, complete cert path, so 
> certs and CRLs stored in the repository system are irrelevant, and, 
> George noted that the RTA was designed to not require distribution
> via the repository system, so it's not a relevant example.
> > Should validation of that object fail if there is an expired ROA
> > published by one of the CAs along the validation chain?  
> As I noted above, an RTA is not relevant to this discussion, based on 
> George's description of its design and intent.
> Steve

Kind regards,