Re: [Sidrops] Interim Meeting Follow-up Mail

Tim Bruijnzeels <> Fri, 23 October 2020 19:15 UTC

To: Stephen Kent <>
List-Id: A list for the SIDR Operations WG <>


Thank you for your quick reply.

> On 23 Oct 2020, at 17:33, Stephen Kent <> wrote:
> Tim,
>> ...
>> I pointed out there is a failure scenario that cannot be avoided, where a child CA produces delegated CA certificates or ROAs containing over-claims, i.e. referencing resources not on its own CA certificate as currently published by its parent.
>> As a child CA using RFC 6492 (provisioning) I learn entitlements from my parent (section 3.3.2), but the response lacks the context to know:
>> - when a current resource will disappear
>> - when a new resource will be safe to use, i.e. when my *parent* has published my new certificate and RPs have picked it up
>> I am happy to try to discuss preventing over-claims as a separate issue, but in the context of this document I believe we should assume that they will happen.
>> This means that CAs will be temporarily rejected at times. This includes the case where an RIR (e.g. Lacnic) issues a certificate with extra resources to an NIR (e.g. ...), and the NIR issues and publishes a delegated certificate to one of its members before the extended certificate is published by the RIR.
>> FWIW, for the time being I can build a safeguard into the next version of Krill to refrain from using new resources for some time, but any time chosen is of course arbitrary.
>> I think there are two options here:
>> 1) accept that CAs will be temporarily rejected, even NIR level CAs
>> 2) make an exception for over-claiming, but otherwise valid, objects
>> I can see how #2 would be hard to swallow for some, but then at least consciously accept and acknowledge that #1 will happen, and it's not the rejected CA's fault.
> Why should a parent push a new cert to a child without waiting a suitable interval to allow RPs to learn of changes? Such behavior seems a bit careless by the parent.

Part of the issue, I believe, is that the parent can pro-actively issue a shrunk certificate to the child *before* the child learns about the reduced resource set and sends a new CSR for it. This may seem careless, but the parent may not have a choice if its own certificate is shrunk.

But, I think I should start a separate thread on this and explore the problem statement before suggesting solutions. I don't think this can be solved as part of the bis work.
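To make the over-claim condition described above concrete, here is a minimal sketch (not from the thread; names and the prefix-only scope are illustrative, and real resource sets would also include ASN ranges) of checking whether a child's resources exceed what the parent's currently published certificate covers:

```python
# Illustrative sketch of the over-claim check discussed above.
# Prefix resources only; ASN ranges are omitted for brevity.
from ipaddress import ip_network

def over_claims(child_resources, parent_resources):
    """Return the child's prefixes not covered by any prefix on the
    parent-published certificate (the 'over-claim' condition)."""
    parents = [ip_network(p) for p in parent_resources]
    uncovered = []
    for c in child_resources:
        cn = ip_network(c)
        # a child prefix is covered if some parent prefix contains it
        if not any(cn.version == p.version and cn.subnet_of(p)
                   for p in parents):
            uncovered.append(c)
    return uncovered
```

A child CA (or a cautious implementation such as the Krill safeguard mentioned above) could refuse to sign over any prefix this function reports until the parent's extended certificate is known to be published.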

>> ...
>> The bottom line is that one might want to filter out VRPs for the resources on the CA certificate for the child CA. Except in cases where the child has *all* resources (e.g. it is the online CA under a TA), because then this could invalidate all objects in the RPKI.
>> Two options wrt VRP filtering:
>> 1) Doing this adds complexity, and possibly brittleness in software as a result.
>> 2) Not doing this means the WG must accept that the landing is not always soft, and may result in brittleness in routing. A child CA can end up with invalid routes.
>> To me the most important question is how routing security is best served, by #1 or #2. I can live with either choice, as long as it's consciously accepted and acknowledged.
> VRP filtering seems to add considerable complexity. If CAs can't manage to do a better job when changing resource allocations, why should we expect the RP side of a child CA to do the right thing re filtering?

Note that this VRP filtering is very similar to the SLURM prefix based filtering already implemented by many RPs (Section 3.3.1 of RFC 8416). The key difference would be that in this case covering VRPs would need to be filtered as well (I am not sure now why this was not done in SLURM).

I don't deny that it adds complexity, and I can understand if the WG prefers the simpler validation model, but there is some prior art.
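As a rough sketch of the difference from SLURM (RFC 8416, Section 3.3.1) noted above: SLURM prefix filters drop VRPs whose prefix is covered by the filter prefix, whereas the filtering discussed here would also drop *covering* VRPs. An illustrative (not normative) version:

```python
# Sketch of VRP filtering that drops both covered and covering VRPs,
# per the discussion above. VRPs are (asn, prefix, max_length) tuples;
# the representation is illustrative, not from any RP implementation.
from ipaddress import ip_network

def filter_vrps(vrps, filter_prefix):
    """Drop VRPs whose prefix is covered by, equal to, or covers
    filter_prefix; keep everything else."""
    f = ip_network(filter_prefix)
    kept = []
    for asn, prefix, max_len in vrps:
        p = ip_network(prefix)
        # subnet_of covers equality; version check avoids mixing v4/v6
        if p.version == f.version and (p.subnet_of(f) or f.subnet_of(p)):
            continue  # covered or covering: filter it out
        kept.append((asn, prefix, max_len))
    return kept
```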

>> Observe the word 'valid' in this sentence in section 6.7 (Failed Fetches):
>>    "or does not acquire current valid instances of all of the objects enumerated"
>> As written this implies that unknown object types are not considered valid, and therefore RPs MUST reject *all* objects for a CA if they are encountered.
>> This means that new object types can only be published safely once enough RPs (for some arbitrary value of "enough") have been upgraded to support them. And even then, operators who did not upgrade will be left behind; by that (arbitrary) time the burden may be on them rather than on the publishing CA.
>> I, and some others I believe, suggested that unknown objects could be considered 'valid' in this context if they are present and match the hash in the manifest (i.e. the PP is complete and unaltered). One could even go further: as long as the new object is a form of RPKI Signed Object (RFC 6488), one could validate the outer object, but not its content.
> Your suggestion would avoid the problem you describe, but it also means that publishers of such objects have no idea whether any RPs know what to do with them. Is the plan to allow publication of arbitrary new objects (so long as the outer wrapping meets the 6488 syntax), and to address the transition to use of new objects in the RFCs that define the semantics of those objects?
>> ...
>>    Processing of the signed objects associated with the CA instance MUST
>>    be considered as failed for this fetch cycle, if:
>>    o current instances of objects matching the names and hashes in a
>>      current valid manifest could not be retrieved
>>    o any current object for a supported object type is considered
>>      invalid
>>    o any current object, of an unknown object type, which is found to
>>      be a form of an RPKI Signed Object, fails the validation outlined
>>      in Section 3 of RFC 6488, with the exception that further validation
>>      of the eContent (Section ...) is not performed.
> I find the wording above very confusing; the term "supported" is not defined, and  the last bullet is very, very convoluted, ...
> How about:
>   Processing of the signed objects associated with the CA instance MUST
>   be considered as failed for this fetch cycle, if
>   o current instances of all the objects matching the names and hashes in a
>     current valid manifest could not be retrieved
>   o any of the retrieved objects fails the Signed Object validation process
>     described in Section 3 of RFC 6488
>   o any retrieved object, of a type that the RP is prepared to process,
>     fails validation of the eContent as described in the section above

I like your text better!
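The revised rules just quoted can be sketched as a simple decision function. This is an illustrative model only (the `Obj` record, `SUPPORTED_TYPES` set, and boolean validation results stand in for real RFC 6488 and eContent validation, which are not shown):

```python
# Sketch of the revised failed-fetch decision quoted above.
# Validation results are modeled as precomputed booleans.
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    otype: str          # e.g. "roa", "aspa"
    outer_ok: bool      # result of RFC 6488 Section 3 outer validation
    econtent_ok: bool   # result of eContent validation (if type is supported)

# Types this RP is prepared to process (illustrative set)
SUPPORTED_TYPES = {"mft", "crl", "roa"}

def fetch_failed(manifest_names, retrieved):
    """True if processing of this CA's signed objects must be considered
    failed for this fetch cycle, per the revised rules."""
    by_name = {o.name: o for o in retrieved}
    # 1) all objects listed on the current valid manifest were retrieved
    if any(name not in by_name for name in manifest_names):
        return True
    for o in retrieved:
        # 2) every retrieved object passes RFC 6488 outer validation
        if not o.outer_ok:
            return True
        # 3) objects of a type the RP can process must also pass
        #    eContent validation; unknown types are tolerated
        if o.otype in SUPPORTED_TYPES and not o.econtent_ok:
            return True
    return False
```

Note how an object of an unknown type (e.g. "aspa" for a non-upgraded RP) with a valid outer wrapper does not cause the publication point to fail, which is exactly the property discussed in this thread.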

>> This would allow for new objects to be introduced, without causing existing - not updated - RPs to reject the publication point altogether. I believe that this is safe as long as a new type is orthogonal in its semantics to existing types - and it does not change their meaning. For example: ASPA objects do not change how ROAs work or should be validated.
> I agree that new objects that are independent of semantics of existing object types could be tolerated under these revised processing rules.
>> If new objects are not safe in this way, e.g. imagine that ASPA objects would change the meaning of ROAs, then we could just introduce two new object types instead: ASPA and ROAv2. CAs that choose to do ASPA could then publish ASPA and ROAv2 objects and stop publishing ROAs (v1).
> Easily said, in a casual fashion, but if one examines the transition processes already defined for algorithm transition etc., this will entail a complex description. Still, I suspect your principal goal is allowing near-term publication of ASPA objects, without having to deal with description of a transition process, so for that concern the revised Manifest processing changes would suffice.

Indeed that is my principal goal and this works for me.

I can also see that, for the class of new object types which are not orthogonal to existing types, the transition process needs to be clearly defined as part of their definition.


> Steve