Re: [pkix] PKIX and Revocation - Time to move forward!

"Dr. Pala" <> Mon, 20 November 2017 23:27 UTC

From: "Dr. Pala" <>
Organization: OpenCA Labs
Date: Mon, 20 Nov 2017 18:27:37 -0500

Hi Ryan,

thanks for the reply! Some comments inline...

On 11/20/2017 02:06 PM, Ryan Sleevi wrote:
> [...]
> I don't think it's fair to suggest that a lack of interest in 
> addressing the issue as you've framed it supports a conclusion that 
> there's been no improvements. I think there's been substantial 
> improvements within the relatively focused communities - as both LAMPS 
> and TRANS show - but it may be that you simply disagree on whether 
> they're improvements.
Don't get me wrong, there has been work in many directions - but none 
really focused on improving the access, scalability, and availability of 
revocation information (to a certain extent). Good work, but I see it 
more as patches rather than tackling the main issue. In my personal 
experience, when we tried to tackle revocation issues directly, the 
answer from WG Chairs or the Security AD was "we are not interested" or 
"it does not belong here", even when interest was expressed on the 
mailing list(s). As I said, regrettably, this is my direct personal 
experience. :-(
>     [...]
>       * *Does anyone remember why the use of non-sequential random
>         serial number for certificates was introduced ?* Initially, I
>         think, it was an attempt at masking the number of issued
>         certificates from a CA. Then, there was some kind of argument
>         about the "randomness" of data within certificates (that did
>         not make sense to me, really, since the public key provides
>         plenty of randomness) - maybe this was combined with the fear
>         of having "weak" hash algorithms for signing certificates ?
>         I.e., SHA-1 ? If that is correct, would the use of SHA2 family
>         (or next gen ones) solve the issue and remove the alleged need
>         for random serial numbers ?
> This is not a requirement of any IETF documents.
> It is a requirement of various industries - for example, the 
> CA/Browser Forum's Baseline Requirements - and itself was predated by 
> various root program requirements that trace their history back to the 
> MD5 collisions on 'live' CAs ~2006.
> Much in the same way that some CAs (incorrectly) argued that the 
> entropy added for MD5 was unnecessary for SHA-1, it seems premature to 
> suggest that the entropy is yet again unneeded for SHA-2 (given the 
> construction similarities). In either event, this represents one of 
> the few unpredictable fields that cannot be controlled by an attacker 
> under an adversarial issuance model, and in the world in which all 
> (Web) issuance should be fully automatic, that is worth considering.

Sorry if it was not clear - I was not suggesting that this was a 
requirement; rather, I wanted to verify the story here. Thanks for the 
clarification!

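As background on the entropy requirement Ryan mentions: the CA/Browser Forum Baseline Requirements mandate certificate serial numbers containing at least 64 bits of CSPRNG output (positive and non-zero). A minimal sketch of what a conforming generator looks like (the function name is mine, for illustration only):

```python
import secrets

def random_serial_number(bits: int = 64) -> int:
    # CA/Browser Forum BRs require serials to contain at least 64 bits
    # of output from a CSPRNG, and serials must be positive non-zero
    # integers; secrets.randbits() draws from the OS CSPRNG.
    serial = secrets.randbits(bits)
    while serial == 0:
        serial = secrets.randbits(bits)
    return serial

print(random_serial_number())
```

The unpredictable serial is what denies an attacker control over the to-be-signed bytes, which is the adversarial-issuance point made above.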
> [...]
> As part of this, the industry's adopted requirements require CAs to 
> regularly monitor their OCSP systems for such requests as yet another 
> signal of possible compromise.
Maybe I do not fully understand your point, but I think you just 
confirmed what I wrote, in some sense...? The switch in the semantics 
of the OCSP protocol was/is related to certificate transparency and 
certificate usage monitoring (i.e., if bad "serial numbers" are detected, 
there might be something weird going on). I guess - but correct me if I 
am wrong - that what you are saying is that this switch is complementary 
to CT rather than an initial attempt at CT...?
>     Since in the recent years we developed lots of experience when it
>     comes to what and how is actually used for revocation checking, I
>     think we can easily provide simpler designs with lower operational
>     costs and, consequently, better (more frequent) updates [*] for
>     revocation information that take all these considerations into
>     account (in particular, the requirements related to the 2nd
>     question above have pushed up the costs of providing revocation
>     information - probably an unintended consequence/side effect).
> I don't think the argument that it's increased the costs is well 
> supported, and certainly not with respect to the holistic costs of 
> maintaining "keys to the Internet".
Let's put it this way - what if we had a system that does not have to 
provide signed "tokens" (OCSP responses) for each issued certificate, 
but instead optimizes for the "valid" case? In that case, if you run a 
PKI with, say, 100M valid certs of which 300K are revoked, and you only 
have to produce 600K signatures overall, that could definitely change 
the order of magnitude of the incurred costs in terms of "capacity" 
(e.g., networking, signing, etc.).

I am trying to gather some information from major CAs around this point 
to understand what the real situation is today :D When/if I manage to 
get some data (and if anybody is actually interested), I would be happy 
to share it with the community.

Do you (or anybody on the list) happen to have experience running large 
revocation infrastructure? If so, would you mind sharing it (e.g., 
per-certificate costs for revocation-information provisioning, including 
the costs of running or outsourcing the required infrastructure)? I 
think it would be useful...

> In either event, it's worth noting that significant areas of 
> performance, privacy, and correctness have been glossed over, and from 
> industry, we know these are all substantial concerns that exist with 
> the revocation infrastructure today.
> [...]
> This is all predicated on an assumption of server certificates, 
> although admittedly, there are many other deployments that exist. 
> However, it is useful to frame the problem in the intended communities 
> - whether server certs, client certs, S/MIME, whether closed intranets 
> or public Internet, etc - so that the constraints can be understood 
> and the potential implementors can be determined.
>     [...]
You make very good points here, and I do not disagree :D I am also 
thinking about other environments where milliseconds are not the issue 
(e.g., IoT) and where scalability might be the dominant design requirement.
>     [*] Because of this shift in OCSP behavior, CAs now issue OCSP
>     responses that have validity of up to 7 days, which is definitely
>     not what OCSP was intended for (usually a few minutes, hours, or
>     maybe one day max was the initial view).
> The industry has long accepted 8 hours as the absolute minimum for 
> such responses. I also don't think the historic discussion supports 
> that parenthetical - given that OCSP and CRLs were intended with an 
> equivalency and CRL publication itself was measured in much longer 
> timescales.

I think that 8 hours is not the same as 7 days :D Big difference there. 
Also, I would say that CRLs and OCSP responses have very different 
"implemented semantics" when it comes to the validity period: it is 
well understood that CRLs should still be checked for updates (as new 
information can become available at any time, certainly before the 
nextUpdate), while OCSP responses are today treated as "cachable" for 
the entire validity period (which, if I read the RFCs correctly, is not 
the correct behavior). Unfortunately, the semantics that would "enforce" 
the correct behavior (omitting the nextUpdate in OCSP responses) are not 
really implemented by today's (Internet) CAs, as this might break the 
(limited) HTTP caching mechanism.
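A sketch of the freshness logic I mean (function name mine, for illustration): per RFC 6960, the absence of nextUpdate indicates that newer revocation information is available at any time, so such a response should not be served from cache, whereas a response with nextUpdate is treated as fresh until that time.

```python
from datetime import datetime, timedelta, timezone

def may_use_cached_response(this_update, next_update, now):
    """Freshness check following RFC 6960 semantics: no nextUpdate
    means newer revocation data may be available at any time, so
    the cached response should not be reused."""
    if now < this_update:
        return False           # response not yet valid
    if next_update is None:
        return False           # no declared window: refetch
    return now <= next_update  # cacheable until nextUpdate

now = datetime(2017, 11, 20, tzinfo=timezone.utc)
print(may_use_cached_response(now - timedelta(hours=1),
                              now + timedelta(days=7), now))   # True
print(may_use_cached_response(now - timedelta(hours=1),
                              None, now))                      # False
```

With the 7-day nextUpdate values CAs publish today, the first branch is what clients exercise in practice: a revocation event can sit unseen behind a "fresh" cached response for up to a week.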

Ryan, again, thanks! I really appreciated your good points and the 
discussion :D


Best Regards,
Massimiliano Pala, Ph.D.
OpenCA Labs Director