Re: [Secdispatch] [EXTERNAL]Re: Problem statement for post-quantum multi-algorithm PKI

"Dr. Pala" <> Tue, 15 October 2019 01:29 UTC


Hi All,

I would like to resume the discussion around the proposed solution for 
defining a new public-key and signature format where the internal 
structure is a SEQUENCE of what has been defined so far - e.g., RSA, 
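
To make the SEQUENCE idea concrete, here is a minimal, purely illustrative sketch in Python of a composite structure as an ordered sequence of (algorithm identifier, value) pairs. The ad-hoc length prefixes and all names here are my own assumptions for illustration - the actual proposal uses ASN.1/DER, not this encoding:

```python
# Illustrative only: a composite key (or signature) as an ordered sequence
# of (algorithm id, opaque bytes) pairs, with simple length prefixes so
# each component round-trips. The real proposal uses ASN.1 SEQUENCEs.
import struct

def encode_composite(components):
    """components: list of (alg_id: str, value: bytes), in preference order."""
    out = b""
    for alg_id, value in components:
        aid = alg_id.encode("utf-8")
        out += struct.pack(">H", len(aid)) + aid       # algorithm identifier
        out += struct.pack(">I", len(value)) + value   # opaque key/signature
    return out

def decode_composite(blob):
    """Inverse of encode_composite: recover the (alg_id, value) pairs."""
    components, i = [], 0
    while i < len(blob):
        (alen,) = struct.unpack_from(">H", blob, i); i += 2
        aid = blob[i:i + alen].decode("utf-8"); i += alen
        (vlen,) = struct.unpack_from(">I", blob, i); i += 4
        components.append((aid, blob[i:i + vlen])); i += vlen
    return components
```

The key property being illustrated: an old verifier can simply skip components whose algorithm identifier it does not recognize, while an upgraded verifier checks every component it supports.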

In particular, I think there are distinct aspects of the problem that 
need to be addressed separately. Each of these sub-problems is 
important and needs to be investigated. The aspect we are focusing the 
work on today is the Engineering/Operational part.

In particular, I see three distinct aspects of the problem:

  * *Post-Quantum Cryptography (PQC).* This relates to studying new
    algorithms that are safe against both quantum and classical attacks.
    This requires years of work, and it is a process that NIST has
    already started. We are NOT addressing this problem (out of scope).

  * *Protocol (e.g., TLS) behavior with PQC.* This relates to how
    current protocols handle Post-Quantum Cryptography. In particular,
    if we were to adopt one of the PQC algorithms available today, how
    would TLS perform? How would other protocols that leverage PKIX
    perform? We are NOT addressing this problem (out of scope).

  * *How to Deploy New Algorithms in today's PKIs (Engineering/Operations).*
    This relates to how we can deploy the new algorithms. In a mixed
    environment - where you do not yet know whether the new algorithms
    are sound, and the older algorithms are not broken yet - being able
    to protect the different parts of a PKIX infrastructure with
    multiple algorithms is needed today. This proposal is about solving
    this problem: protecting infrastructures today with multiple
    algorithms (at least at the Root and CA levels) and allowing devices
    to update their support for the intended algorithm in stages - first
    the older algorithm is used, then the newer algorithm can be used to
    validate certificates, and finally the EE certificate's algorithm
    itself can be updated to sign with the newer algorithm (this is when
    you can start using the new algorithm by itself).
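
The staged rollout described above can be sketched as follows. This is a hedged illustration, not the draft's actual validation algorithm - the `CompositeSignature` shape, the algorithm names, and the policy are assumptions of mine:

```python
# Sketch of staged verification: a composite signature carries one
# signature per component algorithm, and a relying party verifies with
# every algorithm it supports - a legacy device knows only the old
# algorithm, an upgraded device checks both. Names are illustrative.
from dataclasses import dataclass

@dataclass
class CompositeSignature:
    # one (algorithm_id, signature_bytes) pair per component algorithm
    components: list

def verify_composite(sig: CompositeSignature, supported: dict) -> bool:
    """Verify using every component algorithm the relying party supports.

    `supported` maps algorithm ids to verify callables. Unknown
    algorithms are skipped (legacy-device behavior); any supported
    component that fails rejects the certificate.
    """
    checked = False
    for alg_id, sig_bytes in sig.components:
        verifier = supported.get(alg_id)
        if verifier is None:
            continue  # unrecognized algorithm: skip it
        if not verifier(sig_bytes):
            return False  # a supported component failed: reject
        checked = True
    return checked  # at least one supported component must have verified
```

Under this policy, an old device validates the chain with the legacy algorithm alone, while an upgraded device gets the protection of both - which is the staged migration the proposal aims for.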

For the specific technical details, /*the proposed solution can actually 
be used to protect the different data structures in PKIX*/, therefore 
making it _/economically feasible/_ to keep trusting your current Trust 
Infrastructures - it does not apply only to "Traditional" vs. 
"Post-Quantum" cryptography; it can be used any time you want to 
support more than one algorithm for security or efficiency reasons 
(e.g., we can provide a mixed RSA/ECDSA infrastructure - older devices 
that support only RSA will use that algorithm to validate the chain and 
revocation info, while newer devices that support ECDSA will use that 
algorithm instead).

Other possibilities (e.g., using separate certificate chains), IMO, 
complicate PKIX operations - e.g., when revoking a certificate we would 
need to make sure we also revoke all the corresponding certificates in 
the sister chains, thus introducing operational costs and procedures 
that Ops teams do not support today.

I do like this technical solution because it allows me to extend the 
lifetime of my Trust Infrastructure beyond the "Factorization Doom's 
Day" - it provides me with the tool I need to make sure all of us are 
still protected when getting on your broadband network :D It is not 
just a "Post-Quantum Crypto" tool.

I would also welcome any other technical solution that provides the same 
level of backward compatibility and extensibility: a solution that 
allows me to keep trusting my infrastructure today and lets me switch 
to a newer protocol tomorrow without having to change my authentication 
procedures (i.e., both validating credentials and checking their 
revocation status).

Last but not least, /*adopting this idea does not mean we cannot work on 
the other two important aspects of the problem*/, or that a completely 
new method for authentication (like Steve Farrell or Phillip Baker 
suggested) cannot be investigated - we need to start somewhere, and I 
think this is a very good starting point :D Other approaches might 
require multiple years of discussion, therefore I would suggest we work 
on this first and address the remaining problems in other venues and/or 
when we have a proposed solution for them.

Does that make sense?


On 9/16/19 3:37 PM, Mike Ounsworth wrote:
> Hi Stephen,
> I feel like we're arguing in circles here and not making any progress.
> Re: figuring out "hybrid signature authentication" in parallel with NIST;
> You seem to be implying that we can't work on defining message structures to hold multiple keys and signatures until we know the exact encodings of the NIST winners. I'm not sure I follow the reason why.
> Currently, something like CMS (RFC 5652), for example, is abstracted away from the encodings of a given algorithm; an algorithm can choose any method it wishes to turn its public key and signature into an octet string; how it does so is an internal detail of the algorithm and has no bearing on the CMS spec. This abstraction between protocol and crypto is a core part of crypto agility. Surely we can start thinking about how to properly combine multiple signatures before we know exactly what those signatures will be.
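
[The abstraction Mike describes might be sketched like this - the OID strings and helper names below are placeholders of mine, not registered identifiers or a real CMS API:]

```python
# Sketch of the protocol/crypto abstraction: the envelope stores an
# algorithm identifier plus an opaque octet string per signer, so the
# container format needs no knowledge of any algorithm's internals.
from dataclasses import dataclass

@dataclass
class SignerInfo:
    algorithm_oid: str  # names the algorithm that defines `signature`
    signature: bytes    # opaque octet string produced by that algorithm

def sign_with_all(message: bytes, signers) -> list:
    """One SignerInfo per (oid, sign_fn) pair. The envelope-building code
    is identical whether a component is RSA today or a PQC winner later."""
    return [SignerInfo(oid, sign_fn(message)) for oid, sign_fn in signers]
```
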
> Re: "Why X.509?"
> You seem to be expecting me to justify why X.509 is worth keeping.
> I'm expecting you to propose an alternative and justify why it's better.
> We're at a stalemate.
> Since X.509 is the accepted standard, I think the ball's in your court here to justify why it should be binned.
> - - -
> Mike Ounsworth | Office: +1 (613) 270-2873
> -----Original Message-----
> From: Secdispatch <> On Behalf Of Stephen Farrell
> Sent: Monday, September 16, 2019 3:59 PM
> To:
> Subject: [EXTERNAL]Re: [Secdispatch] Problem statement for post-quantum multi-algorithm PKI
> Hiya,
> Replying to various folks at once...
> On 15/09/2019 15:29, Ira McDonald wrote:
>> Hi,
>> Thanks for the link to Kenny's talk.
>> Stephen - The hard problem for automotive vehicles is that, even if
>> Quantum Computing never comes to pass, algorithms and various
>> implementations go on having new weaknesses found over time.
>> But decent performance requires hardware assist, in many cases.
>> But automotive ECUs are very unlikely to start having large FPGAs added
>> soon.  Replacing 100s of expensive ECUs in fielded vehicles to allow
>> practical algorithm agility is not going to happen.  This issue that
>> Michael Richardson mentioned is at the top of the list for the
>> automotive cybersecurity community.
> I don't understand how devices that are not going to be updated can support algorithm agility. Perhaps you mean that you want to deploy those devices soon and not update them for a couple of decades or something? If so, that sounds like a bad plan to me, and one that'd be better not to cater to, really. (RFC 8240 has lots of discussion of that.)
> On 16/09/2019 17:05, Mike Ounsworth wrote:
>> My Goal: multi-vendor interop on PQ certificates.
> That seems to beg the question again as to why x.509 is needed at all as part of a PQ solution.
>> I'm coming from the
>> perspective of a CA; it can take years to distribute a root cert to
>> all the places it needs to be before you can really start using it.
>> Plus, people want to start playing with these things ASAP to understand the
>> scope of infrastructure changes required. There's the time pressure.
>> I think you're right that to really deploy any meaningful 20 year root
>> using, for example the small lattice schemes, we'll need to wait for
>> the NIST PQC algs to stop having so much churn.
>> That said, laying the groundwork for the "hybrid" property in
>> certificates that the NIST PQC community is calling for will require
>> much debate and a few RFCs. This work is necessary and independent of
>> the choice of algorithm from the NIST PQC competition, so why should
>> we wait until 2023 to _start_ thinking about it? Why not do it in
>> parallel, be able to offer alpha test versions of PKI products before
>> the conclusion of the NIST PQC, and be ready to drop-in the NIST
>> winners the day they're ready?
> One reason to not do it in parallel is that we don't know how the winning algorithm parameters will look. I can easily imagine NIST modifying how those are encoded and/or introducing new variations, after basic algorithms have been picked, leading to things having to be re-done.
> (Sorry if the quoting is messed up below; if so, it was messed up in my MUA before I started is my excuse :-)
> On 16/09/2019 19:06, Daniel Van Geest wrote:
>> Can we support multiple signatures inside a certificate? I don't think
>> so.
>> Why not?  Mike’s problem statement draft has two potential technical
>> solutions doing just that, each with advantages and disadvantages.
>> Or is there more of a logistical or other issue?  Knowing why you
>> think we can’t support multiple signatures inside a certificate could
>> help refine the problem statement.
> Again, that assumes that x.509 is a sensible part of a solution.
> We should first question that. (Mike's draft [1] doesn't.)
> Secondly, even if x.509 additions were useful somehow for backwards compatibility (which I find hard to believe TBH) then dealing with >1 certificate is likely far easier than messing about inside certs and thereby breaking all the lovely/horrible x.509 code out there.
> So Mike's section 2.1 [1] is way easier than the 2.[2|3] approaches, despite it being the one with no specific drafts.
> Again, all that said, I do understand why it may be attractive for those who produce certificates to argue for putting the PQ magic beans inside x.509. There are costs elsewhere implied in doing that, so it ought not be a starting-out assumption.
> I don't consider the question as to why a PQ x.509 is needed nor why now has been satisfactorily answered so far.
> Cheers,
> S.
> [1]
Best Regards,
Massimiliano Pala, Ph.D.
OpenCA Labs Director