Re: [Cfrg] Progress on curve recommendations for TLS WG

"D. J. Bernstein" <djb@cr.yp.to> Fri, 01 August 2014 01:37 UTC

Paterson, Kenny writes:
> Would selecting curves that are not in Weierstrass form materially
> slow down deployment?

It wouldn't slow down deployment at all. As explained in detail in my
"curves, coordinates, and computations" message, Montgomery curves and
Edwards curves are trivially expressed in short-Weierstrass form, so
they are fully compatible with standards such as ECDSA, and they allow
all of the usual implementation options.
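
For concreteness, here is a minimal Python sketch (illustrative only; it
uses the textbook change of variables, nothing specific to this message)
showing Curve25519's Montgomery equation rewritten in short-Weierstrass
form:

   p = 2**255 - 19
   A, B = 486662, 1                  # Curve25519 in Montgomery form: B*y^2 = x^3 + A*x^2 + x
   inv = lambda z: pow(z, p - 2, p)  # modular inverse (p is prime)

   # textbook substitution t = x/B + A/(3B), v = y/B gives v^2 = t^3 + a*t + b
   a = (3 - A*A) * inv(3*B*B) % p
   b = (2*A**3 - 9*A) * inv(27*B**3) % p

   # spot-check at the base point x = 9: the mapped point lies on the Weierstrass curve
   x = 9
   y2 = (x**3 + A*x*x + x) * inv(B) % p          # y^2 from the Montgomery equation
   t = (x + A * inv(3) % p) * inv(B) % p
   assert (t**3 + a*t + b) % p == y2 * inv(B*B) % p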

These curves _also_ enable additional implementation options and better
protocol designs---additional computations and coordinates---helping to
minimize tensions between speed, security, and simplicity. For example,
the Montgomery ladder is the simplest DH algorithm in all of ECC, and in
combination with sensible engineering (always set the high bit of the
scalar, choose twist-secure curves, use protocols that suppress the
y-coordinate) it removes many opportunities for implementors to screw up
security. Similarly, complete Edwards addition simplifies signatures.

The obvious conclusion is that every new curve should support the
_coordinates_ required for these additional options. What this means is
that the curve has a unique point of order 2 and points of order 4. Any
such curve can be expressed in Edwards form _and_ can be expressed in
Montgomery form _and_ can be expressed in short-Weierstrass form.
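
For example, the textbook birational equivalence sends B v^2 = u^3 +
A u^2 + u to the twisted Edwards curve a x^2 + y^2 = 1 + d x^2 y^2 with
a = (A+2)/B and d = (A-2)/B, via (x, y) = (u/v, (u-1)/(u+1)). A quick
Python check for Curve25519 (again a standard map, not anything new):

   p = 2**255 - 19
   A, B = 486662, 1
   inv = lambda z: pow(z, p - 2, p)

   # twisted Edwards coefficients for the same curve
   a = (A + 2) * inv(B) % p          # = 486664
   d = (A - 2) * inv(B) % p          # = 486660

   # recover v for the base point u = 9 (p = 5 mod 8, so use the standard sqrt trick)
   n = (9**3 + A*81 + 9) * inv(B) % p
   v = pow(n, (p + 3)//8, p)
   if v*v % p != n:
       v = v * pow(2, (p - 1)//4, p) % p

   # map (u, v) -> (x, y) = (u/v, (u-1)/(u+1)) and check the Edwards equation
   x, y = 9 * inv(v) % p, 8 * inv(10) % p
   assert (a*x*x + y*y) % p == (1 + d*x*x*y*y) % p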

> What is the cost of keeping backwards compatibility with existing
> defined point formats in RFC 4492, if any, for different curve shapes
> (Edwards, twisted Edwards, Montgomery, Weierstrass-only form)?

Adding new formats would provide only a small gain in efficiency (see my
"curves, coordinates, and computations" message) but a larger gain in
simplicity. For DH protocols I recommend the Montgomery x-coordinate,
allowing remarkably simple DH implementations. Signatures are inherently
more complicated, with two scalars inside verification; for signatures I
recommend Edwards coordinates to simplify the resulting additions. Both
of these encoding choices are compatible with top speed but are amply
justified by simplicity.
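
As a toy illustration of what completeness buys (tiny prime, parameters
chosen purely for demonstration): with a non-square d, a single addition
formula covers every pair of points, including doublings and the neutral
element, with no exceptional cases:

   p = 101                           # toy prime; a real curve uses a ~256-bit prime
   nonsquare = lambda z: pow(z, (p - 1)//2, p) == p - 1
   d = next(z for z in range(2, p) if nonsquare(z))   # non-square d => complete law

   def on_curve(P):
       x, y = P
       return (x*x + y*y - 1 - d*x*x*y*y) % p == 0

   def add(P, Q):
       # one formula for every case: generic addition, doubling, neutral (0, 1)
       (x1, y1), (x2, y2) = P, Q
       t = d * x1 * x2 * y1 * y2 % p
       assert (1 + t) % p and (1 - t) % p              # denominators never vanish
       x3 = (x1*y2 + y1*x2) * pow((1 + t) % p, p - 2, p) % p
       y3 = (y1*y2 - x1*x2) * pow((1 - t) % p, p - 2, p) % p
       return (x3, y3)

   points = [(x, y) for x in range(p) for y in range(p) if on_curve((x, y))]
   assert all(on_curve(add(P, Q)) for P in points for Q in points)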

> If ephemeral really means ephemeral, what are the implications for the
> mix of fixed-base/variable-base computations in ECDHE and what, if any,
> are the implications for the choice of curve type?

In the setting of one-time DH keys, both fixed-base and variable-base
computations are frequent. For all curves one can easily make fixed-base
a few times faster than variable-base; the exact speedup depends mainly
on the amount of precomputation. A poor choice of curve (cofactor 1)
slows down both of these operations by a factor of roughly 1.5. A poor
choice of prime (e.g., the P-256 prime) creates another big slowdown.

Details of the performance considerations have been amply explored in
the literature, leading to the conclusion that what one wants for
performance is a curve with a unique point of order 2 and points of
order 4---fortunately matching what one wants for simplicity.

> Correspondingly, what are the implications for our choices if we accept
> that ephemeral reuse is the expected behaviour?

I think it's important to be careful with terminology here.

The TLS advertisement of "perfect forward secrecy" seems dangerously
deceptive when keys are in fact not erased for days or months (see
https://www.imperialviolet.org/2013/06/27/botchingpfs.html). Keeping
keys around for an unlimited-length session, or even for a single
unlimited-length connection, is clearly not the right policy. The
fundamental question here is not whether keys are reused, but how much
time is allowed to elapse before keys are erased.

I'm told that the default time for Microsoft products is two hours. This
strikes me as much too long. I'd suggest 10 seconds as an upper limit. I
realize that many protocols need to be changed to do this properly, but
it's a target to shoot for.

But what about the people who really want to save keygen time by reusing
DH keys? Won't they object to the 10-second upper limit?

The good news: 1 keygen is unnoticeable next to (say) 100 variable-base
computations, even if the keygen is implemented as a variable-base
computation without any special fixed-base code. So these people don't
actually have any performance incentive to use a DH key more than 100
times. The only ways to stretch 100 DH invocations across 10 seconds are

   * to use a very slow device or
   * to use a faster device that isn't actually spending much of its
     time on DH, so there's little reason to worry about keygen.

A few data points regarding "very slow device": I'm not talking about a
typical smartphone/tablet CPU such as a 1GHz Cortex-A8 (20000 Curve25519
computations in 10 seconds). I'm talking about, e.g., a 16MHz ATmega2560
microcontroller. The Hutter--Schwabe AVR implementation of Curve25519
takes 1.4 seconds on this CPU---but this still means that 10 seconds is
enough time for a keygen plus several variable-base computations.
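
Making the arithmetic explicit (nothing here beyond the figures quoted
above):

   cortex_ops = 20000 / 10           # 1GHz Cortex-A8: Curve25519 ops per second
   print(1000 / cortex_ops)          # ~0.5 ms per op, so 1 keygen per 100 DH ops
                                     # is well under 1% overhead
   print(10 / 1.4)                   # 16MHz ATmega2560: ~7 ops in 10 seconds,
                                     # i.e. a keygen plus several variable-base ops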

Obviously the performance impact of reusing DH keys is to emphasize
variable-base computations. In the opposite direction, signing
emphasizes fixed-base computations, while verification emphasizes
double-base computations.

As for implications, surely CFRG should consider the performance of each
of these important computations, should consider the simplicity of
implementations focused on one type of computation, and should consider
the simplicity of implementations handling more types of computations.
But I don't expect this broad view to create conflicts in curve choices;
it's more a matter of confirming that one curve works well everywhere.

I do recommend considering, separately from performance and simplicity
of good implementations, the extent to which performance and simplicity
can be improved by _bad_ implementations. Such improvements create an
incentive towards bad implementations and are therefore anti-desiderata.
Examples of "bad":

   * implementations that don't handle all valid inputs correctly;
   * implementations that damage security when the attacker supplies
     invalid inputs;
   * variable-time implementations.

There are two reasons to highlight timing attacks rather than power
attacks, EM attacks, acoustic attacks, etc.: first, timing is naturally
visible through the Internet; second, I highly doubt that considering
side-channel attacks beyond timing will affect any curve decisions.

> Do the current proposals (Curve25519 and friends, and the NUMS curves)
> provide an adequate degree of rigidity that is likely to satisfy the
> widest set of commentators?

This sounds to me like an evaluation question, not a requirements
question. I do think that limiting the curve designer's wiggle room
belongs on the list of desiderata, for reasons explained in the
following documents:

   http://cr.yp.to/talks/2013.05.31/slides-dan+tanja-20130531-4x3.pdf (page 21)
   http://safecurves.cr.yp.to/rigid.html

However, the following recent analysis shows that the Brainpool curves,
which were advertised as "generated in a systematic and comprehensive
way" covering several different security levels, actually gave the curve
generator the flexibility to secretly choose from among more than
1000000 curves for any particular prime:

   http://safecurves.cr.yp.to/bada55.html
   http://safecurves.cr.yp.to/bada55/bada55-20140722.pdf

I would therefore advise against stating any desiderata in terms of
unjustified marketing terminology such as "nothing up my sleeves" or
"generated in a systematic and comprehensive way" or "generated
deterministically from the security level". The actual question here is
quantitative: how many different curves could a malicious curve
generator have fooled the public into accepting?

> Or should we be thinking about generating
> fresh curves using a public process having verifiably random inputs? What
> would the likely impact be on performance?

The impact depends on exactly which class of curves is chosen, raising
the question of what manipulation is possible through this choice---I
would guess more than the manipulation possible for any of the proposals
that are on the table so far.

Dan Brown claims that choosing random curves (in some unspecified class)
is safer than choosing "special" curves. He says that this claim is
justified by "special" MOV-vulnerable curves and "special"
SmartASS-vulnerable curves. However, he ignores

   * the fact that we always take "special" curves with unusually small
     cofactors, and believe these to be more secure than random curves;

   * the fact that we now take even more "special" curves that also have
     unusually small twist cofactors, for the same reason; and

   * five further counterexamples in Section 11 of a 2008 paper by
     Koblitz, Koblitz, and Menezes (http://eprint.iacr.org/2008/390)---
     in these five examples, under perfectly believable hypotheses,
     random curve choices are _less_ secure than "special" curves.

Koblitz, Koblitz, and Menezes conclude that "either choice has risks"
and that the ultimate decision is "a subjective one based on the user's
best guess about future vulnerabilities."

---Dan