Re: [Cfrg] Progress on curve recommendations for TLS WG

"D. J. Bernstein" <djb@cr.yp.to> Sat, 16 August 2014 02:16 UTC

Date: 16 Aug 2014 02:16:27 -0000
Message-ID: <20140816021627.28151.qmail@cr.yp.to>
From: "D. J. Bernstein" <djb@cr.yp.to>
To: cfrg@irtf.org
Mail-Followup-To: cfrg@irtf.org
In-Reply-To: <810C31990B57ED40B2062BA10D43FBF5CCD05A@XMB116CNC.rim.net>
Archived-At: http://mailarchive.ietf.org/arch/msg/cfrg/UQ3NFIfY5pmRUR6INwK2o591pPo
Subject: Re: [Cfrg] Progress on curve recommendations for TLS WG
List-Id: Crypto Forum Research Group <cfrg.irtf.org>

Less rigidity means a larger collection of curves available for secret
selection by the curve generator. This is important in the following
scenario, which shouldn't be exaggerated but also shouldn't be ignored:

   * the curve generator is malicious;
   * the curve generator knows an attack that the public doesn't know;
   * some curve in the larger collection is vulnerable to the attack;
   * no curves in the smaller collection are vulnerable to the attack.

A much simpler issue is that any curve, even an honestly generated one,
might turn out to be vulnerable to a secret attack. We do _not_ have any
way to protect against this---but, again, it's not the same issue. What
we _do_ have are ways to stop the curve generator from amplifying a
one-in-a-million secret attack into a guaranteed attack. Dan Brown is
confusing these two issues when he says that "rigidity alone" does not
ensure safety "against secret attacks".
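To put a rough number on the amplification argument (my own back-of-envelope model, not anything from Brown's or my earlier messages): suppose a fraction eps of all acceptable curves are vulnerable to the secret attack, and suppose vulnerability is independent across curves. An honest generator picking one curve at random is vulnerable with probability eps; a malicious generator free to search n candidates succeeds with probability 1 - (1 - eps)^n. The values eps = 10^-6 and n = 10^6 below are illustrative choices only.

```python
import math

def honest_risk(eps):
    """Probability that a single randomly chosen curve is vulnerable."""
    return eps

def malicious_risk(eps, n):
    """Probability that a generator searching n candidate curves finds
    at least one vulnerable curve, assuming independence across curves.
    Computed as 1 - (1 - eps)^n in a numerically stable way."""
    return -math.expm1(n * math.log1p(-eps))

# A one-in-a-million attack stays one-in-a-million for the honest
# generator, but a search over a million curves succeeds with
# probability about 1 - 1/e, i.e. roughly 0.63:
print(honest_risk(1e-6))            # 1e-06
print(malicious_risk(1e-6, 10**6))  # ~ 0.63
```

With n = 10^7 the malicious generator's success probability is already above 0.9999, which is the sense in which wiggle room "amplifies" a rare attack into an essentially guaranteed one.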

One way to achieve a reasonable level of rigidity is to choose the curve
that is as _small_ as possible---while obeying public security criteria,
obviously. (My experience is that defining "small" as "smallest cost",
averaged in any plausible way across platforms, minimizes wiggle room:
it's vastly more difficult to reach truly top speed than to merely come
close. But that isn't the topic of this message.)
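To make "smallest curve obeying public security criteria" concrete, here is a toy sketch of such a rigid procedure. Everything in it is my own illustrative simplification, not a standardized recipe: the tiny prime p = 1019, the brute-force point counting (hopeless at real sizes, where one would use Schoof/SEA), and the particular criteria (curve order and twist order both of the form cofactor times an odd prime, cofactor 4 or 8). The point is only that the output is fully determined by p and the published criteria, leaving the generator no wiggle room.

```python
def legendre(a, p):
    """Legendre symbol (a/p): 1 if a is a nonzero square mod p,
    -1 if a nonsquare, 0 if a is 0 mod p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def curve_order(A, p):
    """Order of the Montgomery curve y^2 = x^3 + A*x^2 + x over GF(p),
    by brute-force point counting: p + 1 + sum of Legendre symbols."""
    return p + 1 + sum(legendre(x * (x * x + A * x + 1), p)
                       for x in range(p))

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def smallest_safe_A(p):
    """Smallest coefficient A giving a nonsingular curve whose order
    and twist order are both h * (odd prime) with h in {4, 8}.
    Rigidity: the result is a deterministic function of p and the
    criteria, with no secret freedom left to the curve generator."""
    for A in range(3, p - 1):
        if (A * A - 4) % p == 0:     # singular curve, skip
            continue
        n = curve_order(A, p)
        t = 2 * p + 2 - n            # order of the quadratic twist
        if any(n % h == 0 and is_prime(n // h) and (n // h) % 2
               and t % g == 0 and is_prime(t // g) and (t // g) % 2
               for h in (4, 8) for g in (4, 8)):
            return A
    return None
```

Note how "smallest" resolves the remaining freedom: anyone can rerun the search and confirm that no smaller A passes, which is exactly the kind of check that the averaged-cost definition of "small" makes much harder to game.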

What do we know, for various definitions of "small", about the security
of taking the smallest curve? Answer: Maybe it's more secure than
others. Maybe it's less. We simply don't know. The literature has

   * examples of attacks where the smallest curve resisting other
     attacks is _more_ secure than average and

   * examples of attacks where the smallest curve resisting other
     attacks is _less_ secure than average.

Maybe a secret attack would be of the first type, or maybe it would be
of the second type. In a previous message I pointed to seven known
examples of the first type (for various notions of smallness) and two
known examples of the second type. Koblitz, Koblitz (the co-inventor of
ECC), and Menezes wrote a long paper http://eprint.iacr.org/2008/390
with a section devoted to this issue:

   11.1. Special or random selection of parameters?

   A general philosophy one often encounters in cryptography is that
   whenever possible parameters should be chosen by some random process.
   If a special choice is made to increase efficiency, there is always
   the risk that the same property that made the choice so attractive
   will also lead to vulnerability to an unanticipated attack.

   [... several pages describing five detailed examples of how special
   choices can _improve_ security ...]
   
   Our purpose in giving these examples is not to lobby for the use of
   special curves in preference to random ones. Rather, our point is
   that conventional wisdom may turn out to be wrong and that, as far as
   anyone knows, either choice has risks. The decision about what kind
   of curve to use in ECC is a subjective one based on the user's best
   guess about future vulnerabilities.

   As frequently happens in cryptography, a close examination of a
   commonly accepted viewpoint on security issues reveals that opposing
   opinions or interpretations cannot be ruled out. Much as we might
   wish to convey to the outside world an impression of self-confidence
   and mathematical certainty about our recommendations (see Section 2),
   there is ample reason to wonder whether this self-confidence is
   justified.

Brown keeps claiming, incorrectly, that the risks are entirely on one
side. His argument consists of pointing to the well-documented risks on
that side (such as MOV, one of the examples I had given) while ignoring
the well-documented risks on the other side.

---Dan