Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom

Paul Lambert <> Wed, 15 January 2014 01:33 UTC

From: Paul Lambert <>
To: David McGrew <>, Dan Brown <>
Date: Tue, 14 Jan 2014 17:33:03 -0800

From: Cfrg [] On Behalf Of David McGrew
Sent: Tuesday, January 14, 2014 12:21 PM
To: Dan Brown
Cc: ''
Subject: Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom

Hi Dan,

thanks for sharing your thoughts.  It seems to me that the arguments in favor of rigid curves come down to the "nothing up my sleeve" argument: if there are no free parameters to be chosen when constructing an elliptic curve group, then there is no need to trust the person doing the construction.

There is another consideration: the cost of attacking multiple instances of the EC discrete log problem can be amortized across those instances, using techniques such as Kuhn and Struik's "Random Walks Revisited: Extensions of Pollard's Rho Algorithm for Computing Multiple Discrete Logarithms", or a time-memory tradeoff like Shanks' baby step - giant step method, in which the number of stored elements exceeds the square root of the group size.   Some users may be motivated by the logic that, by avoiding a particular well-known curve, they prevent potential attackers from amortizing the cost of finding their private key within a batch of other attacks.   If the cost of solving a single discrete logarithm problem is within the attacker's budget, then these considerations come into play.   It would then be valuable to have a way to generate multiple curves, which could be pseudorandom and verifiable, or perhaps sequential and based on the "smallest number greater than p" sort of logic.
[PL]  Interesting … so we could have a curve-of-the-day site.  Order is hard to create, but could be generated and used by many.
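The amortization point above can be illustrated with a toy baby-step / giant-step solver (a sketch, not from the thread; the group and numbers are illustrative): the baby-step table depends only on the generator and modulus, so it can be built once and reused against many targets.

```python
# Toy baby-step / giant-step discrete log (illustrative, not an attack on
# any real curve): solves g^x = h (mod p) in O(sqrt(p)) time and memory.
# The baby-step table depends only on g and p, so one table can be reused
# to attack many targets h -- the amortization discussed above.
import math

def bsgs(g, h, p):
    """Return x with g**x % p == h % p, or None if no solution is found."""
    m = math.isqrt(p) + 1
    # Baby steps: store g^j for j in [0, m)
    table = {}
    cur = 1
    for j in range(m):
        table.setdefault(cur, j)
        cur = (cur * g) % p
    # Giant steps: check h * g^(-m*i) against the table
    gm_inv = pow(g, -m, p)  # modular inverse power (Python 3.8+)
    y = h % p
    for i in range(m):
        if y in table:
            return i * m + table[y]
        y = (y * gm_inv) % p
    return None
```

Storing more than sqrt(n) baby steps, as the message notes, shifts the tradeoff further: each additional target then costs correspondingly fewer giant steps.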

In any case, the suggestion to document the rationale behind elliptic curve parameter choices is a good one!


On 01/14/2014 11:51 AM, Dan Brown wrote:

-----Original Message-----

From: Alyssa Rowan

Sent: Monday, January 13, 2014 7:27 PM

On 13/01/2014 23:07, Dan Brown wrote:

For a given field, pseudorandom curve better resists secret attacks than rigidity.

With respect, I disagree.

With zero knowledge of a hypothetical attack, we have zero knowledge about what may, or may not, be affected by it.

We have zero knowledge that Curve25519 resists the hypothetical attack to which P256 has hypothetically been manipulated to be vulnerable.

So, what is rigidity claiming to resist, under your reasoning?

Under my reasoning, I assume that there is a set of curves, partitioned into two subsets: curves vulnerable to a secret attack known to the generator (NIST/NSA) and those not.

It's possible that trial and error was used for P256 until it landed in the vulnerable subset.  I think that is implicit in the safecurves ... rigidity page.  No?

Now, suppose I were tasked to find my own curve so as to minimize the chance of landing in the vulnerable set.

Choose a curve with small coefficients?  Since I do not know what the vulnerable set is, and rightly want to assume zero knowledge about it, I cannot presume that small coefficients protect me from the hypothetical secret attack, unless I presume nonzero knowledge of the vulnerable set.  [Sorry for the tedious redundancy.]

Granted: small coefficients would help me persuade others that I did not select the curve maliciously by exhaustive search.

What about a pseudorandom curve? More precisely, one whose coefficients are derived from PRF(rigid-seed).

Assuming independence of the vulnerable set from the PRF and rigid-seed, and the pseudorandomness of the PRF, it seems that the pseudorandom curve lands in the vulnerable subset with probability p, where p is the density of the vulnerable subset within the larger set of curves from which one draws.
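As a concrete sketch of "coefficients derived from PRF(rigid-seed)" (hypothetical throughout: the field, hash, and rejection rule below are illustrative, not the actual NIST or Brainpool procedure):

```python
# Hypothetical sketch of "coefficients = PRF(rigid-seed)": hash
# seed || counter to get a candidate b for y^2 = x^3 - 3x + b over F_p,
# rejecting singular curves. NOT the real NIST/Brainpool derivation.
import hashlib

p = 2**127 - 1          # toy Mersenne prime field, not a standard curve field
a = p - 3               # a = -3 mod p, the common short-Weierstrass choice

def derive_b(seed: bytes) -> int:
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        b = int.from_bytes(digest, "big") % p
        # Nonsingular iff the discriminant term 4a^3 + 27b^2 != 0 mod p
        if (4 * pow(a, 3, p) + 27 * pow(b, 2, p)) % p != 0:
            return b
        counter += 1
```

Anyone holding the seed can rerun the derivation and check the published coefficient, which is the "verifiable" part of the pseudorandom approach.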

Under this hypothesis, the manipulation of P256 would have been expected to take on the order of 1/p trials.
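The "order of 1/p trials" figure is just the mean of a geometric distribution; a quick simulation (with a made-up density, purely for illustration) confirms it:

```python
# Toy check that rejection sampling until a "vulnerable" curve appears
# takes about 1/p_vuln trials on average -- the geometric-distribution
# mean behind the "order of 1/p trials" estimate. p_vuln is made up.
import random

def trials_until_hit(p_vuln: float, rng: random.Random) -> int:
    n = 1
    while rng.random() >= p_vuln:
        n += 1
    return n

rng = random.Random(0)                     # fixed seed for reproducibility
p_vuln = 2.0**-10                          # illustrative density, 1/1024
samples = [trials_until_hit(p_vuln, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)         # lands near 1/p_vuln = 1024
```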

One can plug in any value for p, based on whatever assumptions one thinks are reasonable.

For example, my personal belief had been p = 2^(-128), perhaps because I was super biased toward, and committed to, ECC.  In that case, the manipulation phase for NIST P256 would be infeasible, and I could dismiss this whole rigidity-v-pseudorandom issue as didactic, distraction, irrelevant, baseless, stalling, bogus, moot, FUD, or whatever.

But I recognize that maybe I am too optimistic.  Others have suggested p = 2^(-30), and I am listening to them.  Then a malicious NIST P256 would be very feasible.  In my view, the resulting assurance provided by pseudorandomness (a residual vulnerability probability of 2^(-30)) seems a little low, but then maybe I am just not yet used to thinking this way.  Others could say that any public key algorithm always has a similar risk of new attacks being discovered, and therefore that this risk, if not actually acceptable, is unavoidable.

What about small coefficient curves?  They don't have the probability-p argument for avoiding the vulnerable set, but they do have some legitimate security arguments: (1, resp.) no attacks are known, and (2, resp.) known attacks do not seem related to coefficient size. As I see it, these arguments assume more about the hypothetical secret attacks than the pseudorandom NIST curves do: (1, resp.) either the attacks do not exist (but that puts us back to negligible p), or (2, resp.) the hypothetical secret attack is similar enough to existing known attacks that it cannot possibly depend on coefficient size.  The latter argument has tremendous merit in that it is a very plausible assumption, but pseudorandom curves do not rely on this assumption.  Again, it presumes nonzero knowledge about the secret attack (beyond the partition of the set of curves into vulnerable and non-vulnerable).  No?

It's certainly possible to vilify me for suggesting the possibility of a small-coefficient attack, to call it baseless, etc., but what I'm saying is that pseudorandom curves do not rely on this.  Instead they rely on more falsifiable assumptions, e.g. the effectiveness of the PRF.

This general strategy of reasoning to avoid unknown attacks is used in the field of "provable security" and also seems to be used in the "rigidity" page of the safecurves strategy.

I've tried to explain this before on the TLS list.  In doing so, I was reconsidering, or setting aside, my own beliefs and preferences, to try to engage the rigidity arguments.

A point to note here: brainpoolP256r1 has a _very_ weak twist (ρ @ 2^44) - a practical problem only in a flawed implementation that doesn't check for invalid points. But here nevertheless, we actually have a curve generated in a documented pseudorandom manner, yet it has at least one desirable property distinctly less secure than all of its tested peers.

Why? Bad luck, it seems.

Thanks for pointing this out, I hadn't noticed it before.
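The "check for invalid points" that the weak-twist issue hinges on can be sketched in a few lines (a minimal illustration with toy numbers; real code must also check the point's order / subgroup membership):

```python
# Minimal invalid-point check: accept a received point (x, y) only if it
# satisfies the intended curve equation y^2 = x^3 + a*x + b (mod p).
# Skipping this lets an attacker substitute a point on a possibly weak
# twist of the curve. Values in the test are toy numbers, not a real curve.
def on_curve(x: int, y: int, a: int, b: int, p: int) -> bool:
    if not (0 <= x < p and 0 <= y < p):
        return False
    return (y * y - (x * x * x + a * x + b)) % p == 0
```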

I for one prefer sound, well-explained, reproducibly verifiable curve design which resists all known attacks, to rolling the dice and hoping it resists a hypothetical attack that we know nothing about.

Don't quite follow, based on my understanding above.

Also, publishing a seed and curve-PRF is verifiable and reproducible.

I guess that you refer to the NIST seeds being not well-explained.

The Brainpool seeds certainly seem to be.

Hence my preference for the "Safecurves" approach, and thus the "Chicago"


That some of them can be (and have been) implemented in an extremely efficient way, and thus are exceptionally well-suited for the important task of accelerating wide adoption of forward security in internet protocols, I also find a highly desirable point in their favour - as, it seems, do many others.

I can find no remotely compelling argument against them.

People really want fast, because it reduces cost.

But they also want secure.

Fast and secure are both important, but orthogonal.

People need to balance both sides.

I had wanted to focus on the security side, to get a thorough understanding of that side.

Granted, I may be taking too narrow a focus, from an engineering viewpoint, where action is the order of the day, not reflection.  I was hoping CFRG would be more receptive to a (hypothetical) security focus.

Anyway, once one has a thorough understanding on both sides, one can make a balanced decision.



