Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom

Alyssa Rowan <> Tue, 14 January 2014 21:01 UTC

Date: Tue, 14 Jan 2014 21:01:02 +0000
From: Alyssa Rowan <>

On 14/01/2014 16:51, Dan Brown wrote:

>> With zero knowledge of a hypothetical attack, we have zero 
>> knowledge about what may, or may not, be affected by it.
> We have zero knowledge that Curve25519 resists the hypothetical 
> attack for which P256 has hypothetically been manipulated to be 
> affected by. [Sorry to grammarians!]

Of course, in truth, we have zero actual knowledge about any such
attack, full stop - including whether it actually exists, or whether
P256 was actually manipulated to be affected by it.

> So, what is rigidity is claiming to resist, under your reasoning?

Simply put, rigidity assures "nothing up my sleeve".

For example, using the Safecurves criteria, three separate groups of
people independently came up with the exact same parameters for E-521.
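To make that concrete: a fully rigid choice can be re-derived by anyone from the stated criteria alone, leaving no hidden degrees of freedom. A minimal sketch in Python (standard library only; the search for the curve constant itself is omitted, as noted in the comments):

```python
# Hedged sketch: one rigid criterion behind E-521 is that the field prime
# is the Mersenne prime 2^521 - 1 - a choice anyone can re-derive from the
# criteria, with nothing up anyone's sleeve.
import random

def is_probable_prime(n, rounds=32):
    """Miller-Rabin probabilistic primality test."""
    small_primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n < 2:
        return False
    for q in small_primes:
        if n % q == 0:
            return n == q
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

p = 2**521 - 1  # the E-521 field prime
assert is_probable_prime(p)
# The Edwards constant (d = -376014 for E-521) is then the value of
# smallest magnitude meeting the published security criteria; that
# search is omitted here, since it requires point counting.
```

Three groups running the same deterministic search land on the same curve, which is the whole point.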

> It's possible that trial and error was used for P256 until landed 
> in the vulnerable subset.

I gather that the SEC team at Certicom at the time _did_ perform a
trial-and-error search of SHA-1 seeds so that the resulting outputs
would meet certain criteria.

My information suggests that the criteria for selection were the
applicability of Certicom-patented techniques for fast arithmetic over
the curves so generated.

I have not been able to uncover any evidence that they were
intentionally weakened - although sadly, due to the breathtakingly
exceptional opacity of the process, I am unable to comprehensively
rule it out.

We do know they were working closely with NSA during the affected time
(we don't know to which end). We definitely know the NSA have had a
line item on their SIGINT Enabling Project's budget specifically about
influencing public-key standards in a way which makes them amenable to
attacks (although we don't know whether it did at the relevant time,
or what that precisely referred to).

The rest, sadly, is probably lost in some file somewhere. You may very
well be in a better position to uncover their precise provenance than
I (in the absence of nothing-up-my-sleeve numbers, I am not sure to
what degree it would actually assure anyone, but it would still be
nice to have some idea).

Happily, the "Chicago" curves have no such shadow: they are easier to
implement well, and it is possible to make them extremely fast - which
will strongly aid the many parties wanting to deploy forward secrecy
(an important defence against nation-state adversaries performing mass
surveillance). They make a pretty damn good replacement.

> Now, suppose I were tasked to find my own curve to minimize the 
> chance of landing in the vulnerable set?

With no knowledge of the vulnerable set, any such choice is, sadly, blind.

We can, of course, guess. We may or may not be right.

If we guess that such an attack perhaps resembles an existing attack
but with a new twist (if you'll pardon the pun!), then probably the
best we can do to avoid it is to avoid existing attacks in a
conservative manner, and thoroughly explain any such heuristic.

The question is simply whether our heuristic guesses are better than
random at avoiding an unknown attack, worse, or equivalent.

Obviously, we don't know which. But the strategy seems sensible.

>> [brainpoolP256r1 weak twist] Why? Bad luck, it seems.
> Thanks for the pointing this out, I hadn't noticed it before.

It's simply bad luck. brainpoolP384r1 doesn't have the same issue.
Neither, actually, does secp256r1, although secp224r1 does.

I presume it simply wasn't considered under the selection criteria or
thought of as relevant at the time.

That feeds into my point, however: with no thought that such a thing
might be relevant, pseudorandomness in fact did not protect
brainpoolP256r1 against such a flaw.
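The twist point is easy to state concretely: over F_p, a short-Weierstrass curve and its quadratic twist share out 2p + 2 points between them, so a good curve order implies nothing about the twist order - and an attacker feeding in invalid points can land on the twist. A toy brute-force illustration (the small prime and coefficients are arbitrary choices of mine, purely for demonstration):

```python
def count_points(p, a, b):
    """Brute-force order of y^2 = x^3 + a*x + b over F_p,
    including the point at infinity."""
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    total = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        total += squares.get(rhs, 0)
    return total

p, a, b = 97, 2, 3  # toy parameters, not a real curve
# Build the quadratic twist y^2 = x^3 + a*c^2*x + b*c^3
# using any quadratic non-residue c.
residues = {x * x % p for x in range(1, p)}
c = next(v for v in range(2, p) if v not in residues)
n_curve = count_points(p, a, b)
n_twist = count_points(p, (a * c * c) % p, (b * c * c * c) % p)
assert n_curve + n_twist == 2 * p + 2  # the orders always sum to 2p + 2
```

A curve can therefore have near-prime order while its twist order is smooth - which is exactly the brainpoolP256r1 situation.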

>> I for one prefer sound, well-explained, reproducibly verifiable 
>> curve design which resists all known attacks, to rolling the
>> dice and hoping it resists a hypothetical attack that we know
>> nothing about.
> Don't quite follow, based on my understanding above. Also, 
> publishing a seed and curve-PRF, is verifiable, and reproducible.

Naturally, provided the seeds are chosen in a verifiable manner.

I think if we start rolling the dice repeatedly until we get numbers
we'd like, we may as well just use what we know to pick good ones via
defined criteria.
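For clarity on what "verifiable" buys here, a hedged sketch of the general seed-to-parameter shape (not the exact NIST or Brainpool procedure - SHA-256 and the sample prime are illustrative assumptions of mine):

```python
import hashlib

def candidate_b_from_seed(seed: bytes, p: int) -> int:
    """Sketch of seed-to-parameter derivation in the NIST/Brainpool
    style: hash a published seed and reduce it into the field. The
    real procedures differ in detail, but the verifiability property
    is the same: anyone can recompute the candidate from the seed."""
    digest = hashlib.sha256(seed).digest()
    return int.from_bytes(digest, "big") % p

p = 2**255 - 19  # example field prime, for illustration only
b1 = candidate_b_from_seed(b"published seed", p)
b2 = candidate_b_from_seed(b"published seed", p)
assert b1 == b2  # reproducible: the seed pins down the candidate
# ...but nothing pins down the seed itself: a designer free to retry
# seeds until the output lands in a preferred subset defeats the
# assurance, which is the dice-rolling concern above.
```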

But it seems like we simply disagree on that point - you prefer a
random approach, others including me prefer a solidly-explained
heuristic approach.

We are probably unlikely to reach consensus on that point, although
there's nothing stopping us having it both ways.

> I guess that you refer to the NIST seeds being not well-explained. 
> Brainpool certainly seeds seem to be.

I don't dispute that. It's a slightly odd process, but it's explained,
and seems to have been done that way to mirror the SECP process.

>> I can find no remotely compelling argument against them.
> People really want fast, because it reduces cost. But they also 
> want secure. Fast and secure are both important, but orthogonal. 
> People need to balance both sides.

Orthogonal, but not necessarily always _opposites_. Slow things are
not necessarily insecure; fast things not necessarily weak; not all
secure things are slow; not all insecure things are fast.

They are both separate, desirable criteria. I am familiar with a
number of primitives which have the unenviable properties of being
weak _and_ slow, as I am sure are you.

The only areas where they clearly fall into direct conflict are for
example where more rounds would make a function slower but more
secure, or where all other things being equal, a larger field or key
size would be more resistant to an attack than a smaller one. (Hence
the specification of multiple curves, to trade off performance and
size for estimated security margin.)

Here, we appear to have the rare and treasured opportunity of a useful
set of primitives which are both secure against all known and
reasonably conjectured attacks over the foreseeable future _and_ fast
- when we need it most. I think we should take that opportunity:
because quite honestly, many aren't going to wait any longer than they
have to, and these curves are in fact already being used in the wild
in several applications.

> I had wanted to focus on the security side, to get a thorough 
> understanding of that side. Granted, I may be taking a too narrow 
> focus, from an engineering viewpoint, where action is the order of 
> the day, not reflection.  I was hoping CFRG to be more receptive
> to a (hypothetical) security-focus. Anyway, once one has a
> thorough understanding on both sides, one can make a balanced
> decision.

You are of course entirely welcome to come up with your own open
criteria and/or generate pseudorandom curves according to those
criteria; it appears that you may wish to do this. From the sounds of
it so far, it would end up broadly like Brainpool - pseudorandom
Edwards curves, not optimised for efficient implementation, but
selected to avoid known attacks, with the hope of avoiding unknown
ones as well.

Any such draft can of course be discussed separately, should you wish
to put it forward. Whether anyone would wish to use them is up to them.
