Re: [Cfrg] ECC reboot (Was: When's the decision?)

Michael Hamburg <> Fri, 17 October 2014 21:27 UTC

From: Michael Hamburg <>
Date: Fri, 17 Oct 2014 14:27:46 -0700
To: Phillip Hallam-Baker <>
Subject: Re: [Cfrg] ECC reboot (Was: When's the decision?)

> On Oct 17, 2014, at 1:22 PM, Phillip Hallam-Baker <> wrote:
> On Thu, Oct 16, 2014 at 2:29 PM, Alyssa Rowan <> wrote:
>> I put forth that 2^521-1 also falls within scope. It's not very far
>> away, and it's a true Mersenne prime rather than a pseudo-Mersenne,
>> and they do not grow on trees - no others fall near our criteria (the
>> next lowest is 2^127-1 which is way too small, and the next biggest is
>> 2^607-1). They are very attractive - attractive enough for 4 (?)
>> independent research groups to independently arrive at E-521, and
>> SECG/NIST to have independently picked the same prime years ago for
>> secp521r1.
> It is a Mersenne but the performance advantage is compromised by it
> being larger than 512 bits. That means every memory operation is going
> to break stride, which is a really bad tradeoff for the marginal
> improvement in speed.

Why is this a bad tradeoff?  Will it lead to a more-than-marginal drop in speed, or is breaking stride bad for some other reason?

The best-performing implementations so far — at least the ones that use more than 4 machine words per element, and some that don't — have been those which store an n-bit number in more than n bits.  For example, Curve25519-Donna uses 320 bits to store a 255-bit number, and Curve41417 or Goldilocks or Ridinghood use 512 bits to store a 414- or 448- or 480-bit number.  This is why their primes are strictly less than 512 bits: so that they can fit in 512 bits with delayed carry propagation.

While doing a multiplication, of course, a vectorized 512-bit prime implementation would store at least 1024 bits and probably more.  But reducing back to 512 bits (and not, say, 576 bits) would come at a significant cost.
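
To make the representation concrete, here is a rough sketch (my own illustration, not any particular library's code) of a radix-2^51 element mod 2^255 - 19: five 51-bit limbs in five 64-bit words, so a 255-bit number takes 320 bits of storage, additions can defer their carries, and the column sums in a multiplication are far wider than the inputs before being reduced back down.

```python
# Sketch of a redundant radix-2^51 representation for p = 2^255 - 19,
# in the style of (but not copied from) curve25519-donna: five 51-bit
# limbs in five 64-bit words, i.e. 320 bits to store a 255-bit number.
P = 2**255 - 19
MASK = (1 << 51) - 1

def to_limbs(x):
    """Split x < p into five little-endian 51-bit limbs."""
    return [(x >> (51 * i)) & MASK for i in range(5)]

def from_limbs(limbs):
    """Recombine (possibly unreduced) limbs into an integer mod p."""
    return sum(v << (51 * i) for i, v in enumerate(limbs)) % P

def add(a, b):
    """Limb-wise addition; limbs may exceed 51 bits, carries are deferred."""
    return [x + y for x, y in zip(a, b)]

def mul(a, b):
    """Schoolbook multiply.  The nine column sums are ~107 bits wide before
    reduction, which is why the intermediate state is much wider than the
    320-bit inputs."""
    t = [0] * 9
    for i in range(5):
        for j in range(5):
            t[i + j] += a[i] * b[j]
    # Fold the high columns down using 2^255 = 19 (mod p).
    for k in range(8, 4, -1):
        t[k - 5] += 19 * t[k]
    # One carry pass back to five ~51-bit limbs.
    out, c = [], 0
    for i in range(5):
        v = t[i] + c
        out.append(v & MASK)
        c = v >> 51
    out[0] += 19 * c  # carry out of the top limb is a multiple of 2^255
    return out
```

The point is the 13 bits of headroom per limb: several additions can happen before any carry propagation is needed.  A prime of 512 bits or more leaves no such headroom in 512 bits of storage.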

>> [Previous discussion countering this point: Sean Parkinson @ RSA
>> suggested stepping over a power of 2 is "only going to hurt
>> performance in the future". Phillip Hallam-Baker also thought anything
>> that is not less than a clean multiple of a power of two "may cause
>> severe performance hits on future architectures", mentioning 512-bit
>> memory buses on graphics cards?!
> There is a good reason for that. What we call 'graphics cards' are
> really just what we used to call vector units.
> Getting someone to build purpose-built ECC accelerators is hard and
> expensive. I can buy an Nvidia card with a ridiculous number of cores
> that I can pretty much program to do anything I like.
> It's not just a question of speed, it's a question of the amount of
> microcode required to implement ECC math as a native function of the
> GPU.

Discrete graphics probably won’t be a common crypto platform, though I agree that AVX-512F will be in a few years.

>> although I'm not convinced that's
>> actually primarily relevant to an implementation of a high-strength
>> curve. We will, of course, evaluate performance of contenders in Phase
>> II, future architectures can be more-or-less anything that works well,
>> and performance implications usually aren't anything like so obvious…
>> Aren't Mersennes actually particularly _good_ performance-wise?]
> That depends on whether you are looking for a reason to include or exclude.
> At the ~512 level, what I am looking for is a curve that absolutely
> nobody is going to be able to suggest is suspect as being bongoed.
> 2^512 is a round number that needs no explanation. 2^521 isn’t.

I’m not sure what you mean by “bongoed”.  I do not believe that 2^521-1 has been struck rhythmically by hands, or has been played as a melody on bongos.

It is possible, if unlikely, that some mathematical property of Mersenne primes would make elliptic curves mod them weaker than mod other primes.  But at least 4 groups thought 2^521-1 was an obvious choice, and at least 3 of them came up with the same curve.  That’s about as unsuspicious as you’re going to get without some VPR performance art.

> The Web PKI will almost certainly be based exclusively on the ~512
> level curve. The roots certainly will. So the performance advantage of
> issuing end entity ~256 certs would be very small.

Just the way they use ~15kbit RSA keys now, eschewing ~3072-bit keys as too weak, right?

I looked at Mozilla’s included CAs.  There are four ECC certs there, all of them on the NIST secp384r1 curve.  So they apparently do not consider ~512 bits necessary, but if the only choices are 256 and 512 I suppose they will go with 512.

The RSA-based roots are mostly 2048 bits long, equivalent to roughly a 224-bit ECC key.  If the CAs consider this an acceptable level of security for hosts, I suspect that webmasters will be OK with ~256-bit curves.  That’s what Cloudflare uses for their Universal SSL, for example.

> Where I think ~256 keys will be used is for ephemeral mix ins. So I
> exchange a master session key M with the ~512 bit key, use an
> ephemeral ~256 to create a second session key E and then derive my
> encryption and authentication keys using a one way function on M and
> E. That preserves the ~512 work factor for confidentiality and adds a
> ~256 work factor for forward secrecy.
> So I am willing to be more flexible on ~256 than ~512 because for my
> applications the attacker always has to break the ~512.

Maybe for your applications, but that’s not how TLS’s ECDHE works.
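
(For what it’s worth, the mix-in Phillip describes would look something like the following.  The function names, labels, and the choice of SHA-512/HMAC are my own illustration, not a protocol definition.)

```python
# Illustrative sketch of the "ephemeral mix-in" described above: a master
# secret M from a ~512-bit-curve exchange and an ephemeral secret E from a
# ~256-bit-curve exchange are combined through a one-way function, so
# breaking confidentiality still requires breaking the ~512-bit exchange,
# while forward secrecy rests on the ~256-bit one.
import hashlib
import hmac

def derive_session_keys(master_secret: bytes, ephemeral_secret: bytes):
    """Combine M and E, then split the result into encryption/MAC keys."""
    prk = hashlib.sha512(master_secret + ephemeral_secret).digest()
    enc_key = hmac.new(prk, b"encryption", hashlib.sha512).digest()[:32]
    mac_key = hmac.new(prk, b"authentication", hashlib.sha512).digest()[:32]
    return enc_key, mac_key
```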

>> I put forth that 2^414-17 and 2^448-2^224-1 might fall outside "wiggle
>> room" there, although I do so very reluctantly as I think it's a shame
>> to exclude them on that basis if they have otherwise nice properties,
>> and they do seem to have very good performance for their strength.
> I agree. They are a little faster but we have to give up a lot of
> security to get that speed. It's not 64 bits, it's a 2^20 reduction in
> work factor. I want them bits.

I basically get this sentiment.  But just so we can be clear: why do you want them bits?  Is it just an insatiable craving, or does a round 512 sound better in product advertisements, or is there a foreseeable attack that the extra bits will protect you from?

If most CAs today are OK with security equivalent to 224-bit curves, but they're hedging to 384 bits, why are you convinced that 384 or 414 or 448 or 480 bits would not be enough?  And if they aren’t enough, why not go to 607 or 640 or 1024 bits?
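
For the arithmetic behind these comparisons: the best generic attack (Pollard’s rho) against a curve with an ~n-bit prime-order group costs on the order of 2^(n/2) group operations, so the security gaps are about half the field-size gaps.  A quick sanity check (ignoring small cofactor adjustments):

```python
# Rough generic-attack work factors: Pollard's rho against an ~n-bit-order
# elliptic-curve group costs on the order of 2^(n/2) group operations.
field_bits = {"Curve25519": 255, "Curve41417": 414,
              "Ed448-Goldilocks": 448, "E-521": 521}
for name, n in sorted(field_bits.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~2^{n // 2} group operations")
```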

>>> 31/10/14 (2 weeks from now): we agree on whatever benchmarking
>>> system we're going to use for performance measurements. (Right now,
>>> supercop seems like the front runner to me.)
> I think a significant performance difference is a tie breaker but not
> the best determinant. What I see as being convincing are:
> 1) Difficulty of screwing up the implementation (see Watson Ladd's post).
> 2) Legacy deployment.

The only new bigger-than-256-bit curve which would ease legacy transition is E-521.

> Given the way that I think we are likely to use ~256, and given the
> fact that DJB has established a large user base for the curve already,
> I am willing to suggest we just let him win on that one unless there
> is at least a 10%, maybe a 20%, speed advantage he missed.
> For ~512 I want the Platinum level security, whatever it takes.

Platinum is heavy and very, very expensive.  It resists corrosion phenomenally well, but not cutting.  Quite the metaphor.

— Mike