Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom

Dan Brown <dbrown@certicom.com> Tue, 14 January 2014 16:51 UTC

From: Dan Brown <dbrown@certicom.com>
To: "'akr@akr.io'" <akr@akr.io>, "'cfrg@irtf.org'" <cfrg@irtf.org>
Thread-Topic: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom
Thread-Index: Ac8QtEhLa4qQ887tT6aHk4eH+te9FQANPUQAABWJmQA=
Date: Tue, 14 Jan 2014 16:51:11 +0000
Message-ID: <810C31990B57ED40B2062BA10D43FBF5C1F190@XMB116CNC.rim.net>
References: <20140113230750.6111382.6841.8590@certicom.com> <52D48450.3070701@akr.io>
In-Reply-To: <52D48450.3070701@akr.io>
Accept-Language: en-CA, en-US
Subject: Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom
List-Id: Crypto Forum Research Group <cfrg.irtf.org>
List-Unsubscribe: <http://www.irtf.org/mailman/options/cfrg>, <mailto:cfrg-request@irtf.org?subject=unsubscribe>
List-Archive: <http://www.irtf.org/mail-archive/web/cfrg/>
List-Post: <mailto:cfrg@irtf.org>
List-Help: <mailto:cfrg-request@irtf.org?subject=help>
List-Subscribe: <http://www.irtf.org/mailman/listinfo/cfrg>, <mailto:cfrg-request@irtf.org?subject=subscribe>

> -----Original Message-----
> From: Alyssa Rowan
> Sent: Monday, January 13, 2014 7:27 PM
> 
> On 13/01/2014 23:07, Dan Brown wrote:
> 
> > For a given field, pseudorandom curve better resists secret attacks
> > than rigidity.
> 
> With respect, I disagree.
> 
> With zero knowledge of a hypothetical attack, we have zero knowledge about
> what may, or may not, be affected by it.
> 

We have zero knowledge that Curve25519 resists the hypothetical attack to which P256 has, hypothetically, been manipulated to be vulnerable.

So, under your reasoning, what is rigidity claiming to resist?

Under my reasoning, I assume that there is a set of curves, partitioned into two subsets: curves vulnerable to a secret attack known to the generator (NIST/NSA), and curves that are not.

It's possible that trial and error was used for P256 until it landed in the vulnerable subset.  I think that is implicit in the SafeCurves rigidity page.  No?

Now, suppose I were tasked with finding my own curve so as to minimize the chance of landing in the vulnerable subset.

Choose a curve with small coefficients?  Since I do not know what the vulnerable set is, and rightly want to assume zero knowledge about it, I cannot presume that small coefficients protect me from the hypothetical secret attack, unless I presume nonzero knowledge of the vulnerable set.  [Sorry for the tedious redundancy.]

Granted: small coefficients would help me persuade others that I did not select the curve maliciously by exhaustive search.

What about a pseudorandom curve?  More precisely, one whose coefficients are derived as PRF(rigid-seed).

Assuming independence of the vulnerable set from the PRF and the rigid seed, and assuming the pseudorandomness of the PRF, it seems that the pseudorandom curve lands in the vulnerable subset with probability p, where p is the density of the vulnerable subset within the larger set of curves from which the curve is drawn.
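As a concrete illustration only (a minimal sketch, not the actual NIST or Brainpool generation procedure), the derivation could look something like the following, with SHA-256 standing in for the PRF and a hypothetical seed string:

    # Minimal sketch of PRF(rigid-seed) -> candidate curve coefficient.
    # SHA-256 stands in for the PRF; the seed is a hypothetical placeholder;
    # the prime is the NIST P-256 field prime.  Not the NIST or Brainpool procedure.
    import hashlib

    P256_PRIME = 2**256 - 2**224 + 2**192 + 2**96 - 1

    def derive_coefficient(seed: bytes, counter: int = 0) -> int:
        """Derive one candidate coefficient from the seed, reduced mod the field prime."""
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        return int.from_bytes(digest, "big") % P256_PRIME

    print(hex(derive_coefficient(b"hypothetical-rigid-seed")))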

Under this hypothesis, the manipulation of P256 would have been expected to take on the order of 1/p trials.  

One can plug in any value for p, based on whatever assumptions one thinks are reasonable.

For example, my personal belief had been p = 2^(-128), perhaps because I was super biased toward, and committed to, ECC.  In that case, the manipulation phase for NIST P256 would be infeasible, and I could dismiss this whole rigidity v pseudorandom issue as didactic, a distraction, irrelevant, baseless, stalling, bogus, moot, FUD, or whatever.

But I recognize that maybe I am too optimistic.  Others have suggested p = 2^(-30), and I am listening to them.  Then a malicious NIST P256 would be very feasible.  In my view, the resulting assurance provided by a pseudorandom curve, namely a 2^(-30) probability of being vulnerable, seems a little low, but then maybe I am just not yet used to thinking this way.  Others could say that any public-key algorithm always carries a similar risk of new attacks being discovered, and therefore that this risk, if not actually acceptable, is unavoidable.
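For a rough sense of scale (my own back-of-the-envelope arithmetic, nothing more), the expected number of trial curves the manipulator would need is about 1/p for the two densities just mentioned:

    # Rough scale of the expected manipulation effort, ~1/p trials,
    # for the two densities of the vulnerable subset discussed above.
    for exp in (128, 30):
        trials = 2 ** exp  # expected number of trials when p = 2^-exp
        print(f"p = 2^-{exp}: roughly {float(trials):.2e} trial curves")
    # p = 2^-128 -> ~3.4e38 trials (infeasible); p = 2^-30 -> ~1.1e9 (very feasible)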

What about small-coefficient curves?  They don't have the pseudorandomness argument of landing in the vulnerable subset with probability only p, but they do have some legitimate security arguments: (1) no attacks are known, and (2) known attacks do not seem related to coefficient size.  As I see it, these arguments assume more about the hypothetical secret attack than the pseudorandom-curve argument does: respectively, (1) that the secret attack does not exist (but that puts us back to a negligible p), or (2) that the hypothetical secret attack is similar enough to existing known attacks that it cannot possibly depend on coefficient size.  The latter argument has tremendous merit in that it is a very plausible assumption, but pseudorandom curves do not rely on this assumption.  Again, it presumes nonzero knowledge about the secret attack (other than the partition of the set of curves into vulnerable and non-vulnerable).  No?

It's certainly possible to vilify me for suggesting the possibility of a small-coefficient attack, to call it baseless, etc., but what I'm saying is that pseudorandom curves do not rely on this assumption.  Instead they rely on more falsifiable assumptions, e.g. the effectiveness of the PRF.

This general strategy of reasoning about how to avoid unknown attacks is used in the field of "provable security", and it also seems to be used on the "rigidity" page of the SafeCurves site.

I have tried to explain this before on the TLS list.  In doing so, I was reconsidering, or setting aside, my own beliefs and preferences, in order to engage the rigidity arguments.

> 
> A point to note here: brainpoolP256r1 has a _very_ weak twist (ρ @ 2^44) - a
> practical problem only in a flawed implementation that doesn't check for invalid
> points. But here nevertheless, we actually have a curve generated in a
> documented pseudorandom manner, yet it has at least one desirable property
> distinctly less secure than all of its tested peers.
> 
> Why? Bad luck, it seems.

Thanks for pointing this out; I hadn't noticed it before.

> 
> 
> I for one prefer sound, well-explained, reproducibly verifiable curve design
> which resists all known attacks, to rolling the dice and hoping it resists a
> hypothetical attack that we know nothing about.

I don't quite follow, based on my understanding above.

Also, publishing a seed and a curve-PRF makes the curve verifiable and reproducible.

I guess that you are referring to the NIST seeds not being well explained.

The Brainpool seeds certainly seem to be.
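To sketch what I mean by verifiable and reproducible (hypothetical seed and coefficient, SHA-256 again standing in for the published curve-PRF), anyone can rerun the PRF on the published seed and compare with the published coefficient:

    # Sketch of "verifiable and reproducible": recompute PRF(seed) and check it
    # against the published coefficient.  Seed and coefficient are hypothetical;
    # the derivation mirrors the earlier sketch.
    import hashlib

    P256_PRIME = 2**256 - 2**224 + 2**192 + 2**96 - 1

    def verify_published_coefficient(seed: bytes, counter: int, published_b: int) -> bool:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        return int.from_bytes(digest, "big") % P256_PRIME == published_b

    # e.g. verify_published_coefficient(b"hypothetical-rigid-seed", 0, claimed_b)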

> 
> Hence my preference for the "Safecurves" approach, and thus the "Chicago"
> curves.
> 
> That some of them can be (and have been) implemented in an extremely
> efficient way, and thus are exceptionally well-suited for the important task of
> accelerating wide adoption of forward security in internet protocols, I also find
> a highly desirable point in their favour - as, it seems, do many others.
> 
> I can find no remotely compelling argument against them.
 
People really want fast curves, because speed reduces cost.

But they also want secure curves.

Fast and secure are both important, but orthogonal.

People need to balance both sides.

I had wanted to focus on the security side, to get a thorough understanding of that side.

Granted, I may be taking too narrow a focus from an engineering viewpoint, where action, not reflection, is the order of the day.  I was hoping CFRG would be more receptive to a (hypothetical) security focus.

Anyway, once one has a thorough understanding of both sides, one can make a balanced decision.
   
