Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom
"Igoe, Kevin M." <kmigoe@nsa.gov> Wed, 15 January 2014 17:01 UTC
From: "Igoe, Kevin M." <kmigoe@nsa.gov>
To: 'Watson Ladd' <watsonbladd@gmail.com>, David McGrew <mcgrew@cisco.com>
Thread-Topic: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom
Date: Wed, 15 Jan 2014 17:00:42 +0000
Message-ID: <3C4AAD4B5304AB44A6BA85173B4675CABA9A460F@MSMR-GH1-UEA03.corp.nsa.gov>
References: <20140113230750.6111382.6841.8590@certicom.com> <52D48450.3070701@akr.io> <810C31990B57ED40B2062BA10D43FBF5C1F190@XMB116CNC.rim.net> <52D59C35.10807@cisco.com> <7BAC95F5A7E67643AAFB2C31BEE662D018B7ED7B6F@SC-VEXCH2.marvell.com> <CACsn0ckvQGGG9Fvif25DGVYvseaB1WHC27UoTs2Xpv4hncf8gw@mail.gmail.com> <52D68AA9.1020509@cisco.com> <CACsn0c=V3TpULhGfC18iqwGQQJjV4oLMO957CAD++mSvi_7t9w@mail.gmail.com>
In-Reply-To: <CACsn0c=V3TpULhGfC18iqwGQQJjV4oLMO957CAD++mSvi_7t9w@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
Cc: Dan Brown <dbrown@certicom.com>, "cfrg@irtf.org" <cfrg@irtf.org>
Subject: Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom
> -----Original Message-----
> From: Cfrg [mailto:cfrg-bounces@irtf.org] On Behalf Of Watson Ladd
> Sent: Wednesday, January 15, 2014 11:20 AM
> To: David McGrew
> Cc: Dan Brown; cfrg@irtf.org
> Subject: Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom
>
> On Wed, Jan 15, 2014 at 5:18 AM, David McGrew <mcgrew@cisco.com> wrote:
> > Hi Watson,
> >
> > On 01/14/2014 08:40 PM, Watson Ladd wrote:
> >>
> >> On Tue, Jan 14, 2014 at 5:33 PM, Paul Lambert <paul@marvell.com> wrote:
> >>>
> >>> From: Cfrg [mailto:cfrg-bounces@irtf.org] On Behalf Of David McGrew
> >>> Sent: Tuesday, January 14, 2014 12:21 PM
> >>> To: Dan Brown
> >>> Cc: 'cfrg@irtf.org'
> >>> Subject: Re: [Cfrg] [CFRG] Safecurves v Brainpool / Rigid v Pseudorandom
> >>>
> >>> Hi Dan,
> >>>
> >>> Thanks for sharing your thoughts.  It seems to me that the arguments
> >>> in favor of rigid curves come down to the "nothing up my sleeve"
> >>> argument: if there are no free parameters to be chosen when
> >>> constructing an elliptic curve group, then there is no need to trust
> >>> the person doing the construction.
> >>>
> >>> There is another consideration: the cost of attacking multiple
> >>> instances of the EC discrete log problem can be amortized across
> >>> multiple such instances, using techniques such as Kuhn and Struik's
> >>> "Random Walks Revisited: Extensions of Pollard’s Rho Algorithm for
> >>> Computing Multiple Discrete Logarithms", or a time-memory tradeoff
> >>> like Shanks' baby-step giant-step method, in which the number of
> >>> stored elements exceeds the square root of the group size.  Some
> >>> users may be motivated by the logic that by avoiding a particular
> >>> well-known curve, they prevent potential attackers from amortizing
> >>> the cost of finding their private key within a batch of other
> >>> attacks.
> >>> If the cost of solving a single discrete logarithm problem is
> >>> within the attacker's budget, then these considerations would come
> >>> into play.  It would then be valuable to have a way to generate
> >>> multiple curves, which could be pseudorandom and verifiable, or
> >>> perhaps could be sequential and based on the "smallest number
> >>> greater than p" sort of logic.
> >>
> >> No one uses time-memory tradeoffs: why have silicon store data it
> >> can work on?
> >> This also misstates the issue.  Parallel search for AES-128 keys
> >> cuts the work by the number of keys, and the time to the first key
> >> falls linearly with the number of keys.
> >> Parallel rho isn't any faster for the first answer: the remaining
> >> answers come out fast, but getting the first one still takes the
> >> same time.
> >
> > Right, which is why I said "If the cost of solving a single discrete
> > logarithm problem is within the attacker's budget, then these
> > considerations would come into play."  So these considerations would
> > be more valid at an 80-bit security level than a 256-bit security
> > level, say.
>
> Well, the only reason to use an 80-bit security level is performance,
> and really a marginal amount at that: TLS doesn't have 80-bit ECC, for
> example, and the existing 80-bit asymmetric algorithms are slower than
> Curve25519.
>
> But when you send someone a curve and ask them to use it, they have to
> make the following verifications:
> 1) They have to check that the modulus is prime.
> 2) They have to check the order is what you say it is, or find it.
> Finding it involves a gigantic table; checking an exponentiation will
> test some exceptional cases.

Typically the specification of a cryptographic group requires you to
provide an element G of prime order q, generating the cryptographic
subgroup of order q.  (q must, of course, be sufficiently large.)  This
makes life easier.
You need to check:
  a) q is prime
  b) G is indeed on the curve
  c) G != Identity_element
  d) q*G == Identity_element
You might want to require that G be generated in a predetermined way.
In theory, all non-identity elements of a prime order subgroup are
effectively interchangeable, but in practice it is conceivable that the
security analysis of protocols using the curve might be simplified if G
is "rigid".

> 3) They have to check for additive and multiplicative embeddings,
> which involves a lot of modular exponentiation.
> 4) They have to check the order is prime.
>
> Put all of this together, and the performance gain is gone.
> Furthermore, batch calculations don't work anymore.
> Per-system parameters don't give the anti-attackability gain.
>
> Sincerely,
> Watson
>
> >> The fact that this is stated clearly in the paper cited makes me
> >> doubt whether the person citing the paper bothered to read it.
> >
> > Comments like this run the risk of making the environment on this
> > list less friendly.
> >
> > regards,
> >
> > David
> >
> >> Sincerely,
> >> Watson Ladd
> >>>
> >>> [⨳] Interesting … so we could have a curve-of-the-day site.  Order
> >>> is hard to create, but could be generated and used by many.
> >>>
> >>> Paul
> >>>
> >>> In any case, the suggestion to document the rationale behind
> >>> elliptic curve parameter choices is a good one!
> >>>
> >>> David
> >>>
> >>> On 01/14/2014 11:51 AM, Dan Brown wrote:
> >>>
> >>> -----Original Message-----
> >>> From: Alyssa Rowan
> >>> Sent: Monday, January 13, 2014 7:27 PM
> >>>
> >>> On 13/01/2014 23:07, Dan Brown wrote:
> >>>
> >>> For a given field, pseudorandom curve better resists secret attacks
> >>> than rigidity.
> >>>
> >>> With respect, I disagree.
> >>>
> >>> With zero knowledge of a hypothetical attack, we have zero
> >>> knowledge about what may, or may not, be affected by it.
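[Editor's sketch: Igoe's four generator checks earlier in this message (a: q prime, b: G on the curve, c: G != identity, d: q*G == identity) can be run concretely. The curve below is a small textbook example — y^2 = x^3 + 2x + 2 over GF(17), whose full group has prime order q = 19 — used purely for illustration, not a real standard curve.]

```python
def is_prime(n):
    """Trial division; fine for a toy q, not for real curve orders."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def inv_mod(k, p):
    return pow(k, p - 2, p)  # Fermat inverse; valid since p is prime

def on_curve(P, a, b, p):
    """Check y^2 == x^3 + a*x + b (mod p); None is the identity."""
    if P is None:
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

def ec_add(P, Q, a, p):
    """Affine point addition; None represents the identity element."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = identity
    if P == Q:
        m = (3 * x1 * x1 + a) * inv_mod(2 * y1 % p, p) % p
    else:
        m = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Toy parameters: y^2 = x^3 + 2x + 2 over GF(17), q = 19, G = (5, 1).
p, a, b = 17, 2, 2
G, q = (5, 1), 19

checks = {
    "a) q is prime":        is_prime(q),
    "b) G is on the curve": on_curve(G, a, b, p),
    "c) G != identity":     G is not None,
    "d) q*G == identity":   ec_mul(q, G, a, p) is None,
}
for name, ok in checks.items():
    print(name, "PASS" if ok else "FAIL")
```

In a real implementation the primality test would be probabilistic (e.g. Miller-Rabin) and the arithmetic constant-time, but the four checks themselves are exactly the ones listed above.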
> >>> We have zero knowledge that Curve25519 resists the hypothetical
> >>> attack for which P256 has hypothetically been manipulated to be
> >>> affected by.  [Sorry to grammarians!]
> >>>
> >>> So, what is rigidity claiming to resist, under your reasoning?
> >>>
> >>> Under my reasoning, I assume that there is a set of curves,
> >>> partitioned into two subsets: curves vulnerable to a secret attack
> >>> known to the generator (NIST/NSA) and those not.
> >>>
> >>> It's possible that trial and error was used for P256 until it
> >>> landed in the vulnerable subset.  I think that is implicit in the
> >>> safecurves ... rigidity page.  No?
> >>>
> >>> Now, suppose I were tasked to find my own curve to minimize the
> >>> chance of landing in the vulnerable set?
> >>>
> >>> Choose a curve with small coefficients?  Since I do not know what
> >>> the vulnerable set is, and rightly want to assume zero knowledge
> >>> about it, I cannot presume that small coefficients protect me from
> >>> the hypothetical secret attack, unless I presume nonzero knowledge
> >>> of the vulnerable set.  [Sorry for the tedious redundancy.]
> >>>
> >>> Granted: small coefficients would help me persuade others that I
> >>> did not select the curve maliciously by exhaustive search.
> >>>
> >>> What about a pseudorandom curve?  More precisely, one where the
> >>> coefficients are derived from PRF(rigid-seed).
> >>>
> >>> Assuming independence of the vulnerable set from the PRF and
> >>> rigid-seed, and the pseudorandomness of the PRF, then it seems that
> >>> the pseudorandom curve lands in the vulnerable subset with
> >>> probability p, where p is the density of the vulnerable subset
> >>> within the larger set of curves from which one draws the curve.
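[Editor's sketch: the "coefficients derived from PRF(rigid-seed)" scheme Brown describes can be illustrated as below. This is a hypothetical toy derivation — SHA-256 in counter mode stands in for the PRF, and the seed, field, and candidate filtering are made up; it is not the actual NIST or Brainpool procedure.]

```python
import hashlib

def prf(seed: bytes, counter: int, nbytes: int) -> int:
    """Derive an integer from seed || counter; SHA-256 stands in for the PRF."""
    out = b""
    block = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")
                              + block.to_bytes(4, "big")).digest()
        block += 1
    return int.from_bytes(out[:nbytes], "big")

def derive_candidate_coeffs(seed: bytes, p: int):
    """Yield candidate (counter, a, b) for y^2 = x^3 + a*x + b over GF(p).
    A real procedure would test each candidate for the required security
    properties (prime group order, twist security, embedding degree, ...)
    and publish the first counter whose curve passes."""
    nbytes = (p.bit_length() + 7) // 8
    counter = 0
    while True:
        a = prf(seed, 2 * counter, nbytes) % p
        b = prf(seed, 2 * counter + 1, nbytes) % p
        if (4 * a ** 3 + 27 * b ** 2) % p != 0:  # reject singular curves
            yield counter, a, b
        counter += 1

# Anyone can re-run the derivation from the published seed and confirm the
# published coefficients match: that is the "verifiable" part of the scheme.
seed = b"example rigid seed"   # hypothetical seed
p = 2 ** 31 - 1                # toy prime field
print(next(derive_candidate_coeffs(seed, p)))
```

The point of the probability-p argument in the text is that an honest derivation like this gives the generator no control: manipulating the output toward a vulnerable subset of density p costs on the order of 1/p seed trials.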
> >>> Under this hypothesis, the manipulation of P256 would have been
> >>> expected to take on the order of 1/p trials.
> >>>
> >>> One can plug in any value for p, based on whatever assumptions one
> >>> thinks are reasonable.
> >>>
> >>> For example, my personal belief had been p = 2^(-128), perhaps
> >>> because I was super biased toward, and committed to, ECC.  In that
> >>> case, the manipulation phase for NIST P256 would be infeasible, and
> >>> I could dismiss this whole rigidity v pseudorandom issue as
> >>> didactic, distraction, irrelevant, baseless, stalling, bogus, moot,
> >>> FUD, or whatever.
> >>>
> >>> But I recognize that maybe I am too optimistic.  Others have
> >>> suggested p = 2^(-30), and I am listening to them.  Then a
> >>> malicious NIST P256 would be very feasible.  In my view, the
> >>> resulting assurance provided by pseudorandomness, a probability of
> >>> 2^(-30), seems a little low, but then maybe I am just not yet used
> >>> to thinking this way.  Others could say that any public key
> >>> algorithm always has a similar risk of new attacks being
> >>> discovered, and therefore that this risk, if not actually
> >>> acceptable, is unavoidable.
> >>>
> >>> What about small coefficient curves?  They don't have the argument
> >>> of probability p of avoiding the vulnerable set, but they do have
> >>> some legitimate security arguments: (1, resp.) no attacks are
> >>> known, and (2, resp.) known attacks do not seem related to
> >>> coefficient size.  As I see it, these arguments assume more about
> >>> the hypothetical secret attacks than the NIST curves: (1, resp.)
> >>> either they do not exist (but that puts us back to negligible p),
> >>> or (2, resp.) the hypothetical secret attack is similar to existing
> >>> known attacks to the extent that it cannot possibly depend on
> >>> coefficient size.
The latter argument has tremendous merit in that > it is > >>> a > >>> very plausible assumption, but pseudorandom curves do not rely on > this > >>> assumption. Again, it presumes virtually nonzero knowledge about > the > >>> secret > >>> attack (other than the partition of the set of curves into > vulnerable and > >>> non-vulnerable). No? > >>> > >>> > >>> > >>> It's certainly possible to vilify me for suggesting the possibility > of a > >>> small coefficient attack, call it baseless, etc., but what I'm > saying is > >>> that pseudorandom curves do not rely this. Instead they rely on > more > >>> falsifiable assumptions, e.g. the effective of the PRF, etc. > >>> > >>> > >>> > >>> This general strategy of reasoning to avoid unknown attacks is used > in > >>> the > >>> field of "provable security" and also seems to be used in the > "rigidity" > >>> page of the safecurves strategy. > >>> > >>> > >>> > >>> I've a tried to explain this before on the TLS. In so doing, I > >>> reconsidering, or setting aside, my own beliefs and preferences, to > try > >>> to > >>> engage the rigidity arguments. > >>> > >>> > >>> > >>> > >>> > >>> A point to note here: brainpoolP256r1 has a _very_ weak twist (ρ @ > 2^44) > >>> - a > >>> > >>> practical problem only in a flawed implementation that doesn't > check for > >>> invalid > >>> > >>> points. But here nevertheless, we actually have a curve generated > in a > >>> > >>> documented pseudorandom manner, yet has at least one desirable > property > >>> > >>> distinctly less secure than all of its tested peers. > >>> > >>> > >>> > >>> Why? Bad luck, it seems. > >>> > >>> > >>> > >>> Thanks for the pointing this out, I hadn't noticed it before. > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> I for one prefer sound, well-explained, reproducibly verifiable > curve > >>> design > >>> > >>> which resists all known attacks, to rolling the dice and hoping it > >>> resists a > >>> > >>> hypothetical attack that we know nothing about. 
> >>> Don't quite follow, based on my understanding above.
> >>>
> >>> Also, publishing a seed and curve-PRF is verifiable and
> >>> reproducible.
> >>>
> >>> I guess that you refer to the NIST seeds being not well-explained.
> >>>
> >>> Brainpool seeds certainly seem to be.
> >>>
> >>> Hence my preference for the "Safecurves" approach, and thus the
> >>> "Chicago" curves.
> >>>
> >>> That some of them can be (and have been) implemented in an
> >>> extremely efficient way, and thus are exceptionally well-suited for
> >>> the important task of accelerating wide adoption of forward
> >>> security in internet protocols, I also find a highly desirable
> >>> point in their favour - as, it seems, do many others.
> >>>
> >>> I can find no remotely compelling argument against them.
> >>>
> >>> People really want fast, because it reduces cost.
> >>>
> >>> But they also want secure.
> >>>
> >>> Fast and secure are both important, but orthogonal.
> >>>
> >>> People need to balance both sides.
> >>>
> >>> I had wanted to focus on the security side, to get a thorough
> >>> understanding of that side.
> >>>
> >>> Granted, I may be taking too narrow a focus, from an engineering
> >>> viewpoint, where action is the order of the day, not reflection.
> >>> I was hoping CFRG would be more receptive to a (hypothetical)
> >>> security focus.
> >>>
> >>> Anyway, once one has a thorough understanding of both sides, one
> >>> can make a balanced decision.
> >>> _______________________________________________
> >>> Cfrg mailing list
> >>> Cfrg@irtf.org
> >>> http://www.irtf.org/mailman/listinfo/cfrg
>
> --
> "Those who would give up Essential Liberty to purchase a little
> Temporary Safety deserve neither Liberty nor Safety."
> -- Benjamin Franklin
>
> _______________________________________________
> Cfrg mailing list
> Cfrg@irtf.org
> http://www.irtf.org/mailman/listinfo/cfrg