Re: [Cfrg] Elliptic Curves - signature scheme: randomised or not (ends on May 13th)

Watson Ladd <watsonbladd@gmail.com> Thu, 07 May 2015 04:37 UTC


On Wed, May 6, 2015 at 8:39 AM, Dan Brown <dbrown@certicom.com> wrote:
> Professor Bernstein,
>
> Thank you for the informative comments on this tricky topic: persuasive as usual.
>
> Accordingly, I now amend my proposal to
> k = F(m,a,t)
> where
> F is some to-be-specified cryptographically strong deterministic function,
> m is the message,
> a is the actual long-term (signing) key, and
> t is a temporal value (tweak/nonce), preferably secret and unpredictable, or at least non-repeating, optionally fixed (including empty: reducing the signature scheme to an outright deterministic signature scheme), and not under any influence from an adversary.

I think that if we worry about kleptography, t should not be used.
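For concreteness, Dan's F(m,a,t) above could be instantiated, purely as an illustration (F is still to-be-specified, so this is an assumption, not his design), as an HMAC keyed with the long-term key:

```python
import hashlib
import hmac

# Ed25519 group order, used here only as a concrete example modulus.
L = 2**252 + 27742317777372353535851937790883648493

def derive_k(m: bytes, a: bytes, t: bytes = b"") -> int:
    """Hypothetical instantiation of F(m,a,t): HMAC-SHA-512 keyed with the
    long-term signing key a, over the message m and tweak t. t = b"" gives
    the 'optionally fixed (including empty)' fully deterministic case.
    A real design would length-prefix m and t to avoid ambiguity."""
    digest = hmac.new(a, m + t, hashlib.sha512).digest()
    # Reducing a 512-bit value mod the ~252-bit order leaves negligible
    # bias, the same oversampling trick EdDSA relies on.
    return int.from_bytes(digest, "big") % L
```

The same (m,a,t) always yields the same k; varying t re-randomizes it without ever exposing a raw RNG output as k.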

>
> When k is used in signatures, this approach could be called tweakably deterministic signatures, although I would prefer to call it message-dispersed signatures (to avoid the silly pitfall I mentioned of over-generalizing the value of determinism in key generation).
>
> The changes to my previous proposal have brought it closer to deterministic signatures than to purely randomized signatures. The tweak value t helps to address some of the potential pitfalls of purely deterministic signatures that I listed in my previous email.  Please let me know if you think the cost of tweaking outweighs the gains I described.
>
> So, I'm not intending to be critical of deterministic signatures, or to support/defend past pure PRNG-based or underspecified randomized signatures, but rather to be in support of something slightly securer than either, by combining the good features of both randomized and deterministic signatures.  Yet more reasons and responses in-line below.  So, please take them with a grain of salt, and read them in the light of trying to make things securer.

How is this proposal more secure, i.e. what properties does it have
that protocols rely on and that EdDSA doesn't?

>
> Best regards,
>
> Dan
>
>> -----Original Message-----
>> From: Cfrg [mailto:cfrg-bounces@irtf.org] On Behalf Of D. J. Bernstein
>>
>> Dan Brown writes:
>> > 2. Hindsight patching
>> > ---------------------
>> > De-randomization is a hindsight patch, even assuming we've determined
>> > a correct cause.
>>
>> Let's go back to the videotape.
>
> I am vaguely familiar with this history, but thanks for the detailed reminder.  I'll add some hazy recollections of my own to the history.
>
>>
>> 1991: NIST proposes DSA. Later, after a CPSR lawsuit, NIST admits NSA
>> authorship of DSA: https://epic.org/crypto/dss/new_nist_nsa_revelations.html
>>
>> 1992: Rivest criticizes several aspects of DSA, including DSA's use of
>> randomness generation during signing: "Signing with DSS requires the
>> generation and use of random numbers (the k values). ... The poor user is given
>> enough rope with which to hang himself---something a standard should not
>> do." https://people.csail.mit.edu/rivest/pubs/RHAL92.pdf
>>
>> 1994: NIST standardizes DSA.
>>
>> 1997: Barwood echoes the randomness criticism and, as a solution, proposes
>> deterministic generation of k: "The normal (i.e. IEEE P1363) way of
>> implementing digital signatures using an elliptic curve seems to call (at least
>> implicitly) for the generation of true random integers.
>> Generating true random integers is difficult to do in a safe portable way, so I
>> propose a method that does not require any true random input.
>> ... What I suggest is that a prng is used to generate k, keyed with d (the private
>> key), and seeded with e (the message digest)."
>> https://groups.google.com/forum/#!original/sci.crypt/SalLSLBBTe4/xtYNGDe6
>> irIJ
>>
>> 1997: Wigley independently proposes the same solution in more
>> generality: "I was thinking last night about an implementation of digital
>> signatures - re: the requirement for a truly random number in DSA / NR / most
>> other algo's. Why not derive the random number from the message to be
>> signed therefore removing the need for a hardware rng
>> (smartcards) or a complex and potentially fallible software attempt."
>> https://groups.google.com/forum/#!original/sci.crypt/3g8DnnEkv5A/iZqTyLxF
>> 2qYJ
>>
>> 1997: Naccache, M'Raihi, and Levy-dit-Vehel independently file a patent
>> application on the same idea. (They abandon the application in 2003.)
>>
>> 1998: M'Raihi, Naccache, Pointcheval, and Vaudenay manage to stretch the
>> idea into an entire SAC paper: "In this paper, we present a simple method for
>> generating random-based signatures when random number generators are
>> either unavailable or of suspected quality (malicious or accidental). ... Suffice it
>> to say that one should replace each session's random number by a digest of
>> the keys (secret and public) and the signed message."
>> https://www.di.ens.fr/users/pointche/Documents/Papers/1998_sac.pdf
>>
>
> For what little it's worth, I do vaguely recall discussions about adding message-determinism and/or message-dependency in the ANS X9F or IEEE P1363 committee, and even the IPR issues.  At one point, there was discussion of mandating feeding the message to be signed into the DRBG before signing, but that did not materialize.
>
>> 2010: Bushing, Marcan, Segher, and Sven ("fail0verflow") announce that they
>> have recovered the Sony PlayStation 3 signing key, thanks to Sony's reuse of a
>> single k for many signatures.
>> https://www.youtube.com/watch?v=PR9tFXz4Quc
>>
>> Of course, the 1997 proposals from Barwood, Wigley, Naccache, M'Raihi,
>> Levy-dit-Vehel, Pointcheval, and Vaudenay would have stopped this 2010
>> attack. Can you please explain why you call this "hindsight patching"?
>>
>
> My words "hindsight patching" are, in hindsight, an unclear summary of my technical reasoning. Sorry. I'll still try to explain.
>
> You conclude "Of course ... would have stopped this 2010 attack."  How do you know that the scalar multiplication step would not have been skipped just because of deterministic k?  I've watched that video before, hoping to find some info on the root cause of the 2010 attack, but unless I missed something, they only report on the re-use of k across different messages without explaining how that occurred.  But maybe you know more, e.g. have some insider information.

Let me see if I have this correct. You're arguing that an implementor,
told to compute k=F(m, x), and go through some more steps, will skip
that step. Fine: they evidently skipped the random number generation
step. But at that point we might as well specify an implementor who
skips over comparison steps in signature verification.

What DJB is arguing is that the implementor missed the *per-message*
random generation, a much more subtle issue than failing to compute a
different random number each time. Why, just yesterday I had a student
tell me the odds of rolling a 6 with two dice should be 1/6, not 5/36.

>
> Consider ECDSA as specified in X9.62-2005.  I don't know if it was the specification being used in the 2010 attack, but let's say it is for argument's sake.
>
> Section 7.2, signature generation, requires the signer to use Annex A.4, key generation, which requires one to use Annex D, random number generation, which requires one to use the HMAC-DRBG (or some other future NIST/ANSI-approved RNG pending their approval) together with range refitting.  Full pseudocode for HMAC-DRBG is given in Annex D.
>
> Annex L.4 gives test vectors but, alas, does not use the HMAC-DRBG, instead only specifying k.
>
> Perhaps (a) the 2010 attack followed from there being too many layers of indirection, 7.2->A.4->D... i.e. the implementers stopped reading before they saw the HMAC_DRBG pseudocode.  Perhaps (b) the 2010 attack followed because the HMAC_DRBG was used but for some reason was instantiated just once and re-used.  Perhaps (c) the 2010 attack followed because of the inconsistency of the test vectors with the rest of the specification.
>
> Now replace the HMAC_DRBG with some deterministic scheme: why would the outcome be any different? Are you saying it's (b)?
>
> Who knows: maybe the implementers involved in the 2010 attack had read one of the papers you cited, but misunderstood the value of determinism and overcorrected to make the signatures deterministic across messages (can I say explosion here? Others have.).
>
> The citations you mentioned did show "foresight", especially if they accurately describe the cause of the 2010 attack.  I think you too modestly omitted the fact that you also proposed deterministic signatures before the 2010 attack, right?  What I want to emphasize is that CFRG should apply similar foresight in considering other likely failure modes, not just the one whose likelihood is made evident by an attack.  It's usually better to fix things before an attack, right?
>
> Anyway, what the CFRG is now doing, applying a security fix after an attack, is often called a "patch", right?  Also, the _reasons_ many in CFRG are offering for adopting deterministic signatures, i.e. the evaluation of the merit of deterministic signatures, are now based on hindsight in view of the 2010 attack.
>
> As I mentioned earlier, I think some of these deterministic signature papers were cited before 2010 in some standards committee.   I'm not sure, but I believe they were dismissed as something that would never happen, or as something beyond the scope of the standard's responsibility.  If the 2010 attack were indeed caused entirely by randomizations (which I question!), then these standards lacked foresight.  In this case, I'm arguing CFRG should show more "foresight" than these past committees, by trying to thwart other possible failures before they happen.
>
> To that end, I've tried to think up possible failures, which I listed as 3a...3h.  Maybe CFRG can think of other possible risks.  At least now, we can assess the possibilities.  If you can rule them out as too improbable, then fine, all the better, right?  So, thank you, again, for at least considering them.

But you've offered no assessment of why we should analyze these
failures. Why are these the most likely? By contrast we know
kleptography is a real issue.

>
>> > The secret s SHOULD be new, but MAY be old.
>>
>> With your proposal, the "poor user is given enough rope with which to hang
>> himself---something a standard should not do". What stops the user from
>> choosing a constant s, producing a predictable k and revealing the signing key?
>
> What stops the user: the part I said about s being secret, guarded the same as the signing key.
>
> Quibble: setting s to be the signing key gives a constant but secret s (my original proposal, which reduces to deterministic signatures), so it is not the constancy of s that is the problem. But I get your point, which is why I amended my proposal, despite my questioning of the actual cause of the 2010 attack.
>
> As I said above, I've amended my proposal to
> k = F(m,a,t)
> where
> m is the message,
> a is the actual long-term signing key, and
> t is a temporal value (tweak/nonce), preferably secret and unpredictable, or at least non-repeating, and optionally fixed (reducing the signature scheme to a deterministic signature scheme).
>
> This aims to be a middle-ground between randomized and fully deterministic signatures, an effort to combine the best of both methods.
>
> I'm theoretically wary of mixing in a, but yes, if F is sufficiently one-way etc., then it's totally intuitively fine.

In the EdDSA scheme, the signing key consists of x, a discrete
logarithm of the public key, and z, a key for the PRF. It's a trivial
exercise to show that using k=H(z,m) is secure if H is a PRF from
{0,1}* to bit strings of the correct length, and the length is long
enough for the group being used.
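A minimal sketch of that k = H(z, m) construction, with SHA-512 standing in for the PRF (an assumption for illustration; EdDSA's exact encoding differs):

```python
import hashlib

L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

def nonce(z: bytes, m: bytes) -> int:
    """k = H(z, m): the per-message nonce is a PRF of the message under
    z, a key independent of the discrete logarithm x. The 512-bit digest
    is long enough that reduction mod L introduces negligible bias."""
    return int.from_bytes(hashlib.sha512(z + m).digest(), "little") % L
```

Because z never leaves the signer and x is never an input, a proof can treat k as an independent pseudorandom value per message.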

>
> A general strategy here is defense-in-depth: to avoid k reuse across messages and to avoid predictable k, we've got three defenses: m, a and t.  Furthermore, I see t as a low-cost protection against the scenarios I labeled 3a...3h, at least if t is an independent secret from s and unpredictable.
>
> Do you foresee a signer getting into trouble from the tweak t?  Intuitively, if we assume a is unguessable, and that the signer controls t, the attacker can win by finding m and m' such that F(m,a,t) = F(m',a,t'), which seems unlikely, because it is a collision in the a-keyed function, etc.
>
>>
>> It's not enough to _tell_ people that k has security consequences--- something
>> that has been said again and again since the original ElGamal paper in 1985.
>
> At least some old standards do more than warn about security: they specify steps to generate k, HMAC_DRBG.  Yes, this requires entropy to be secure, but so does the long-term key.

It's easy to have standards that specify secure ways of doing things.
Oddly enough, I still have a job breaking systems. Test vectors are
far more likely to be run than extensive audits. I have seen systems
(and once written one!) that don't use a long enough k for modular
reduction, that don't ensure adequate seeding of the random number
generator, etc, etc, etc. Many implementations of cryptography aren't
written by experts. They are occasionally written from Wikipedia
articles.

>
>> What actually works is having standard _test vectors_ that eliminate the bad
>> implementations. By agreeing on a deterministic way to generate k, with no
>> additional inputs, we allow every common mistake in k generation to be
>> caught by test vectors.
>
> My (amended) proposal would also admit test vectors, if F is fully specified, by setting:
> m = "test message"
> a = "test signing key - never use this ever"
> t = "test nonce - never use this ever"
> k = F(m,a,t) ="..."
>
> Do test vectors mean it will prevent another 2010 attack?
>
> I suspect that HMAC_DRBG has test vectors somewhere; if only ANSI X9.62-2005 had incorporated them with the ECDSA test vectors!  Should I dig these up, to be sure?

I don't have access to ANSI X9.62-2005. But let's consider the impact
of the Debian bug, or the FreeBSD-CURRENT bug. Even if the authors of
the ECDSA module had taken care to carefully seed the approved random
number generator, they would have generated an extremely limited
number of possible k. How does including HMAC_DRBG test vectors fix
this problem? We know the causes of these failures: where do they fit
in your analysis?

Deterministic functions are much easier to test. We can pick vectors
to stress common issues, automate comparison testing, etc. I don't
know how to do the same for nondeterministic functions.
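As a toy illustration of why determinism helps here (the function and vector below are made up for this sketch, not taken from any standard):

```python
import hashlib

def det_k(z: bytes, m: bytes) -> bytes:
    """Toy deterministic nonce function standing in for a real scheme."""
    return hashlib.sha512(z + m).digest()

# Known-answer test: a standard that fully specifies F can publish one
# fixed (input, output) pair, and every conforming implementation must
# reproduce it exactly. A randomized k admits no such check.
KAT_Z = bytes(32)
KAT_M = b"test message"
KAT_K = det_k(KAT_Z, KAT_M)  # in a standard, a published hex constant

def passes_kat(impl) -> bool:
    """Compare an implementation's output against the known answer."""
    return impl(KAT_Z, KAT_M) == KAT_K
```

An implementation that skips a step, truncates, or substitutes its own derivation produces a different byte string and fails the comparison immediately.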

>
>>
>> I don't mean to suggest that _every_ k-generation function is safe. On the
>> contrary: generating a constant k, as Sony did, is of course very bad;
>> generating k biased mod the group order is bad; etc. The designer needs to
>> specify a k-generation function that's secure _and_ that actually ends up being
>> tested in implementations. Failure to do this is tantamount to allowing _every_
>> function that implementors come up with; instead of having _one_ function to
>> carefully review for security problems, we're faced with a neverending list of
>> implementation-specific functions.
>>
>> The same principles apply to "entropy pools". Having each library make up its
>> own mechanism to combine multiple entropy sources is a recipe for disaster.
>> What we need is a specified, security-reviewed, deterministic, tested
>> mechanism to combine multiple entropy sources---centralized in the operating
>> system (even better, the hypervisor, when there is one) to maximize the
>> chance that good entropy sources are actually used and to minimize the
>> number of lines of code to review.
>>
>
> Well, it seems to me like you're straying a bit off-topic (entropy) in the paragraph above.  Anyway, I agree, and if there were such a centralized key/entropy source, then randomized signatures would be a cinch.  I am not sure how best to ensure that each library uses some centralized key/entropy source.
>
> ASIDE: I'm very interested in entropy and so on, and wrote a paper on assessing entropy, and the various means to harness it, etc.  I've even mentioned some of these ideas to standards bodies, with little traction. What do you think of the 800-90A/B/C effort?  On the flip side, does centralizing (entropy) key generation also make it easier for a corrupt OS to compromise keys?  Would it be slightly safer if a process could obtain direct access to, say, a camera, to derive its own keys?  Now I'm straying way off topic, sorry.
>
>> Obviously the system does ultimately rely on having some good entropy
>> source, and testing entropy sources is a quite different problem with a long
>> history of failures, even in cryptographic products "certified" by the U.S.,
>> Canadian, and German governments:
>>
>>    http://smartfacts.cr.yp.to
>>
>> But the fact that there's a hard problem in one corner of the crypto ecosystem
>> doesn't mean that we should give up on solving the problem.
>> Our best chance of solving this problem starts by isolating it--- eliminating it
>> from every other part of the ecosystem.
>>
>> > 3a.... More side channels?
>>
>> No. Unprotected implementations of standard _randomized_ DSA are well
>> known to be broken by side-channel attacks. It's important to protect all
>> secrets---including k, which is used for scalar multiplication, and the long-term
>> secret, which is used for arithmetic mod the group order.
>> Standard defenses include blinding of each secret bit (splitting b into r and r+b),
>> blinding mod the group order (replacing k with k+rp), etc.; these defenses work
>> for deterministic signing systems too.
>>
>
> Um, surely you're not saying unprotected implementations of _deterministic_ (EC)DSA or Schnorr are not broken by side-channel attacks?  Are these not independent issues?
>
> So, what I was getting at: A good implementation might work really hard to protect against, say, simple power analysis, e.g. the Montgomery ladder and the blinding techniques.  Great, but what if differential power/timing analysis is able to average out the blinding effect, and detect the very minute remaining signal from the Montgomery ladder?  Are you saying such iterative side channel attacks are fully preventable? If so, then great; else doesn't changing k every time seem to help reduce this threat?  In other words, wouldn't this provide "defense-in-depth" or "multi-factor side-channel-protection", etc.?  The way I see it, randomized signatures, or the tweaked deterministic signatures I'm proposing, seem to theoretically require less side-channel protection than fully deterministic signatures.

One of the standard countermeasures against side channels is to
compute [k]P as [k+r]P-[r]P for a random r the same size as k. Of
course, an attacker attacking an implementation using this method
knows slightly more than if we computed a different k every time:
they know each set of traces sums to the same number. I don't see how
this helps much. (Mike Hamburg would know better.)
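The blinding identity itself is easy to sanity-check in a toy group. Below, scalar multiplication is modeled multiplicatively as exponentiation mod a prime (all constants hypothetical, chosen only for the sketch), so [k+r]P - [r]P becomes g^(k+r) / g^r:

```python
import secrets

P = 2**127 - 1   # a Mersenne prime, standing in for a real group's modulus
G = 3            # arbitrary base element, written multiplicatively
ORDER = P - 1    # exponents can be reduced mod the group order

def blinded_scalar_mult(k: int) -> int:
    """Compute [k]G as [k+r]G - [r]G with a fresh random blind r, the
    countermeasure described above; 'subtraction' here is division mod P."""
    r = secrets.randbelow(ORDER)
    blinded = pow(G, (k + r) % ORDER, P)
    unblind = pow(pow(G, r, P), -1, P)  # inverse of [r]G (Python 3.8+ pow)
    return blinded * unblind % P
```

Each call draws a fresh r, so the scalar actually driving the ladder differs per invocation even when k is deterministic.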
>
>> The bigger picture is that crypto is constantly mixing secret data with public
>> data. Side-channel defenses take responsibility for protecting this. We know
>> how to defend, e.g., computing an authenticator of a potentially forged
>> message---even if the same message is repeated any number of times. This is
>> also what we're doing in computing k.
>>
>> A carefully managed internal PRNG _can_ be cheaper to protect than this
>> computation, but this cost improvement is quite small compared to the overall
>> cost of a signature unit. An implementor desperate to take this approach will
>> find that the resulting signatures, despite violating the deterministic standard
>> (and failing standard signing tests), are fully interoperable---CFRG is not
>> proposing that _verifiers_ track signatures of the same message and enforce
>> repetition constraints.
>>
>> > 3b... Re-use keys more often, why not always?
>> > 3c....  De-randomized signing key?
>> > 3d.... Increased failure to erase ephemerals?
>> > 3e.... Replay attacks on encryption?
>> > 3f.... Loss of forward secrecy?
>>
>> Here you're speculating that an implementor might draw dumb conclusions
>> from the k definition and, as a result, misimplement something else.
>
> You say "speculating"; I'd say "worrying".
>
> In some of the cases 3b...3g, it would not require a transfer of a mistake from one piece of code to another. An implementer might have a common key generation unit: shared between the ephemeral and static keys, shared between signatures and key exchange, per some kind of centralization strategy.  In such cases, the fault might arise without any transfer of code.

How is this risk mitigated by your approach? If we assume that they
make a "common key generator", then your specification of F(m,a,t) was
entirely ignored, as was DJB's proposed k=F(m, z). However, in testing
with selected test vectors the deviation from your proposal can't be
detected, while from DJB's it's easy.

>
> Anyway, so you think 3b...3f are too unlikely to address, right?
>
>>
>> Sure, anything is possible, but intelligent risk management requires sensibly
>> evaluating the _likelihood_ and _impact_ of bad events. The chance of an
>> implementor reusing k as Sony did, or generating k that's slightly too short, is
>> obviously much higher than the chance of all of the scenarios that you
>> postulate, and the impact---forgery of arbitrary messages under this public
>> key---is no less disastrous than the impact of any of your scenarios.
>>
> I agree about the principles of risk management likelihood and impact.
>
> Where I seem to disagree is on the likelihood.   Somebody implementing from scratch might build a common key generation unit, shared between signatures and key exchange, that bundles the scalar multiplication step with the job of selecting k.  Maybe that's a bad idea, but aren't we trying to foolproof things?
>
> In "sensibly evaluating likelihood", we should consider things like conditional probability.  Randomized signatures are more widely deployed, so they have a higher incidence of failure.  But do they have a higher rate of failure?
>
> Do we have two specifications that are nearly identical except for one being deterministic and one being randomized, so that we can isolate the effect of deterministic or randomized?  Alas, probably not, so we have to act somewhat by intuition.
>
>> > given a sequence of signatures under a public key, one can detect
>> > repeated messages, even if the repeated message are secret and
>> > unguessable, and therefore impossible to verify.
>>
>> Sure. Standard signature caching (e.g., DNSSEC saves signatures) also has this
>> effect. Many RSA signature systems are also deterministic. You speculate that
>> there are some protocols that need non-deterministic signatures for some
>> vague reason; those protocols will already have to be quite careful in their
>> selection of signature systems.
>>
>> This is obviously far less important than the disastrous consequences of poor k
>> generation for typical real-world applications of signatures.
>>
>> > 3h.... Provable security: more assumptions to prove unforgeability
>> > What is the effect of de-randomization on the security proofs of
>> > unforgeability?
>>
>> First of all, to give proper credit, this purely theoretical issue was adequately
>> analyzed by Olson in 1997:
>>
>>
>> https://groups.google.com/forum/#!original/sci.crypt/3g8DnnEkv5A/TXTyOg9
>> jgvUJ
>>
> Thanks for the link.
>
> Olson concludes: "Using the same key, there's
> still the possibility of unfortunate interaction between the
> hash and the exponential, which is the sort of thing that
> rarely exists, but is hard to disprove."
>
> In the context of provability of stuff, which is the realm of my comment 3h, saying something is "hard to disprove" is not usually considered "adequately analyzed", let alone any sort of proof.  So, I don't see how this answers my question about the effects on security proofs.  That said, I fully agree on the intuition.  I'm not against trusting intuition, but I am for verifying intuition, which means trying to prove stuff.
>
> If you are trying to dismiss comment 3h as theoretical, then why not just say so, otherwise I'm just confused.

See my upthread note about the trivial argument. Down below you
reproduce this argument. So what exactly makes you think 3h is a
problem, given the obvious work-around?

>
> Anyway, Olson's general argument seems to align with the Koblitz--Menezes paper: using a distinct alternative key for deriving the signature's k value deterministically is easy to prove as good as random signatures, but using the actual signing key for this task is mysteriously hard to prove.
>
> I don't think you're proposing to follow Olson literally by using an alternative key instead of the signing key, because you pointed out the risk in my proposal that the alternative key might be predictable, and that you have not specified EdDSA in this way (as I recall).  So, I'm wondering what you're getting at here.

In EdDSA, k, the key, is fed to SHA-512 to produce l and r, each 32
bytes. l is then hashed with the message to produce the commitment
value, and r is the discrete log of the public key. Obviously this
introduces an annoying dependence on the indistinguishability of
SHA-512 outputs from 64-byte random strings, but granting that
assumption this is exactly Olson's solution.
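That key expansion can be sketched in a few lines (naming follows the paragraph above; the scalar clamping Ed25519 applies is omitted, so this is an approximation, not the exact standard encoding):

```python
import hashlib

def expand(key: bytes):
    """SHA-512 of the 32-byte master key yields 64 bytes; one 32-byte
    half becomes r, the discrete-log scalar, and the other half l keys
    the per-message commitment hash. Which half gets which role is a
    labeling choice in this sketch."""
    h = hashlib.sha512(key).digest()
    r, l = h[:32], h[32:]
    return l, r
```

Deriving both halves from one master key keeps the user-facing secret a single 32-byte string while still giving the proof two independent-looking keys.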

>
>> The solution to all of the theoretical problems, as mentioned by Olson, is to
>> have one discrete-log key and one message-hash key. Ed25519 uses a KDF
>> (normally SHA-512) to derive these two separate keys from a master signing
>> key.
>>
>> You say that "an implementer who is likely to reuse an ephemeral may be just
>> as likely to choose the alternative key poorly". But these objects have totally
>> different levels of fragility from a security perspective.
>> Choosing a slightly biased k is a disaster; choosing a slightly biased message-
>> hash key (or discrete-log key) is irrelevant. Repeating k is a disaster; I have no
>> idea what "poor" choice of message-hash key you think would be analogous to
>> this. Sure, a _guessable_ message-hash key is as disastrous as a guessable k,
>> but the equal problems in this extreme case don't imply equal problems in
>> other cases.
>>
>> > Of course, we may well already need PRFs for privacy, but I wouldn't
>> > want to some fault in a PRF, e.g. AES, carry over into authentication.
>>
>> The modern trend is to rely on the security of ciphers, notably AES, not just for
>> _encrypting_ every message but also for _authenticating_ every message. Do
>> you think this is a problem?
>>
> As to the modern trend, on the theory side, I think it is moving away from relying on PRF properties, even for encryption.  For example, using AES in counter mode seems to be a way of relying on its PRG properties instead of its PRP/PRF properties.  To me, the PRG property of AES-CTR seems far more certain than a PRF, because a PRG attacker does not get to control the input blocks fed into the block cipher.  I'd be wrong if it is true that any PRF attack, however slight, on AES translates into an attack on AES-CTR.  So, the modern trend seems to be moving away from PRF assumptions, towards milder assumptions, as in the move from AES-CBC to AES-CTR, which seems good to me: even lower chance of anything going wrong.  Of course, maybe the actual cause for the trend ECB -> CBC -> CTR is based on a better understanding of encryption modes (and their proper usage), but the theoretical side effect seems to be reliance on more plausible assumptions.
>
> Ignoring the PRF/PRP/PRG distinctions, yes, the reliance of authentication upon ciphers in general is a problem, but just a _theoretical_ problem.
>
> Going out on a limb: AES is not used for authenticating certificates, at least in hierarchical PKI, where signatures are used.  I'm less familiar with distributed PKI, but I don't think it relies on cipher-based authentication.  So, the initial stages of authentication, whether Alice knows if she is about to start talking to Bob or Eve, typically do not seem to rely on AES or any other cipher.  Usually, Alice relies on a signature of a CA, plus Bob's signature or some authenticated key agreement, or perhaps a pre-shared key with Bob (where ciphers may be used?).
>
> But this authentication step is intermediate to Alice's final goals, so I'll try to be more specific. After authenticated key exchange with Bob, she can send a secret message to Bob.   Let's call the assurance she needs before contacting Bob "outgoing authentication", to distinguish it from "incoming authentication" of messages Alice receives from Bob.  Ok, so outgoing authentication is not really a standard definition, and is really just an intermediate pre-requisite to the end-goal of privacy.    But by isolating this sub-goal, we distinguish between different reliances.  For example, a super-theoretical risk is if there is a secret attack on AES or SHA1/2: in this disaster, Bob's (or a CA's) old deterministic signatures (if they, say, mandate AES for a PRF) might somehow expose the signing key.  Now alternative cipher/hash users (e.g. ChaCha/Salsa users) would lose some protection: Alice might worry about a MITM attack against Bob.  Maybe they're using alternative algorithms because of speed, but a little extra security cushion is okay if it is cheap.
>
>> The community has been subjecting ciphers to massive scrutiny for many
>> years. The security standard is very high: we imagine that the attackers can
>> freely choose all inputs and see all outputs, and that the attackers
>> automatically win if they can find _any_ pattern, even though this isn't enough
>> to break typical applications. The best possible security, within various real-
>> world performance constraints, is systematically analyzed.
>> Worrisome cipher structures are identified and avoided.
>
> Yes, I am well aware that AES is widely believed to be a good PRF, as you describe.
>
> The point 3h was about provability.  We do not have any proof that AES is a good PRF, do we?
>
> I don't think you want to dismiss provability en masse, right?  It's quite fair to discuss the plausibility of various assumptions used in the proofs.  So, I take your position to be that PRF assumptions are perfectly reasonable to use in security proofs, and that my goal of avoiding a PRF assumption, and aiming for a milder PRG assumption, is frivolous?  Is that accurate?

You've never explained what assumption you want to make on F, so it's
entirely unclear if you actually are relying on a weaker assumption.
In fact, as I argue down below, for most hash functions you actually
aren't.
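For concreteness, the PRF-vs-PRG query-model distinction at issue can be sketched in a few lines. This is purely illustrative: it uses HMAC-SHA256 as a stand-in keyed function (not AES, and not a claim about any real construction), and the function names are hypothetical.

```python
import hashlib
import hmac

def f(key: bytes, block: bytes) -> bytes:
    """Stand-in keyed function (HMAC-SHA256 here, purely illustrative)."""
    return hmac.new(key, block, hashlib.sha256).digest()

def ctr_keystream(key: bytes, nblocks: int) -> bytes:
    """PRG-style use: inputs are the fixed public counters 0, 1, 2, ...
    The attacker observes outputs but never chooses the input blocks."""
    return b"".join(f(key, i.to_bytes(16, "big")) for i in range(nblocks))

def prf_oracle(key: bytes):
    """PRF-style use: the attacker submits arbitrary input blocks
    of their choosing to the keyed function."""
    return lambda chosen_block: f(key, chosen_block)

key = b"\x00" * 32
stream = ctr_keystream(key, 4)            # PRG security suffices here
oracle = prf_oracle(key)
out = oracle(b"attacker-chosen input!!")  # PRF security is needed here
```

The point of the sketch is only that the PRG adversary's inputs are fixed in advance, so an attack that requires cleverly chosen input blocks does not automatically apply.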

>
> Anyway, are we going to use AES, SHA256, or something else, for deterministic signatures?  That question is for future CFRG discussion, I suppose.  Out of order, sorry.
>
> On the 3h topic of proofs, do the proposals for deterministic signatures already have security proofs?  Since they involve the static signing key, the existence of such proofs seems to contradict the position of Koblitz and Menezes.
>
> Anyway, I hope there's never an attack on AES, but if there is, then I can imagine it being only some kind of obscure theoretical PRF attack, where the attacker is allowed to freely choose the plaintext blocks in some clever way, thereby inducing a minute amount of bias in the ciphertext blocks.  In this situation, AES might still be perfectly good to use as a PRG, as in AES-CTR encryption, and for running CTR-DRBG, because the attacker lacks the control of the plaintext blocks needed to induce the bias.  Such a mixed situation would be similar to how SHA1 is deemed okay for some uses despite collision attacks, and SHA2 is still okay despite length-extension attacks, and so on.
>
> Although I am impressed by the work done on AES, maybe I don't have as much confidence in it as you: first, because I have not directly worked in the area, and second, because RC4 is widely used, yet has problems meeting the much milder PRG goal.
>
> To bring this PRF discussion back to signatures: suppose we use a deterministic signature based on AES, and the attacker has some strange plaintext block pattern that causes a minute bias in the ciphertext blocks, i.e. a PRF attack.  As above, perhaps this would have no impact on AES-CTR, because the attacker never controls the counter plaintext blocks.  But small biases are risky for DL signatures, because lattice attacks make them sensitive to this.  In this unlikely situation, generating signature k values with something more like AES-CTR, i.e. a traditional CSPRNG/DRBG, rather than something more like AES-ECB, i.e. a deterministic signature mechanism, might be better.

Nobody other than you has actually proposed using AES as a PRF in
this application. (There are several efficiency reasons not to.) But
let's consider what failure of the compression function of a
Merkle-Damgård hash function like SHA2 to be a PRF in the message
argument means for one-wayness and collision attacks.

Your hypothetical attack involves a set S of blocks b such that C(k,b)
has some easily recognizable property for a higher proportion of k
than expected for a random function. My attacker computes a large
number of keys k_i, and then calculates all C(k_i, b). Since these
C(k_i, b) are not uniformly distributed, the probability of a
collision increases above that expected for a random function. So we
are relying on PRF assumptions when signing with SHA2.
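The step from "not uniformly distributed" to "more collisions" can be made quantitative: for an output distribution p, the probability that two independent samples collide is sum(p_i^2), which is minimized exactly when p is uniform. A toy numeric check (the bias below is hypothetical, not a claim about any real compression function):

```python
def collision_prob(p):
    """Probability that two independent draws from distribution p
    collide: sum of p_i^2, minimized by the uniform distribution."""
    return sum(x * x for x in p)

n = 256                            # toy output space
uniform = [1.0 / n] * n

# Hypothetical bias: one output value twice as likely as the others.
biased = [2.0 / (n + 1)] + [1.0 / (n + 1)] * (n - 1)

assert abs(sum(biased) - 1.0) < 1e-12   # still a probability distribution
p_uni = collision_prob(uniform)          # 1/256 = 0.00390625
p_bias = collision_prob(biased)          # strictly larger than 1/256
```

Any deviation from uniformity raises the collision probability, which is the mechanism behind the birthday-style argument above (working out the exact constants for a real hash is, as noted, the hard part).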

(A PRF distinguisher for H(k,m) gives a PRF distinguisher for C.
What's harder is working out the exact constants in the above.)
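To make the deterministic-vs-DRBG contrast in the quoted text concrete, here is a loose sketch of the two ways of deriving a signature nonce k. This is not RFC 6979's actual algorithm (which has more steps); the modulus Q, the function names, and the exact derivation are illustrative assumptions only.

```python
import hashlib
import hmac
import os

Q = 2**255 - 19  # stand-in modulus for illustration only, not a real group order

def deterministic_k(priv: int, msg: bytes) -> int:
    """Deterministic-signature style: k is a fixed PRF of the private
    key and message hash, so the same message always yields the same k."""
    h = hmac.new(priv.to_bytes(32, "big"),
                 hashlib.sha256(msg).digest(),
                 hashlib.sha256).digest()
    return int.from_bytes(h, "big") % Q

def randomized_k(priv: int, msg: bytes) -> int:
    """DRBG-style alternative: the same derivation, plus fresh entropy,
    so a PRF weakness alone cannot make k repeat or correlate."""
    h = hmac.new(priv.to_bytes(32, "big"),
                 hashlib.sha256(msg).digest() + os.urandom(32),
                 hashlib.sha256).digest()
    return int.from_bytes(h, "big") % Q

priv = 0x1234
k1 = deterministic_k(priv, b"message")
k2 = deterministic_k(priv, b"message")
assert k1 == k2  # deterministic: same message -> same k
```

The "tweak" proposals discussed in this thread amount to variants of the second function: mixing auxiliary input into an otherwise deterministic derivation.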
>
> Intuitively, adding a tweak, as in my second proposal, seems to address both of these threats.
>
>>
>> Is it still possible that an attacker will find some shocking break of AES? Yes, it's
>> possible. Is it conceivable that generating k's with a modern cipher will produce
>> k's so biased as to allow forged signatures?
>> Yes, it's conceivable. But is it reasonable to say that this wildly speculative
>> horror story has the same probability as another equally disastrous Sony-style
>> implementation screwup? No, it isn't.
>>
>
> I don't recall saying that the 3h scenario had the same probability as the 2010 attack.  The 3h scenario is only theoretical.  It considers a very low probability attack, unless we model some kind of malice in the selection of whatever PRF is being used, e.g. if we use SHA256.
>
> Just to be clear, I think I implied that the event that one of the scenarios 3a...3g happens has comparable probability to a _repeat_ of the 2010 attack: warranting some kind of countermeasure, e.g. the tweaks in my new proposal.
>
>
> _______________________________________________
> Cfrg mailing list
> Cfrg@irtf.org
> http://www.irtf.org/mailman/listinfo/cfrg



-- 
"Man is born free, but everywhere he is in chains".
--Rousseau.