Re: [Cfrg] What is the standard we are going to apply?

Watson Ladd <> Tue, 24 December 2013 23:05 UTC

Date: Tue, 24 Dec 2013 18:05:28 -0500
From: Watson Ladd <>
To: Yoav Nir <>

On Tue, Dec 24, 2013 at 2:27 AM, Yoav Nir <> wrote:
> On Dec 24, 2013, at 5:34 AM, Alyssa Rowan <>
>  wrote:
>> You, John, and others have mentioned a strong desire for protocols
>> and/or primitives being evaluated to have well-vetted proofs:
>> in standard model if possible, else random oracle; and side-channel
>> resistance (i.e. suitability for constant-time implementation, etc).
>> That sounds to me like an excellent idea, wherever it is practical.
>> No matter what adversary might seek to interfere, and whether they're
>> RFC3514 compliant or not when doing it, a protocol or primitive with a
>> solid proof is more transparently, demonstrably effective than one
>> without one.
> I agree that given two similar proposed algorithms or protocols, the one with the security proof is the better choice. But please let's not overstate what these proofs actually show. The assertion "X is secure" is not one that can be tested. So all security proofs end up with a model of the protocol that simplifies away some aspects of the protocol, a set of assumptions that seems (to the researcher) to be reasonable but may or may not be correct in the real world, and a class of attack that seems important to the researcher (but other types may exist). So a proof that this class of attack cannot succeed under certain assumptions is important, but the protocol may still be vulnerable to other attacks, especially when used in a certain context, where the context may compromise the (otherwise excellent) algorithm, or to implementation details that allow side-channel information leakage.

This is very true. That's why proofs should reduce to accepted
primitives, like a block cipher being a PRP or the hardness of
Diffie-Hellman, and if the protocol cannot be proven secure, the
protocol should be simplified until it can be. There are very standard
models for the attacker: capable of doing anything to the messages,
and limited only by computational assumptions. If you can't prove the
entire protocol secure, the protocol is overly complex: there is
probably a provably secure protocol achieving the same outcome.

In particular, all the issues in the TLS record layer stemmed from not
being concerned with proofs. EtM (encrypt-then-MAC) was and is
trivially provable, while MtE (MAC-then-encrypt) wasn't, and later
turned out to be a rather complicated and subtle story. E&M
(encrypt-and-MAC) requires an extra constraint ruling out dopey MACs
that leak the message. In 1995 EtM was provably secure and the others
weren't. Lesson learned? Apparently not.
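To make the EtM property concrete, here is a toy sketch of
encrypt-then-MAC in Python. The XOR "keystream" built from SHA-256 is
an illustration only, standing in for a real cipher mode such as
AES-CTR, and the key and nonce handling is deliberately simplified;
the point is that the MAC covers the ciphertext, so the receiver
rejects tampered data before any decryption happens.

```python
import hashlib
import hmac
import os


def _keystream(enc_key, nonce, length):
    # Toy keystream: hash(key || nonce || counter) blocks. Illustration
    # only -- a real design would use a vetted cipher like AES-CTR.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(enc_key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_then_mac(enc_key, mac_key, plaintext):
    nonce = os.urandom(16)
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # EtM: the tag authenticates the nonce and ciphertext, not the plaintext.
    tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce, ciphertext, tag


def decrypt(enc_key, mac_key, nonce, ciphertext, tag):
    expected = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    # Verify before decrypting: unauthenticated data is never processed.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed")
    return bytes(c ^ k for c, k in zip(ciphertext, _keystream(enc_key, nonce, len(ciphertext))))
```

With MtE or E&M the decryption (or padding removal) runs before or
alongside authentication, which is exactly the structure that made the
TLS record layer's analysis so subtle.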

We don't yet know how to treat side-channel security formally. But we
do have primitives without side channels, and protocols that don't
demand branches on secret data. This is only a protocol issue when the
protocol cannot be implemented without side channels.
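The "branches on secret data" point can be sketched in a few lines.
The first comparison below exits at the first mismatching byte, so its
running time leaks where two secrets differ; the second accumulates
differences and takes the same path regardless of the data. (This is a
sketch of the idea: Python's integer operations are not guaranteed
constant-time, which is why real code reaches for something like
`hmac.compare_digest`.)

```python
def leaky_equals(a, b):
    # Early exit: running time depends on the position of the first
    # differing byte -- a classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True


def constant_time_equals(a, b):
    # No data-dependent branch: accumulate all differences with OR/XOR,
    # so the loop always runs to completion.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

A primitive or protocol that forces the leaky pattern (say, by
requiring secret-dependent table lookups or branching) cannot be saved
by careful implementation; one that permits the second pattern can.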

> We have had several primitives and protocols with well-vetted security proofs that were later shown to be vulnerable. For many things, especially complex ones, we don't have a better method of vetting than saying "100 people looked at it, and 3 hackers + 2 cryptographers tried to break it for a whole week and they couldn't". Sad but true. The TLS renegotiation vulnerability was in a protocol that probably received more attention from researchers and hackers than any other. Thousands of people read the specification, implemented it, taught it, learned it, and wrote papers analyzing it and proving its security. And yet when, after 15 years, two people separately found the vulnerability, it was jaw-droppingly obvious (after the fact).

I don't think anyone proved the security of anything close to TLS. The
formatting of the messages alone is too nasty to formalize, and the
key confirmation is a mistake (it breaks the standard definitions of
key agreement with no gain). Then each one of the key agreements needs
to get checked, and the ciphersuite, and the choice of which key
agreement to use, and then the fact that RC4 and Lucky13 exist needs
to get analyzed, etc. The renegotiation vulnerability would have been
spotted had a proof been available, as it would have shown up as a
mutual authentication failure.

Proofs let those 100 people focus on the assumptions in the proof.
EAX/EAX' is only the latest example of a proof making clear why a tiny
change in a protocol was a bad idea, and sure enough, after the change
it was broken.

> What I'm getting at is that having someone present a primitive along with a security proof, and then having CFRG look at the proof and say "seems legit" is not a good enough process. As Stephen said in the other thread, we may be facing a time when NIST is no longer the gold-standard for vendors and standards writers. So we won't have the process we had 13 years ago, where NIST says "here's a new block cipher, we call it AES and it rocks", and then we all implement it in our standards and in our products. CFRG as it currently operates, or CFRG plus the requirement for security proofs is not a suitable replacement. I don't have an answer as to what is a suitable replacement. NIST has the resources to put some people to work full time on analyzing protocols and primitives (part of that is by borrowing expertise from the NSA). I don't know how a volunteer organization like IETF/IRTF can duplicate that kind of effort.

The easy answer is "don't". Send the paper to CRYPTO, and wait a few
years. Also, implementing a block cipher is not enough: it needs a
mode of operation and a protocol to be useful. NIST hasn't done much
in the protocol arena: MQV certainly didn't originate with them, and
trusting NIST didn't save TLS. At the end of the day someone is going
to be evaluating cryptographic protocols in RFCs, and whether they are
the CFRG, or the WG, or whatever, they need to have the ability to do
this right. The guidance and process provided so far have been
inadequate, and this needs to change.

In particular "primitives" aren't the issue. The TLS WG took a secure
MAC, a secure PRF, and a secure block cipher mode of operation, along
with RSA, and managed to make something that has had recurring issues
for years. None of the underlying primitives has been dented, but the
result has certainly not lived up to what it should. I don't think an
RFC or a BCP or an I-D can really fix the issues leading to this sort
of mistake.

What I do think will work is recognizing which protocols are
"high-risk" and acting to reduce that risk by simplifying them,
demanding proofs (which also has a simplifying effect), and making
sure the assumptions being made about the result are correct. DNSSEC
and TLS would definitely have taken longer to do this way, but I think
the results would have been better for TLS. As for DNSSEC, RSA was
never an appropriate choice given the necessary key sizes and where
the keys appear. No attacks yet, but when Operation Kilobit happens,
expect a lot of late nights for a lot of people.

Watson Ladd
> Yoav
> _______________________________________________
> Cfrg mailing list

"Those who would give up Essential Liberty to purchase a little
Temporary Safety deserve neither  Liberty nor Safety."
-- Benjamin Franklin