Re: [TLS] New drafts: adding input to the TLS master secret

Marsh Ray <> Wed, 03 February 2010 15:18 UTC

Date: Wed, 03 Feb 2010 10:19:13 -0500
From: Marsh Ray <>
To: Paul Hoffman <>

Paul Hoffman wrote:
> At 9:26 PM -0600 2/2/10, Marsh Ray wrote:
>> Let's say the server has a perfect RNG and the client is broken
>> Debian Etch.
>> The client provides 15 bits of usable entropy in the Client Hello,
>> the server provides 224.
>> RSA key exchange is negotiated.
>> The client generates the 48-byte premaster secret with an effective
>>  entropy of less than 15 bits (his first handshake since power on).
>> Game over, right?
> Wrong. The master secret (the only one that matters for channel
> security) gets the advantage of all the randomness added by the
> client *and* the server.

Sure, but a MitM knows all of that because it's sent in the clear. The
only thing the MitM doesn't know in this scenario is the private key for
the server's cert. So if the premaster secret is predictable (only 32k
possibilities or so), it doesn't matter how well it's encrypted to the
server's cert. A passive attacker can work out the master secret.

> Without this extension, that is still
> probably acceptable for some uses; with the extension (if the server
> contributes more randomness), for all uses. Look again at section 8.1
> in RFC 5246.

> master_secret = PRF(pre_master_secret, "master secret",
>                     ClientHello.random + ServerHello.random)[0..47];

It doesn't matter in this case how much randomness there is in the
hellos; what matters is the unpredictability and secrecy of the
pre_master_secret (which is generated by the client).
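To make that concrete, here's a sketch of the passive attack. The PRF is
the TLS 1.2 one from the quoted RFC 5246 text; `weak_premaster` is a
hypothetical model of a broken client RNG whose 48-byte output is fully
determined by a 15-bit state, not any real implementation:

```python
import hmac, hashlib, os

def p_sha256(secret, seed, length):
    # P_SHA256 expansion, RFC 5246 section 5
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def master_secret(pre_master, client_random, server_random):
    # RFC 5246 section 8.1
    return p_sha256(pre_master, b"master secret" + client_random + server_random, 48)

def weak_premaster(state):
    # hypothetical broken PRNG: premaster determined by a 15-bit state
    return hashlib.sha384(state.to_bytes(2, "big")).digest()  # 48 bytes

client_random, server_random = os.urandom(32), os.urandom(32)
target = master_secret(weak_premaster(12345), client_random, server_random)

# Passive attacker: the hello randoms are public, so enumerate all
# 2**15 candidate premaster secrets and compare derived master secrets.
recovered = next(s for s in range(2**15)
                 if master_secret(weak_premaster(s), client_random, server_random) == target)
# recovered == 12345
```

The point is that however good the hello randoms are, they are inputs
the attacker already has; the only secret input is the premaster.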

>>>> How big were you planning to make those symmetric keys anyway?
>>> 48 bytes, as shown in the document.
>> So at best there are 384 bits of entropy in play?
> Yes.

Perhaps there could be more than that:

>      key_block = PRF(SecurityParameters.master_secret,
>                       "key expansion",
>                       SecurityParameters.server_random +
>                       SecurityParameters.client_random);

The master_secret could contain 384 BoE (bits of entropy) from the
premaster secret, in addition to 448 from the server and client randoms
(832 BoE total).

But last night (need to look at it again in the morning :-) I got the
impression that the internal structure of the PRF limited it to 672 BoE.

So, at least for plain RSA key exchange, perhaps one could get 672 BoE
going into the key block.

> Or the server can contribute all that is needed, if you are using the
> proposed extension.

It seems like that cross-checking is important. I try to think in terms
of a MitM replacing either the client or the server in the handshake.

>> I do appreciate a healthy safety margin, but there is some
>> complexity overhead and potential security risk.
> The complexity overhead is only paid for in systems that want to use
> these kinds of extensions. (Did I already say that?)


>> Your point about "potentially would allow an attacker who had
>> partially compromised the inputs to the master secret calculation
>> greater scope for influencing the output" is significant.
> So does the sentence that follows it: "Hash-based PRFs like the one
> used in TLS master secret calculations are designed to be fairly
> indifferent to the input size."

The hash attacks I'm familiar with don't work if the input is
constrained to be under a block or two. Allowing arbitrary-length
attacker-supplied inputs makes more manipulations possible.

>> Some kinds of hash attacks let the attacker mess with the bytes
>> earlier in the handshake by placing colliding blocks in the Hello
>> Extensions.
> Are you saying that that affects the HMAC-based PRF calculation in
> TLS 1.2? If so, this is a pretty significant flaw in TLS that no one
> else has noticed. Note that there have been papers by well-known
> cryptographers saying that the HMAC construction does not have the
> weakness you ascribe to it here.

I was thinking of TLS 1.0, since that's what seems to be mostly used.

This is where my authority to speak goes off a cliff. But that never
stopped me before. Anyway, it looks to me like HMAC PRF in TLS 1.0
amounts to little more than the better of the two hash functions, which
in this case is SHA-1.
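For reference, the TLS 1.0 PRF (RFC 2246, section 5) XORs an MD5-based
expansion of one half of the secret with a SHA-1-based expansion of the
other half; a minimal sketch:

```python
import hmac, hashlib

def p_hash(hash_fn, secret, seed, length):
    # P_hash expansion, RFC 2246 section 5
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hash_fn).digest()
        out += hmac.new(secret, a + seed, hash_fn).digest()
    return out[:length]

def tls10_prf(secret, label, seed, length):
    # Split the secret into two (possibly overlapping) halves.
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[len(secret) - half:]
    md5_part = p_hash(hashlib.md5, s1, label + seed, length)
    sha_part = p_hash(hashlib.sha1, s2, label + seed, length)
    return bytes(x ^ y for x, y in zip(md5_part, sha_part))
```

Because the output is the XOR of the two streams, an attacker has to
defeat whichever of MD5/SHA-1 holds up better, which is the sense in
which it amounts to the better of the two.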

>> It also requires clients and servers to buffer the stuff and later 
>> implement a sorting algorithm based on type order.
> By "later" you mean "still within the same handshake sequence" and by
> "sorting algorithm" you mean "ascending order of two-byte numbers",
> yes?


>> Lots could go wrong with that. Pathological sort order DoS attacks,
>> etc.
> Please explain further. The TLS spec says that you can only have one
> instance of an extension in a handshake. Thus, all TLS
> implementations today already should be looking for multiple
> instances of the same extension. What is left is a relatively small
> number of extension values, each of which has a unique two-byte
> number associated with it.

Well, you could check for duplicated extension IDs with a hash table,
which gives O(1) expected time per lookup (O(n) space overall).

Sorting is a little less predictable.
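The duplicate check I have in mind is just a set membership test;
a minimal sketch, with extension IDs as plain integers:

```python
def has_duplicate_extensions(ext_ids):
    # One pass; each membership test and insert is O(1) expected time.
    seen = set()
    for ext in ext_ids:
        if ext in seen:
            return True
        seen.add(ext)
    return False
```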

> Where does "pathological sort order DoS
> attacks" or "etc" come in?

For example, once upon a time Perl used classic quicksort. Quicksort has
this worst-case of O(n**2). Sending thousands of extensions ordered in
the worst way can consume a bunch of CPU (the kind which might not be
offloaded onto crypto accelerator hardware).
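As an illustration of the worst case (not a claim about any particular
TLS stack): classic quicksort with a first-element pivot degrades to
n*(n-1)/2 comparisons on already-sorted input.

```python
def quicksort_comparisons(a):
    # Classic quicksort, first element as pivot; returns comparison count.
    if len(a) <= 1:
        return 0
    pivot = a[0]
    less = [x for x in a[1:] if x < pivot]
    more = [x for x in a[1:] if x >= pivot]
    return (len(a) - 1) + quicksort_comparisons(less) + quicksort_comparisons(more)

# Already-sorted input is the worst case for this pivot choice:
# every partition is maximally unbalanced.
n = 200
worst = quicksort_comparisons(list(range(n)))   # n*(n-1)/2 = 19900
```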

I'm not saying this is a significant argument against new features in
general, only that adding new complexity adds new attack possibilities.

So it needs a bit of justification.

I'm not sure where that justification lies among the differences between
224/384/448/672/832 bits of entropy when the goal is setting up 128- or
256-bit encryption.

- Marsh