Re: [TLS] New drafts: adding input to the TLS master secret

Dean Anderson <> Mon, 08 February 2010 19:28 UTC

Return-Path: <>
Received: from localhost (localhost []) by (Postfix) with ESMTP id 499373A7265 for <>; Mon, 8 Feb 2010 11:28:16 -0800 (PST)
X-Virus-Scanned: amavisd-new at
X-Spam-Flag: NO
X-Spam-Score: -2.114
X-Spam-Status: No, score=-2.114 tagged_above=-999 required=5 tests=[AWL=0.485, BAYES_00=-2.599]
Received: from ([]) by localhost ( []) (amavisd-new, port 10024) with ESMTP id 3dd8eb5RLHCp for <>; Mon, 8 Feb 2010 11:28:14 -0800 (PST)
Received: from ( []) by (Postfix) with ESMTP id 9B1F63A7479 for <>; Mon, 8 Feb 2010 11:28:14 -0800 (PST)
Received: from ( []) (authenticated bits=0) by (8.12.11/8.12.11) with ESMTP id o18JTCGK012432 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO); Mon, 8 Feb 2010 14:29:13 -0500
Date: Mon, 8 Feb 2010 14:29:12 -0500 (EST)
From: Dean Anderson <>
To: Eric Rescorla <>
In-Reply-To: <>
Message-ID: <>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Cc:, Paul Hoffman <>
Subject: Re: [TLS] New drafts: adding input to the TLS master secret
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: "This is the mailing list for the Transport Layer Security working group of the IETF." <>
List-Unsubscribe: <>, <>
List-Archive: <>
List-Post: <>
List-Help: <>
List-Subscribe: <>, <>
X-List-Received-Date: Mon, 08 Feb 2010 19:28:16 -0000

On Sat, 6 Feb 2010, Eric Rescorla wrote:

> At Sat, 6 Feb 2010 17:20:05 -0500 (EST), Dean Anderson wrote:
> > 
> > On Wed, 3 Feb 2010, Eric Rescorla wrote:
> > 
> > > Moreover, the the purpose of the random values is *not* to add
> > > extra cryptographic strength, since they're not secret. Rather,
> > > it's to ensure uniqueness of the handshake master secret even if
> > > the PMS is repeated. However, the bound here is just the collision
> > > bound for the random values, which doesn't require anything like
> > > 224 bits.
> > > 
> > > I'm not really against this extension, but I'm not aware of any
> > > coherent security argument for it.
> > 
> > Thinking it over, I have some concerns about the security
> > implications on the master_secret calculation.  I didn't think about
> > this until Paul Hoffman wanted to alter and increase the size the
> > client and server random numbers. I have to dispute that the client
> > random and server random need not be 'cryptographically random'
> > instead of known but random values.
> I'm not sure what you're disputing. Since both values are published,
> the primary value of having them be cryptographically random is to
> make them unpredictable in advance to the attacker. However, I am
> unaware of any security analysis that relies on this property.

"cryptographically random" was a poor term. It confuses 'cryptographic
PRNG' with 'truly random' or zero-information numbers. I mean the

I'm disputing the assertion that these random numbers don't need to be
truly random. To recast it as a positive assertion: these randoms have
to be either truly random or truly non-cryptographically random; they
cannot be part of a pseudo-random sequence. In particular, they cannot
come from the PRNG that the pre_master_secret comes from.

The primary purpose of these random numbers (unique master_secrets) is
unimportant. If they are not truly random, they expose the generator's
pseudo-random sequence. This is an unintended (but significant)
consequence.
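To illustrate the exposure (a toy sketch, not TLS code; the generator,
seed, and variable names here are all hypothetical): with a linear
congruential generator, a single revealed output *is* the entire
internal state, so an eavesdropper can predict every later "secret"
output.

```python
class LCG:
    """Toy linear congruential generator (glibc-style constants)."""
    M, A, C = 2**31, 1103515245, 12345

    def __init__(self, seed):
        self.state = seed

    def next(self):
        self.state = (self.A * self.state + self.C) % self.M
        return self.state

server = LCG(seed=123456789)
public_nonce = server.next()    # revealed on the wire as a "random"
secret_value = server.next()    # meant to stay secret

# An eavesdropper who saw the nonce can clone the generator,
# because the published output equals the internal state:
attacker = LCG(seed=0)
attacker.state = public_nonce
assert attacker.next() == secret_value
```

A real implementation's PRNG is harder to clone than this, but the
structural problem is the same: publishing raw generator output leaks
generator state.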

> > And I have a concern that they are not secret.  The only
> > entropy in the calculation is the pre_master_secret.
> >
> > First, I am concerned that the entropy of the pre_master_secret is
> > reduced by presence of the revealed random number.  In the absence
> > of a perfect random number generator, the pre_master_secret and the
> > random number are not independent.
> Well, this isn't true for the server half of the static RSA cipher
> suites, since the server doesn't contribute to the PMS at all.

True, the server random doesn't expose the client's pre_master_secret in
that _particular_ calculation. But the server IS still exposing its
(possibly dependent) random numbers. If you collect enough of these
through snooping, you can predict its pre_master_secret for connections
originating from that machine. The same goes for the client machine.
Server machines tend to act as both client and server at different
times.

> In any case, this is an argument for *not* generating the Random
> values with a cryptographic PRNG since then you would not be leaking
> information about the PRNG state in the random values.

Yes. That's my point. By 'cryptographic PRNG' you mean a PRNG that is
hard to predict. But given enough of its sequence and enough compute
power, all PRNGs are predictable.
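For instance (a hedged sketch with made-up numbers): if the generator is
seeded with too little real entropy, an attacker who has captured the
public handshake randoms can simply brute-force the seed and then
predict the next, supposedly secret, output.

```python
import random

SEED_SPACE = 2**16   # assume only 16 bits of real seed entropy

def handshake_randoms(seed):
    """Model of an implementation drawing its public randoms
    and its secrets from one seeded PRNG."""
    rng = random.Random(seed)
    return [rng.getrandbits(32) for _ in range(2)]

true_seed = 40123
observed = handshake_randoms(true_seed)   # snooped off the wire

# Attacker replays every candidate seed against the observed outputs.
recovered = next(s for s in range(SEED_SPACE)
                 if handshake_randoms(s) == observed)

# Fast-forward past the observed outputs; the next draw -- the
# would-be secret -- is now fully determined.
rng = random.Random(recovered)
rng.getrandbits(32)
rng.getrandbits(32)
predicted_next = rng.getrandbits(32)
```

With a full-entropy seed the search is infeasible, which is exactly why
the quality of the seed (not the cleverness of the PRNG) is the limiting
factor.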

> > I think the rule of thumb is never to give out random numbers to
> > potential attackers if your random numbers aren't perfectly random,
> > and it's hard in practice to have perfectly random numbers. So this
> > seems like a crack in security in many implementations that could
> > potentially be exploited.
> I am not aware of any such rule of thumb. In fact, this design is a
> fairly standard feature of cryptographic protocols.

I'll see if I can find a better reference.  I think the cryptographic
fact I'm looking for is that a pseudo-random sequence is compromised by
having part of the sequence known. Having more of the sequence known
makes it even easier to compromise.
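A well-known concrete instance of that fact (a sketch using Python's
Mersenne Twister as a stand-in for an implementation's PRNG, not
anything TLS-specific): 624 observed 32-bit outputs can be "untempered"
into the generator's full internal state, after which every later
output is predictable.

```python
import random

def undo_xor_rshift(y, shift):
    # Invert y ^= y >> shift by iterating until all 32 bits settle.
    x = y
    for _ in range(32 // shift + 1):
        x = y ^ (x >> shift)
    return x

def undo_xor_lshift_mask(y, shift, mask):
    # Invert y ^= (y << shift) & mask, low bits upward.
    x = y
    for _ in range(32 // shift + 1):
        x = y ^ ((x << shift) & mask)
    return x & 0xFFFFFFFF

def untemper(y):
    # Reverse MT19937's output tempering to recover a state word.
    y = undo_xor_rshift(y, 18)
    y = undo_xor_lshift_mask(y, 15, 0xEFC60000)
    y = undo_xor_lshift_mask(y, 7, 0x9D2C5680)
    y = undo_xor_rshift(y, 11)
    return y

victim = random.Random(1234)
observed = [victim.getrandbits(32) for _ in range(624)]  # all public

state = [untemper(o) for o in observed]
clone = random.Random()
clone.setstate((3, tuple(state) + (624,), None))

# The clone now predicts the victim's "secret" future outputs.
assert clone.getrandbits(32) == victim.getrandbits(32)
```

The more of the sequence you publish, the sooner an observer crosses
this threshold.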

> This isn't how PRFs like the TLS PRF work. Adding new known data does
> not dilute the existing entropy.

For SHA-256, I agree--or at least I can't dispute that at this time*.  
But recall that extensions can specify different PRFs. Perhaps it should
be documented somewhere that adding new known data must not dilute the
existing entropy.

[*Actually, I suspect it isn't true. For the algorithm to be reversible,
certain other things have to be true. The object of a hash function is
to be irreversible, but none of them ever truly achieve that property,
just the property of being hard to reverse, which doesn't affect the
properties that must hold if they are reversible. If you assume they are
reversible, and add more known data to the limited output (which began
at maximum entropy), then the entropy of the output must be reduced. To
put it another way: if you start with a lump of iron (at maximum
entropy) and beat it into an iron pot of the same mass (known input),
entropy is reduced. Knowing the exact sequence of blows that produced
the pot reveals information about the lump. But this is no
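
For reference on the point being debated above, a hedged sketch of the
TLS 1.2 PRF (P_SHA256 per RFC 5246; the input values below are
illustrative): the public client/server randoms enter only as HMAC
*message* input, keyed by the secret, which is the structural reason
adding known data is not supposed to dilute the secret's entropy.

```python
import hmac, hashlib

def p_sha256(secret, seed, length):
    """P_hash from RFC 5246 section 5, instantiated with SHA-256."""
    out = b""
    a = seed                                              # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()  # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def tls12_prf(secret, label, seed, length):
    return p_sha256(secret, label + seed, length)

# Illustrative inputs (not real handshake values):
pms = b"\x03\x03" + b"\x11" * 46          # example pre_master_secret
client_random = b"\xaa" * 32              # public
server_random = b"\xbb" * 32              # public
master_secret = tls12_prf(pms, b"master secret",
                          client_random + server_random, 48)
assert len(master_secret) == 48
```

Any extension-defined PRF would need the same keyed structure for the
"no dilution" property to carry over, which is the documentation gap
noted above.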


Av8 Internet   Prepared to pay a premium for better service?         faster, more reliable, better service
617 256 5494