Re: [TLS] New drafts: adding input to the TLS master secret

Dean Anderson <> Fri, 12 February 2010 19:28 UTC

Date: Fri, 12 Feb 2010 14:29:38 -0500 (EST)
From: Dean Anderson <>
To: Marsh Ray <>
Subject: Re: [TLS] New drafts: adding input to the TLS master secret

On Wed, 10 Feb 2010, Marsh Ray wrote:

> Dean Anderson wrote:
> > I agree with Martin Rex, I think a counter should be used instead of a 
> > random number.
> IMHO, the solution to the design problem of insufficient entropy is not
> to use a sequence counter.

The design problem is not insufficient entropy. The problem is that TLS
exposes information that does not need to be exposed.

I propose a design rule: a PRNG should never be used to obtain a
unique value. A counter or a date should be used instead.
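A minimal Python sketch of that rule: unique-but-public values come from a timestamp plus a counter, never from the secure RNG, so no RNG state ever reaches the wire. The function name and layout here are illustrative, not from any spec.

```python
# Sketch of the proposed design rule: a unique public value built from a
# counter and a date, drawing nothing from the secure RNG.
import itertools
import time

_counter = itertools.count()

def next_unique_value() -> bytes:
    """32 public bytes: 8-byte timestamp + 8-byte counter + zero padding."""
    ts = int(time.time()).to_bytes(8, "big")
    n = next(_counter).to_bytes(8, "big")
    return ts + n + b"\x00" * 16

a, b = next_unique_value(), next_unique_value()
assert a != b and len(a) == 32  # unique, fixed-size, reveals no RNG bits
```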

> >> Any competent implementer would know that they have to keep their
> >> secure RNG seeded to a reasonable degree.
> > 
> > Actually, secure RNG isn't always an implementer decision, but an
> > operations option. For example, Apache has an option to turn off the
> > waiting on the secure random generator.  Otherwise, it hangs until it
> > gets enough entropy. I suspect there may be a lot of apache sites
> > without sufficient entropy.
> Unless the documentation is misleading, they deserve what they get.

Your lack of sympathy is misplaced. An attack can be used to deplete the 
entropy pool. 

> To me, it's far more plausible that, if some of the random data were
> replaced by sequences, people would very easily shoot themselves in the
> foot with some obscure misconfiguration. Not needing that random data is
> only safe in a very specific situation. Admins are used to configuring
> these things, and the actual parameters are not known by both sides
> until after the random has been sent.

This is odd. You put so much trust in the security of HMAC and, by
implication, the SHA-256-based PRF, so you can't really argue that the
master_secret calculated by

  PRF(pre_master_secret + client.counter + "master_secret" +

is going to be any less secure than it was before with the *.randoms.

Using a counter reveals /nothing/ about pre_master_secret.  Using a 
random reveals /something/ about pre_master_secret.  Security is 
improved by revealing /nothing/ about pre_master_secret.
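To make the argument concrete, here is a minimal Python sketch of the TLS 1.2 PRF (P_SHA256 from RFC 5246). The randoms enter only as the public seed; the hypothetical counter variant (Dean's proposal, not standard TLS) changes nothing about how pre_master_secret itself is protected by HMAC.

```python
# Sketch of the TLS 1.2 PRF (RFC 5246 P_hash with HMAC-SHA256). The
# counter-based seed below is a hypothetical variant for illustration.
import hmac
import hashlib

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    """P_hash from RFC 5246, instantiated with HMAC-SHA256."""
    out = b""
    a = seed  # A(0)
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()           # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

pre_master_secret = b"\x03\x03" + b"\x11" * 46  # stand-in secret

# Standard TLS 1.2: seed = label + client.random + server.random
random_seed = b"master secret" + b"\xaa" * 32 + b"\xbb" * 32
ms_random = p_sha256(pre_master_secret, random_seed, 48)

# Hypothetical counter variant: seed = label + client.counter + server.counter
counter_seed = b"master secret" + (1).to_bytes(32, "big") + (2).to_bytes(32, "big")
ms_counter = p_sha256(pre_master_secret, counter_seed, 48)

# Either way, a 48-byte master secret keyed by the same secret input.
assert len(ms_random) == len(ms_counter) == 48
```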

> I'm a fan of continual seeding as entropy becomes available.
> Are there really important systems with no chaotic interrupts or other
> sensors and no high-precision timers which also need to perform TLS
> handshakes?

There are indeed servers that have to make TLS connections wait for
entropy to become available. And as I pointed out, that entropy pool can
be attacked. Hence the Apache configuration option, and the ability in
the Linux kernel to continue without sufficient entropy.
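As a hedged, Linux-specific sketch of what that looks like in practice: the kernel exposes its entropy estimate via procfs, and on older kernels /dev/random could block when the estimate was low, while /dev/urandom (what os.urandom reads) never blocks, which is the "continue without waiting" path.

```python
# Linux-specific sketch: inspect the kernel's entropy estimate and take
# the non-blocking path. Falls back gracefully on other platforms.
import os

def entropy_estimate() -> int:
    """Kernel's current entropy estimate in bits, or -1 if unavailable."""
    try:
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read().strip())
    except OSError:
        return -1

bits = entropy_estimate()
key = os.urandom(32)  # non-blocking, regardless of the estimate
assert len(key) == 32
```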

> > But in practice, there isn't always sufficient entropy for secure RNG.
> I am concerned that permitting that case (i.e., trying to make it valid
> in retrospective analysis) is unwise. It is likely to violate some
> implicit and under-documented assumptions made in the design of SSL/TLS.
> (Not to mention the cryptosystems above and below it on the protocol
> stack and services on the same OS kernel).

It is way too late to prevent that practice. The problem is that getting
entropy is actually very hard. Timers and I/O pins are often insufficient
in any case.

> > In that case, there is a crucial exposure of sequence bits that doesn't
> > need to happen.
> > 
> > If one didn't expose all those bits from the PRNG, there wouldn't be a
> > break on the PRNG even with less entropy: There would be no way to test
> > bits from the PRNG to find its state.
> > I'd say what I've exposed is a design flaw in the way crypto is being
> > used, not a purely cryptographic weakness.  My assertion comes down to
> > this: If PRNG bits aren't exposed to sniffers, they can't be exploited
> > by snifffers. So lets not expose PRNG bits.
> Why stop there? Doesn't that also apply to any ciphertext generated from
> not-perfectly-random plaintext?

Yes and no.  I believe the recommendation is that one should never
encrypt the same plaintext under two different ciphers, and particularly
not under the same cipher with different keys, as this exposes a
differential attack. On that condition, the information-theoretic answer
is yes.  But you said 'not-perfectly-random', which might be different
from re-encrypting the same plaintext.

> >> Dividing it between two separate pools would probably end
> >> up diluting it in one or the other.
> > 
> > I agree. A counter should be used, not another PRNG.
> Which is the ultimate form of dilution in one pool.

Importantly, though, it doesn't dilute the pool that the
pre_master_secret is drawn from.

> >> For example, many OSes collect and pool entropy in the kernel where it
> >> can be shared by all applications. This is essentially the opposite of
> >> your suggestion.
> > 
> > Yes. That is the best of a bad choice, too.  The pool was used because
> > its hard to collect sufficient entropy. But allowing others to get bits
> > from the pool opens a way to reduce entropy available
> I understand the entropy pool depletes over time, in theory.

No, it happens in real life. If Apache is configured to wait, you wait.
Anywhere from seconds to several minutes, sometimes.

> But it intuitively seems like a pool that had a healthy amount of
> entropy merged in (I'm thinking a thousand or more bits) with a strong
> mixing function can be expected hold up well. Particularly if a little
> additional entropy is mixed in over time.

One can't expand entropy. One can expand the sequence with a PRNG, but
the PRNG has the same entropy as the seed.
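A tiny Python illustration of that point: a deterministic PRNG seeded with the same value produces an identical stream no matter how much output is drawn, so the output carries no more entropy than the seed. Python's Mersenne Twister stands in for any deterministic generator here.

```python
# "One can't expand entropy": same seed in, identical stream out,
# however long the stream. The generator only stretches the seed.
import random

seed = 0xC0FFEE  # all the entropy there is
a = random.Random(seed)
b = random.Random(seed)

stream_a = [a.getrandbits(32) for _ in range(1000)]
stream_b = [b.getrandbits(32) for _ in range(1000)]

# 32,000 output bits, yet only as unpredictable as the seed itself.
assert stream_a == stream_b
```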

> If you know of any remotely plausible attacks against a well-constructed
> PRNG with 1024 bits of entropy I would like to read about them.
> [cue specter of self-aware and omipotent NSA quantum computer]  :-)
> For example, OpenSSL has this line:
> #define ENTROPY_NEEDED 32 /* require 256 bits = 32 bytes of randomness*/
> It requires that much at startup and then quits counting. It doesn't
> care about deducting from the estimated entropy once the initial
> threshold has been reached:
>   * Once we've had enough initial seeding we don't bother to
>   * adjust the entropy count, though, because we're not ambitious
>   * to provide *information-theoretic* randomness.

The openssl command-line tool typically starts, does one thing, and
quits.  Applications that use the OpenSSL libraries (like Apache)
continue to get new random numbers from the kernel.

> > and cause servers to either wait (or proceed without sufficient
> > entropy).  The way TLS is currently arranged, TLS really screws
> > those who don't have sufficient entropy all the time.
> You might as well say "TLS really screws with those who don't have a
> reliable transport available".

It doesn't. That's why it uses TCP instead of UDP.

> It's just a design requirement.

I suppose Colt might have a design requirement: "Inability to shoot foot
with gun"  ;-)  Reality intrudes. Of course, trigger safety was a good
idea.

By contrast, making the *.randoms a counter is an easy thing to do;  it
prevents the unnecessary exposure of information.

> It seems like you're constructing this strawman of "TLS without a
> significant entropy source" then knocking him down and saying we
> should modify the design for this case.
> > And TLS _might_ still screw those who do.
> That part I don't understand.

I mean, there might be an attack against SHA-256, as there was against
SHA-1, MD5, MD4... (every MAC in history has been cracked).

However, even if it were cracked, if it's impossible to get the state of
the PRNG because you can't see any output, it wouldn't matter.
pre_master_secret could be a constant, if the constant were kept secret
from the sniffer and the sniffer didn't know it was a constant.

The sniffer needs sufficient cleartext output of the PRNG to obtain its
state.

Exposing the PRNG output creates the possibility that a sniffer can
calculate the pre_master_secret, in which case it isn't a secret.  By
removing the client.random and server.random, it becomes impossible for
a sniffer to calculate the pre_master_secret this way.
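A toy Python demonstration of exactly this attack: if the PRNG's seed has little entropy and one output (playing the role of client.random) is sent in the clear, a sniffer can brute-force the seed and then predict every later output from the same generator. The seed space is deliberately tiny here, and Python's Mersenne Twister stands in for a weakly-seeded PRNG.

```python
# Toy sniffer attack: recover a low-entropy PRNG seed from one exposed
# output, then predict the next "secret" output. Illustrative only.
import random

SEED_BITS = 16  # deliberately tiny seed space for the demo

# Victim: seeds with too little entropy, exposes one output on the wire,
# then draws a value that is supposed to stay private.
victim = random.Random(0x1234)
exposed_client_random = victim.getrandbits(256)  # visible to the sniffer
secret_value = victim.getrandbits(256)           # meant to stay private

# Sniffer: tries every possible seed until the exposed output matches,
# then reads off the victim's next output.
recovered = None
for guess in range(2 ** SEED_BITS):
    candidate = random.Random(guess)
    if candidate.getrandbits(256) == exposed_client_random:
        recovered = candidate.getrandbits(256)
        break

assert recovered == secret_value  # the "secret" was never secret
```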


Av8 Internet - Prepared to pay a premium for better service?
Faster, more reliable, better service.
617 256 5494