Re: [TLS] New drafts: adding input to the TLS master secret

Marsh Ray <> Thu, 11 February 2010 04:56 UTC

Date: Wed, 10 Feb 2010 22:57:27 -0600
From: Marsh Ray <>
To: Dean Anderson <>

Dean Anderson wrote:
> I agree with Martin Rex, I think a counter should be used instead of a 
> random number.

IMHO, the solution to the design problem of insufficient entropy is not
to use a sequence counter.

>> Any competent implementer would know that they have to keep their
>> secure RNG seeded to a reasonable degree.
> Actually, secure RNG isn't always an implementer decision, but an
> operations option. For example, Apache has an option to turn off the
> waiting on the secure random generator.  Otherwise, it hangs until it
> gets enough entropy. I suspect there may be a lot of apache sites
> without sufficient entropy.

Unless the documentation is misleading, they deserve what they get.

In particular, they'd better hope the administrator of the other peer
wasn't hoping to freeload on the random data contribution, too.

> However, the 40% referred to:
>   640 bits are taken from the PRNG:
>      because pre_master_secret (384) + client.random(256) = 640
>   256 bits are exposed to the sniffer. 256/640 = .4
> So 40% of the sequence bits are exposed to sniffers.  I assume the
> sniffer can see every TLS connection.

If that represents a problem, then either there was really lousy seeding
or really interesting attack on the PRNG.

If I used AES-128 to transfer (128 * 0.40) = ~51 bits of plaintext known
to the attacker, should I be concerned that he could recover the key?

How does the method used to generate the IV factor in?

How does the method used to generate the MAC secret factor in?

What if I need to transfer 8 Kbits of known plaintext at the beginning
of the connection? How much entropy would need to be fed into the key
material in order to be safe?

I don't have solid answers to these.

> For LFSR, I /think/ 40% was enough to recover the whole sequence (fact
> check, I'm not sure).

My impression is that an LFSR is greatly inferior to the TLS PRNG, which
is built from repeated HMACs with MD5 and SHA-1.
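
For reference, that construction (the TLS 1.0 PRF of RFC 2246, section
5) can be sketched in a few lines of Python. This is an illustrative
sketch, not a vetted implementation:

```python
import hashlib
import hmac

def p_hash(hash_mod, secret, seed, length):
    """P_hash from RFC 2246: expand secret+seed via chained HMACs."""
    a = seed                                              # A(0) = seed
    out = b""
    while len(out) < length:
        a = hmac.new(secret, a, hash_mod).digest()        # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hash_mod).digest()
    return out[:length]

def tls10_prf(secret, label, seed, length):
    """TLS 1.0 PRF: P_MD5 over one half of the secret, XORed with
    P_SHA-1 over the other half (halves overlap if the length is odd)."""
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]
    md5_part = p_hash(hashlib.md5, s1, label + seed, length)
    sha1_part = p_hash(hashlib.sha1, s2, label + seed, length)
    return bytes(x ^ y for x, y in zip(md5_part, sha1_part))
```

Every output byte depends on both HMAC chains, so the design intent is
that predicting further output from exposed output requires defeating
HMAC-MD5 and HMAC-SHA-1 together; nothing like clocking an LFSR forward
from 40% of its sequence.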

> So, it might be plausible that this amount might
> be enough to recover the pseudo random sequence, too.

Anything might be plausible.

To me, it's far more plausible that, if some of the random data were
replaced by sequences, people would very easily shoot themselves in the
foot with some obscure misconfiguration. Not needing that random data is
only safe in a very specific situation. Admins are used to configuring
these things, and the actual parameters are not known by both sides
until after the random has been sent.

Ideally it would not be trusted until after the Finished messages have
been validated.

> Particularly if
> there isn't sufficient entropy in the key to start with.  The 256 bits of
> exposed random give an attacker a means to check on the state of the

I'm a fan of continual seeding as entropy becomes available.

Are there really important systems with no chaotic interrupts or other
sensors and no high-precision timers that also need to perform TLS?

Maybe some embedded system somewhere? These days even the smallest
microcontrollers have an abundance of timers and IO pins and often
several A/D inputs.

>> You haven't identified an actual weakness, only asserted that there
>> could be one.
> Yes.  I haven't identified a purely cryptographic weakness. I haven't
> found a break in sha256, particularly not if a sufficient amount of key
> entropy is used. And if everyone strictly follows totally secure practices
> on key entropy and there are no sha-1,md5 similar breaks in sha256, then
> indeed there is no purely cryptographic problem.
> But in practice, there isn't always sufficient entropy for secure RNG.

I am concerned that permitting that case (i.e., trying to make it valid
in retrospective analysis) is unwise. It is likely to violate some
implicit and under-documented assumptions made in the design of SSL/TLS.
(Not to mention the cryptosystems above and below it on the protocol
stack and services on the same OS kernel).

> In that case, there is a crucial exposure of sequence bits that doesn't
> need to happen.
> If one didn't expose all those bits from the PRNG, there wouldn't be a
> break on the PRNG even with less entropy: There would be no way to test
> bits from the PRNG to find its state.
> I'd say what I've exposed is a design flaw in the way crypto is being
> used, not a purely cryptographic weakness.  My assertion comes down to
> this: If PRNG bits aren't exposed to sniffers, they can't be exploited
> by sniffers. So let's not expose PRNG bits.

Why stop there? Doesn't that also apply to any ciphertext generated from
not-perfectly-random plaintext?

>> Dividing it between two separate pools would probably end
>> up diluting it in one or the other.
> I agree. A counter should be used, not another PRNG.

Which is the ultimate form of dilution in one pool.

>> For example, many OSes collect and pool entropy in the kernel where it
>> can be shared by all applications. This is essentially the opposite of
>> your suggestion.
> Yes. That is the best of a bad choice, too.  The pool was used because
> it's hard to collect sufficient entropy. But allowing others to get bits
> from the pool opens a way to reduce entropy available

I understand the entropy pool depletes over time, in theory.

But intuitively it seems like a pool that has had a healthy amount of
entropy merged in (I'm thinking a thousand or more bits) with a strong
mixing function can be expected to hold up well, particularly if a
little additional entropy is mixed in over time.
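
The kind of construction I have in mind looks something like this toy
sketch (names and structure are my own illustration, deliberately
simplified; real systems should use the OS CSPRNG):

```python
import hashlib

class ToyPool:
    """Illustrative hash-based entropy pool: inputs are folded into a
    fixed-size state with SHA-256, and outputs are derived from the
    state rather than exposing it directly. Not production code."""

    def __init__(self):
        self.state = bytes(32)
        self.counter = 0

    def mix(self, data):
        # One-way mixing: even an attacker who later learns outputs
        # cannot run the state backwards to recover earlier inputs.
        self.state = hashlib.sha256(self.state + data).digest()

    def read(self, n):
        # Outputs come from hashing state plus a counter, so the raw
        # pool state is never handed out on the wire.
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(
                self.state + self.counter.to_bytes(8, "big")
            ).digest()
        return out[:n]
```

Calling mix() periodically with interrupt timings or sensor noise is
the "little additional entropy over time" above.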

If you know of any remotely plausible attacks against a well-constructed
PRNG with 1024 bits of entropy I would like to read about them.
[cue specter of a self-aware and omnipotent NSA quantum computer]  :-)

For example, OpenSSL has this line:
#define ENTROPY_NEEDED 32 /* require 256 bits = 32 bytes of randomness*/

It requires that much at startup and then quits counting. It doesn't
care about deducting from the estimated entropy once the initial
threshold has been reached:
  * Once we've had enough initial seeding we don't bother to
  * adjust the entropy count, though, because we're not ambitious
  * to provide *information-theoretic* randomness.
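
Python's ssl module wraps these OpenSSL calls, so the behavior is easy
to poke at (a sketch; note that RAND_add's entropy argument is only the
caller's own estimate, which OpenSSL takes on faith):

```python
import os
import ssl

# OpenSSL will not report itself ready until the pool has met the
# ENTROPY_NEEDED threshold; RAND_status() exposes that flag.
if not ssl.RAND_status():
    # Top the pool up from the OS, which is where the entropy should
    # come from in the first place. Second argument: entropy estimate
    # in bytes.
    seed = os.urandom(32)
    ssl.RAND_add(seed, len(seed))

# Once the one-time threshold is met, output flows freely and the
# estimated entropy count is no longer decremented.
material = ssl.RAND_bytes(48)  # e.g. a 384-bit pre_master_secret
```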

> and cause servers
> to either wait (or proceed without sufficient entropy).  The way TLS is
> currently arranged, TLS really screws those who don't have sufficient
> entropy all the time.

You might as well say "TLS really screws with those who don't have a
reliable transport available".

It's just a design requirement.

It seems like you're constructing this strawman of "TLS without a
significant entropy source", knocking it down, and saying we should
modify the design for this case.

> And TLS _might_ still screw those who do.

That part I don't understand.

- Marsh