Re: [TLS] New drafts: adding input to the TLS master secret

Marsh Ray <> Tue, 09 February 2010 01:59 UTC

Dean Anderson wrote:
> On Mon, 8 Feb 2010, Eric Rescorla wrote:
>> If
>> you have any evidence that the standard PRNGs/PRFs fall significantly
>> short of this requirement, this would be a result worthy of
>> publication in a good conference.
> Actually, it appears to me that it is an unproven assumption of the
> next-bit test.

I think you'll find that all cryptography rests on assumptions that
aren't "proven" to some arbitrarily high standard.

> ("it" being that there is no polynomial time algorithm to
> calculate the next bit)

Starting with good old P != NP, of course.

> I believe the key phrase is "up to the input
> entropy of the PRNG". If you have enough of the sequence, you can get
> the whole sequence.  The TLS random exposes an essentially unlimited
> amount of the pseudo-random sequence.

I don't see where you get that. RFC 2246 says:
> Client hello
>   random_bytes
>        28 bytes generated by a secure random number generator.

To me that does not permit a conforming implementation to "expose an
essentially unlimited amount of the pseudo-random sequence".

Any competent implementer would know that they have to keep their
secure RNG seeded to a reasonable degree.
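As a sketch of what "conforming" looks like in practice (the field layout is from RFC 2246 section 7.4.1.2; using the OS CSPRNG via os.urandom is my assumption about a typical implementation, not something the RFC mandates):

```python
# Sketch: fill the TLS Random structure from the OS secure RNG rather
# than exposing raw output of some user-level PRNG stream.
import os
import struct
import time

def client_hello_random():
    # struct Random { uint32 gmt_unix_time; opaque random_bytes[28]; }
    gmt_unix_time = int(time.time())   # 4-byte timestamp field
    random_bytes = os.urandom(28)      # 28 bytes from a secure RNG
    return struct.pack("!I", gmt_unix_time) + random_bytes

assert len(client_hello_random()) == 32
```

The point being that os.urandom draws from a kernel pool that is continuously reseeded, so snooping ClientHello values does not hand an attacker a contiguous window into one predictable stream.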

> I don't assert any revelation about PRNG properties. I think all I have
> discovered is that in TLS, one can get an essentially unlimited amount
> of the PRNG sequence by mere snooping (no decryption).  The
> already-known properties of pseudo-random sequences are that one can
> compute the whole sequence given knowledge of an unlimited amount of the
> sequence because they are not random.

Perhaps the problem here is that the TLS RFCs are a bit loose with the
"secure RNG", "PRNG", and "PRF" labels? The RFC seems to jump between
them, expecting readers to be familiar with the construction and seeding
of a secure RNG. Sounds like a reasonable assumption to me.

IMHO, the details of establishing a secure RNG are out of scope for the RFC.

> The tests on for "cryptographic security" of PRBG are complicated, but
> roughly summarized as based on statistical expectations for the next bit
> in the sequence. The 'next-bit' test posits that there are no polynomial
> time algorithms for calculating the next bit.  But this is assumed, not
> proven property. It was also assumed to be true of Linear Feedback Shift
> Registers, but was found false.  There is nothing that proves the PRNG
> isn't predictable if you can see a whole lot of its sequence, or that
> calculating the next bit is computationally infeasible given a lot of
> sequence.

There's nothing that "proves" anything is random to some
arbitrarily-high standard. Indeed, literal religious schisms have been
based on disagreements about the determinism of the universe.

Rather than waste all that energy debating it all over again, you could
just post back here when you've got a useful result. Something like this:

> From "Handbook of Applied Cryptography" Section 5.1.1 on Pseudo Random
> Bit Generators:
>   "The output of a PRBG is not random; in fact the number of possible
>   output sequences is at most a small fraction 2^k/2^l, of all possible
>   binary sequences of length l."
> So one can see, as l gets larger with k constant, the fraction of all
> output sequences gets smaller.  The pre_master_secret is 384 bits and
> the Random value is presently 256 bits. So the random exposes 40% of the
> sequence.

2^256 / 2^384 is not anything like 40%. Looks to me like it's about
2^-128, i.e. on the order of 10^-39.
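The arithmetic is easy to check (a quick sketch; 40% would be a ratio of bit counts, 256/640, not a ratio of sequence counts):

```python
# Check the ratio of possible output sequences: 2^256 / 2^384.
from fractions import Fraction

ratio = Fraction(2**256, 2**384)
assert ratio == Fraction(1, 2**128)   # exactly 2^-128
print(float(ratio))                    # on the order of 1e-39
```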

> Paul's proposal increases the exposure. That seems like a bad
> idea.  Exposing any part of the sequence seems like a bad idea.
> Full enumeration of the PRBG is probably infeasible. But it is not clear
> that full enumeration is necessary to predict the sequence. It wasn't
> necessary to have very much of the sequence for LFSR.  It may indeed
> prove to be computationally infeasible, but it not proven so far.  Some
> generators may be better than others. Or not. Not so long ago, linear
> feedback shift registers were thought to be computationally infeasible
> to break and were the stream cipher for GSM, but then that was
> discovered to be wrong, and that calculation was really pretty easy.
> When all we need are unique master_secrets across sessions, this seems
> like an unnecessary risk.

You haven't identified an actual weakness, only asserted that there
could be one.

> Why not alter TLS to require a different
> pseudo-random sequence generator for Random values?

I didn't design it, of course, but I would say it's because the actual
input entropy is assumed to be finite and, by nature, impossible to
measure accurately. Dividing it between two separate pools would
probably end up diluting it in one or the other.

For example, many OSes collect and pool entropy in the kernel where it
can be shared by all applications. This is essentially the opposite of
your suggestion.

- Marsh