Re: [dsfjdssdfsd] What has gone wrong with RNGs in practice

Arnold Reinhold <> Mon, 18 November 2013 12:42 UTC


On Nov 16, 2013, at 7:32 PM, Russ Housley <> wrote:

> Arnold:
>>> == What has gone wrong in practice and led to actual working attacks:
>>> A. Not actually using randomness at all for something that needs some
>>> or all of the properties of a random bitstring.
>>> Example: Sony's implementation of ECDSA failed to actually change the
>>> k value between signatures; they just had a constant.[1]
>> ....
>>> * Underdocumented, underexplained randomness requirements.
>>> Before you sniff too loudly at Sony's mistake in [1]: Pretend that you
>>> are a programmer in a hurry looking at FIPS 186-2, or your favorite
>>> (early) standards-body description of DSA. How well does it explain
>>> the importance of making 'k' completely unpredictable for each
>>> message, and how well does it explain the consequences for failing to
>>> do so?
>> It has also been suggested that Sony's failure to generate unique k's could have been caused by a compiler that optimized the k = crypto_random(); call out of a loop. Whether or not that happened in the Sony case, the possibility should be addressed in any RNG standard, as the consequence of a repeated k is easy recovery of the private key. Perhaps there should be a requirement that two (or more) test signatures be generated at application startup to verify that independent k's are being generated, since code that worked initially could later be recompiled with different optimization settings for a new release.
> There should be a self-test part of the specification that detects this type of failure.  It should not be hard to turn the FIPS 140 test suite into a good-enough-to-continue / fail test.  This would involve generating 10,000 random bits and doing some statistical checks.
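To make the stakes concrete: given two textbook ECDSA signatures that share a k, the private key falls out of modular arithmetic alone. A minimal sketch, with no curve operations at all (n is the secp256k1 group order; the other values in the usage below are made up for illustration, and r is simply passed in, since a repeated k implies a repeated r):

```python
# Toy demonstration of private-key recovery from two ECDSA signatures
# that reuse the same nonce k. Only modular arithmetic on (r, s, z) is
# needed; no elliptic-curve code. Requires Python 3.8+ for pow(x, -1, n).

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

def sign(z, d, k, r):
    # Textbook ECDSA s-component: s = k^{-1} * (z + r*d) mod n,
    # where z is the message hash and d the private key.
    return (pow(k, -1, n) * (z + r * d)) % n

def recover_key(z1, s1, z2, s2, r):
    # With the same k (hence the same r) in both signatures:
    #   s1 - s2 = k^{-1} * (z1 - z2)      =>  k = (z1 - z2) / (s1 - s2)
    #   s1 = k^{-1} * (z1 + r*d)          =>  d = (s1*k - z1) / r
    k = ((z1 - z2) * pow(s1 - s2, -1, n)) % n
    return ((s1 * k - z1) * pow(r, -1, n)) % n
```

Usage: sign two different message hashes z1, z2 with the same (d, k, r); recover_key then returns d exactly.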

The FIPS 140 statistical test suite won't detect that a compiler has optimized an RNG call out of a loop; a special test is required here. Also, since any RNG likely to be recommended will either generate or whiten bits using a crypto primitive such as SHAx or AES, statistical tests on the output of such RNGs are of little worth. The output of these primitives is guaranteed to pass any such test even if their input has been tampered with to allow a small search space.
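A sketch of the kind of special test meant here, assuming a crypto_random()-style interface (the function name and draw count are illustrative, not from any standard): draw several values at startup and refuse to run if any two are equal, which catches both a hoisted call and a constant stub.

```python
import os

def crypto_random(nbytes=32):
    # Stand-in for whatever CSPRNG call the application actually uses.
    return os.urandom(nbytes)

def rng_repeat_self_test(rng=crypto_random, draws=4):
    """Startup self-test: draw several values and fail hard on any repeat.

    A statistical suite run on whitened output will not notice a call
    that a compiler hoisted out of a loop; a direct equality check on
    successive draws will.
    """
    seen = set()
    for _ in range(draws):
        v = rng()
        if v in seen:
            raise RuntimeError("RNG returned a repeated value; refusing to start")
        seen.add(v)
    return True
```

For 32-byte draws from a working CSPRNG the false-alarm probability is negligible, so the test can safely be fatal.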

The place where statistical tests would be very useful is in checking a true random noise source. Here a trustworthy entropy source SHOULD exhibit some deviation from pure randomness. Unfortunately, many true random bit generators have built-in whitening circuits that prevent downstream software from testing the underlying noise source. As I understand it, this is the case with Intel's x86 hardware RNG instructions. The lack of access to raw digitized noise makes such RNGs impossible to audit.

We are entering an era where many mission-critical internet devices use solid state mass storage and lack other peripherals that can be used to obtain entropy (such as audio inputs and cameras). There will be a strong tendency to rely solely on random bit generators built into billion-transistor CPUs or SoCs, which can have surreptitious back-doors inserted during production by organizations with enough clout.

Any standard that attempts to deliver real security must include some guidance on choosing hardware with more than one independent, auditable entropy source, each capable of delivering at least 128 bits of entropy at first power-up, when the device is brand new.
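One common way to combine such sources is to hash their concatenated outputs, so the seed stays unpredictable as long as any one source is honest, even if another was backdoored in production. A sketch with hypothetical source labels:

```python
import hashlib

def mix_entropy(sources):
    """Combine the outputs of several independent entropy sources.

    sources: list of (label, bytes) pairs, one per physical source.
    Hashing the concatenation (SHA-256 here) yields a seed that is
    strong if at least one input is unpredictable. The labels and the
    idea of exactly these sources are illustrative assumptions.
    """
    h = hashlib.sha256()
    for label, data in sources:
        # Length-prefix each field so source boundaries are unambiguous.
        h.update(len(label).to_bytes(4, "big"))
        h.update(label.encode())
        h.update(len(data).to_bytes(4, "big"))
        h.update(data)
    return h.digest()
```

Usage: mix_entropy([("ring_osc", raw1), ("avalanche_diode", raw2)]) returns a 32-byte seed; moving bytes between sources changes the digest because of the length prefixes.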

Arnold Reinhold