Re: [Cfrg] Fwd: Encryption is less secure than we thought - MIT News Office

David Jacobson <> Sun, 18 August 2013 00:37 UTC

Date: Sat, 17 Aug 2013 17:37:37 -0700
From: David Jacobson <>
List-Id: Crypto Forum Research Group <>

This is ancient news.  Back in 2004 I went to the NIST Workshop on 
Random Number Generation, which was convened to get reaction to a draft 
of ANSI X9.82 (about random number generation).  That draft really made 
the point that min-entropy was the preferred measure, and even had 
examples of why Shannon entropy, another measure they called "guessing 
entropy", and some other one were all flawed.  The flaws were all the 
same: you could find distributions that scored quite well on the measure 
in question, yet had one value that occurred with high probability.
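That failure mode is easy to reproduce numerically.  Here is a minimal 
sketch (the specific distribution is my own illustration, not an example 
from the X9.82 draft): a source that emits one fixed value half the time 
and otherwise picks uniformly among 2^20 alternatives has about 11 bits 
of Shannon entropy but only 1 bit of min-entropy.

```python
import math

# Hypothetical source (not from the X9.82 draft): one value occurs with
# probability 1/2; the remaining mass is spread uniformly over 2**20
# other outcomes.
n = 2**20
probs = [0.5] + [0.5 / n] * n

# Shannon entropy: average-case measure, -sum(p * log2(p))
shannon = -sum(p * math.log2(p) for p in probs)

# Min-entropy: worst-case measure, -log2(max(p))
min_entropy = -math.log2(max(probs))

print(f"Shannon entropy: {shannon:.2f} bits")    # 11.00
print(f"Min-entropy:     {min_entropy:.2f} bits")  # 1.00
```

An attacker guessing the most likely value succeeds half the time, which 
is exactly what the 1-bit min-entropy figure captures and the 11-bit 
Shannon figure hides.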

Personally, I'm quite fond of order-2 Rényi entropy.  There is a theorem 
that min-entropy is at least half the order-2 Rényi entropy, so if the 
order-2 Rényi entropy is decent, you can't have terrible min-entropy.  
I'll elaborate on why I like order-2 Rényi entropy some day when I have 
some spare time.  But this is off-topic.
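For what it's worth, that bound is easy to spot-check numerically.  A 
sketch (my own illustration of the theorem above, writing the order-2 
Rényi entropy as -log2 of the collision probability sum(p_i^2)):

```python
import math
import random

def min_entropy(probs):
    # Worst-case measure: -log2 of the most probable outcome
    return -math.log2(max(probs))

def renyi2(probs):
    # Order-2 Rényi entropy: -log2 of the collision probability sum(p_i**2)
    return -math.log2(sum(p * p for p in probs))

# The theorem: H_min >= H_2 / 2.  Reason: sum(p_i**2) >= max(p)**2,
# so H_2 <= 2 * H_min.  Spot-check on random distributions.
random.seed(2013)
for _ in range(10_000):
    weights = [random.random() for _ in range(16)]
    total = sum(weights)
    probs = [w / total for w in weights]
    assert min_entropy(probs) >= renyi2(probs) / 2

print("H_min >= H_2 / 2 held on all samples")
```

The practical upshot is the one stated above: a decent order-2 Rényi 
entropy rules out a catastrophically small min-entropy.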

   --David Jacobson

On 8/16/13 5:35 AM, Ben Laurie wrote:
> On 16 August 2013 07:44, Robert Moskowitz <> wrote:
>     So what are the thoughts about this here?  I have been asked to do
>     a little digging amongst my contacts.  The article seems to state
>     clearly that the secure communications work we do is not affected
>     by this result, but perhaps secure data objects and various
>     wireless password-passing technologies might be at risk?
> This seems to be restating essentially what Joe Bonneau said far more 
> readably a year ago: 
>     -------- Original Message --------
>       Encryption is less secure than we thought - MIT News Office
>     For 65 years, most information-theoretic analyses of cryptographic
>     systems have made a mathematical assumption that turns out to be
>     wrong.
>     Larry Hardesty, MIT News Office
>     Muriel Médard is a professor in the MIT Department of Electrical
>     Engineering. Photo: Bryce Vickmark
>     Information theory --- the discipline that gave us digital
>     communication and data compression --- also put cryptography on a
>     secure mathematical foundation. Since 1948, when the paper that
>     created information theory <> first appeared, most
>     information-theoretic analyses of secure schemes have depended on
>     a common assumption.
>     Unfortunately, as a group of researchers at MIT and the National
>     University of Ireland (NUI) at Maynooth demonstrated in a paper
>     presented at the recent International Symposium on Information
>     Theory (view PDF <>), that assumption is false. In a follow-up
>     paper being presented this fall at the Asilomar Conference on
>     Signals, Systems and Computers, the same team shows that, as a
>     consequence, the wireless card readers used in many keyless-entry
>     systems may not be as secure as previously thought.
>     In information theory, the concept of information is intimately
>     entwined with that of entropy. Two digital files might contain the
>     same amount of information, but if one is shorter, it has more
>     entropy. If a compression algorithm --- such as WinZip or gzip ---
>     worked perfectly, the compressed file would have the maximum
>     possible entropy. That means that it would have the same number of
>     0s and 1s, and the way in which they were distributed would be
>     totally unpredictable. In information-theoretic parlance, it would
>     be perfectly uniform.
>     Traditionally, information-theoretic analyses of secure schemes
>     have assumed that the source files are perfectly uniform. In
>     practice, they rarely are, but they're close enough that it
>     appeared that the standard mathematical analyses still held.
>     "We thought we'd establish that the basic premise that everyone
>     was using was fair and reasonable," says Ken Duffy, one of the
>     researchers at NUI. "And it turns out that it's not." On both
>     papers, Duffy is joined by his student Mark Christiansen; Muriel
>     Médard, a professor of electrical engineering at MIT; and her
>     student Flávio du Pin Calmon.
>     The problem, Médard explains, is that information-theoretic
>     analyses of secure systems have generally used the wrong notion of
>     entropy. They relied on so-called Shannon entropy, named after the
>     founder of information theory, Claude Shannon, who taught at MIT
>     from 1956 to 1978.
>     Shannon entropy is based on the average probability that a given
>     string of bits will occur in a particular type of digital file. In
>     a general-purpose communications system, that's the right type of
>     entropy to use, because the characteristics of the data traffic
>     will quickly converge to the statistical averages. Although
>     Shannon's seminal 1948 paper dealt with cryptography, it was
>     primarily concerned with communication, and it used the same
>     measure of entropy in both discussions.
>     But in cryptography, the real concern isn't with the average case
>     but with the worst case. A codebreaker needs only one reliable
>     correlation between the encrypted and unencrypted versions of a
>     file in order to begin to deduce further correlations. In the
>     years since Shannon's paper, information theorists have developed
>     other notions of entropy, some of which give greater weight to
>     improbable outcomes. Those, it turns out, offer a more accurate
>     picture of the problem of codebreaking.
>     When Médard, Duffy and their students used these alternate
>     measures of entropy, they found that slight deviations from
>     perfect uniformity in source files, which seemed trivial in the
>     light of Shannon entropy, suddenly loomed much larger. The upshot
>     is that a computer turned loose to simply guess correlations
>     between the encrypted and unencrypted versions of a file would
>     make headway much faster than previously expected.
>     "It's still exponentially hard, but it's exponentially easier than
>     we thought," Duffy says. One implication is that an attacker who
>     simply relied on the frequencies with which letters occur in
>     English words could probably guess a user-selected password much
>     more quickly than was previously thought. "Attackers often use
>     graphics processors to distribute the problem," Duffy says.
>     "You'd be surprised at how quickly you can guess stuff."
>     In their Asilomar paper, the researchers apply the same type of
>     mathematical analysis in a slightly different way. They consider
>     the case in which an attacker is, from a distance, able to make a
>     "noisy" measurement of the password stored on a credit card with
>     an embedded chip or a key card used in a keyless-entry system.
>     "Noise" is the engineer's term for anything that degrades an
>     electromagnetic signal --- such as physical obstructions,
>     out-of-phase reflections or other electromagnetic interference.
>     Noise comes in lots of different varieties: The familiar white
>     noise of sleep aids is one, but so are pink noise, black noise and
>     more exotic-sounding types, such as power-law noise or Poisson
>     noise.
>     In this case, rather than prior knowledge about the statistical
>     frequency of the symbols used in a password, the attacker has
>     prior knowledge about the probable noise characteristics of the
>     environment: Phase noise with one set of parameters is more
>     probable than phase noise with another set of parameters, which in
>     turn is more probable than Brownian noise, and so on. Armed with
>     these statistics, an attacker could infer the password stored on
>     the card much more rapidly than was previously thought.
>     "Some of the approximations that we're used to making, they make
>     perfect sense in the context of traditional communication," says
>     Matthieu Bloch, an assistant professor of electrical and computer
>     engineering at the Georgia Institute of Technology. "You design
>     your system in a framework, and then you test it. But for crypto,
>     you're actually trying to prove that it's robust to things you
>     cannot test. So you have to be sure that your assumptions make
>     sense from the beginning. And I think that going back to the
>     assumptions is something people don't do often enough."
>     Bloch doubts that the failure of the uniformity assumption means
>     that cryptographic systems in wide use today are fundamentally
>     insecure. "My guess is that it will show that some of them are
>     slightly less secure than we had hoped, but usually in the
>     process, we'll also figure out a way of patching them," he says.
>     The MIT and NUI researchers' work, he says, "is very constructive,
>     because it's essentially saying, 'Hey, we have to be careful.' But
>     it also provides a methodology to go back and reanalyze all these
>     things."
>     _______________________________________________
>     Cfrg mailing list
> <>