Re: [Cfrg] Fwd: Encryption is less secure than we thought - MIT News Office

Ben Laurie <ben@links.org> Fri, 16 August 2013 12:35 UTC

In-Reply-To: <520E10A2.2030908@htt-consult.com>
References: <32A53FD55709804D8533DD5D308A40A216B2F1A1CC@FHDP1LUMXC7V33.us.one.verizon.com> <520E10A2.2030908@htt-consult.com>
Date: Fri, 16 Aug 2013 08:35:22 -0400
Message-ID: <CAG5KPzw2ASbKGRyGFoFnoL1GdnyZSqgBLzaj6XWzE5Ks75NE5A@mail.gmail.com>
From: Ben Laurie <ben@links.org>
To: Robert Moskowitz <rgm-sec@htt-consult.com>
Cc: "cfrg@irtf.org" <cfrg@irtf.org>
Subject: Re: [Cfrg] Fwd: Encryption is less secure than we thought - MIT News Office
List-Id: Crypto Forum Research Group <cfrg.irtf.org>
List-Unsubscribe: <http://www.irtf.org/mailman/options/cfrg>, <mailto:cfrg-request@irtf.org?subject=unsubscribe>
List-Archive: <http://www.irtf.org/mail-archive/web/cfrg>
List-Post: <mailto:cfrg@irtf.org>
List-Help: <mailto:cfrg-request@irtf.org?subject=help>
List-Subscribe: <http://www.irtf.org/mailman/listinfo/cfrg>, <mailto:cfrg-request@irtf.org?subject=subscribe>
X-List-Received-Date: Fri, 16 Aug 2013 12:35:30 -0000

On 16 August 2013 07:44, Robert Moskowitz <rgm-sec@htt-consult.com> wrote:

>  So what are the thoughts about this here? I have been asked to do a
> little digging amongst my contacts. It seems clear that the secure
> communications work we do is not affected by this result, but perhaps
> secure data objects and various wireless password-passing technologies
> might be at risk?
>

This seems to be restating, essentially, what Joe Bonneau said far more
readably a year ago:
http://www.jbonneau.com/doc/B12-IEEESP-analyzing_70M_anonymized_passwords.pdf


>
> -------- Original Message --------
>
> Clipped from:
> http://web.mit.edu/newsoffice/2013/encryption-is-less-secure-than-we-thought-0814.html
>
> Encryption is less secure than we thought
>
> For 65 years, most information-theoretic analyses of cryptographic systems
> have made a mathematical assumption that turns out to be wrong.
>
> Larry Hardesty, MIT News Office
>
> Muriel Médard is a professor in the MIT Department of Electrical
> Engineering. Photo: Bryce Vickmark
>
> Information theory — the discipline that gave us digital communication and
> data compression — also put cryptography on a secure mathematical
> foundation. Since 1948, when the paper that created information theory
> (http://web.mit.edu/newsoffice/2010/explained-shannon-0115.html) first
> appeared, most information-theoretic analyses of secure schemes have
> depended on a common assumption.
>
> Unfortunately, as a group of researchers at MIT and the National
> University of Ireland (NUI) Maynooth demonstrated in a paper presented at
> the recent International Symposium on Information Theory (PDF:
> http://arxiv.org/pdf/1301.6356.pdf), that assumption is false. In a
> follow-up paper being presented this fall at the Asilomar Conference on
> Signals, Systems, and Computers, the same team shows that, as a
> consequence, the wireless card readers used in many keyless-entry systems
> may not be as secure as previously thought.
>
> In information theory, the concept of information is intimately entwined
> with that of entropy. Two digital files might contain the same amount of
> information, but if one is shorter, it has more entropy. If a compression
> algorithm — such as WinZip or gzip — worked perfectly, the compressed
> file would have the maximum possible entropy. That means it would have
> the same number of 0s and 1s, and the way in which they were distributed
> would be totally unpredictable. In information-theoretic parlance, it
> would be perfectly uniform.
>
> Traditionally, information-theoretic analyses of secure schemes have
> assumed that the source files are perfectly uniform. In practice, they
> rarely are, but they’re close enough that it appeared the standard
> mathematical analyses still held.
>
> “We thought we’d establish that the basic premise that everyone was using
> was fair and reasonable,” says Ken Duffy, one of the researchers at NUI.
> “And it turns out that it’s not.” On both papers, Duffy is joined by his
> student Mark Christiansen; Muriel Médard, a professor of electrical
> engineering at MIT; and her student Flávio du Pin Calmon.
>
> The problem, Médard explains, is that information-theoretic analyses of
> secure systems have generally used the wrong notion of entropy. They
> relied on so-called Shannon entropy, named after the founder of
> information theory, Claude Shannon, who taught at MIT from 1956 to 1978.
>
> Shannon entropy is based on the average probability that a given string
> of bits will occur in a particular type of digital file. In a
> general-purpose communications system, that’s the right type of entropy
> to use, because the characteristics of the data traffic will quickly
> converge to the statistical averages. Although Shannon’s seminal 1948
> paper dealt with cryptography, it was primarily concerned with
> communication, and it used the same measure of entropy in both
> discussions.
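
[Inline aside: the Shannon entropy the article refers to is just the
average surprisal of a source's symbols. A minimal Python sketch — the
probability values below are made up for illustration:]

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: the average surprisal, -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A perfectly uniform 2-bit source attains the maximum: 2.0 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0

# A slightly skewed source still looks nearly maximal *on average*.
print(shannon_entropy([0.4, 0.2, 0.2, 0.2]))  # ≈ 1.92
```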
>
> But in cryptography, the real concern isn’t with the average case but
> with the worst case. A codebreaker needs only one reliable correlation
> between the encrypted and unencrypted versions of a file in order to
> begin to deduce further correlations. In the years since Shannon’s paper,
> information theorists have developed other notions of entropy, some of
> which give greater weight to improbable outcomes. Those, it turns out,
> offer a more accurate picture of the problem of codebreaking.
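
[Inline aside: one of those alternative notions is min-entropy, the
limiting Rényi entropy, which depends only on the single most likely
outcome rather than the average. A small sketch of how the two measures
diverge for a mildly non-uniform source — the 8-symbol distribution here
is invented for illustration:]

```python
import math

def shannon_entropy(probs):
    # Average-case measure: -sum(p * log2(p)) over all symbols
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    # Worst-case measure: determined entirely by the most likely symbol
    return -math.log2(max(probs))

# Hypothetical 8-symbol source, mildly non-uniform.
probs = [0.30] + [0.10] * 7

print(shannon_entropy(probs))  # ≈ 2.85 bits, close to the uniform 3.0
print(min_entropy(probs))      # ≈ 1.74 bits, much further from 3.0
```

The Shannon figure barely notices the skew; the min-entropy figure, which
is what bounds an attacker's best single guess, drops sharply.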
>
> When Médard, Duffy and their students used these alternate measures of
> entropy, they found that slight deviations from perfect uniformity in
> source files, which seemed trivial in the light of Shannon entropy,
> suddenly loomed much larger. The upshot is that a computer turned loose
> to simply guess correlations between the encrypted and unencrypted
> versions of a file would make headway much faster than previously
> expected.
>
> “It’s still exponentially hard, but it’s exponentially easier than we
> thought,” Duffy says. One implication is that an attacker who simply
> relied on the frequencies with which letters occur in English words
> could probably guess a user-selected password much more quickly than was
> previously thought. “Attackers often use graphics processors to
> distribute the problem,” Duffy says. “You’d be surprised at how quickly
> you can guess stuff.”
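
[Inline aside: the quantity at issue here is usually called guesswork —
the expected number of attempts when candidates are tried in decreasing
order of probability. A toy Python illustration; the two distributions
are invented:]

```python
def expected_guesses(probs):
    """Expected number of guesses ("guesswork") when candidates are
    tried in decreasing order of probability: sum over i of i * p_i."""
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

# Uniform over 4 candidates: (1 + 2 + 3 + 4) / 4 = 2.5 guesses on average.
print(expected_guesses([0.25] * 4))  # 2.5

# A skewed distribution falls much sooner on average.
print(expected_guesses([0.70, 0.10, 0.10, 0.10]))  # ≈ 1.6
```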
>
> In their Asilomar paper, the researchers apply the same type of
> mathematical analysis in a slightly different way. They consider the
> case in which an attacker is, from a distance, able to make a “noisy”
> measurement of the password stored on a credit card with an embedded
> chip or a key card used in a keyless-entry system.
>
> “Noise” is the engineer’s term for anything that degrades an
> electromagnetic signal — such as physical obstructions, out-of-phase
> reflections or other electromagnetic interference. Noise comes in lots
> of different varieties: the familiar white noise of sleep aids is one,
> but so are pink noise, black noise and more exotic-sounding types, such
> as power-law noise or Poisson noise.
>
> In this case, rather than prior knowledge about the statistical
> frequency of the symbols used in a password, the attacker has prior
> knowledge about the probable noise characteristics of the environment:
> phase noise with one set of parameters is more probable than phase noise
> with another set of parameters, which in turn is more probable than
> Brownian noise, and so on. Armed with these statistics, an attacker
> could infer the password stored on the card much more rapidly than was
> previously thought.
>
> “Some of the approximations that we’re used to making, they make perfect
> sense in the context of traditional communication,” says Matthieu Bloch,
> an assistant professor of electrical and computer engineering at the
> Georgia Institute of Technology. “You design your system in a framework,
> and then you test it. But for crypto, you’re actually trying to prove
> that it’s robust to things you cannot test. So you have to be sure that
> your assumptions make sense from the beginning. And I think that going
> back to the assumptions is something people don’t do often enough.”
>
> Bloch doubts that the failure of the uniformity assumption means that
> cryptographic systems in wide use today are fundamentally insecure. “My
> guess is that it will show that some of them are slightly less secure
> than we had hoped, but usually in the process, we’ll also figure out a
> way of patching them,” he says. The MIT and NUI researchers’ work, he
> says, “is very constructive, because it’s essentially saying, ‘Hey, we
> have to be careful.’ But it also provides a methodology to go back and
> reanalyze all these things.”
>
> _______________________________________________
> Cfrg mailing list
> Cfrg@irtf.org
> http://www.irtf.org/mailman/listinfo/cfrg
>
>