Re: [TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

Colm MacCárthaigh <> Mon, 08 January 2018 15:45 UTC

From: =?UTF-8?Q?Colm_MacC=C3=A1rthaigh?= <>
Date: Mon, 8 Jan 2018 09:45:32 -0600
Message-ID: <>
To: Hubert Kario <>
List-Id: "This is the mailing list for the Transport Layer Security working group of the IETF." <>

On Mon, Jan 8, 2018 at 6:29 AM, Hubert Kario <> wrote:
> except that what we call "sufficiently hard plaintext recovery" is over
> triple of the security margin you're proposing as a workaround here
> 2^40 is doable on a smartphone, now
> 2^120 is not doable on a supercomputer, and won't be for a very long time

This isn't how these kinds of attacks work. 2^40 would be small for
something that could be attacked in parallel by a very large computing
system, but it's an absolutely massive difficulty factor against a live
on-line attack. Can you propose a credible mechanism by which an attacker
could mount, say, billions (to use the low end) of repeated connections
without detection? And that's before they even tease the signal out. Since
the delay can't be avoided, the attack also costs thousands of years of
attacker-controlled computer time. I have much more confidence in that
simple kind of defense giving me real-world security than I do in my code
being absolutely perfect, which I've never achieved.
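The "thousands of years" figure follows from simple arithmetic. A rough
sketch, assuming a hypothetical 30-second per-connection delay and a
1,000-way parallel attacker (both numbers are illustrative, not from the
original discussion):

```python
# Back-of-the-envelope cost of an on-line attack needing ~2^40 queries,
# assuming (hypothetically) each failed attempt is delayed by 30 seconds.
queries = 2 ** 40                 # ~1.1 trillion connections
delay_s = 30                      # per-connection penalty (assumed)
seconds_per_year = 365 * 24 * 3600

sequential_years = queries * delay_s / seconds_per_year
parallel_years = sequential_years / 1000   # even with 1,000 concurrent connections

print(f"sequential: ~{sequential_years:,.0f} years")
print(f"1000-way parallel: ~{parallel_years:,.0f} years")
```

Even before detection risk is considered, the delay alone makes the
on-line attack economically absurd.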

I'm being stubborn and replying here because I want to argue against a
common set of biases I've seen that I think are harmful to real-world
security. Making attacks billions to trillions of times harder absolutely
does protect real-world users, and we shouldn't be optimizing for what we
think the research community will take seriously or not scoff at, but for
what will actually protect users. Those aren't the same.

I'll give another example: over the last few years we have significantly
*regressed* on the real-world security of TLS by moving to AES-GCM and
ChaCha20. Both of these cipher suites leak the exact content length and
make content-fingerprinting attacks far easier than they were previously
(CBC's block padding made this kind of attack exponentially more
expensive). The current state is that passive tappers with relatively
unsophisticated databases can de-cloak a high percentage of HTTPS
connections. This compromises secrecy, the main user benefit of TLS. That
staggers me, but it's also an uninteresting attack to the research
community: it's long been known about and isn't going to result in much
publication or research.
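The length-leak difference between the two modes is easy to see with a
sketch. The numbers below (16-byte blocks, a 20-byte MAC, a 16-byte AEAD
tag) are illustrative of the general schemes, not of any particular TLS
implementation:

```python
# Sketch: ciphertext length observable by a passive tapper for a record
# of n plaintext bytes. Illustrative parameters, not a real TLS stack.

def gcm_record_len(n):
    # AEAD ciphertexts (AES-GCM, ChaCha20-Poly1305) are plaintext length
    # plus a fixed 16-byte tag: the exact plaintext length leaks.
    return n + 16

def cbc_record_len(n, mac_len=20, block=16):
    # CBC mode pads plaintext+MAC up to the next block boundary, so many
    # plaintext lengths collapse into the same observable ciphertext length.
    return ((n + mac_len) // block + 1) * block

# Distinct observable lengths for plaintexts of 1..64 bytes:
print(len({gcm_record_len(n) for n in range(1, 65)}))  # one per plaintext length
print(len({cbc_record_len(n) for n in range(1, 65)}))  # only a handful of buckets
```

With AEAD, every distinct plaintext length produces a distinct ciphertext
length; with CBC, whole ranges of lengths are indistinguishable, which is
exactly what frustrates fingerprinting databases.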

> > This bears repeating: attempting to make OpenSSL rigorously constant
> > time made it *less* secure.
> yes, on one specific hardware type, because of a bug in implementation
> I really hope you're not suggesting "we shouldn't ever build bridges
> because this one collapsed"...
> also, for how long was it *less* secure? and for how long was it
> vulnerable to Lucky13?

I'm saying that trade-offs are complicated and that constant-time "rigor"
sometimes isn't worth it. Adding ~500 lines of hard-to-follow,
hard-to-maintain code with no systematic way to confirm that it stays
correct was a mistake, and it led to a worse bug. Other implementations
chose simpler approaches, such as code balancing, that were
close-to-constant-time but not rigorously so. I think the latter was
ultimately smarter: all code is dangerous because bugs can be lurking in
its midst, and those bugs can be really serious, like memory disclosure
and remote execution, so leaning towards simple and easy-to-follow code
should be heavily weighted. So when we see the next bug like Lucky13,
which was un-exploitable against TLS but still publishable and
research-worthy, we should lean towards simpler fixes rather than complex
ones, while also just abandoning whatever algorithm is affected and
replacing it.
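As one concrete illustration of the "simple and easy to follow" style of
mitigation the paragraph above argues for, here is a minimal
data-independent comparison, the kind of small, auditable building block
that avoids early-exit timing leaks. This is a sketch, not OpenSSL's code;
real applications should use a vetted primitive such as Python's
`hmac.compare_digest`:

```python
# A data-independent byte comparison: it examines every byte regardless
# of where the first mismatch occurs, so its running time does not depend
# on the position of the difference.

def ct_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False               # lengths are usually public, so this is fine
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y              # accumulate differences; no early exit
    return diff == 0

print(ct_equal(b"secret-mac", b"secret-mac"))  # True
print(ct_equal(b"secret-mac", b"secret-maX"))  # False
```

The whole mitigation fits in a dozen lines a reviewer can verify at a
glance, which is the trade-off being argued for against ~500 lines of
rigorously constant-time machinery.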

> > Delaying to a fixed interval is a great approach, and emulates how
> > clocking protects hardware implementations, but I haven't yet been able
> > to succeed in making it reliable. It's easy to start a timer when the
> > connection is accepted and to trigger the error 30 seconds after that,
> > but it's hard to rule out that a leaky timing side-channel may influence
> > the subsequent timing of the interrupt or scheduler systems and hence
> > exactly when the trigger happens. If it does influence it, then a
> > relatively clear signal shows up again, just offset by 30 seconds, which
> > is no use.
> *if*
> in other words, this solution _may_ leak information (something which you
> can actually test), or the other solution that _does_ leak information,
> just slowly so it's "acceptable risk"

Sorry, I'll try to be more clear: a leak in a fixed-interval delay would
be catastrophic, because it would result in a very clean signal, merely
offset by the delay. With a random-interval delay, even a leak is still
smeared by the random distribution, and recovering the signal requires
many samples.
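A small simulation makes the asymmetry visible. The numbers are
hypothetical: a 1 ms secret-dependent timing difference, a fixed 30 s
deadline, and a random delay drawn uniformly from [0, 1) second:

```python
# Sketch of why a leaky *fixed* delay preserves a timing signal while a
# *random* delay buries it in noise. All parameters are illustrative.
import random
import statistics

random.seed(1)
secret_leak_s = 0.001              # the signal the attacker wants to detect

def fixed_delay_response():
    # Fixed 30 s deadline whose trigger is (hypothetically) perturbed by
    # the leak: the 1 ms signal survives intact, merely offset by 30 s.
    return 30.0 + secret_leak_s

def random_delay_response():
    # Random delay: each sample hides the 1 ms signal in ~0.3 s of noise.
    return random.uniform(0, 1) + secret_leak_s

fixed = fixed_delay_response() - 30.0
samples = [random_delay_response() for _ in range(10_000)]
print(f"fixed-delay signal:  {fixed * 1000:.1f} ms (clean after one sample)")
print(f"random-delay noise:  stddev {statistics.stdev(samples):.3f} s")
```

Subtracting the known 30 s offset recovers the fixed-delay signal from a
single observation, while the random delay forces the attacker to average
away noise hundreds of times larger than the signal, so many samples are
needed before it emerges.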