Re: [TLS] Security review of TLS1.3 0-RTT

Victor Vasiliev <vasilvv@google.com> Fri, 02 June 2017 21:39 UTC

To: Colm MacCárthaigh <colm@allcosts.net>
Cc: "tls@ietf.org" <tls@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/tls/mcVcPnaTWf_X25DyW7jKdMysh-s>

On Thu, Jun 1, 2017 at 7:59 PM, Colm MacCárthaigh <colm@allcosts.net> wrote:

> On Thu, Jun 1, 2017 at 1:50 PM, Victor Vasiliev <vasilvv@google.com>
> wrote:
>
>> I am not sure I agree with this distinction.  I can accept the difference
>> in terms of how much the attacker can retry -- but we've already agreed
>> that bounding that number is a good idea.  I don't see any meaningful
>> distinction in other regards.
>>
>
> It's not just a difference in the number of duplicates. With retries, the
> client maintains some control, so it can do things like impose delays and
> update request IDs. Bill followed up with a directly relevant example from
> Token Binding, where the retry intentionally has a different token value.
> That kind of control is lost with attacker-driven replays.
>

I am not sure I understand the context in which this is relevant.  I
cannot imagine anyone delaying the 1-RTT fallback, given that 0-RTT is
expected to fail regularly.  I also don't understand how changing the
request ID would help -- I would assume that for retry safety, you'd
actually want the ID to be the same across all attempts.  Could you give
me a scenario where this actually changes the security properties of the
system?

(I understand the tokbind case -- but tokbind 0-RTT has different
operational requirements from plain TLS; those are documented in
draft-ietf-tokbind-tls13-0rtt and are somewhat out of scope for this
conversation.)
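To make the retry-safety point concrete, here is a minimal sketch of a client that keeps the same request ID across a rejected 0-RTT attempt and the subsequent 1-RTT retry, so the server can deduplicate. The `send()` callback and the ID scheme are illustrative, not from any real API.

```python
import uuid

def send_with_retry(send, payload, max_attempts=3):
    """Retry a request, reusing one request ID for every attempt.

    The first attempt goes out as 0-RTT early data; if the server
    rejects it, the same request (same ID) is resent over 1-RTT.
    """
    request_id = str(uuid.uuid4())  # fixed across all attempts
    for attempt in range(max_attempts):
        ok = send(request_id, payload, early_data=(attempt == 0))
        if ok:
            return request_id
    raise RuntimeError("request failed after retries")
```

With a stable ID, a server-side deduplication layer treats the 1-RTT retry and any replayed copy of the 0-RTT attempt as the same logical request.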


> But even if we focus on just the number: there is something special about
> allowing zero literal replays of a 0-RTT session; it is easy for users to
> confirm/audit/test. If there's a hard guarantee that 0-RTT "MUST" never be
> replayable, then I feel like we have a hope of producing a viable 0-RTT
> ecosystem. Plenty of providers may screw this up, or try to cut corners,
> but if we can ensure that they get failing grades in security testing
> tools, or maybe even browser warnings, then we can corral things into a
> zone of safety. Otherwise, with no such mechanism, I fear that bad
> operators will cause the entire 0-RTT feature to be tainted and entirely
> turned off over time by clients.
>

You can't really audit this property, since you never have a
comprehensive list of endpoints to which a 0-RTT request can be
replayed.


>
>> Sure, but this is just an argument for making N small.  Also, retries
>> can be directed to arbitrary nodes.
>>
>
> This is absolutely true, but see my point about the client control.
> Regardless, it is a much more difficult attack to carry out. That is to
> intercept and rewrite a whole TCP connection Vs grabbing a 0-RTT section
> and sending it again.
>

It's within the scope of the threat model.


>
>>> Well, in the real world, I think it'll be pervasive, and I even think it
>>> /should/ be. We should make 0-RTT that safe and remove the sharp edges.
>>>
>>
>> Are you arguing that non-safe requests should be allowed to be sent via
>> 0-RTT?  Because that actually violates reasonable expectations of the
>> security guarantees for TLS, and I do not believe that is acceptable.
>>
>
> I'm just saying that it absolutely will happen, and I don't think any kind
> of lawyering about the HTTP spec and REST will change that. Folks use GETs
> for non-idempotent side-effect-bearing APIs a lot. And those folks don't
> generally understand TLS or have anything to do with it. I see no real
> chance of that changing and it's a bit of a deceit for us to think that
> it's realistic that there will be these super careful 0-RTT deployments
> where everyone from the Webserver administrator to the high-level
> application designer is coordinating and fully aware of all of the
> implications. It crosses layers that are traditionally quite far apart.
>
> So with that in mind, I argue that we have to make TLS transport as secure
> as possible by default, while still delivering 0-RTT because that's such a
> beneficial improvement.
>

Well, 0-RTT is inherently unsafe in the sense that a request could be
retried at least one extra time.  This means that we have to draw a line
somewhere.  Safe requests seem like a fine line to draw, in the sense
that the ecosystem is already free to try and retry GET requests at will
(if you don't retry yourself, your HTTP proxy might).  If you believe we
should use a more conservative approach, I am happy to hear your
suggestions.
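The line being drawn here can be sketched as a server-side gate: only safe methods (RFC 7231, Section 4.2.1) are served from early data, and anything else gets a 503 (also defined by RFC 7231) so the client retries after the handshake completes. The function names below are illustrative, not any real server's API.

```python
# Hypothetical gate for requests that arrive as TLS early data (0-RTT).
SAFE_METHODS = {"GET", "HEAD", "OPTIONS", "TRACE"}

def accept_in_early_data(method: str) -> bool:
    """True if this request may be served before the handshake completes."""
    return method.upper() in SAFE_METHODS

def handle_request(method: str, in_early_data: bool) -> int:
    # 503 pushes the client to retry over 1-RTT instead of serving a
    # non-safe request from replayable early data.
    if in_early_data and not accept_in_early_data(method):
        return 503
    return 200
```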


>
>
>> I do not believe this to be the case.  The DKG attack is an attack
>>>> that allows
>>>> for a replay.
>>>>
>>>
>>> It's not. It permits a retry. The difference here is that the client is
>>> in full control. It can decide to delay, to change a unique request ID, or
>>> even not to retry at all. But the legitimate client generated the first
>>> attempt, it can be signaled that it wasn't accepted, and then it generates
>>> the second attempt. If it really, really needs to, it can even reason
>>> the complicated semantics of the earlier request being possibly
>>> re-submitted later by an attacker.
>>>
>>
>> That's already not acceptable for a lot of applications -- and by enabling
>> 0-RTT for non-safe HTTP requests, we would be pulling the rug from under
>> them.
>>
>
> Yep; but I think /this/ risk is manageable and tolerable. Careful clients,
> like the token binding case, can actually mitigate this. I've outlined the
> scheme. Careless clients, like browsers, can mostly ignore this; since
> they retry so easily anyway, it's no worse.
>

The problem is not just that it's a retry, but that it can also happen
out of order.

Imagine I have a configuration system, and my client can take out a
global lock for some operation, and normally an update to it looks like
this:

  1. GET to check if the current state needs updating.
  2. POST to take a lock.
  3. GET to check if current state still holds.
  4. POST to update the data held by the lock.
  5. POST to release the lock and publish the notification.
  6. GET to ensure the lock was released.

Now, imagine the following attack:

  a) Between (1) and (2), the attacker resets the TCP connection, after
     the client has received the response and the session ticket.
  b) Since the client has the ticket, it 0-RTTs the POST to take out the
     lock.
  c) The attacker redirects the client to another datacenter, which
     cannot accept 0-RTT, so the client falls back to 1-RTT.
  d) During (b) and (c), the attacker records a transcript of the 0-RTT
     request to take out the lock.
  e) The rest of (2)-(6) proceeds normally, with the client talking to
     the distant datacenter.
  f) The attacker replays the lock acquisition in the datacenter where
     0-RTT would succeed; the lock is now in place again, and the client
     has no idea about it.

Because the application layer cannot reasonably be expected to
anticipate such reordering, it is not acceptable to send the POST in (2)
via 0-RTT.
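The reason a per-datacenter strike register does not help here can be shown with a toy model. The class below is illustrative, not any real implementation: each datacenter deduplicates tickets only against its own local state.

```python
# Sketch: per-datacenter anti-replay does not stop the cross-datacenter
# attack above, because the strike registers are not shared.

class Datacenter:
    def __init__(self, accepts_0rtt: bool):
        self.accepts_0rtt = accepts_0rtt
        self.seen = set()  # local strike register of ticket IDs

    def try_0rtt(self, ticket_id: str) -> bool:
        """Accept early data iff 0-RTT is enabled and the ticket is fresh here."""
        if not self.accepts_0rtt or ticket_id in self.seen:
            return False
        self.seen.add(ticket_id)
        return True

dc_a = Datacenter(accepts_0rtt=True)   # where the recorded replay lands
dc_b = Datacenter(accepts_0rtt=False)  # forces the client down to 1-RTT

# The client's 0-RTT POST is redirected to dc_b and falls back to 1-RTT,
# so dc_a never sees the original ticket...
assert dc_b.try_0rtt("ticket-1") is False
# ...and the attacker's recorded copy is accepted there later.
assert dc_a.try_0rtt("ticket-1") is True
```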



> But there is *no* proposed mitigation for replayable 0-RTT. So I don't
> think that's manageable. Just trying to make a data-driven decision. If
> someone presents an alternative mitigation (besides forbidding replays),
> I'll change my mind.
>

The proposed mitigation is the application-layer profile prohibiting
unsafe requests via 0-RTT.


>
>
>> Throttling POST requests is fine -- they shouldn't go over 0-RTT, since
>> they are not idempotent.  Throttling GET requests in this manner is at
>> odds with RFC 7231.
>>
>
> Throttling GET requests happens all of the time and is an important
> security and fairness measure used by many deployed systems. 0-RTT would
> break it. That's not ok.
>
> I don't think it is at odds with RFC 7231 ... which also defines the 503
> status code.
>

As I said, N may vary and can be set by the service operator depending
on what kind of infrastructure they run.  It might be much easier for a
service to increase its throttling thresholds 10x and set N=10 than to
set N=1.
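One way to read the threshold-scaling suggestion, as a sketch (the fixed counting window and the names are illustrative): if each request may appear at most N extra times due to replays, multiply the per-client limit by (1 + N) so legitimate traffic still fits under it.

```python
# Sketch: a throttle whose limit absorbs up to N replays per request.

class Throttle:
    def __init__(self, base_limit: int, n_replays: int):
        # Each legitimate request may show up (1 + n_replays) times,
        # so scale the limit accordingly.
        self.limit = base_limit * (1 + n_replays)
        self.count = 0

    def allow(self) -> bool:
        """Admit a request if the scaled limit has not been reached."""
        if self.count >= self.limit:
            return False
        self.count += 1
        return True
```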

  -- Victor.