Re: [OAUTH-WG] [UNVERIFIED SENDER] Re: New Version Notification for draft-fett-oauth-dpop-03.txt

"Richard Backman, Annabelle" <richanna@amazon.com> Mon, 02 December 2019 21:34 UTC

From: "Richard Backman, Annabelle" <richanna@amazon.com>
To: Torsten Lodderstedt <torsten=40lodderstedt.net@dmarc.ietf.org>
CC: oauth <oauth@ietf.org>
Date: Mon, 02 Dec 2019 21:34:47 +0000
Message-ID: <3F3A350A-0EF3-4968-968E-9325F8434E80@amazon.com>
References: <6DD96E5F-2F26-4280-BDF9-41F43CB5A3AF@lodderstedt.net>
In-Reply-To: <6DD96E5F-2F26-4280-BDF9-41F43CB5A3AF@lodderstedt.net>
Archived-At: <https://mailarchive.ietf.org/arch/msg/oauth/zpBKVIWd10Jrezri1ZNjOJFarjg>
Subject: Re: [OAUTH-WG] [UNVERIFIED SENDER] Re: New Version Notification for draft-fett-oauth-dpop-03.txt

> Session cookies serve the same purpose in web apps as access tokens for APIs, but there are many more web apps than APIs. I use the analogy to illustrate that either there are security issues with cloud deployments of web apps or the techniques used to secure web apps are ok for APIs as well.

"Security issues" is a loaded term, but if you mean that there are practical risks that are not addressed by bearer tokens (whether they be session cookies or access tokens) then yes, I think we both agree that there are. Otherwise we wouldn't be discussing PoP, sender-constrained tokens, etc. TLS-based solutions mitigate some risks, while leaving others unmitigated. Depending on your use case and threat model, these risks may or may not present practical threats. For my use cases, they do.

Ultimately I'd like to mitigate these risks for both service APIs and web applications. My focus is on service APIs for a couple of reasons:

1. Interoperability is more important when the sender and recipient aren't necessarily owned by a single entity. I can do proprietary things in JavaScript if I want to, just as I can in client SDKs, but this breaks down if my API implements a standard protocol and is expected to work with off-the-shelf clients and/or implementations from other vendors.

2. Web applications are just a special subset of service APIs that happens to be accessed via a browser. A solution for service APIs ought to be reusable for web applications, or at least serve as a foundation for their solution.

>    - Have you seen this kind of proxy intercepting connections from on-prem service deployments to service providers? I’m asking because I thought the main use case was to intercept employees’ PC internet traffic.

I'm working from second-hand knowledge here, but like most things in the enterprise world, it depends. Separating employee device outbound traffic from internal service outbound traffic requires some level of sophistication, be it in network topology, routing rules, or configuration rules on the TLSI appliance.

>    - Are you saying this kind of proxy does not support mutual TLS at all?

From what I understand, at the very least mTLS is not universally supported. There may be some vendors that support it, but it's not guaranteed. The documentation for Symantec's SSL Visibility product [1] indicates that sessions using client certificates will be rejected unless they are exempted based on destination whitelisting (which is problematic when the destination may be a general-purpose cloud service provider).

> On the other hand, I would expect this kind of proxy to understand a lot about the protocols running through it; otherwise it cannot fulfil its task of inspecting this traffic.

Maybe, maybe not. In any case there's a difference between understanding HTTP or SMTP or P2P-protocol-du-jour and understanding the application-level protocol running on top of HTTP. There hasn't been any need for these proxies to understand OAuth 2.0 thus far.

[1]: https://origin-symwisedownload.symantec.com/resources/webguides/sslv/45/Content/Topics/troubleshooting/Support_for_Client_Cert.htm
– 
Annabelle Richard Backman
AWS Identity


On 12/1/19, 7:41 AM, "Torsten Lodderstedt" <torsten=40lodderstedt.net@dmarc.ietf.org> wrote:

    
    Annabelle,
    
    > On 27.11.2019 at 02:46, Richard Backman, Annabelle <richanna@amazon.com> wrote:
    > 
    > Torsten,
    > 
    > I'm not tracking how cookies are relevant to the discussion.
    
    I’m still trying to understand why you and others argue mTLS cannot be used in public cloud deployments (and thus focus on application level PoP).
    
    Session cookies serve the same purpose in web apps as access tokens for APIs, but there are many more web apps than APIs. I use the analogy to illustrate that either there are security issues with cloud deployments of web apps or the techniques used to secure web apps are ok for APIs as well.
    
    Here are the two main arguments and my conclusions/questions:  
    
    1) mTLS is not end-to-end: although that’s true from a connection perspective, there are solutions employed to secure the last hop(s) between TLS terminating proxy and service (private net, VPN, TLS). That works and is considered secure enough for (session) cookies, so it should be the same for access tokens.
    
    2) TLS terminating proxies do not forward cert data: if the service itself terminates TLS, this is feasible; we do it for our public-cloud-hosted mTLS-protected APIs. If TLS termination is provided by a component run by the cloud provider, the question is: is this component able to forward the client certificate to the service? If not, web apps using certs for authentication cannot be supported straightaway by the cloud provider. Any insights?
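
    For illustration, if the terminator can pass the certificate along on the internal hop, e.g. URL-encoded in an HTTP header (the header name and encoding below are assumptions, not a standard), the service can recover the DER bytes and apply the usual binding check, provided it only trusts that header on the authenticated internal connection. A rough sketch in Python:

        import base64
        import hashlib
        import urllib.parse

        # Assumed convention (not a standard): the TLS-terminating component URL-encodes
        # the client certificate PEM into an internal header such as "x-client-cert".
        def forwarded_cert_der(headers):
            pem = urllib.parse.unquote(headers.get("x-client-cert", ""))
            if "-----BEGIN CERTIFICATE-----" not in pem:
                return None
            body = pem.split("-----BEGIN CERTIFICATE-----", 1)[1]
            body = body.split("-----END CERTIFICATE-----", 1)[0]
            return base64.b64decode("".join(body.split()))

        def matches_token_binding(token_claims, cert_der):
            # Compare against the cnf "x5t#S256" value of a certificate-bound access token (RFC 8705).
            thumbprint = base64.urlsafe_b64encode(hashlib.sha256(cert_der).digest()).rstrip(b"=").decode("ascii")
            return token_claims.get("cnf", {}).get("x5t#S256") == thumbprint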
    
    > I'm guessing that's because we're not on the same page regarding use cases, so allow me to clearly state mine:
    
    I think we are; we are just focusing on different ends of the TLS tunnel. My focus is on the service provider’s side, esp. public cloud hosting, whereas you are focusing on client-side TLS terminating proxies.
    
    > 
    > The use case I am concerned with is requests between services where end-to-end TLS cannot be guaranteed. For example, an enterprise service running on-premise, communicating with a service in the cloud, where the enterprise's outbound traffic is routed through a TLS Inspection (TLSI) appliance. The TLSI appliance sits in the middle of the communication, terminating the TLS session established by the on-premise service and establishing a separate TLS connection with the cloud service.
    > 
    > In this kind of environment, there is no end-to-end TLS connection between on-premise service and cloud service, and it is very unlikely that the TLSI appliance is configurable enough to support TLS-based sender-constraint mechanisms without significantly compromising on the scope of "sender" (e.g., "this service at this enterprise" becomes "this enterprise").
    
    I’m not familiar with these kinds of proxies, but happy to learn more and to discuss potential solutions.
    
    Here are some questions:
    - Have you seen this kind of proxy intercepting connections from on-prem service deployments to service providers? I’m asking because I thought the main use case was to intercept employees’ PC internet traffic.
    - Are you saying this kind of proxy does not support mutual TLS at all? At least theoretically, the proxy could combine source and destination to select a cert/key pair to use for outbound TLS client authentication. 
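
    As a sketch of what that selection might look like on the proxy’s outbound leg (the mapping and file paths below are made up; a real appliance would presumably express this in configuration rather than code):

        import ssl

        # Hypothetical mapping: (source service, destination host) -> client cert/key pair.
        OUTBOUND_CLIENT_CREDS = {
            ("billing-svc", "api.example.com"): ("/etc/proxy/billing.crt", "/etc/proxy/billing.key"),
            ("hr-svc", "api.example.com"): ("/etc/proxy/hr.crt", "/etc/proxy/hr.key"),
        }

        def outbound_tls_context(source, destination):
            # Build the TLS client context the proxy would use towards the destination,
            # presenting the cert/key pair selected for this source/destination combination.
            certfile, keyfile = OUTBOUND_CLIENT_CREDS[(source, destination)]
            ctx = ssl.create_default_context()
            ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
            return ctx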
    
    > Even if it is possible, it is likely to require advanced configuration that is non-trivial for administrators to deploy. It's no longer as simple as the developer passing a self-signed certificate to the HTTP stack.
    
    I agree. Cert binding is established in OAuth protocol messages, which would require the appliance to understand the protocol. On the other hand, I would expect this kind of proxy to understand a lot about the protocols running through it; otherwise it cannot fulfil its task of inspecting this traffic.
    
    best regards,
    Torsten. 
    
    
    
    > 
    > – 
    > Annabelle Richard Backman
    > AWS Identity
    > 
    > 
    > On 11/23/19, 9:50 AM, "Torsten Lodderstedt" <torsten@lodderstedt.net> wrote:
    > 
    > 
    > 
    >>>>>>>>> On 23. Nov 2019, at 00:34, Richard Backman, Annabelle <richanna@amazon.com> wrote:
    >>>>>>>>> how are cookies protected from leakage, replay, injection in a setup like this?
    >> They aren’t.
    > 
    > That’s very interesting when compared to what we are discussing with respect to API security.
    > 
    > It effectively means anyone able to capture a session cookie, e.g. between the TLS termination point and the application, by way of an HTML injection, or by any other suitable attack, is able to impersonate a legitimate user by injecting the cookie(s) into an arbitrary user agent. The impact of such an attack might be even worse than abusing an access token, given the (typically) broad scope of a session.
    > 
    > TLS-based methods for sender-constrained access tokens, in contrast, prevent this type of replay, even if the requests are only protected between the client and the TLS terminating proxy. Ensuring the authenticity of the client certificate when it is forwarded from the TLS terminating proxy to the service, e.g. through another authenticated TLS connection, will even prevent injection within the data center/cloud environment.
    > 
    > I come to the conclusion that we already have the mechanism at hand to implement APIs with a considerably higher security level than what is accepted today for web applications. So what problem do we want to solve?
    > 
    >> But my primary concern here isn't web browser traffic, it's calls from services/apps running inside a corporate network to services outside a corporate network (e.g., service-to-service API calls that pass through a corporate TLS gateway).
    > 
    > Can you please describe the challenges arising in these settings? I assume those proxies won’t support CONNECT-style pass-through; otherwise we wouldn’t be talking about them.
    > 
    >>> That’s a totally valid point. But again, such a solution makes the life of client developers harder.
    >>> I personally think we as a community need to understand the pros and cons of both approaches. I also think we have not even come close to this point, which, in my opinion, is the prerequisite for making informed decisions.
    >> Agreed. It's clear that there are a number of parties coming at this from a number of different directions, and that's coloring our perceptions. That's why I think we need to nail down the scope of what we're trying to solve with DPoP before we can have a productive conversation about how it should work.
    > 
    > We will do so.
    > 
    >> –
    >> Annabelle Richard Backman
    >> AWS Identity
    >> On 11/22/19, 10:51 PM, "Torsten Lodderstedt" <torsten@lodderstedt.net> wrote:
    >>>> On 22. Nov 2019, at 22:12, Richard Backman, Annabelle <richanna=40amazon.com@dmarc.ietf.org> wrote:
    >>> The service provider doesn't own the entire connection. They have no control over corporate or government TLS gateways, or other terminators that might exist on the client's side. In larger organizations, or when cloud hosting is involved, the service team may not even own all the hops on their side.
    >> how are cookies protected from leakage, replay, injection in a setup like this?
    >>> While presumably they have some trust in them, protection against leaked bearer tokens is an attractive defense-in-depth measure.
    >> That’s a totally valid point. But again, such a solution makes the life of client developers harder.
    >> I personally think we as a community need to understand the pros and cons of both approaches. I also think we have not even come close to this point, which, in my opinion, is the prerequisite for making informed decisions.
    >>> –
    >>> Annabelle Richard Backman
    >>> AWS Identity
    >>> On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" <oauth-bounces@ietf.org on behalf of torsten=40lodderstedt.net@dmarc.ietf.org> wrote:
    >>>> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle <richanna=40amazon.com@dmarc.ietf.org> wrote:
    >>>> The dichotomy of "TLS working" and "TLS failed" only applies to a single TLS connection. In non-end-to-end TLS environments, each TLS terminator between client and RS introduces additional token leakage/exfiltration risk, irrespective of the quality of the TLS connections themselves. Each terminator also introduces complexity for implementing mTLS, Token Binding, or any other TLS-based sender constraint solution, which means developers with non-end-to-end TLS use cases will be more likely to turn to DPoP.
    >>> The point is we are talking about different developers here. The client developer does not need to care about the connection between proxy and service. She relies on the service provider to get it right. So the developers (or DevOps or admins) of the service provider need to ensure end-to-end security. And if the path is secured once, it will work for all clients.
    >>>> If DPoP is intended to address "cases where neither mTLS nor OAuth Token Binding are available" [1], then it should address this risk of token leakage between client and RS. If on the other hand DPoP is only intended to support the SPA use case and assumes the use of end-to-end TLS, then the document should be updated to reflect that.
    >>> I agree.
    >>>> [1]: https://tools.ietf.org/html/draft-fett-oauth-dpop-03#section-1
    >>>> –
    >>>> Annabelle Richard Backman
    >>>> AWS Identity
    >>>> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" <oauth-bounces@ietf.org on behalf of torsten=40lodderstedt.net@dmarc.ietf.org> wrote:
    >>>> Hi Neil,
    >>>>> On 22. Nov 2019, at 18:08, Neil Madden <neil.madden@forgerock.com> wrote:
    >>>>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt <torsten=40lodderstedt.net@dmarc.ietf.org> wrote:
    >>>>>>> On 22. Nov 2019, at 15:24, Justin Richer <jricher@mit.edu> wrote:
    >>>>>>> I’m going to +1 Dick and Annabelle’s question about the scope here. That was the one major thing that struck me during the DPoP discussions in Singapore yesterday: we don’t seem to agree on what DPoP is for. Some (including the authors, it seems) see it as a quick point-solution to a specific use case. Others see it as a general PoP mechanism.
    >>>>>>> If it’s the former, then it should be explicitly tied to one specific set of things. If it’s the latter, then it needs to be expanded.
    >>>>>> as a co-author of the DPoP draft I state again what I said yesterday: DPoP is a mechanism for sender-constraining access tokens sent from SPAs only. The threat to be prevented is token replay.
    >>>>> I think the phrase "token replay" is ambiguous. Traditionally it refers to an attacker being able to capture a token (or whole requests) in use and then replay it against the same RS. This is already protected against by the use of normal TLS on the connection between the client and the RS. I think instead you are referring to a malicious/compromised RS replaying the token to a different RS - which has more of the flavour of a man in the middle attack (of the phishing kind).
    >>>> I would argue TLS basically prevents leakage and not replay. The threats we try to cope with can be found in the Security BCP. There are multiple ways access tokens can leak, including referrer headers, mix-up, open redirection, browser history, and all sorts of access token leakage at the resource server.
    >>>> Please have a look at https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.
    >>>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13#section-4.8 also has an extensive discussion of potential counter measures, including audience restricted access tokens and a conclusion to recommend sender constrained access tokens over other mechanisms.
    >>>>> But if that's the case then there are much simpler defences than those proposed in the current draft:
    >>>>> 1. Get separate access tokens for each RS with correct audience and scopes. The consensus appears to be that this is hard to do in some cases, hence the draft.
    >>>> How many deployments do you know that today are able to issue RS-specific access tokens?
    >>>> BTW: how would you identify the RS?
    >>>> I agree that would be an alternative and I’m a great fan of such tokens (and used them a lot at Deutsche Telekom), but in my perception this pattern still needs to be established in the market. Moreover, they basically protect from a rogue RS (if the URL is used as audience) replaying the token someplace else, but they do not protect from all other kinds of leakage/replay (e.g. log files).
    >>>>> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of the RS. This stops the token being reused elsewhere but the client can reuse it (replay it) for many requests.
    >>>>> 3. Issue a macaroon-based access token and the client can add a correct audience and scope restrictions at the point of use.
    >>>> Why is this needed if the access token is already audience restricted? Or do you propose this as an alternative?
    >>>>> Protecting against the first kind of replay attack only becomes an issue if we assume the protections in TLS have failed. But if DPoP is only intended for cases where mTLS can't be used, it shouldn't have to protect against a stronger threat model in which we assume that TLS security has been lost.
    >>>> I agree.
    >>>> best regards,
    >>>> Torsten.
    >>>>> -- Neil