Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

Torsten Lodderstedt <> Sun, 01 December 2019 15:41 UTC

From: Torsten Lodderstedt <>
Date: Sun, 01 Dec 2019 16:41:11 +0100
Cc: "Richard Backman, Annabelle" <>, oauth <>
To: Annabelle Backman <>
Subject: Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt


> Am 27.11.2019 um 02:46 schrieb Richard Backman, Annabelle <>:
> Torsten,
> I'm not tracking how cookies are relevant to the discussion.

I’m still trying to understand why you and others argue mTLS cannot be used in public cloud deployments (and thus focus on application-level PoP).

Session cookies serve the same purpose in web apps as access tokens do for APIs, and there are many more web apps than APIs. I use the analogy to illustrate that either there are security issues with cloud deployments of web apps, or the techniques used to secure web apps are acceptable for APIs as well.

Here are the two main arguments and my conclusions/questions:  

1) mTLS is not end-to-end: although that’s true from a connection perspective, there are established ways to secure the last hop(s) between the TLS-terminating proxy and the service (private network, VPN, TLS). Since that is considered secure enough for (session) cookies, it should be equally acceptable for access tokens.

2) TLS-terminating proxies do not forward cert data: if the service itself terminates TLS, forwarding is feasible; we do it for our public-cloud-hosted, mTLS-protected APIs. If TLS termination is provided by a component run by the cloud provider, the question is: can this component forward the client certificate to the service? If not, web apps using certificates for authentication cannot be supported straight away by that cloud provider. Any insights?
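For what it’s worth, when the terminating component can pass the certificate along (e.g. in a trusted header), the service-side check against a certificate-bound token is small. A minimal sketch of the RFC 8705 thumbprint comparison, assuming the token’s claims have already been validated and decoded (function names are mine, not from any spec):

```python
import base64
import hashlib

def cert_thumbprint(cert_der: bytes) -> str:
    """Base64url-encoded SHA-256 hash of the DER-encoded certificate,
    i.e. the 'x5t#S256' confirmation value from RFC 8705."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def token_bound_to_cert(claims: dict, cert_der: bytes) -> bool:
    """Compare the access token's cnf claim against the client
    certificate forwarded by the TLS-terminating component."""
    expected = claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == cert_thumbprint(cert_der)
```

The hard part is not this comparison but ensuring the forwarded certificate is authentic, e.g. over a separately authenticated hop between proxy and service.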

> I'm guessing that's because we're not on the same page regarding use cases, so allow me to clearly state mine:

I think we are, we are just focusing on different ends of the TLS tunnel. My focus is on the service provider’s side, esp. public cloud hosting, whereas you are focusing on client side TLS terminating proxies.

> The use case I am concerned with is requests between services where end-to-end TLS cannot be guaranteed. For example, an enterprise service running on-premise, communicating with a service in the cloud, where the enterprise's outbound traffic is routed through a TLS Inspection (TLSI) appliance. The TLSI appliance sits in the middle of the communication, terminating the TLS session established by the on-premise service and establishing a separate TLS connection with the cloud service.
> In this kind of environment, there is no end-to-end TLS connection between on-premise service and cloud service, and it is very unlikely that the TLSI appliance is configurable enough to support TLS-based sender-constraint mechanisms without significantly compromising on the scope of "sender" (e.g., "this service at this enterprise" becomes "this enterprise”).

I’m not familiar with this kind of proxy, but happy to learn more and to discuss potential solutions.

Here are some questions:
- Have you seen this kind of proxy intercepting connections from on-prem service deployments to a service provider? I’m asking because I thought the main use case was intercepting employees’ PC internet traffic.
- Are you saying this kind of proxy does not support mutual TLS at all? At least in theory, the proxy could combine source and destination to select a cert/key pair for outbound TLS client authentication.

> Even if it is possible, it is likely to require advanced configuration that is non-trivial for administrators to deploy. It's no longer as simple as the developer passing a self-signed certificate to the HTTP stack.

I agree. Cert binding is established in OAuth protocol messages, which would require the appliance to understand the protocol. On the other hand, I would expect this kind of proxy to understand a lot about the protocols running through it; otherwise it cannot fulfil its task of inspecting the traffic.
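By contrast, an application-level proof such as DPoP rides in an ordinary HTTP header that a TLS-terminating appliance can forward untouched. As a rough illustration only (claim names per draft-03; a real proof is a JWT signed with the client’s private key, with the public JWK carried in the JWT header):

```python
import secrets
import time

def dpop_proof_claims(http_method: str, http_uri: str) -> dict:
    """Payload claims of a DPoP proof: a unique id to limit replay,
    plus the HTTP method, URI, and issuance time it is bound to."""
    return {
        "jti": secrets.token_urlsafe(16),  # fresh per request
        "htm": http_method,
        "htu": http_uri,
        "iat": int(time.time()),
    }
```

Because the binding lives entirely at the application layer, the appliance needs no awareness of it at all, which is exactly the trade-off being discussed.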

best regards,

> – 
> Annabelle Richard Backman
> AWS Identity
> On 11/23/19, 9:50 AM, "Torsten Lodderstedt" <> wrote:
>>>>>>>>> On 23. Nov 2019, at 00:34, Richard Backman, Annabelle <> wrote:
>>>>>>>>> how are cookies protected from leakage, replay, injection in a setup like this?
>> They aren’t.
> That’s very interesting when compared to what we are discussing with respect to API security. 
> It effectively means anyone able to capture a session cookie, e.g. between TLS termination point and application, by way of an HTML injection, or any other suitable attack is able to impersonate a legitimate user by injecting the cookie(s) in an arbitrary user agent. The impact of such an attack might be even worse than abusing an access token given the (typically) broad scope of a session.
> TLS-based methods for sender constrained access tokens, in contrast, prevent this type of replay, even if the requests are protected between client and TLS terminating proxy, only. Ensuring the authenticity of the client certificate when forwarded from TLS terminating proxy to service, e.g. through another authenticated TLS connection, will even prevent injection within the data center/cloud environment. 
> I come to the conclusion that we already have the mechanism at hand to implement APIs with a considerable higher security level than what is accepted today for web applications. So what problem do we want to solve?
>> But my primary concern here isn't web browser traffic, it's calls from services/apps running inside a corporate network to services outside a corporate network (e.g., service-to-service API calls that pass through a corporate TLS gateway).
> Can you please describe the challenges arising in these settings? I assume those proxies won’t support CONNECT-style pass-through; otherwise we wouldn’t be talking about them.
>>> That’s a totally valid point. But again, such a solution makes the life of client developers harder.
>>> I personally think we as a community need to understand the pros and cons of both approaches. I also think we have not even come close to this point, which, in my opinion, is the prerequisite for making informed decisions.
>> Agreed. It's clear that there are a number of parties coming at this from a number of different directions, and that's coloring our perceptions. That's why I think we need to nail down the scope of what we're trying to solve with DPoP before we can have a productive conversation about how it should work.
> We will do so.
>> –
>> Annabelle Richard Backman
>> AWS Identity
>> On 11/22/19, 10:51 PM, "Torsten Lodderstedt" <> wrote:
>>>> On 22. Nov 2019, at 22:12, Richard Backman, Annabelle <> wrote:
>>> The service provider doesn't own the entire connection. They have no control over corporate or government TLS gateways, or other terminators that might exist on the client's side. In larger organizations, or when cloud hosting is involved, the service team may not even own all the hops on their side.
>> how are cookies protected from leakage, replay, injection in a setup like this?
>>> While presumably they have some trust in them, protection against leaked bearer tokens is an attractive defense-in-depth measure.
>> That’s a totally valid point. But again, such a solution makes the life of client developers harder.
>> I personally think we as a community need to understand the pros and cons of both approaches. I also think we have not even come close to this point, which, in my opinion, is the prerequisite for making informed decisions.
>>> –
>>> Annabelle Richard Backman
>>> AWS Identity
>>> On 11/22/19, 9:37 PM, "OAuth on behalf of Torsten Lodderstedt" < on behalf of> wrote:
>>>> On 22. Nov 2019, at 21:21, Richard Backman, Annabelle <> wrote:
>>>> The dichotomy of "TLS working" and "TLS failed" only applies to a single TLS connection. In non-end-to-end TLS environments, each TLS terminator between client and RS introduces additional token leakage/exfiltration risk, irrespective of the quality of the TLS connections themselves. Each terminator also introduces complexity for implementing mTLS, Token Binding, or any other TLS-based sender constraint solution, which means developers with non-end-to-end TLS use cases will be more likely to turn to DPoP.
>>> The point is we are talking about different developers here. The client developer does not need to care about the connection between proxy and service. She relies on the service provider to get it right. So the developers (or DevOps or admins) of the service provider need to ensure end to end security. And if the path is secured once, it will work for all clients.
>>>> If DPoP is intended to address "cases where neither mTLS nor OAuth Token Binding are available" [1], then it should address this risk of token leakage between client and RS. If on the other hand DPoP is only intended to support the SPA use case and assumes the use of end-to-end TLS, then the document should be updated to reflect that.
>>> I agree.
>>>> [1]:
>>>> –
>>>> Annabelle Richard Backman
>>>> AWS Identity
>>>> On 11/22/19, 8:17 PM, "OAuth on behalf of Torsten Lodderstedt" < on behalf of> wrote:
>>>> Hi Neil,
>>>>> On 22. Nov 2019, at 18:08, Neil Madden <> wrote:
>>>>> On 22 Nov 2019, at 07:53, Torsten Lodderstedt <> wrote:
>>>>>>> On 22. Nov 2019, at 15:24, Justin Richer <> wrote:
>>>>>>> I’m going to +1 Dick and Annabelle’s question about the scope here. That was the one major thing that struck me during the DPoP discussions in Singapore yesterday: we don’t seem to agree on what DPoP is for. Some (including the authors, it seems) see it as a quick point-solution to a specific use case. Others see it as a general PoP mechanism.
>>>>>>> If it’s the former, then it should be explicitly tied to one specific set of things. If it’s the latter, then it needs to be expanded.
>>>>>> as a co-author of the DPoP draft I state again what I said yesterday: DPoP is a mechanism for sender-constraining access tokens sent from SPAs only. The threat to be prevented is token replay.
>>>>> I think the phrase "token replay" is ambiguous. Traditionally it refers to an attacker being able to capture a token (or whole requests) in use and then replay it against the same RS. This is already protected against by the use of normal TLS on the connection between the client and the RS. I think instead you are referring to a malicious/compromised RS replaying the token to a different RS - which has more of the flavour of a man in the middle attack (of the phishing kind).
>>>> I would argue TLS basically prevents leakage, not replay. The threats we try to cope with can be found in the Security BCP. There are multiple ways access tokens can leak, including referrer headers, mix-up, open redirection, browser history, and all sorts of access token leakage at the resource server.
>>>> Please have a look at
>>>> also has an extensive discussion of potential counter measures, including audience restricted access tokens and a conclusion to recommend sender constrained access tokens over other mechanisms.
>>>>> But if that's the case then there are much simpler defences than those proposed in the current draft:
>>>>> 1. Get separate access tokens for each RS with correct audience and scopes. The consensus appears to be that this is hard to do in some cases, hence the draft.
>>>> How many deployments do you know that today are able to issue RS-specific access tokens?
>>>> BTW: how would you identify the RS?
>>>> I agree that would be an alternative and I’m a great fan of such tokens (and used them a lot at Deutsche Telekom), but in my perception this pattern still needs to be established in the market. Moreover, they basically protect from a rogue RS (if the URL is used as audience) replaying the token someplace else, but they do not protect from all other kinds of leakage/replay (e.g. log files).
>>>>> 2. Make the DPoP token be a simple JWT with an "iat" and the origin of the RS. This stops the token being reused elsewhere but the client can reuse it (replay it) for many requests.
>>>>> 3. Issue a macaroon-based access token and the client can add a correct audience and scope restrictions at the point of use.
>>>> Why is this needed if the access token is already audience restricted? Or do you propose this as alternative?
>>>>> Protecting against the first kind of replay attacks only becomes an issue if we assume the protections in TLS have failed. But if DPoP is only intended for cases where mTLS can't be used, it shouldn't have to protect against a stronger threat model in which we assume that TLS security has been lost.
>>>> I agree.
>>>> best regards,
>>>> Torsten.
>>>>> -- Neil