Re: [websec] Comments on draft-ietf-websec-key-pinning

"Ryan Sleevi" <ryan-ietfhasmat@sleevi.com> Fri, 02 January 2015 21:18 UTC

Message-ID: <dc62fa0e9a842c1dcd39ec8a1a09073c.squirrel@webmail.dreamhost.com>
In-Reply-To: <CAH8yC8=XEr9q8VHarucKa0rVqSPt3=oDzDRWXA3_u4rkhpZmoQ@mail.gmail.com>
References: <CAH8yC8=XEr9q8VHarucKa0rVqSPt3=oDzDRWXA3_u4rkhpZmoQ@mail.gmail.com>
Date: Fri, 02 Jan 2015 13:18:34 -0800
From: Ryan Sleevi <ryan-ietfhasmat@sleevi.com>
To: noloader@gmail.com
Archived-At: http://mailarchive.ietf.org/arch/msg/websec/AH1k1q2gYBQfEtJdTBOxkGeg3uE
Cc: IETF WebSec WG <websec@ietf.org>
Subject: Re: [websec] Comments on draft-ietf-websec-key-pinning

On Thu, January 1, 2015 9:21 pm, Jeffrey Walton wrote:
>  I'd like to share some comments on
>  https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21.

Hi Jeffrey,

Though I see Yoav has already mentioned that these comments are well
beyond the IETF LC phase, and thus will not result in any changes to the
document, I do want to take the time to reply to your comments and
explain on more substantive grounds why no changes are warranted, even
if there were still an opportunity for change.

My fear is that this reply may encourage further discussion, and while I'm
certainly happy to provide further clarifications, I do think it is
important to keep in mind Yoav's remarks on changes.

>  ***** (1) *****
>
>  The title "Public Key Pinning Extension for HTTP" seems overly broad
>  and misses the mark a bit. The Abstract states it's for User Agents,
>  but the title does not narrow focus.
<snip>

This isn't so much a technical argument as quibbling over naming. I'm not
sure why you feel that "User Agent" is somehow a narrowing of focus -
anything that processes an HTTP header for a user is surely a User Agent,
and anything not processing HTTP headers surely shouldn't care about HTTP
headers.


>
>  ***** (2) *****
>
>  I think the document could benefit from a more comprehensive
>  discussion of the goals. The abstract states "… [some]
>  man-in-the-middle attacks due to compromised Certification
>  Authorities". That whetted my appetite and I'd like to read more.
<snip>

Useful feedback, and perhaps more detail could have been provided that
would have avoided some apparent confusion over the goals, but, as noted,
this is not likely to result in substantive changes.

I think the mailing list discussion of the document serves as a bit of a
living record of the security goals, and I hope this message (and the
contents below) will help further clarify the non-goals.

>  ***** (3) *****
<snip>

I think these comments fall into the same category as (2).

>  ***** (4) *****
>
>  From the 1. Introduction:
>
>      Key pinning is a trust-on-first-use (TOFU) mechanism.
>
>  That may be true for the general problem, but it's completely untrue
>  when 'a priori' knowledge exists for a specific instance of the
>  problem. Gutmann's Engineering Security [1], OWASP [2] or Moxie
>  Marlinspike [3] have been providing the counter examples for years.

This seems merely a disagreement on terminology. It would be equivalent to
say "Key pinning, as described in this document, is a ..."

>
>  ***** (5) *****
<snip>

While I understand that you were confused by this, I'm not sure I agree
that the language is the source of the confusion, or that the proposed
changes would be useful in avoiding that confusion.

>
>  ***** (6) *****
<snip>

These seem to be minor textual quibbles, and not something I think would
have been terribly useful to change.

>  ***** (7) *****
<snip>

Possibly something to fix once the document goes through the editor's
queue, where typographic and grammatical issues get cleaned up, although
I'm not sure. Thanks for highlighting this.

>  ***** (8) *****
>
>  From 2.1.  Response Header Field Syntax:
>
>      The "Public-Key-Pins" and "Public-Key-Pins-Report-Only" header...
<snip>

Renaming the header would be a non-starter. Among other reasons, I would
note the parallel here to Content-Security-Policy-Report-Only.

While I can see how you reached your interpretation, it's also entirely
reasonable to reach the correct conclusion from the text. As you note, the
text certainly expounds upon its non-exclusivity, and nothing in the text
(beyond your interpretation of the header name) supports your conclusion
of mutual exclusivity.
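
To make the non-exclusivity concrete, here is a purely illustrative
response carrying both headers at once (the values in angle brackets are
placeholders, not taken from the document): an enforced policy, plus a
trial policy being evaluated via reporting before it is enforced:

    Public-Key-Pins: pin-sha256="<base64 hash of the site's SPKI>";
        pin-sha256="<base64 hash of a backup SPKI>";
        max-age=5184000; includeSubDomains
    Public-Key-Pins-Report-Only: pin-sha256="<base64 hash of a candidate SPKI>";
        max-age=5184000; report-uri="https://report.example.net/pkp-report"

Each pin-sha256 value is simply the base64 encoding of a SHA-256 digest
computed over the certificate's DER-encoded SubjectPublicKeyInfo; a
minimal sketch of the computation (the function name is mine):

    import base64
    import hashlib

    def pin_sha256(spki_der: bytes) -> str:
        # base64(SHA-256(DER-encoded SubjectPublicKeyInfo))
        return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")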

>  ***** (10) *****
>
>  From 2.2.2. HTTP Request Type:
>
>     Pinned Hosts SHOULD NOT include the PKP header field in HTTP
>     responses conveyed over non-secure transport.  UAs MUST ignore any
>     PKP header received in an HTTP response conveyed over non-secure
>     transport.
>
>  There could be some confusion here. What about anonymous protocols
>  that provide confidentiality only? Is it allowed or not allowed?

They're anonymous - i.e. they lack authenticity, a.k.a. a key binding.
If you're binding a key, you're effectively authenticating the peer.

Perhaps you're describing something like "opportunistic encryption", in
which case, no, it would not be supported. The threat model in 4.5
hopefully makes it obvious why this would be bad.


>  ***** (11) *****
>
>  From 2.3.3. Noting a Pinned Host - Storage Model:
>
>     Known Pinned Hosts are identified only by domain names, and never IP
>     addresses.
>
>  This is kind of interesting. This document specifies behavior for
>  browsers and other UAs, but browsers follow the CA/B.

Perhaps you meant the CA/Browser Forum's Baseline Requirements? In which
case, no, browsers don't follow the BRs - the BRs follow the browsers.

>  The CA/B does
>  not prohibit a CA from issuing certificates for an IP address except
>  in the case of an IANA reserved address (see the Baseline
>  Requirements, section 11.1.2 Authorization for an IP Address).
>  Additionally, RFC 5280 allows IP addresses in section 4.2.1.6 Subject
>  Alternative Name. So it's not clear to me why a more restrictive policy
>  is called out.
>
>  I also understand an IP address for a host can change. But in the case
>  a public IP address has been previously bound to a public key by way
>  of an authority, it again seems more restrictive.
>
>  *If* the proposed standard is trying to guard against some threats
>  posed by the Domain Name System, IANA reserved addresses, RFC 1918
>  addresses and similar, then that should be listed under Goals and/or
>  Threats.

I think it's a reasonable critique that we did not expound upon why IP
addresses were forbidden from setting policy. As you note, the CA/Browser
Forum Baseline Requirements (and the browser policies that the BRs are
modeled after) currently permit publicly trusted certificates for IP
addresses.

This requirement is inherited from RFC 6797, which this document was
originally proposed as an extension of, and is documented in Appendix A,
Item 4 ( http://tools.ietf.org/html/rfc6797#appendix-A )
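
To make the inherited restriction concrete, here is a minimal sketch of
one way a UA might gate the "noting" step (the function name and shape
are mine, not taken from the document or from any particular UA):

    import ipaddress

    def may_note_pins(host: str) -> bool:
        """Known Pinned Hosts are identified only by domain names.

        A connection to an IPv4 or IPv6 literal never results in pins
        being noted, regardless of any PKP header in the response.
        """
        try:
            ipaddress.ip_address(host.strip("[]"))  # "[2001:db8::1]" handled too
            return False  # IP literal: do not note pins
        except ValueError:
            return True   # anything else is treated as a domain name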

>
>  ***** (12) *****
>
>  From 2.6. Validating Pinned Connections:
>
>     … It is acceptable to allow Pin
>     Validation to be disabled for some Hosts according to local policy.
>     For example, a UA may disable Pin Validation for Pinned Hosts whose
>     validated certificate chain terminates at a user-defined trust
>     anchor, rather than a trust anchor built-in to the UA (or underlying
>     platform).
>
>  OK, this is the reason for the proposed title change in (1). This is
>  also the reason for the list of goals and threats in (2) and (3). It's
>  just not clear to me (at the moment) how a known good pinset can be
>  broken to facilitate proxying/interception in a proposed standard for
>  a security control that's supposed to stop that kind of funny
>  business.

This is, as I understand it, your main point of contention.

Rather than write a full rebuttal, I'd rather point you to
http://www.chromium.org/Home/chromium-security/security-faq#TOC-How-does-key-pinning-interact-with-local-proxies-and-filters-
which describes the design rationale.

In short, what you describe is a tension between the server operator (who
wishes to express a cryptographic policy) and the user (who wishes to use
their machine however they see fit). In such battles, User Agents (a.k.a.
the things that process requests for the user) implicitly and explicitly
choose to favour the user (for whom they act) rather than the server.
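
As a rough sketch of the decision being described (the names and
structure here are mine, not drawn from the document or from any
particular UA's implementation):

    def check_pins(validated_chain, user_defined_anchors, chain_spki_hashes,
                   known_pins, send_report):
        """Apply Pin Validation, honouring local policy.

        If the validated chain terminates at a user-defined trust anchor
        (e.g. one installed for a corporate proxy or anti-virus product),
        Pin Validation is not in effect: the connection proceeds and no
        failure report is sent, since there is no failure to report.
        """
        if validated_chain[-1] in user_defined_anchors:
            return True  # local policy wins; pins are not enforced
        if known_pins & set(chain_spki_hashes):
            return True  # at least one noted pin matches the chain
        send_report()    # validation was in effect, so the failure is reported
        return False     # and the connection is rejected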

>
>  ***** (13) *****
>
>  From 2.6. Validating Pinned Connections:
>
>     UAs send validation failure reports only when Pin Validation is
>     actually in effect.  Pin Validation might not be in effect e.g.
>     because the user has elected to disable it, or because a presented
>     certificate chain chains up to a user-defined trust anchor.  In such
>     cases, UAs SHOULD NOT send reports.
>
>  If I am reading/parsing this correctly: adversaries want to be able to
>  surreptitiously MitM the channel, and they don't want a spotlight
>  shone on them while doing it.

Then no, you're not reading it properly.

This is about protecting users' privacy and rights, rather than the
server's goals. As discussed in this group in the past, it's quite silly
to suggest that the server should be able to violate the users' rights or
privileges with some header - as silly as setting an evil bit (RFC 3514).

For example, consider popular anti-virus solutions that do local MITM for
purposes of traffic inspection/protection. The user has chosen to grant
such a program the capability to do so (such as by installing its trust
anchor), and derives great benefit from doing so.

Such MITM software may (and often does) include the authorized user's name
and other personally identifying details (such as a license number) within
the certificate it uses to intercept.

Under today's web security model, there is no way for the remote server to
get access to this data, and that is a GOOD thing. Even if the certificate
didn't contain any PII, the very key of the issuing certificate itself
would represent a stable identifier that could be tracked across browsing
sessions, since each key would be unique per user.
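
To make the leak concrete: a validation failure report is a JSON body
POSTed to the report-uri, and it carries the certificate chains the UA
actually saw. Had the spec required reports even when a user-defined
anchor is in play, a report sent from behind a local interception product
would look roughly like this (abbreviated, with placeholder values and
several fields omitted), handing the server exactly the data described
above:

    {
      "date-time": "2015-01-02T21:18:34Z",
      "hostname": "www.example.com",
      "port": 443,
      "served-certificate-chain": [
        "<PEM of the leaf minted by the local AV/proxy root, possibly naming the licensed user>"
      ],
      "validated-certificate-chain": [
        "<PEM of the chain up to the user-installed trust anchor>"
      ],
      "known-pins": [ "pin-sha256=\"<base64 hash of a noted SPKI>\"" ]
    }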

This is not a "tremendous blow to transparency", as you put it. This is
about protecting users' privacy.

This is not about violating the user's right to know they're being
proxied/intercepted - they're already aware of it, because only they could
have authorized it. If you're concerned that the user may not have
authorized it, then I would remind you of the Immutable Laws of Security -
particularly laws 2 and 3.
http://technet.microsoft.com/en-us/magazine/2008.10.securitywatch.aspx

<snip>
>  IETF leadership: Carl Sagan once asked, who speaks for the Earth. Who
>  here speaks for the users and sites? Does it *really* sound like a
>  good idea to suppress evidence of validation failures and unauthorized
>  overrides for a security control that's specifically designed to
>  contend with the threats?

This spec speaks for the users, by ensuring that local privacy rights
trump remote server policy.

If this spec failed to respect the users' right to privacy, and disclosed
to a remote server details that it would not otherwise have access to,
then it would just be a silly little arms race where the end result was
exactly what was specified - anyone doing this would block the reporting
(and rightfully so), as it would otherwise offer persistent tracking of
users.

Remote policy requests CANNOT trump local security policy, full stop.

>  ***** (14) *****
>
>  2.7. Interactions With Preloaded Pin Lists:
>
>     The effective policy for a Known Pinned Host that has both built-in
>     Pins and Pins from previously observed PKP header response fields is
>     implementation-defined.
>
>  In the name of transparency, the site should receive at least one
>  report detailing the issue. The site should be able to specify the
>  frequency of the reports so it can assess the breadth of the reported
>  issue.

Implementation defined is implementation defined.
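
Purely to illustrate what "implementation-defined" leaves open (this is
one possible choice, invented for this example, not a description of any
particular UA): one implementation might take the union of preloaded and
header-supplied pins, another might let the preloaded entry win outright,
so a site cannot rely on any one behaviour:

    def effective_pins(preloaded_pins: set, header_pins: set) -> set:
        # One of several equally conformant merge policies: the union.
        return preloaded_pins | header_pins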

>  ***** (15) *****
>
>  2.8. Pinning Self-Signed End Entities
>
>  Kudos for this section. I think it fits nicely with Viktor Dukhovni's
>  Opportunistic Security.

MAY is MAY.

>  ***** (16) *****
>
<snip>

I think all of what you said is wrong here, for reasons expounded upon
both in (13) and in
http://www.chromium.org/Home/chromium-security/security-faq#TOC-How-does-key-pinning-interact-with-local-proxies-and-filters-
. That is, everything you cite as an argument for your interpretation is
exactly the reason why it does NOT do what you wish, and why it would be
terrible to do so.

The device owner has expressly authorized the interception. They are the
first constituent. Your remote server operator has far lower priority than
they do, nor can you argue that there is any way to protect users who are
not the device owners - this is one of the Immutable Laws of Security.

>  ***** (17) *****
>
<snip>

This is just a restatement of (16) and (13).

The users' wishes - including allowing interception - trump the remote
server's wishes.

Though the remote server may not wish to allow the user's anti-virus to
protect them (and I can think of plenty of hostile malware sites that
would want such a control), the user's wishes trump all.