[websec] Comments on draft-ietf-websec-key-pinning

Jeffrey Walton <noloader@gmail.com> Fri, 02 January 2015 05:21 UTC

Date: Fri, 2 Jan 2015 00:21:33 -0500
Message-ID: <CAH8yC8=XEr9q8VHarucKa0rVqSPt3=oDzDRWXA3_u4rkhpZmoQ@mail.gmail.com>
From: Jeffrey Walton <noloader@gmail.com>
To: IETF WebSec WG <websec@ietf.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
Archived-At: http://mailarchive.ietf.org/arch/msg/websec/WxnLpkFlAdT58L51fmWUtikUSDY
Subject: [websec] Comments on draft-ietf-websec-key-pinning

I'd like to share some comments on draft-ietf-websec-key-pinning.

Public key pinning is an effective security control and I'm glad to see
the IETF moving forward with it. And I'm especially thankful to Palmer
and Sleevi for taking the time to put it together and shepherd it
through the process.

***** (1) *****

The title "Public Key Pinning Extension for HTTP" seems overly broad
and misses the mark a bit. The Abstract states it's for User Agents,
but the title does not narrow focus.

There are different problem domains where public key pinning can be
utilized. Painting with a broad brush, I categorize them as "the
general problem" where no 'a priori' knowledge exists, and various
specific "instance problems", where 'a priori' knowledge does exist and
can be leveraged. An example of the general problem is browsers, and an
example of the instance problem is custom software deployed by an
organization.

Suggestion: change the title to "Trust on First Use Pinsets and
Overrides for HTTP" or "Pinsets and Overrides for HTTP in User
Agents".

There are a few reasons for the suggestion. First, the abstract
effectively states as much. Second, the proposed standard is a TOFU scheme
used for the general problem. Third, the Introduction recognizes the
two classes of problems when it discusses the pre-shared keys. Fourth,
the embodiment described in the proposed standard is not a preferred
solution for the many instances of the specific problems. Fifth, the
overrides need to be highlighted since they are an integral and high
impact part of the proposed standard.

Above, when I said the "… is not a preferred solution for the many
instances of the specific problems", I'm referring to pinning as
described in Gutmann's Engineering Security [1], OWASP [2] or by Moxie
Marlinspike [3] and others.
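
To make the distinction concrete, here is a minimal sketch of the kind
of 'a priori' pinning those references describe: the client ships with a
known fingerprint and rejects anything else. The hostname and digest
below are hypothetical placeholders, and hashing the entire DER
certificate (rather than the SPKI) is just one of the variants the
references discuss.

```python
# A-priori ("instance problem") pinning sketch. No trust on first use:
# the fingerprint is known before the first connection is ever made.
import hashlib
import socket
import ssl

PINNED_SHA256 = {
    # hex SHA-256 of the server's DER-encoded certificate, known in advance
    "example.net": "00" * 32,  # placeholder value, not a real pin
}

def fingerprint(der_cert):
    """Return the hex SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def pin_ok(hostname, der_cert):
    """True only if the presented certificate matches the pre-shared pin."""
    expected = PINNED_SHA256.get(hostname)
    return expected is not None and fingerprint(der_cert) == expected

def connect_pinned(hostname, port=443):
    """Open a TLS connection and enforce the pin on the peer certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)
            if not pin_ok(hostname, der):
                raise ssl.SSLError("peer certificate does not match pin")
            return tls.version()
```

Note there is no override path at all; that is the essential difference
from the TOFU scheme in the proposed standard.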

***** (2) *****

I think the document could benefit from a more comprehensive
discussion of the goals. The abstract states "… [some]
man-in-the-middle attacks due to compromised Certification
Authorities". That whetted my appetite and I'd like to read more.

I think it would be more helpful to state what is trying to be
achieved in terms of both operational goals and security goals. For
example, I don't see any operational goals, like business continuity
behind a proxy. And "some man-in-the-middle attacks due to compromised
Certification Authorities" seems to be a somewhat underspecified
security goal.

***** (3) *****

I think the document could benefit from an enumeration of the threats
the security control is intended to defend against (or a normative
reference to the "Pinning Threat Model" stated in [4]).

Taken in its totality (pinning + overrides), it's not clear to me what
the proposed standard defends against.

The Abstract mentions it could defend against "[some]
man-in-the-middle attacks due to compromised Certification
Authorities." Because we don't know what the control is intended to
protect against, we can't measure its effectiveness.

Additionally, when public key pinning is used in a more traditional
sense (like Gutmann's Engineering Security [1], OWASP [2] or Moxie
Marlinspike [3]), then pinning defends against some things the
proposed standard appears to allow.

The lack of clarity means there are some significant operational
differences between customary expectations and the proposed standard.
The misunderstood differences will clearly lead to confusion,
unexpected results and a false sense of security.

***** (4) *****

From the 1. Introduction:

    Key pinning is a trust-on-first-use (TOFU) mechanism.

That may be true for the general problem, but it's completely untrue
when 'a priori' knowledge exists for a specific instance of the
problem. Gutmann's Engineering Security [1], OWASP [2] or Moxie
Marlinspike [3] have been providing the counter examples for years.

***** (5) *****

From the 1. Introduction:

   A Pin is a relationship between a hostname and a cryptographic
   identity (in this document, 1 or more of the public keys in a chain
   of X.509 certificates).  Pin Validation is the process a UA performs
   to ensure that a host is in fact authenticated with its previously-
   established Pin.

I was a little confused the first time I read this because I parsed it
incorrectly. Here's how I parsed the first time: only the server or
end-entity certificate has a hostname, so only that certificate can
contribute a public key to a pinset. The other certificates
(intermediate and ca) can't contribute to a pinset because they don't
have a hostname.

Perhaps something like "A Pin is a mapping of a hostname to a set (one
or more?) of public keys that can be used to cryptographically certify
the site's identity... The public keys can be any of (1) the host's
public key (2) any intermediate or ca public key...".

Also, 2.4, Semantics of Pins, offers a slightly different definition
(it includes an algorithm, which appears to be missing from this
definition). So maybe the definition above should be expanded to
include "contextual information" that can later be ratified.
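
Under the parsing suggested above, a Pin maps a hostname to a *set* of
key hashes, and validation succeeds if any key in the presented chain is
in that set. A minimal sketch (the function names and the SPKI-hash
representation are mine, not the draft's):

```python
# Hypothetical sketch of the suggested reading: a Pin maps a hostname to
# a SET of public-key (SPKI) hashes; any certificate in the chain --
# end-entity, intermediate, or CA -- may contribute the matching key.
pin_store = {}  # hostname -> set of SPKI hash strings

def note_pins(hostname, spki_hashes):
    """Record the pinset observed (or preloaded) for a hostname."""
    pin_store[hostname] = set(spki_hashes)

def pin_validate(hostname, chain_spki_hashes):
    """True if the host is unpinned, or if any chain key is pinned."""
    pinned = pin_store.get(hostname)
    if pinned is None:
        return True  # not a Known Pinned Host; nothing to enforce
    return any(h in pinned for h in chain_spki_hashes)
```

This also shows why the "only the hostname-bearing certificate" reading
is wrong: the hostname is only the lookup key, not a constraint on which
certificates may contribute keys.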

***** (6) *****

The document might also consider introducing the term "Pinset". I
think Kenny Root or the Android Security Team introduced the term at
Google I/O a couple of years ago. They were probably using the term
internally before then.

***** (7) *****

From the 1. Introduction:

        ...  UAs apply X.509 certificate chain validation in accord
        with [RFC5280].)

Typo: accord -> accordance.

***** (8) *****

From 2.1.  Response Header Field Syntax:

    The "Public-Key-Pins" and "Public-Key-Pins-Report-Only" header...

The naming of the fields appears to indicate they are mutually
exclusive (Report Only seems to indicate anything other than reporting
is prohibited). But 2.3.2 allows them both, so it might be a good idea
to make a quick statement that both are allowed in 2.1, and then
detail it in 2.3.2. Or drop the "Only" from the field name.
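
For what it's worth, both header names carry the same directive grammar,
so a UA can parse them with one routine. A sketch (the parser is mine,
not the draft's, and the pin values are placeholders):

```python
# Hedged sketch: parsing the directive syntax shared by "Public-Key-Pins"
# and "Public-Key-Pins-Report-Only" (per 2.3.2 a response may carry both;
# only the former is enforcing). Pin values below are placeholders.
def parse_pkp(value):
    """Split a PKP-style header value into pins and other directives."""
    pins, directives = [], {}
    for part in value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, val = part.partition("=")
        name = name.strip().lower()
        val = val.strip().strip('"')
        if name == "pin-sha256":
            pins.append(val)
        else:
            directives[name] = val if val else True
    return pins, directives

example = ('pin-sha256="AAAAplaceholderpinAAAA="; '
           'pin-sha256="BBBBplaceholderpinBBBB="; '
           'max-age=5184000; includeSubDomains')
pins, directives = parse_pkp(example)
```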

***** (10) *****

From 2.2.2. HTTP Request Type:

   Pinned Hosts SHOULD NOT include the PKP header field in HTTP
   responses conveyed over non-secure transport.  UAs MUST ignore any
   PKP header received in an HTTP response conveyed over non-secure
   transport.

There could be some confusion here. What about anonymous protocols
that provide confidentiality only? Is it allowed or not allowed?

***** (11) *****

From 2.3.3. Noting a Pinned Host - Storage Model:

   Known Pinned Hosts are identified only by domain names, and never IP
   addresses.

This is kind of interesting. This document specifies behavior for
browsers and other UAs, but browsers follow the CA/B. The CA/B does
not prohibit a CA from issuing certificates for an IP address except
in the case of an IANA reserved address (see the Baseline
Requirements, section 11.1.2, Authorization for an IP Address).
Additionally, RFC 5280 allows IP addresses in the Subject
Alternative Name (section 4.2.1.6). So it's not clear to me why a
more restrictive policy is called out.

I also understand an IP address for a host can change. But in the case
a public IP address has been previously bound to a public key by way
of an authority, it again seems more restrictive.

*If* the proposed standard is trying to guard against some threats
posed by the Domain Name System, IANA reserved addresses, RFC 1918
addresses and similar, then that should be listed under Goals and/or
Threats.

***** (12) *****

From 2.6. Validating Pinned Connections:

   … It is acceptable to allow Pin
   Validation to be disabled for some Hosts according to local policy.
   For example, a UA may disable Pin Validation for Pinned Hosts whose
   validated certificate chain terminates at a user-defined trust
   anchor, rather than a trust anchor built-in to the UA (or underlying
   platform).

OK, this is the reason for the proposed title change in (1). This is
also the reason for the list of goals and threats in (2) and (3). It's
just not clear to me (at the moment) how a known good pinset can be
broken to facilitate proxying/interception in a proposed standard for
a security control that's supposed to stop that kind of funny
business.

Also, as far as document flow is concerned, I think the sentences
quoted above should be removed from the second paragraph and placed as
the last stand-alone paragraph in that section. I think it should be
moved because it breaks the flow of the discussion of "what to do"
with a discussion of the related topic of "not doing what you should
be doing".
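
As I read 2.6, the override works like this (a sketch with hypothetical
names, not the draft's terminology):

```python
# Sketch of the 2.6 carve-out: Pin Validation is simply skipped when the
# validated chain terminates at a user-defined (locally installed) trust
# anchor, so a local root -- e.g. one installed by an interception
# proxy -- bypasses a known good pinset. Names are hypothetical.
def pin_validation_in_effect(anchor_is_builtin, user_disabled_pinning):
    return anchor_is_builtin and not user_disabled_pinning

def connection_allowed(anchor_is_builtin, user_disabled_pinning, pins_match):
    if not pin_validation_in_effect(anchor_is_builtin, user_disabled_pinning):
        return True  # the override: the pinset is not consulted at all
    return pins_match
```

The third and fourth cases below are the ones that concern me: a pin
mismatch is accepted whenever the anchor is local or pinning is disabled.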

***** (13) *****

From 2.6. Validating Pinned Connections:

   UAs send validation failure reports only when Pin Validation is
   actually in effect.  Pin Validation might not be in effect e.g.
   because the user has elected to disable it, or because a presented
   certificate chain chains up to a user-defined trust anchor.  In such
   cases, UAs SHOULD NOT send reports.

If I am reading/parsing this correctly: adversaries want to be able to
surreptitiously MitM the channel, and they don't want a spotlight
shone on them while doing it.

As worded, the last two sentences are a tremendous blow to
transparency. The users and the site owner who are being
proxy'd/intercepted have a right to know what's occurring on their
[supposedly] secure channel. In addition, the community has a right to
know how widespread potential problems are.

Users and sites have a right to know because non-trivial data is
sometimes traversing the channel, like site passwords and confidential
company information. Some organizations and verticals have auditing
and compliance requirements, so reporting a breach/loss of that data
is required. The community has a right to know so the breadth of the
problem can be ascertained, and plans of action can be formulated and
action taken.
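
For context, the draft's reporting mechanism sends a JSON report to a
report-uri on validation failure; suppressing it removes exactly this
evidence. A sketch of building such a report follows. The field names
reflect my reading of the draft's report format and should be treated as
approximate; the values are placeholders.

```python
# Hedged sketch of a Pin Validation failure report (field names are my
# approximation of the draft's report format; values are placeholders).
import json

def build_report(hostname, port, served_chain_pems, known_pins):
    return json.dumps({
        "date-time": "2015-01-02T05:21:33Z",  # placeholder timestamp
        "hostname": hostname,
        "port": port,
        "served-certificate-chain": served_chain_pems,
        "known-pins": known_pins,  # the pins that failed to match
    })

report = build_report(
    "example.net", 443,
    ["-----BEGIN CERTIFICATE----- (placeholder) -----END CERTIFICATE-----"],
    ['pin-sha256="AAAAplaceholderpinAAAA="'])
```

Dropping this report whenever a local anchor is in play is what I mean
by suppressing the spotlight.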

Some proxying/interception performed by third parties or externalities
could be illegal in some jurisdictions, so the user or site will need
the evidence if they desire to have it addressed more formally. In the
US, I believe the law is the Computer Fraud and Abuse Act.

The perceived risk of a lawsuit could help stop some of the
unauthorized proxying/interception because some organizations will
weigh the risk with the reward. Considering this proposal is a
security control to stop unauthorized proxying/interception, that's a
good thing.

IETF leadership: Carl Sagan once asked, "Who speaks for the Earth?"
Who here speaks for the users and sites? Does it *really* sound like a
good idea to suppress evidence of validation failures and unauthorized
overrides for a security control that's specifically designed to
contend with the threats?

***** (14) *****

2.7. Interactions With Preloaded Pin Lists:

   The effective policy for a Known Pinned Host that has both built-in
   Pins and Pins from previously observed PKP header response fields is
   implementation-defined.

In the name of transparency, the site should receive at least one
report detailing the issue. The site should be able to specify the
frequency of the reports so it can assess the breadth of the reported
problem.

***** (15) *****

2.8. Pinning Self-Signed End Entities

Kudos for this section. I think it fits nicely with Viktor Dukhovni's
Opportunistic Security.

***** (16) *****

Overrides are mentioned once in section 2.7. They effectively allow an
adversary to break known good pinsets and subvert the secure channel.
Section 4. Security Considerations does not discuss the impact of an
override.

The impact of overrides should be discussed so that sites and software
architects can ensure the security control meets expectations and
properly assess risk.

In addition, for browsers (which the proposed standard appears to
target), discarding the user's wishes is a violation of the Priority of
Constituencies [5]; and violates the user's and site's expectation of
Secure By Design [6]. So it's not clear to me how useful the proposal
will be for browsers and other UAs that follow the W3C's Design
Principles. But I think things can be improved so that the proposed
standard does satisfy them.

In the above, I claim the user typing HTTPS (or a site redirecting to
HTTPS) is an indicator that a secure connection to a site is desired,
and not a connection to folks who would like to proxy the connection
for them. By not delivering what they asked for, the proposal falls short
of both Priority of Constituencies and Secure By Design.

***** (17) *****

There is no consideration for a site to set policy on overrides. That
is, a site should be able to determine whether it wants to allow
proxying/interception, and not an externality. Sites offering HTTPS,
and other security controls like HSTS or CSP, are strong indicators
that sites care about these things.

Sites should be allowed to set policy on overrides, just like they can
set HSTS or CSP policy.

IETF leadership: Who here speaks for the users and sites?


Sorry for the long post, and thanks for taking the time to consider these comments.

Jeffrey Walton


[1] https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf
[2] https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning
[3] http://www.thoughtcrime.org/blog/authenticity-is-broken-in-ssl-but-your-app-ha/
[4] https://www.ietf.org/mail-archive/web/websec/current/msg02261.html
[5] http://www.w3.org/TR/html-design-principles/#priority-of-constituencies
[6] http://www.w3.org/TR/html-design-principles/#secure-by-design