Re: [websec] Comments on draft-ietf-websec-key-pinning

Yoav Nir <ynir.ietf@gmail.com> Fri, 02 January 2015 12:26 UTC

From: Yoav Nir <ynir.ietf@gmail.com>
Date: Fri, 02 Jan 2015 14:26:35 +0200
To: noloader@gmail.com
Cc: IETF WebSec WG <websec@ietf.org>
Subject: Re: [websec] Comments on draft-ietf-websec-key-pinning

Hi, Jeffrey

Thanks for the review.

However, if you look at the datatracker link ([1]), you’ll see that this draft was approved by the IESG two and a half months ago. Its publication as an RFC is only waiting for a referenced document to be published.

I’m afraid it’s too late now.

Yoav

[1] https://datatracker.ietf.org/doc/draft-ietf-websec-key-pinning/history/

> On Jan 2, 2015, at 7:21 AM, Jeffrey Walton <noloader@gmail.com> wrote:
> 
> I'd like to share some comments on
> https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21.
> 
> Public key pinning is an effective security control and I'm glad to see
> the IETF moving forward with it. And I'm especially thankful to Palmer
> and Sleevi for taking the time to put it together and shepherd it
> through the process.
> 
> ***** (1) *****
> 
> The title "Public Key Pinning Extension for HTTP" seems overly broad
> and misses the mark a bit. The Abstract states it's for User Agents,
> but the title does not narrow focus.
> 
> There are different problem domains where public key pinning can be
> utilized. Painting with a broad brush, I categorize them as "the
> general problem" where no 'a priori' knowledge exists, and various
> specific "instance problems", where 'a priori' does exits and can be
> leveraged. An example of the general problem is browsers, and an
> example of the instance problem is any custom software deployed by an
> organization.
> 
> Suggestion: change the title to "Trust on First Use Pinsets and
> Overrides for HTTP" or "Pinsets and Overrides for HTTP in User
> Agents".
> 
> There are a few reasons for the suggestion. First, the abstract
> effectively says as much. Second, the proposed standard is a TOFU scheme
> used for the general problem. Third, the Introduction recognizes the
> two classes of problems when it discusses the pre-shared keys. Fourth,
> the embodiment described in the proposed standard is not a preferred
> solution for the many instances of the specific problems. Fifth, the
> overrides need to be highlighted since they are an integral and
> high-impact part of the proposed standard.
> 
> Above, when I said the "… is not a preferred solution for the many
> instances of the specific problems", I'm referring to pinning as
> described in Gutmann's Engineering Security [1], OWASP [2] or by Moxie
> Marlinspike [3] and others.
> 
> ***** (2) *****
> 
> I think the document could benefit from a more comprehensive
> discussion of the goals. The abstract states "… [some]
> man-in-the-middle attacks due to compromised Certification
> Authorities". That wet my appetite and I'd like to read more.
> 
> I think it would be more helpful to state what the document is trying
> to achieve in terms of both operational goals and security goals. For
> example, I don't see any operational goals, like business continuity
> behind a proxy. And "some man-in-the-middle attacks due to compromised
> Certification Authorities" seems to be a somewhat underspecifed
> security goal.
> 
> ***** (3) *****
> 
> I think the document could benefit from an enumeration of the threats
> the security control is intended to defend against (or a normative
> reference to the "Pinning Threat Model" stated in [4]).
> 
> Taken in its totality (pinning + overrides), it's not clear to me what
> the proposed standard defends against.
> 
> The Abstract mentions it could defend against "[some]
> man-in-the-middle attacks due to compromised Certification
> Authorities." Because we don't know what the control is intended to
> protect against, we can't measure its effectiveness.
> 
> Additionally, when public key pinning is used in a more traditional
> sense (like Gutmann's Engineering Security [1], OWASP [2] or Moxie
> Marlinspike [3]), then pinning defends against some things the
> proposed standard appears to allow.
> 
> The lack of clarity obscures some significant operational differences
> between customary expectations and the proposed standard. Those
> misunderstood differences will clearly lead to confusion, unexpected
> results and a false sense of security.
> 
> ***** (4) *****
> 
> From the 1. Introduction:
> 
>    Key pinning is a trust-on-first-use (TOFU) mechanism.
> 
> That may be true for the general problem, but it's completely untrue
> when 'a priori' knowledge exists for a specific instance of the
> problem. Gutmann's Engineering Security [1], OWASP [2] or Moxie
> Marlinspike [3] have been providing counterexamples for years.
> 
> ***** (5) *****
> 
> From the 1. Introduction:
> 
>   A Pin is a relationship between a hostname and a cryptographic
>   identity (in this document, 1 or more of the public keys in a chain
>   of X.509 certificates).  Pin Validation is the process a UA performs
>   to ensure that a host is in fact authenticated with its previously-
>   established Pin.
> 
> I was a little confused the first time I read this because I parsed it
> incorrectly. Here's how I parsed it the first time: only the server or
> end-entity certificate has a hostname, so only that certificate can
> contribute a public key to a pinset. The other certificates
> (intermediate and CA) can't contribute to a pinset because they don't
> have a hostname.
> 
> Perhaps something like "A Pin is a mapping of a hostname to a set (one
> or more?) of public keys that can be used to cryptographically certify
> the site's identity... The public keys can be any of (1) the host's
> public key (2) any intermediate or CA public key...".
> 
> Also, 2.4 Semantics of Pins, offers a slightly different definition
> (it includes an algorithm, which appears to be missing from this
> definition). So maybe the definition above should be expanded to
> include "contextual information" that can later be ratified.
> 
> ***** (6) *****
> 
> The document might also consider introducing the term "Pinset". I
> think Kenny Root or the Android Security Team introduced the term at
> Google I/O a couple of years ago. They were probably using the term
> internally before then.
> 
> ***** (7) *****
> 
> From the 1. Introduction:
> 
>        ...  UAs apply X.509 certificate chain validation in accord
>        with [RFC5280].)
> 
> Typo: accord -> accordance.
> 
> ***** (8) *****
> 
> From 2.1.  Response Header Field Syntax:
> 
>    The "Public-Key-Pins" and "Public-Key-Pins-Report-Only" header...
> 
> The naming of the fields appears to indicate they are mutually
> exclusive (Report Only seems to indicate anything other than reporting
> is prohibited). But 2.3.2 allows them both, so it might be a good idea
> to make a quick statement that both are allowed in 2.1, and then
> detail it in 2.3.2. Or drop the "Only" from
> "Public-Key-Pins-Report-Only".
> 
> ***** (10) *****
> 
> From 2.2.2. HTTP Request Type:
> 
>   Pinned Hosts SHOULD NOT include the PKP header field in HTTP
>   responses conveyed over non-secure transport.  UAs MUST ignore any
>   PKP header received in an HTTP response conveyed over non-secure
>   transport.
> 
> There could be some confusion here. What about anonymous protocols
> that provide confidentiality only? Is it allowed or not allowed?
> 
> ***** (11) *****
> 
> From 2.3.3. Noting a Pinned Host - Storage Model:
> 
>   Known Pinned Hosts are identified only by domain names, and never IP
>   addresses.
> 
> This is kind of interesting. This document specifies behavior for
> browsers and other UAs, but browsers follow the CA/B. The CA/B does
> not prohibit a CA from issuing certificates for an IP address except
> in the case of an IANA-reserved address (see the Baseline
> Requirements, section 11.1.2 Authorization for an IP Address).
> Additionally, RFC 5280 allows IP addresses in section 4.2.1.6 Subject
> Alternative Name. So it's not clear to me why a more restrictive policy
> is called out.
> 
> I also understand an IP address for a host can change. But in the case
> where a public IP address has been previously bound to a public key by
> way of an authority, the policy again seems overly restrictive.
> 
> *If* the proposed standard is trying to guard against some threats
> posed by the Domain Name System, IANA reserved addresses, RFC 1918
> addresses and similar, then that should be listed under Goals and/or
> Threats.
> 
> ***** (12) *****
> 
> From 2.6. Validating Pinned Connections:
> 
>   … It is acceptable to allow Pin
>   Validation to be disabled for some Hosts according to local policy.
>   For example, a UA may disable Pin Validation for Pinned Hosts whose
>   validated certificate chain terminates at a user-defined trust
>   anchor, rather than a trust anchor built-in to the UA (or underlying
>   platform).
> 
> OK, this is the reason for the proposed title change in (1). This is
> also the reason for the list of goals and threats in (2) and (3). It's
> just not clear to me (at the moment) how a known-good pinset can be
> broken to facilitate proxying/interception in a proposed standard for
> a security control that's supposed to stop that kind of funny
> business.
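> 
> To make the override concrete, the quoted text amounts to a UA-side
> check along these lines (my sketch, not the draft's wording; the type
> and names are hypothetical):
> 
>    from dataclasses import dataclass
> 
>    @dataclass
>    class ValidatedChain:
>        root: str   # fingerprint of the trust anchor the chain ends at
> 
>    def pin_validation_required(chain: ValidatedChain,
>                                user_trust_anchors: set) -> bool:
>        # Local policy: skip Pin Validation when the validated chain
>        # terminates at a user-installed trust anchor (e.g. a corporate
>        # proxy root), even for a host with a known-good pinset.
>        return chain.root not in user_trust_anchors
> 
> That one branch is what lets a locally installed anchor defeat an
> otherwise known-good pinset.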
> 
> Also, as far as document flow is concerned, I think the sentences
> quoted above should be removed from the second paragraph and placed as
> the last stand-alone paragraph in that section. I think it should be
> moved because it breaks the flow of the discussion of "what to do"
> with a discussion of the related topic of "not doing what you should
> be doing".
> 
> ***** (13) *****
> 
> From 2.6. Validating Pinned Connections:
> 
>   UAs send validation failure reports only when Pin Validation is
>   actually in effect.  Pin Validation might not be in effect e.g.
>   because the user has elected to disable it, or because a presented
>   certificate chain chains up to a user-defined trust anchor.  In such
>   cases, UAs SHOULD NOT send reports.
> 
> If I am reading/parsing this correctly: adversaries want to be able to
> surreptitiously MitM the channel, and they don't want a spotlight
> shone on them while doing it.
> 
> As worded, the last two sentences are a tremendous blow to
> transparency. The users and site owners who are being
> proxied/intercepted have a right to know what's occurring on their
> [supposedly] secure channel. In addition, the community has a right to
> know how widespread potential problems are.
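> 
> For concreteness, the kind of failure report being suppressed carries
> roughly the following information (my paraphrase of the draft's
> reporting section; the exact field names may differ):
> 
>    {
>      "date-time": "2015-01-02T12:26:35Z",
>      "hostname": "example.com",
>      "port": 443,
>      "noted-hostname": "example.com",
>      "served-certificate-chain": ["-----BEGIN CERTIFICATE----- ..."],
>      "validated-certificate-chain": ["-----BEGIN CERTIFICATE----- ..."],
>      "known-pins": ["pin-sha256=\"<base64 SPKI hash>\""]
>    }
> 
> That is exactly the evidence a site or user would need to show that an
> unexpected chain was substituted on the connection.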
> 
> Users and sites have a right to know because non-trivial data is
> sometimes traversing the channel, like site passwords and confidential
> company information. Some organizations and verticals have auditing
> and compliance requirements, so reporting a breach/loss of that data
> is required. The community has a right to know so the breadth of the
> problem can be ascertained, and plans of action can be formulated and
> action taken.
> 
> Some proxying/interception performed by third parties or externalities
> could be illegal in some jurisdictions, so the user or site will need
> the evidence if they desire to have it addressed more formally. In the
> US, I believe the law is the Computer Fraud and Abuse Act.
> 
> The perceived risk of a lawsuit could help stop some of the
> unauthorized proxying/interception because some organizations will
> weigh the risk against the reward. Considering this proposal is a
> security control to stop unauthorized proxying/interception, that's a
> win-win.
> 
> IETF leadership: Carl Sagan once asked, who speaks for the Earth. Who
> here speaks for the users and sites? Does it *really* sound like a
> good idea to suppress evidence of validation failures and unauthorized
> overrides for a security control that's specifically designed to
> contend with the threats?
> 
> ***** (14) *****
> 
> 2.7. Interactions With Preloaded Pin Lists:
> 
>   The effective policy for a Known Pinned Host that has both built-in
>   Pins and Pins from previously observed PKP header response fields is
>   implementation-defined.
> 
> In the name of transparency, the site should receive at least one
> report detailing the issue. The site should be able to specify the
> frequency of the reports so it can assess the breadth of the reported
> issue.
> 
> ***** (15) *****
> 
> 2.8. Pinning Self-Signed End Entities
> 
> Kudos for this section. I think it fits nicely with Viktor Dukhovni's
> Opportunistic Security.
> 
> ***** (16) *****
> 
> Overrides are mentioned once in section 2.7. They effectively allow an
> adversary to break known-good pinsets and subvert the secure channel.
> Section 4. Security Considerations does not discuss the impact of an
> override.
> 
> The impact of overrides should be discussed so that sites and software
> architects can ensure the security control meets expectations and
> properly assess risk.
> 
> In addition, for browsers (which the proposed standard appears to
> target), discarding the user's wishes is a violation of the Priority of
> Constituencies [5] and violates the user's and site's expectation of
> Secure By Design [6]. So it's not clear to me how useful the proposal
> will be for browsers and other UAs that follow the W3C's Design
> Principles. But I think things can be improved so that the proposed
> standard does satisfy them.
> 
> In the above, I claim the user typing HTTPS (or a site redirecting to
> HTTPS) is an indicator that a secure connection to a site is desired,
> and not a connection to folks who would like to proxy the connection
> for them. By not delivering what they asked for, the proposal falls short
> of both Priority of Constituencies and Secure By Design.
> 
> ***** (17) *****
> 
> There is no consideration for a site to set policy on overrides. That
> is, the site itself, and not an externality, should be able to determine
> whether it wants to allow proxying/interception. Sites offering HTTPS,
> and other security controls like HSTS or CSP, are strong indicators
> that sites care about these things.
> 
> Sites should be allowed to set policy on overrides, just like they can
> set HSTS or CSP policy.
> 
> IETF leadership: Who here speaks for the users and sites?
> 
> ***************
> 
> Sorry for the long post, and thanks for taking the time to consider these comments.
> 
> Jeffrey Walton
> 
> ***************
> 
> [1] https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf
> [2] https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning
> [3] http://www.thoughtcrime.org/blog/authenticity-is-broken-in-ssl-but-your-app-ha/
> [4] https://www.ietf.org/mail-archive/web/websec/current/msg02261.html
> [5] http://www.w3.org/TR/html-design-principles/#priority-of-constituencies
> [6] http://www.w3.org/TR/html-design-principles/#secure-by-design
> 