Re: [Uta] New proposal: SMTP Strict Transport Security

Viktor Dukhovni <ietf-dane@dukhovni.org> Tue, 22 March 2016 08:49 UTC

Return-Path: <ietf-dane@dukhovni.org>
X-Original-To: uta@ietfa.amsl.com
Delivered-To: uta@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id B4EB212D7EB for <uta@ietfa.amsl.com>; Tue, 22 Mar 2016 01:49:03 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Level:
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RCVD_IN_DNSWL_NONE=-0.0001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 9t3ZjFF9QHJI for <uta@ietfa.amsl.com>; Tue, 22 Mar 2016 01:49:01 -0700 (PDT)
Received: from mournblade.imrryr.org (mournblade.imrryr.org [38.117.134.19]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 331C712D5C0 for <uta@ietf.org>; Tue, 22 Mar 2016 01:49:01 -0700 (PDT)
Received: by mournblade.imrryr.org (Postfix, from userid 1034) id 0DE47284F45; Tue, 22 Mar 2016 08:49:00 +0000 (UTC)
Date: Tue, 22 Mar 2016 08:49:00 +0000
From: Viktor Dukhovni <ietf-dane@dukhovni.org>
To: uta@ietf.org
Message-ID: <20160322084859.GF6602@mournblade.imrryr.org>
References: <CAB0W=GS2PXF-divC+SNs+A-jH1-_BBA889-TbQXHvrVsrbKLEA@mail.gmail.com> <CAB0W=GSQ4oTLT+qepMi7Pj5=UmBD70D_uW7c193RY-gw818ORA@mail.gmail.com> <CAB0W=GRB_6LhqEGYzeYq-srnM99wqwZrdjUEm=vJ7+oFiKbYoA@mail.gmail.com> <CAB0W=GTGja5JtxGuCzhD6O3B2Ow-wLN-B6WQ8XUDyvQRqdFZxw@mail.gmail.com> <20160322063527.GD6602@mournblade.imrryr.org> <CANtKdUeh8LV1uaWAyRqQ2ou4pdTNvKgzuJ5kKsQLwPFORqrDQA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <CANtKdUeh8LV1uaWAyRqQ2ou4pdTNvKgzuJ5kKsQLwPFORqrDQA@mail.gmail.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
Archived-At: <http://mailarchive.ietf.org/arch/msg/uta/tpZK98YUfWqsBOZemapXJtQ1ubM>
Subject: Re: [Uta] New proposal: SMTP Strict Transport Security
X-BeenThere: uta@ietf.org
X-Mailman-Version: 2.1.17
Precedence: list
Reply-To: uta@ietf.org
List-Id: UTA working group mailing list <uta.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/uta>, <mailto:uta-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/uta/>
List-Post: <mailto:uta@ietf.org>
List-Help: <mailto:uta-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/uta>, <mailto:uta-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 22 Mar 2016 08:49:04 -0000

On Tue, Mar 22, 2016 at 08:58:25AM +0100, Daniel Margolis wrote:

> >     A significant obstacle to a successful roll-out of WebPKI with
> >     SMTP is not [so] much that obtaining and deploying CA certs is
> >     onerous (enabling DNSSEC is likely more difficult at present),
> >     but rather that there is no single set of CAs that sending and
> >     receiving systems can (or perhaps should) reasonably agree on.
> >     On the one hand, because MTAs employing STS are non-interactive
> >     background processes with no human-operator in the loop to
> >     "click OK" for each exception, the set of CAs a sending system
> >     that employs STS would need to trust would need to be "comprehensive
> >     enough" to include all the CAs used by all the domains one
> >     might need to send email to.
> 
> This is of course a big topic, ...

I don't think this is central to my feedback, more of an observation
on the framing of comparisons with DANE.  For the large providers,
the choice of a popular-enough CA will be easy enough.  For
everyone else, DV certs from the long tail of CAs will provide the
appearance of security (first a CA you've never heard of makes a
leap-of-faith assertion via insecure email to admin@domain, then
everyone trusts assertions from that CA).  Yes, this model makes
some opportunistic MiTM attacks more difficult, but at the cost of
making all domains comparably insecure.

I would urge the large providers to use a substantially more
restrictive set of CAs for email delivery within the "walled garden".
You might even publish that list of CAs as the only ones to trust
for sending email to the 800lb-gorilla email services.

This does make some domains more equal than others, but I think
that the scope of this effort is primarily to secure email between
and to said providers, with all the other domains a lower
priority.  If you like, an extension of EMiG ("Email Made in Germany",
for those not familiar with that precedent) to a larger, more
international group of providers.

That said, I don't want the discussion to focus on this specific
issue; let's put it aside for now and return to it later once
the important protocol bits are sorted out.

> >     This requires domains that publish STS records to duplicate
> >     their MX records in the STS RRset.  It is not clear why that's
> >     useful.  If the STS record itself is not DNSSEC-validated, the
> >     payload is not more secure than the MX RRset.  If the payload
> >     is DNSSEC-validated, then the MX RRset in the same zone would
> >     (barring unexpected zone cuts) be equally secure.  I posit that
> >     this field is both onerous and superfluous.
> 
> There are two differences with the MX records here:
> 
> 1. A (most likely) longer TTL on the STS policy versus the MX record
> 2. The option for wildcards or broader patterns than merely a list of
> valid hosts
>
> This permits the publishing domain to declare that "for the next year, I
> plan to always host my mail at example.com" without publishing specific MX
> records that have a 1-year TTL (which of course could be brittle).

Yes, that's why the MX record patterns should be in the STS policy
published at the HTTPS URL (JSON encoded), but there's no reason
to put it in DNS.  As I said, the DNS record can basically be
compressed to a 1-bit value, but see below...
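
To make the "1-bit" point concrete, here is a rough Python sketch of
the first-contact logic; the function name and return values are mine
for illustration, not from any draft:

```python
# Sketch: with webpki-only STS, the DNS record carries no policy at
# all, only a presence signal.  All names here are illustrative.

def sts_fetch_decision(dns_record_present, have_cached_policy):
    """Decide how to consult the HTTPS policy URI for a destination.

    - With a cached policy, no synchronous lookup is needed.
    - If the "_smtp-sts" record exists, fetch the policy via HTTPS.
    - Otherwise, skip the HTTPS probe for this delivery; a background
      task may still probe later, with suitable spacing.
    """
    if have_cached_policy:
        return "use-cached"
    if dns_record_present:
        return "fetch-now"
    return "probe-later"
```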

> >     It would be expensive for MTAs to attempt repeated HTTPS
> >     connections that timeout trying to connect to port 443 at
> >     the majority of domains which have not deployed STS.
> >
> >     All that's needed in DNS to support a pure WebPKI STS is a
> >     boolean value to signal the existence of the STS resource URI.
> >     This data can be obtained efficiently.  If the "_smtp-sts" RR
> >     exists (pick a suitable RRtype and fixed short payload) then
> >     the HTTPS URI should be consulted, otherwise the HTTPS URI is
> >     not consulted (at first contact), or is consulted asynchronously,
> >     in parallel with the first mail delivery (with appropriate spacing
> >     between probes, ...).
> >
> >     Thus some MTAs might compress the STS DNS record to zero bits,
> >     and just use asynchronous suitably spaced HTTPS probes to the
> >     domains for which no policy is presently known.  However the
> >     1-bit encoding is likely better.
> 
> I also dislike the copying of the policy into two places, ...
> 
> The only real reason to do this is to allow MTAs to cheaply see if the
> policy has been updated.

For that, the DNS record should, instead of a single absent/present
bit, consist of a short (DNS) TTL "nonce" that is also published
in the JSON policy obtained at the HTTPS well-known URL.  Then
invalidation of the policy is cheaply achieved by updating the
nonce in DNS and in the HTTPS data.  Clients that have a cached
policy can quickly check the DNS nonce, and refresh via HTTPS when
the DNS nonce no longer matches the cached value.
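
A rough Python sketch of that check; the "nonce" field name and the
function shape are mine for illustration:

```python
# Sketch: cheap cache invalidation via a short-TTL DNS nonce that is
# also carried in the HTTPS-published JSON policy.  Illustrative only.

def current_policy(cached, dns_nonce, fetch_https):
    """Return a usable policy, refreshing over HTTPS only when needed.

    cached:      previously fetched policy dict (or None), containing
                 the nonce seen at fetch time
    dns_nonce:   nonce currently published in DNS
    fetch_https: callable that fetches the JSON policy from the
                 well-known HTTPS URL
    """
    if cached is not None and cached.get("nonce") == dns_nonce:
        return cached          # DNS nonce unchanged: cache still valid
    return fetch_https()       # changed or no cache: refresh via HTTPS
```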

There is no need to duplicate any of the semantic payload of the
record in DNS, given that it is stored and authenticated via HTTPS.

> So all we're really doing here (in the webpki case) is leveraging DNS as a
> sort of caching layer.

My (strong) suggestion: use DNS for just cache invalidation, and
perhaps also publication (via a separate record) of the "rua"
reporting URI.  Do not duplicate in DNS data which one must in any
case obtain and cache via HTTPS.

Do not attempt to hedge your bets and support DANE/DNSSEC via STS;
I don't think that makes much sense either.

> If we are willing to force the webpki validation method (as you assumed, I
> think, above)

Yes, force webpki in STS.  It dramatically simplifies the DNS part
of the protocol, losslessly, to around zero bits (OK, an expiration
nonce), and also simplifies the HTTPS payload to fewer and simpler
JSON fields.

> I'm somewhat on the fence about this trade-off myself, but I don't think
> it's unreasonable. Thoughts?

For hosting indirection, the customer domain still needs to operate
an HTTPS well-known URI to avoid a variety of attacks.  It could
return a redirect from that server, or just cache and "proxy" the
payload from the provider's corresponding URL.  A once-an-hour
cron job to (securely) scrape the provider's data and re-publish
it locally would be quite sufficient and keep everything simple.
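
A rough sketch of such a job in Python; the fetch step is injected as
a callable (a real cron job would fetch the provider's policy URL
over authenticated HTTPS), and the destination path is a placeholder:

```python
# Sketch: mirror the hosting provider's policy under the customer
# domain's own well-known URI.  Names and paths are illustrative.
import os
import tempfile

def mirror_policy(fetch, dest):
    """Fetch the provider's policy bytes and republish them locally.

    The write is atomic (write to a temp file, then rename), so web
    server readers never observe a partially written policy.
    """
    data = fetch()
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.replace(tmp, dest)
```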

Support for redirects would introduce substantial complexity for
the client, especially if allowed to nest...

Who should failure notifications go to?

In summary:

    * Radically simplify the DNS STS record to contain zero policy.

    * Consider publishing the "rua" separately in DNS, or else
      exclusively via the well-known URI.

    * Allow (DANE or other) domains to publish just the RUA;
      the feature is not STS-specific.

    * Drop all "pretense" of supporting DANE/DNSSEC via STS policy;
      it is not needed.

With that, we can then polish the draft's exposition, security
considerations, reporting format, ...

-- 
	Viktor.

P.S.  (Off topic for this thread, private email feedback welcome
if any of the large providers are interested in pursuing this
further):

I'm still looking forward to some of the large providers supporting
DANE outbound, no need to sign your own zones for that.  You just
need validating resolvers that can handle the lookup volume and a
TLS stack that can do DANE verification.  Perhaps BoringSSL will
at some point import the DANE bits from OpenSSL 1.1.0...

If OpenSSL is already the TLS library, then DANE TLSA validation
will soon be a built-in feature, but the MTA still needs to obtain
the TLSA records and provide them to the OpenSSL library.