Re: [dane] Digest Algorithm Agility discussion

Viktor Dukhovni <> Fri, 21 March 2014 01:08 UTC

Date: Fri, 21 Mar 2014 01:07:56 +0000
From: Viktor Dukhovni <>
Subject: Re: [dane] Digest Algorithm Agility discussion
List-Id: DNS-based Authentication of Named Entities <>

[ Forwarded on behalf of: Phil Pennock <> ]

On 2014-03-20 at 15:12 -0700, Wes Hardaker wrote:
> way that I'm firmly convinced everyone agrees on:  local policy always
> trumps everything.  EG this is perfectly acceptable:
> And that (wacky) policy would always trump any other decision tree in
> the code base.


(I apologise in advance for how long some of what follows is.)

>    o What is the default for "prefer", any_match [most people's opinion] or
>      first_existing_alg_published_from_list [Viktor's opinion]?

It's also my opinion [see next paragraph], speaking with my Exim
Developer hat on; alas, the list server tends to blackhole my
mails, making it rather hard for me to participate.  I'd rather
not start using a Gmail account just to be able to take part in
an IETF discussion, but as things stand I've left the impression
that Viktor is rather alone in his current stance, which is deeply
unfortunate.  (If this mail doesn't make it through, perhaps a
named recipient can forward it on again?) 		[ done: Viktor. ]

I think Viktor's opinion is more accurately stated as
"first_existing_alg_published_by_remote_found_in_local_list", a
distinction which emphasizes that this is not central control of
algorithms but local control.  The issues are about ensuring that
default policies are not wacky and that SMTP interop can avoid
breakage and flag-days.

The problem I see is that the history of SMTP shows that if we
leave a lot of ambiguity, people will do some very strange things,
even in areas which aren't security sensitive; worse, strange
things done now can hinder algorithm agility in the future.  So
this is not bike-shedding; it is an attempt to give implementers
solid guidance about what is most likely to yield an interoperable
solution that doesn't leave us with a lot of unable-to-migrate
systems come the day a practical second-preimage attack is announced
against something in the SHA-2 family.  And please let's not descend
into debate about how likely that might be -- the point is to
ensure that security features added now don't needlessly repeat
the mistakes of the past.

So, whichever path is chosen, "any match" or "first match from your
local policy of whichever algs you trust", the spec needs to provide
clear guidance so that most MTAs implementing DANE will end up
using similar logic.

> The choice:
>    Do you, Mr. System Administrator defining the local policy for the
>    *client*, want:
>    A) Accept any published hashing algorithm out of my "unordered set"
>       to validate the remotely presented certificate.  [Ordering it
>       doesn't buy you anything since you'll simply accept a match and it
>       doesn't matter which you try first, since any success in any
>       algorithm will equally indicate "ok"; in fact in an implementation
>       aiming for speed, it might be best to choose the order based on
>       how fast you can execute the algorithm].  If the server fails to
>       publish a perfect record set, as long as one matches I'm ok with that.
>    B) Believe that the server will always publish perfect records, and
>       if my "ordered set" of algorithms is [SHA512, SHA256] and they
>       publish SHA512, then I never want to accept SHA256 because I fear
>       an attack more than I fear a server administrator blowing their
>       configuration.

The issue concerns the period when one widely accepted hash
algorithm has become known to be broken (with the hope that it
will become less widely accepted) while another algorithm, believed
at that point to be safe, is also available -- and which behaviours
will speed or hinder a safe, orderly transition to the second
algorithm.

A separate issue (which doesn't need to hold up this specification)
is letting the recipient system know what the impact would be of
dropping the old algorithm from the published set.  Tony Finch's
"Transmitted:" trace header from the draft-fanf-dane-smtp series
would help here and it's easy to see how this can build on top of
the current work (and does not need to be part of the current work).
I'll use "Transmitted:" below as shorthand for "or anything
equivalent" (e.g., an RCPT parameter stating, for each recipient,
how the client decided this was the correct server, etc.).

If there is a clearly defined rule -- "when selecting a TLSA
anchor from the set I publish, prefer the algorithm you trust the
most" -- and later an ability for a server operator to get stats
on which anchor each client used, then the server operator can
measure which keys are actually used and what the impact of
withdrawing the old record would be.

With the "accept any" approach, the only option is to stop
publishing one day, 15 years after the point when anyone sane
would have switched, and discover from the howls of protest which
few organisations are horribly broken.

I'd much rather see a designed-in decay curve in the use of the
known-broken algorithm, an ability for a future Transmitted: header
to be meaningful, and a rational basis for recipient postmasters
to be able to say: "If you cared, you should have told us which
was in use via the Transmitted: header.  You didn't, and everyone
else stopped using it, so we stopped publishing the data which
~everyone thinks is worthless and dangerous if relied upon."

> So to recap, first: either of those is fine in local policy knob setting
> scheme.  Implementations can let administrators of clients configure
> sane or insane policies they want.

The client side can do whatever it wants, but if it expects to be
able to _stay_ secure in the face of ongoing crypto research and
not cause every postmaster to curse the idiot vendor, it should
use the "have a preferred list and find the first in our list which
the remote site published" approach as the default (absent prior
site-specific negotiation).

> I) what should we do generically?  RFC6698 already laid this out as A in
>    absence of local policy.

That's effectively a punt on algorithm agility and disappointing.

> II) what should we do in SMTP?  This is where Viktor, considering case
>     #2 above, is wanting to do B ("accept just the 'best' in an ordered set
>     of algorithms) instead of A.  The arguments, though, from both sides
>     are probably talking about different cases (generic vs SMTP) and I
>     think that is ending up with some of the confusion.

So Viktor and I, representing Postfix and Exim, both want this.

> Here's the multiple choice quiz to cap all this off:
> 1. For generic DANE, is the right *default* choice:
>    A) Accept any successful matching hash, regardless of "strength".
>       (This is what 6698 says to do today)
>    B) Accept only the strongest hash, from an ordered list, that the
>       server has published (and DNSSEC has validated the RRSET for).

C) Accept the strongest hash, from an ordered list maintained by the
   client, of those hashes published by the server (as confirmed by
   DNSSEC yada yada).

(This might be B, but your B is still not unambiguously worded, sorry.)
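To make the distinction concrete (an illustrative sketch only; the
policy functions and the pared-down record representation are mine,
not from any implementation): records below are (matching_type,
matches_presented_cert) pairs, with matching types 1 = SHA2-256
and 2 = SHA2-512 per RFC 6698.

```python
def policy_a_any_match(rrset):
    """Option A: accept if *any* published record matches the presented
    certificate, regardless of algorithm strength."""
    return any(ok for (_mtype, ok) in rrset)

def policy_c_local_list(rrset, preference=(2, 1)):
    """Option C: take the first algorithm from the client's ordered list
    that the server publishes, then match only records of that type."""
    published = {m for (m, _ok) in rrset}
    for mtype in preference:
        if mtype in published:
            return any(ok for (m, ok) in rrset if m == mtype)
    return False

# Normal operation: both published records match, both policies succeed.
normal = [(1, True), (2, True)]

# A practical second-preimage attack on SHA2-256: the attacker's
# certificate matches the published SHA2-256 digest but not the
# SHA2-512 one.  Option A accepts it; option C never consults the
# weaker record once SHA2-512 is published, so it (safely) fails.
attacked = [(1, True), (2, False)]
```

Under C the failure mode is a deferred connection and a queue, not
a silently accepted forgery.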

I'm only going to push back on those app-specific protocols where
I'm an implementer and my code talks, since otherwise I'm making
mandates of other people.

> 2. For SMTP, is the right (SMTP-application-specific) *default* choice:
>    A) Accept any successful matching hash, regardless of "strength".
>       (This is what 6698 says to do today)
>    B) Accept only the strongest hash, from an ordered list, that the
>       server has published (and DNSSEC has validated the RRSET for).

C above.  A perhaps implemented but needing explicit action to
activate, as a "bug compatibility workaround" for remote sites.

> 3. For XMPP:
> 4. For HTTP:
> 5. For IMAP:

No voice, am not an implementer, but I'll cough "C" if people are
willing to listen.

More generally: approach A can't distinguish between "successful
attack on hash" and "remote site bungled".  Under the stricter
approach, a mishap just means mail queues and postmasters
investigate; nothing is lost.  This sort of thing happens routinely
today (see, e.g., the mailops mailing list whenever some big site
suddenly starts deferring a lot of mail).  People notice.
Out-of-band mechanisms are found.

Codifying "thou shalt be vulnerable to the worst hash algorithm
you publish, because clients should accept it as sufficient" gives
those who care about the privacy of incoming mail cause to stop
publishing that hash quickly, which breaks verification for clients
that have no alternative.  Instead we want a model where operators
can migrate while always keeping a trust path, knowing that
continuing to publish a known-broken algorithm only affects the
integrity of the path for those senders who either only support,
or prefer, that algorithm, and that everyone _else_ immediately
gets the benefit of the other published hash algorithms.

So, since we're still at the stage where we can avoid going wrong,
instead of later having to figure out compatible ways to recover,
can we *please* codify that the default approach should support
algorithm agility instead of vulnerability-to-worst-published?
Queuing happens; mistakes will get caught with no worse problem
than some delayed email, which I argue is far more in keeping with
common expectation than "your mail will go through quickly, but
we might have sent it on to some attackers, we don't know".