Re: [dane] Digest Algorithm Agility discussion

Viktor Dukhovni <> Sun, 23 March 2014 23:57 UTC

On Mon, Mar 24, 2014 at 10:01:25AM +1100, Mark Andrews wrote:

> > > No, that's not what the SMTP draft suggests.  When DANE is not there,
> > > then servers just fall back to not authenticating a peer's cert, as they
> > > do nowadays.
> The SMTP draft says how to securely go from an email domain to a
> CERT using DNSSEC and TLSA.  It does not say whether one should use
> a non-secure connection or not.  That is a separate policy.

Section 2.2:

    A Secure non-empty TLSA RRset where all the records are unusable:

	A connection to the MTA MUST be made via TLS, but authentication
	is not required. Failure to establish an encrypted TLS connection
	MUST result in falling back to the next SMTP server or delayed
	delivery.

No, it is primarily a draft for opportunistic SMTP TLS at Internet
scale, where per-destination policy does not scale.  The default
policy for the Internet as a whole, for reasons explained in the
introduction, cannot be PKIX-authenticated TLS.  Therefore, when
the TLSA records are entirely unusable, and in keeping with Tony's
original work on the SRV draft, the client falls back to mandatory
but (as in legacy practice) almost always unauthenticated TLS.
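For concreteness, the decision can be sketched like this.  This is
an illustrative Python fragment, not text from the draft; the
constants and function names are my own:

```python
# Sketch of the Section 2.2 fallback: with a DNSSEC-validated but
# entirely unusable TLSA RRset, TLS is still mandatory, only the
# authentication requirement is dropped.  Parameter sets below are
# illustrative (DANE-TA(2)/DANE-EE(3), Cert(0)/SPKI(1),
# Full(0)/SHA2-256(1)/SHA2-512(2)).

KNOWN_USAGES = {2, 3}
KNOWN_SELECTORS = {0, 1}
KNOWN_MATCHING = {0, 1, 2}

def usable(record):
    """A record is usable when the client understands all three fields."""
    usage, selector, mtype = record
    return (usage in KNOWN_USAGES
            and selector in KNOWN_SELECTORS
            and mtype in KNOWN_MATCHING)

def policy(secure_rrset):
    """Connection policy for a secure, non-empty TLSA RRset."""
    if any(usable(r) for r in secure_rrset):
        return "TLS, authentication required"
    # Non-empty but all unusable: TLS mandatory, authentication waived.
    return "TLS mandatory, authentication not required"
```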

> > Indeed if one simply considers (again hypothetically) SHA1 to be
> > "unusable", then with no "usable" TLSA records, the connection
> > would fall back to unauthenticated TLS.
> It might or it might not.  That is a separate policy decision.

Policy can't revoke reality.  Without usable TLSA records SMTP TLS
cannot be authenticated at scale.  There can be pockets of
authenticated SMTP between some clusters of domains, but the scope
of the draft is the vast majority of MTA to MTA connections not
covered by explicit per-destination local policy.

> > To do what Mark suggests, we'd have to treat SHA1 as usable, but
> > always fails.  That is new code to make SHA1 never match.  
> 	One will have something like
> 		if (!supported(match))
> 			skip record;

Which needs to be different from considering the record "unusable",
because the matching type is unknown to the client.  The record is
retained for purposes of determining whether any "usable" records
are found, but skipped without being used.  And still an unnecessary
flag day.
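The difference between the two treatments is easiest to see side by
side.  A minimal sketch, with hypothetical names and return strings
of my own choosing:

```python
# Contrast of the two treatments discussed above.  "Unusable" records
# drop out before the usability check, so an RRset of nothing but
# unsupported digests falls back to unauthenticated TLS.  "Skip but
# retain" keeps such records in the count, so the client still demands
# authentication that can never succeed.

def outcome(mtypes, supported, unknown_is_unusable):
    if unknown_is_unusable:
        usable = [m for m in mtypes if m in supported]
    else:
        usable = list(mtypes)  # retained for counting, skipped at match time
    if not usable:
        return "fall back to unauthenticated TLS"
    if not any(m in supported for m in usable):
        return "authentication fails (no record can ever match)"
    return "attempt authentication"
```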

> > It seems that the unstated objection must be a belief that SHA2-256
> > will never fail, and thus we're wasting time designing solutions
> > to a non-problem.  While I don't believe in eternal unbounded
> > progress, and (barring a P=NP revolution) it is likely that at some
> > point we'll have algorithms that never need replacement, it is
> > perhaps premature to declare mission-complete with SHA2.
> I don't assume that it will never be broken.  I also don't think we
> need to "if has_alg(a) then {} else {}".

And yet you object to specifying a mechanism for non-disruptive
transition to better digests should one of the previously trusted
digests become tarnished.

Is the objection to the agility mechanism per se, or to saying that
clients SHOULD employ it?

I seem to recall you saying that clients already *may* employ the
proposed mechanism.  If so, server operators need to know that their
TLSA records SHOULD be structured to interoperate with policies
that ignore some subset of the published digests, so that part goes
into at least the operational considerations section, and likely
into the OPS draft.
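One way such a client policy could look, purely as an assumption-laden
sketch (the grouping by usage/selector and the strength ordering are
my illustration, not normative text from any draft):

```python
# A digest-agile client might, per (usage, selector) group, match only
# the records whose digest is the strongest one it supports.  Servers
# publishing both SHA2-256 and SHA2-512 records then interoperate with
# clients that ignore the weaker digest.
from collections import defaultdict

DIGEST_STRENGTH = {1: 1, 2: 2}  # matching type 1 = SHA2-256, 2 = SHA2-512

def records_to_match(rrset, supported):
    """rrset: iterable of (usage, selector, mtype, data) tuples.
    Returns the records a digest-agile client would actually try."""
    groups = defaultdict(list)
    for usage, selector, mtype, data in rrset:
        if mtype in supported and mtype in DIGEST_STRENGTH:
            groups[(usage, selector)].append((mtype, data))
    chosen = []
    for (usage, selector), recs in groups.items():
        best = max(DIGEST_STRENGTH[m] for m, _ in recs)
        chosen.extend((usage, selector, m, d) for m, d in recs
                      if DIGEST_STRENGTH[m] == best)
    return chosen
```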

Is it your viewpoint, then, that the proposed mechanism should be
a BCP rather than a requirement?  Why?  What's the value of a
free-for-all on how digests are phased out?  When servers don't
know that clients ignore weaker digests, they are more likely to
remove these from their TLSA records early, resulting in lower
security with clients that only support the weaker (but not
necessarily easily broken) digests.