Re: [dane] Digest Algorithm Agility discussion

Viktor Dukhovni, Mon, 17 March 2014 17:35 UTC


On Mon, Mar 17, 2014 at 12:57:54PM -0400, Paul Wouters wrote:

> On Mon, 17 Mar 2014, Viktor Dukhovni wrote:
> >>>  * It should be possible for servers to publish TLSA records
> >>>    employing multiple digest algorithms allowing clients to
> >>>    choose the best mutually supported digest.
> >>
> >>Isn't that already possible?
> >
> >Not based on RFC 6698 alone.  With RFC 6698 the client trusts all
> >TLSA records, whether "weak" or "strong".
> 4.1 states:
>       A TLSA RRSet whose DNSSEC validation state is secure MUST be used
>       as a certificate association for TLS unless a local policy would
>       prohibit the use of the specific certificate association in the
>       secure TLSA RRSet.
> Can that not be used to reject a weak digest?

The above proposal is for clients to black-list designated weak
digests, rather than select the strongest available digest.  If a
digest is not yet ready to be black-listed, but is no longer
recommended (as, let's say, SHA-1 is now), we'd still have clients
using it even when stronger digests are published alongside.

I am proposing that clients use *only* the strongest digest (as
determined by their configuration) and ignore the rest.  This
results in a much clearer phase-out process.  The weak digests are
only used when one of the sides supports nothing stronger.

It all boils down to how one might attempt to phase out a weak
digest.  If there is no "negotiation" (choice of strongest mutually
available option) transitions are much harder, because the weak
options are used until suddenly dropped, rather than gradually
becoming less common, until it is easy to phase them out because
almost nobody uses them.
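The selection rule being proposed can be sketched in a few lines.
This is an illustrative sketch only (the function name, the preference
list, and the dict representation of TLSA records are my own); it shows
a client keeping just the records whose matching type is its most
preferred mutually supported digest, and ignoring everything weaker:

```python
# Client's digest preference, strongest first.  Matching-type values
# are those of RFC 6698: 2 = SHA2-512, 1 = SHA2-256.
CLIENT_DIGEST_PREFERENCE = [2, 1]

def select_tlsa(rrset):
    """Return the subset of a secure TLSA RRset the client will use.

    Only records carrying the strongest mutually supported matching
    type survive; all weaker records are ignored outright, so a weak
    digest is used only when the server publishes nothing stronger.
    """
    published = {r["mtype"] for r in rrset}
    for mtype in CLIENT_DIGEST_PREFERENCE:
        if mtype in published:
            return [r for r in rrset if r["mtype"] == mtype]
    return []  # no mutually supported digest: authentication cannot proceed

# Server publishing both digests: only the SHA2-512 record is kept.
rrset = [
    {"mtype": 2, "data": "blob2"},   # e.g. "IN TLSA 3 1 2 {blob2}"
    {"mtype": 1, "data": "blob1"},   # e.g. "IN TLSA 3 1 1 {blob1}"
]
print(select_tlsa(rrset))  # [{'mtype': 2, 'data': 'blob2'}]
```

Note there is no fallback loop: once the strongest mutually supported
digest is chosen, the weaker records never enter the comparison.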

> >My proposal is essentially the same.  The client uses the strongest
> >acceptable digest algorithm.  The *client* decides what "strongest"
> >means.  It never chooses an unsupported algorithm.
> but you want to fail if that selected one fails. I don't think that
> is the right decision.

There is no "right" or "wrong", only trade-offs.

> I don't think we disagree. the server publishes a new strong digest, and
> clients that support that and consider sha2-256 weak will not use
> sha2-256.

The difference is that clients don't have to consider SHA2-256 (say)
weak right away; initially they simply prefer not to use it whenever
(say) SHA2-512 is also published.

> If the admin messes up the new strong digest, then new clients
> will fail to get a TLSA record, and old clients will use an unsafe one.

The admin messing up is a red herring.  Nobody will publish two
digests just in case one is "messed up".  They will publish multiple
digests to allow clients to choose the strongest one.

> >>If the most preferred TLSA record fails validation, the client should try
> >>another TLSA record.
> >
> >This works poorly.  While the weak algorithm is being phased out
> >(years) even clients that support stronger algorithms are at risk.
> New clients can have a local policy that states never to accept weak
> digests. I don't see a problem with agility. The weak TLSA records
> are only left in for clients that support nothing stronger.

Such a policy is difficult to enable: it applies even to servers that
only publish the weak digest.  My proposal incrementally phases out
the weak digest as servers publish the stronger version.

> Maybe I don't understand what you think the problem is?

The problem is enabling an incremental transition from weak to strong
algorithms with no flag day (clients having to completely disable an
algorithm to avoid exposure to it even with servers that support a
stronger one).

With TLS for example, clients don't have to disable all weaker
block ciphers, because the client and server will agree on the
strongest mutually available (based on either the client's or
server's preference list).  Once the mutually strongest option
is selected, the others are irrelevant.

The idea here is the same.

> >Perhaps Wes can chime in.  His comment to me was that the proposed
> >DAA (digest algorithm agility) is essentially the only possible
> >approach, and largely analogous to the DNSSEC approach.
> So aren't we all agreeing?

Not yet.  To be specific, suppose a server publishes:

	IN TLSA 3 1 2 {blob2}
	IN TLSA 3 1 1 {blob1}

and a client supports SHA2-512 (believed strong) and SHA2-256
(hypothetically tarnished, but not yet known broken).  In my proposal
the client completely ignores {blob1} *even if* {blob2} does not
match!  The client and server first agree on the strongest digest
(analogy with TLS cipher-suite negotiation) and then only that 
digest is used.  The same client authenticating a second server:

	IN TLSA 3 1 1 {blob3}

will use {blob3}.
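The two-server example above can be made concrete with a small
verification sketch.  Everything here is hypothetical (function name,
record representation, and the sample keys are mine); the point it
demonstrates is the fail-closed behaviour: once the stronger digest is
mutually supported, a matching weaker record cannot rescue a failed
comparison.

```python
import hashlib

# Client's preference, strongest first: (hashlib name, TLSA matching type).
PREFERENCE = [("sha512", 2), ("sha256", 1)]

def authenticate(spki_der, rrset):
    """Authenticate a peer's SubjectPublicKeyInfo against a TLSA RRset.

    Compares only against records of the strongest mutually supported
    matching type.  There is deliberately NO fallback to weaker records
    when those comparisons fail.
    """
    for algo, mtype in PREFERENCE:
        candidates = [r for r in rrset if r["mtype"] == mtype]
        if not candidates:
            continue  # server doesn't publish this digest; try the next
        digest = hashlib.new(algo, spki_der).hexdigest()
        # Records with weaker matching types are never consulted.
        return any(r["data"] == digest for r in candidates)
    return False  # no mutually supported digest

key = b"server-public-key"          # stand-in for the peer's SPKI bytes
sha256_ok = hashlib.sha256(key).hexdigest()

# First server: publishes both digests, but its SHA2-512 blob is wrong
# while its SHA2-256 blob matches.  Authentication still fails, because
# {blob1} is ignored once the stronger digest is mutually supported.
rrset_a = [{"mtype": 2, "data": "bad-blob2"},
           {"mtype": 1, "data": sha256_ok}]
print(authenticate(key, rrset_a))   # False

# Second server: publishes only SHA2-256 ({blob3}); the same client
# uses the weaker digest, since nothing stronger is available.
rrset_b = [{"mtype": 1, "data": sha256_ok}]
print(authenticate(key, rrset_b))   # True
```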