Re: [dane] Digest Algorithm Agility discussion

Viktor Dukhovni <> Mon, 17 March 2014 15:51 UTC

Date: Mon, 17 Mar 2014 15:50:49 +0000
From: Viktor Dukhovni <>
Message-ID: <>
References: <> <>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <>
User-Agent: Mutt/1.5.23 (2014-03-12)
Subject: Re: [dane] Digest Algorithm Agility discussion
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DNS-based Authentication of Named Entities <>
X-List-Received-Date: Mon, 17 Mar 2014 15:51:01 -0000

On Mon, Mar 17, 2014 at 11:26:58AM -0400, Paul Wouters wrote:

> On Sat, 15 Mar 2014, Viktor Dukhovni wrote:
> >Goal:
> >
> >   * It should be possible for servers to publish TLSA records
> >     employing multiple digest algorithms allowing clients to
> >     choose the best mutually supported digest.
> Isn't that already possible?

Not based on RFC 6698 alone.  With RFC 6698 the client trusts all
TLSA records, whether "weak" or "strong".

> >    * The client SHOULD employ digest algorithm agility by ignoring
> >      all but the strongest non-zero digest for each usage/selector
> >      combination.  Note, records with matching type zero play no
> >      role in digest algorithm agility.
> I don't think that is a proper assumption. For example, a zone might
> need to publish a GOST based digest for legal reasons (eg not trusting
> US based digests) but might publish a FIPS approved digest for other
> people. The client has a more complicated reduction scheme going on
> for their local policy than "strongest".

I am not *assuming* anything.  We defined required client and server
behaviour which, if consistently applied by all parties, makes
algorithm agility possible.

The client decides which digest to use.  So long as the server
publishes each object (certificate or public key) with the same
set of digests as all other objects (i.e. the TLSA RRset is a "cross
product" of the desired objects and digest algorithms) no information
is lost when the client chooses just a single digest and ignores
the rest.
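As a rough sketch of the server-side cross product described above (the
tuple layout and helper name here are my own illustration, not from RFC
6698 or any library):

```python
import hashlib

# Matching types per RFC 6698: 1 = SHA2-256, 2 = SHA2-512.
DIGESTS = {1: hashlib.sha256, 2: hashlib.sha512}

def tlsa_cross_product(usage, selector, objects, matching_types):
    """Publish every object (certificate or public-key bytes) under
    every digest algorithm, so no information is lost when a client
    keeps only a single digest and ignores the rest."""
    return [
        (usage, selector, mtype, DIGESTS[mtype](obj).hexdigest())
        for obj in objects
        for mtype in matching_types
    ]

# Two pinned keys, each published under both digests.
rrset = tlsa_cross_product(3, 1, [b"key-one", b"key-two"], [1, 2])
assert len(rrset) == 4  # 2 objects x 2 digest algorithms
```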

> Traditionally, for instance with the DS record, we allow publishing
> multiple digests, and the client's task is just to find one which is
> "acceptable". It would be nice if it starts with what it believes is
> the "strongest".

My proposal is essentially the same.  The client uses the strongest
acceptable digest algorithm.  The *client* decides what "strongest"
means.  It never chooses an unsupported algorithm.

> If a certain digest is so weak it is basically broken, it should not be
> left in a published TLSA record.

Weak digests (say SHA2-256 if/when broken) cannot be easily removed
from RRsets until all clients support stronger ones.  The idea is
to publish stronger digests and deploy stronger clients, then remove
weak digests later.  Stronger clients will never use the published
weak records.  Otherwise there's an Internet-wide flag-day.

> If the most prefered TLSA record fails validation, the client should try
> another TLSA record.

This works poorly.  While the weak algorithm is being phased out (a
process that can take years), even clients that support stronger
algorithms remain at risk.

> The order in which it does so could be written down
> in the RFC if we think there is one true way of doing so.

The order is irrelevant.  Eventually some record matches.  It is
immaterial whether it is "first" or "last".

> This also gives the server admin some more protection. If they publish
> digests using SHA2-256 and SHA1, and it turns out their tool generates
> bad SHA2-256, then the clients still have a valid SHA1 to fall back to.

They could also publish a bogus certificate usage or selector, or mess
up in many other ways.  I don't think that the intent of multiple
algorithms in 6698 is to mask bogus data.

> Perhaps there is text in the DS record RFC to look at that describes
> this better than I just did.

Perhaps Wes can chime in.  His comment to me was that the proposed
DAA (digest algorithm agility) is essentially the only possible
approach, and is largely analogous to the one DNSSEC takes.