Re: [dane] Digest Algorithm Agility discussion

Paul Wouters <> Mon, 17 March 2014 15:27 UTC

Return-Path: <>
Received: from localhost ( []) by (Postfix) with ESMTP id 1C7F71A0409 for <>; Mon, 17 Mar 2014 08:27:13 -0700 (PDT)
X-Virus-Scanned: amavisd-new at
X-Spam-Flag: NO
X-Spam-Score: -0.5
X-Spam-Status: No, score=-0.5 tagged_above=-999 required=5 tests=[BAYES_05=-0.5] autolearn=ham
Received: from ([]) by localhost ( []) (amavisd-new, port 10024) with ESMTP id i6qthlOuFU-9 for <>; Mon, 17 Mar 2014 08:27:10 -0700 (PDT)
Received: from ( []) by (Postfix) with ESMTP id 481CE1A040E for <>; Mon, 17 Mar 2014 08:27:08 -0700 (PDT)
Received: from ( []) by (Postfix) with ESMTP id 56BC5800AA for <>; Mon, 17 Mar 2014 11:26:59 -0400 (EDT)
Received: from localhost (paul@localhost) by (8.14.7/8.14.7/Submit) with ESMTP id s2HFQw9b002046 for <>; Mon, 17 Mar 2014 11:26:59 -0400
X-Authentication-Warning: paul owned process doing -bs
Date: Mon, 17 Mar 2014 11:26:58 -0400 (EDT)
From: Paul Wouters <>
To: dane WG list <>
In-Reply-To: <>
Message-ID: <>
References: <>
User-Agent: Alpine 2.10 (LFD 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
Subject: Re: [dane] Digest Algorithm Agility discussion
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DNS-based Authentication of Named Entities <>
List-Unsubscribe: <>, <>
List-Archive: <>
List-Post: <>
List-Help: <>
List-Subscribe: <>, <>
X-List-Received-Date: Mon, 17 Mar 2014 15:27:13 -0000

On Sat, 15 Mar 2014, Viktor Dukhovni wrote:

> Goal:
>    * It should be possible for servers to publish TLSA records
>      employing multiple digest algorithms allowing clients to
>      choose the best mutually supported digest.

Isn't that already possible?

>     * The client SHOULD employ digest algorithm agility by ignoring
>       all but the strongest non-zero digest for each usage/selector
>       combination.  Note, records with matching type zero play no
>       role in digest algorithm agility.

I don't think that is a proper assumption. For example, a zone might
need to publish a GOST-based digest for legal reasons (e.g. not trusting
US-based digests) but might publish a FIPS-approved digest for everyone
else. The client's reduction scheme for its local policy is more
complicated than simply "strongest".

Traditionally, for instance with the DS record, we allow publishing
multiple digests, and the client's task is just to find one which is
"acceptable". It would be nice if the client starts with what it
believes is the "strongest".
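To make the distinction concrete, here is a minimal sketch of that kind of local-policy reduction: the client keeps only the matching types it accepts and orders them by its own preference list, rather than applying a fixed "strongest only" rule. The record fields and the preference list are illustrative assumptions, not taken from any RFC.

```python
# Hypothetical sketch: reduce a TLSA RRset per local policy.
# Matching type codes follow the TLSA registry (1 = SHA2-256,
# 2 = SHA2-512); a GOST-only client would list its own codes instead.
from collections import namedtuple

TLSA = namedtuple("TLSA", "usage selector mtype data")

# Local policy: acceptable matching types, in descending preference.
LOCAL_PREFERENCE = [2, 1]

def acceptable_records(rrset, preference=LOCAL_PREFERENCE):
    """Return TLSA records sorted by local digest preference,
    dropping matching types the client does not accept."""
    usable = [r for r in rrset if r.mtype in preference]
    return sorted(usable, key=lambda r: preference.index(r.mtype))
```

A client with different legal or policy constraints simply swaps in a different preference list; nothing forces all clients to agree on one "strongest" digest.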

If a certain digest is so weak it is basically broken, it should not be
left in a published TLSA record.

If the most preferred TLSA record fails validation, the client should
try another TLSA record. The order in which it does so could be written
down in the RFC if we think there is one true way of doing so.
Otherwise, the RFC should just refer to it as "according to local
policy".

This also gives the server admin some more protection. If they publish
digests using SHA2-256 and SHA1, and it turns out their tool generates
a bad SHA2-256 digest, then the clients still have a valid SHA1 digest
to fall back to.
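The fallback behaviour described above amounts to a simple loop: walk the locally ordered TLSA records and accept the first one that matches the presented certificate, skipping any record whose digest fails to match (e.g. because the publishing tool produced a bad value). This is a hedged sketch; matches_certificate is a stand-in for the real matching/validation step, not an API from any library.

```python
def select_matching_record(ordered_records, cert, matches_certificate):
    """Return the first TLSA record (in local preference order) whose
    digest matches the presented certificate, or None if none match.
    A record that fails to match is simply skipped, giving the client
    a fallback when one published digest is corrupt."""
    for record in ordered_records:
        if matches_certificate(record, cert):
            return record
    return None  # no TLSA record matched; authentication fails
```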

Perhaps there is text in the DS record RFC to look at that describes
this better than I just did.