Re: [dane] Digest Algorithm Agility discussion

Paul Wouters <paul@cypherpunks.ca> Mon, 17 March 2014 16:58 UTC

Date: Mon, 17 Mar 2014 12:57:54 -0400
From: Paul Wouters <paul@cypherpunks.ca>
To: dane WG list <dane@ietf.org>
In-Reply-To: <20140317155049.GB24183@mournblade.imrryr.org>
Message-ID: <alpine.LFD.2.10.1403171235400.32251@bofh.nohats.ca>
References: <20140315051704.GY21390@mournblade.imrryr.org> <alpine.LFD.2.10.1403171115580.32251@bofh.nohats.ca> <20140317155049.GB24183@mournblade.imrryr.org>
Archived-At: http://mailarchive.ietf.org/arch/msg/dane/9aULsaRq_gkNcIho1WdarErMlvY
Subject: Re: [dane] Digest Algorithm Agility discussion

On Mon, 17 Mar 2014, Viktor Dukhovni wrote:

>>>   * It should be possible for servers to publish TLSA records
>>>     employing multiple digest algorithms allowing clients to
>>>     choose the best mutually supported digest.
>>
>> Isn't that already possible?
>
> Not based on RFC 6698 alone.  With RFC 6698 the client trusts all
> TLSA records whether "weak" or "strong".

RFC 6698 section 4.1 states:

       A TLSA RRSet whose DNSSEC validation state is secure MUST be used
       as a certificate association for TLS unless a local policy would
       prohibit the use of the specific certificate association in the
       secure TLSA RRSet.

Can that not be used to reject a weak digest?
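As a rough sketch of the kind of local policy I have in mind (the record
fields and the "weak" set below are illustrative, not from the RFC;
matching type 1 is SHA2-256 and 2 is SHA2-512 per RFC 6698):

    # Local policy: matching types considered too weak to use.
    WEAK_MATCHING_TYPES = {1}   # e.g. if SHA2-256 were ever broken

    def usable_associations(secure_tlsa_rrset):
        # Keep only the certificate associations that local policy does
        # not prohibit, as section 4.1 allows.
        return [rr for rr in secure_tlsa_rrset
                if rr.matching_type not in WEAK_MATCHING_TYPES]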

> My proposal is essentially the same.  The client uses the strongest
> acceptable digest algorithm.  The *client* decides what "strongest"
> means.  It never chooses an unsupported algorithm.

But you want the client to hard-fail if that one selected digest fails
validation. I don't think that is the right decision.
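The behaviour I read into the proposal is roughly the following (only a
sketch; the names and record fields are made up):

    def daa_candidates(tlsa_rrset, supported_strongest_first):
        # Restrict evaluation to the strongest locally supported digest
        # that appears in the RRset; all other records are ignored.
        for mtype in supported_strongest_first:
            chosen = [rr for rr in tlsa_rrset if rr.matching_type == mtype]
            if chosen:
                return chosen   # if none of these match, authentication fails
        return []               # no usable records at all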

>> If a certain digest is so weak it is basically broken, it should not be
>> left in a published TLSA record.
>
> Weak digests (say SHA2-256 if/when broken) cannot be easily removed
> from RRsets until all clients support stronger ones.  The idea is
> to publish stronger digests and deploy stronger clients, then remove
> weak digests later.  Stronger clients will never use the published
> weak records.  Otherwise there's an Internet-wide flag-day.

I don't think we disagree. The server publishes a new strong digest, and
clients that support it and consider SHA2-256 weak will not use SHA2-256.
If the admin messes up the new strong digest, then new clients will fail
to find a usable TLSA record, while old clients will keep using an unsafe
one.
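For example, during the transition the published RRset might carry both
digests of the same key, along these lines (the owner name, the
usage/selector values and the elided digest data are purely illustrative):

    _25._tcp.mail.example.com. IN TLSA 3 1 1 <sha2-256 digest>
    _25._tcp.mail.example.com. IN TLSA 3 1 2 <sha2-512 digest>

A new client that treats SHA2-256 as weak only ever uses the "3 1 2"
record; an old client that knows nothing stronger keeps using "3 1 1".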

>> If the most preferred TLSA record fails validation, the client should try
>> another TLSA record.
>
> This works poorly.  While the weak algorithm is being phased out
> (years) even clients that support stronger algorithms are at risk.

New clients can have a local policy of never accepting weak digests. I
don't see an agility problem there. The weak TLSA records are only left
in place for clients that support nothing stronger.
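Roughly (again only a sketch, with made-up names):

    def candidate_associations(tlsa_rrset, supported_strongest_first, weak_types):
        # Drop digests that local policy forbids or that the client does
        # not support, then order the rest strongest first; on a mismatch
        # the client moves on to the next record instead of hard-failing.
        usable = [rr for rr in tlsa_rrset
                  if rr.matching_type in supported_strongest_first
                  and rr.matching_type not in weak_types]
        return sorted(usable,
                      key=lambda rr: supported_strongest_first.index(rr.matching_type))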

>> This also gives the server admin some more protection. If they publish
>> digests using SHA2-256 and SHA1, and it turns out their tool generates
>> bad SHA2-256, then the clients still have a valid SHA1 to fall back to.
>
> They could also publish a bogus CU or selector, or mess up in many other
> ways.  I don't think that the intent of multiple algorithms in 6698 is
> to mask bogus data.

Maybe I don't understand what you think the problem is?

>> Perhaps there is text in the DS record RFC to look at that describes
>> this better than I just did.
>
> Perhaps Wes can chime in.  His comment to me was that the proposed
> DAA (digest algorithm agility) is essentially the only possible approach,
> and is largely analogous to the DNSSEC one.

So aren't we all agreeing?

Paul