Re: [dnsext] Clarifying the mandatory algorithm rules

Olafur Gudmundsson <ogud@ogud.com> Fri, 10 December 2010 13:58 UTC


<no-hat>

Historical note and some observations:

The original intent of the DNSSEC design was, as far as I can tell:
- One validating signature is sufficient.

Once we had DSA rammed down our throats we started thinking about
multiple algorithms, and the idea was:
- the validator prioritizes among the algorithms it supports and for
which it sees signatures.

Then, during the production of the RFC403x series, the concept of
downgrade attacks was brought up and the following rule was added:
- all algorithms listed in the DNSKEY RRset must sign the zone.

At the time we failed to make the full connection between all the
moving parts in the DNS system:
- The contents of the DS set
- The impact of zone distribution delays
- That for a while different caches/authorities may hold different
contents for an RRset and the DNSKEY/DS records (i.e., an old DNSKEY
set and a newly signed RRset, or vice versa).

For this reason the new text in the DNSSEC best practices document
says something like the following for introducing a new algorithm:
	add RRSIGs,
	after a while add to DNSKEY,
	after a while add to DS
(and the converse for retiring an algorithm); see the sketch below.
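
To make the ordering concrete, here is a small Python sketch of my own
(not from any RFC; the algorithm numbers are purely illustrative) of
the invariant behind those steps: signatures appear before keys, and
keys before DS records, so DS <= DNSKEY <= RRSIG holds as sets at
every stage, and a cache holding any mix of old and new RRsets can
still find a matching signature.

  OLD, NEW = 8, 13   # hypothetical algorithm numbers, for illustration only

  def check_invariant(rrsig_algs, dnskey_algs, ds_algs):
      # Every DS algorithm needs a matching DNSKEY, and every DNSKEY
      # algorithm needs covering RRSIGs: DS <= DNSKEY <= RRSIG as sets.
      assert ds_algs <= dnskey_algs <= rrsig_algs

  # (RRSIG algs, DNSKEY algs, DS algs) at each stage, waiting out TTLs
  # between steps; read the list bottom-up for retiring an algorithm.
  stages = [
      ({OLD},      {OLD},      {OLD}),        # steady state
      ({OLD, NEW}, {OLD},      {OLD}),        # 1. add RRSIGs
      ({OLD, NEW}, {OLD, NEW}, {OLD}),        # 2. add to DNSKEY
      ({OLD, NEW}, {OLD, NEW}, {OLD, NEW}),   # 3. add to DS
  ]
  for rrsig, dnskey, ds in stages:
      check_invariant(rrsig, dnskey, ds)
  print("every stage keeps DS <= DNSKEY <= RRSIG")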

During the long development phase of DNSSEC there was the concept of
different validator policies:
  - paranoid
  - relaxed
A paranoid resolver will check everything; a relaxed one will accept
any single validating signature.
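
In code the difference is small. A minimal sketch, assuming
dnskey_algs is the set of algorithms in the zone's DNSKEY RRset and
valid_sig_algs the algorithms for which the validator found at least
one cryptographically valid RRSIG over the RRset in question (my own
illustration, not text from any spec):

  def relaxed(dnskey_algs, valid_sig_algs):
      # Any single validating signature is sufficient.
      return bool(valid_sig_algs & dnskey_algs)

  def paranoid(dnskey_algs, valid_sig_algs):
      # Every algorithm advertised in the DNSKEY RRset must validate.
      return dnskey_algs <= valid_sig_algs

  dnskey  = {8, 13}   # zone advertises two algorithms
  sigs_ok = {8}       # only algorithm 8 yielded a valid signature
  print(relaxed(dnskey, sigs_ok))    # True:  one good signature is enough
  print(paranoid(dnskey, sigs_ok))   # False: algorithm 13 did not validate

A real validator would also intersect these sets with the algorithms
it actually implements, which is where the two policies start to
diverge in practice.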

What Matthijs is effectively saying is that he thinks paranoid is the
way to go, while Ed argues that relaxed is the way to go (sorry for
putting words in your mouths).
For what it is worth, paranoid is what security people are more
comfortable with, while relaxed is more what traditional DNS is about;
given how far some common DNS resolvers will bend over backwards to
accept answers from broken implementations, relaxed is what a large
part of the world seems to expect.

My personal feeling is that zone publishers SHOULD assume all
validators are paranoid and proceed accordingly; a publisher-side
check along those lines is sketched below. I do not think we can
successfully prescribe either behavior, or any variant in between.
A number of people have argued in the past that this should be a local
configuration option.
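
For a publisher taking the paranoid assumption seriously, the
pre-publication check is roughly the following (again a hypothetical
sketch of mine, with rrsets mapping each authoritative RRset to the
set of algorithms that sign it):

  def safe_for_paranoid(dnskey_algs, rrsets):
      # Every algorithm in the DNSKEY RRset must sign every RRset.
      return all(dnskey_algs <= sig_algs for sig_algs in rrsets.values())

  rrsets = {("www", "A"): {8, 13},   # signed by both algorithms
            ("@", "MX"):  {8}}       # missing the algorithm-13 RRSIG
  print(safe_for_paranoid({8, 13}, rrsets))   # False for a paranoid validator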

The torture test is when you ask the following questions:
"Do you think paranoid is safer than relaxed?"

If the answer is yes:
"Are you willing to accept not reaching <your best customer> because
their DNS setup is messed up?"

	Olafur



On 10/12/2010 4:08 AM, Matthijs Mekking wrote:
>
> On 12/09/2010 04:47 PM, Edward Lewis wrote:
>> At 16:19 +0100 12/9/10, Matthijs Mekking wrote:
>>
>>> Because the zone operator specifically wants to be protected against
>>> algorithm compromises and does that by listing more than one algorithm
>>> in the DS RRset.
>>
>> DNSSEC is not designed to protect the zone operator, it is designed to
>> protect the relying party.  Listing more than one algorithm is done to
>> reach a wider audience, it does not enhance the security as far as the
>> zone operator is concerned.
>
> Maybe it is not designed for this, but it sure is nice that it is
> possible. I think both benefits apply: listing two algorithms (for
> example, x and y) not only gives you protection against algorithm
> compromise, it also reaches a wider audience. Now your zone is seen as
> secure by validators that understand only x, or understand only y, or
> understand both.
>
>> In the experimentation period before the first RFCs were produced (RFC
>> 2065 being the first milestone) we played with the notion of key
>> strengths.  (Keys, not algorithms because back then we only had the
>> unencumbered DSA and the patented RSA available.)  Trying to enhance
>> the concept of DNSSEC by using the notion that one algorithm is better
>> or one key is better than another failed time and time again.  When
>> there's a weak link involved you can't overcome it.
>
> I am not making any assumptions about one algorithm being better than
> the other, just because I use two of them.
>
>> What if your zone is signed with algorithm 42?  You might be more
>> strongly signed than anyone else.  But what if you are delegated from a
>> parent using RSA/MD5?  Your strengths are negated and if algorithm 42 is
>> not widely deployed you are even seen as unsigned to the masses.
>
> No: Your strengths are negated only if you accept only one signature.
> But if the validator is required to check both signatures, your
> strength is not negated.
>
> If 42 is not widely deployed, meaning most validators do not understand
> algorithm 42, then most validators will only use RSA/MD5 in this case
> and construct the security status of responses based on that algorithm
> only. So the masses still have some sense of security; they don't see
> the zone as insecure.
>
>> So, so what if I say DNSSEC is not there to protect the zone operator.
>> Why does that mean that MUST is wrong?  The rationale is that no matter
>> what you do, local policy is going to be followed. Local policy is going
>> to prefer quick resolution of the query. Requiring DNSSEC to do extra or
>> extraneous work is not constructive - by that I mean, once you've gotten
>> a chain of trust why keep trying to find faults?  DNSSEC is not designed
>> to be the last line of defense.
>
> The additional work is there for algorithm compromise protection. And as
> I argued before, yes it is local policy to let the validator fall back to
> "One Signature Is Enough". But it is kind of insane to me to have a
> local policy that says 'allow badly signed zones'.
>
>> DNSSEC is never going to overcome weaknesses in the underlying
>> cryptography.  Trying to do so is a futile quest and the relying party
>> suffers.
>
> And that's why I think it is nice that you can have a set of algorithms
> to rely on. So in my opinion it is not futile.
>
> But you do make a good point here: A name server using more algorithms
> puts more workload on the validator. With that in mind, I am willing to
> say that a SHOULD should be there.
>
> Best regards,
>
> Matthijs