Re: [dnsext] Clarifying the mandatory algorithm rules

Matthijs Mekking <matthijs@NLnetLabs.nl> Thu, 09 December 2010 09:57 UTC

Hi,

I would say you need to check the algorithms signaled in the DS RRset
(the already (in)famous Section 2.2 of RFC 4035 implicitly says so).
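
As a rough sketch of what that check starts from (the DS data and the
supported-algorithm set below are made-up examples, not taken from any
particular implementation):

    # Hypothetical DS RRset: one (key_tag, algorithm, digest_type) per DS RR.
    ds_rrset = [(12345, 5, 1),   # algorithm 5 = RSA/SHA-1
                (54321, 8, 2)]   # algorithm 8 = RSA/SHA-256

    # Algorithms this (hypothetical) validator implements.
    SUPPORTED_ALGORITHMS = {3, 5, 6, 7, 8, 10}

    # Algorithms the zone signals via its DS RRset, and the subset the
    # validator can reasonably be expected to check.
    signaled = {alg for (_key_tag, alg, _digest_type) in ds_rrset}
    checkable = signaled & SUPPORTED_ALGORITHMS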

Now, whether the validator SHOULD or MUST check that the algorithms
signaled in the DS-set work: if the working group decides this should be
a SHOULD, validating resolvers will have the choice to fall back to "One
Signature Is Enough".  In practice, this would probably be a
configuration knob such as algorithm-compromise-protection: "yes" or
"no".

Although this is local policy, I also think it is bad policy for a
validator to tolerate algorithm compromises and badly signed zones.
Thus, I would go for a MUST here.
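
As a minimal sketch of that knob, with both policies side by side (the
function and parameter names are invented for illustration and are not
taken from any existing resolver):

    def chain_of_trust_ok(signaled_algorithms, validated_algorithms,
                          algorithm_compromise_protection=True):
        """Hypothetical post-validation policy check for one RRset.

        signaled_algorithms:  DS-signaled algorithms that this validator
                              supports (unsupported ones are ignored).
        validated_algorithms: algorithms for which at least one RRSIG
                              verified with a complete chain of trust.
        """
        if not algorithm_compromise_protection:
            # "One Signature Is Enough": a single valid chain suffices,
            # regardless of what the DS RRset signals.
            return bool(validated_algorithms)
        # "Check the Algorithms": every supported, DS-signaled algorithm
        # must have produced at least one valid chain of trust.
        return (bool(signaled_algorithms)
                and signaled_algorithms <= validated_algorithms)

A MUST in the text would amount to hard-wiring
algorithm_compromise_protection to True; a SHOULD leaves it as the
local-policy knob described above.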

Best regards,

Matthijs

On 11/30/2010 11:43 AM, W.C.A. Wijngaards wrote:
> Hi workgroup,
> 
> Unbound implements this the way it was originally written, and I agree
> that changes have to be made to it.  Now, what is the right way?
> 
> On 11/22/2010 06:33 PM, Paul Vixie wrote:
>>> Date: Mon, 22 Nov 2010 07:58:53 -0500
>>> From: Edward Lewis <Ed.Lewis@neustar.biz>
>>> ...
>>> DNSSEC is designed to protect against cache poisoning.  If there is a chain
>>> of trust to an RRset, then the set validates: it has proven source
>>> authenticity and data integrity.
>>>
>>> Since time began (for DNS), there has always been something wrong somewhere.
>>> The goal is a robust system, not a tightly correct system.
> 
>> +1.
> 
> It would be good to have some text in bis-updates on this.  (I mean on
> the algorithm topic; the general leniency has been put forward by Alfred
> and others, and I wholeheartedly agree with it :-) ).
> 
> It is clear that checking the set of algorithms present in the DNSKEY
> set is not a good idea, and that checking the set of algorithms from the
> DS set is the right, more lenient way to go (this is what Mark was
> proposing).  The DS set is what validators use to determine whether they
> can support the algorithms present in the zone today (and tomorrow).
> 
> However, there are choices to be made here that change the security
> properties.
> 
> 
> * One Signature Is Enough
> 
> Check one signature; if it works, it's enough.  The added text would be
> 'regardless of the algorithms signalled in the DS-set'.
> + it is lenient.
>   (it would have avoided the operational problems that incited this).
> + if a zone is badly signed, i.e. it includes algorithms in the DS set
> for which proper signatures are not available, validators that support
> both algorithms still approve of this situation.
> - no algorithm compromise protection (explanation below).
> 
> 
> * Check the Algorithms
> 
> For the algorithms signalled in the DS set, check that there is a valid
> chain of trust for each of them.  The added text would be: the validator
> SHOULD or MUST (choice here) check that the algorithms signalled in the
> DS-set work (but only for algorithms supported by the validator, of
> course).
> 
> + it is lenient because it uses the DS-set, not the DNSKEY-set.
>   (this allows easier algorithm rollovers; it would have avoided the
> operational problems that incited this).
> - but not as lenient as 'one signature': this concerns zones that are
> badly signed but have signalled the algorithm in their DS, so that a
> portion of validators is certain to fail for them.  Validators that
> support the other algorithm now also fail for them.
> + algorithm compromise protection (explanation below).
>   + if SHOULD, local policy decides what protection to apply.
> 
> 
> Algorithm compromise protection.
> This is a security property that the validation result may or may not
> have.  Signature algorithms have different properties, and by checking
> all of them you receive their combined benefits.  Specifically, if one
> algorithm turns out to be 'broken' (but you do not yet know which one
> that will be), you are still protected, because you also check the
> other algorithm.
> 
> For zones that perform a single-DS rollover, this never applies, as the
> single DS makes validators check one algorithm anyway (since this
> minimizes communication with the parent, many zones can be envisioned to
> do this).  Other zones may have multiple DSes (or DLVs) and thus can
> have multiple crypto algorithms checked, at their option (and subject to
> validator local policy).
> 
> 
> I would like to know what the (seemingly rough) consensus is, so that I
> can go forward and implement this right.  It would then be a good idea
> for Sam to put the conclusions in the dnssec-bis-updates draft.
> 
> Best regards,
>    Wouter
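
To make the algorithm compromise protection point above concrete: suppose
the DS RRset signals algorithms 5 and 8, algorithm 5 is later broken, and
an attacker can forge algorithm-5 signatures but not algorithm-8 ones.
Using the hypothetical chain_of_trust_ok() sketch from earlier in this
message, the two policies give different answers:

    signaled = {5, 8}

    # The forged response can only carry a valid-looking algorithm-5 chain.
    only_broken_alg = {5}

    # "Check the Algorithms" rejects the forgery: no algorithm-8 chain.
    print(chain_of_trust_ok(signaled, only_broken_alg,
                            algorithm_compromise_protection=True))   # False

    # "One Signature Is Enough" accepts it: any one valid chain suffices.
    print(chain_of_trust_ok(signaled, only_broken_alg,
                            algorithm_compromise_protection=False))  # True

As noted above, a zone with only a single DS gets no such protection
either way, since there is only one algorithm to check.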