Re: [dnsext] Clarifying the mandatory algorithm rules

"W.C.A. Wijngaards" <> Tue, 30 November 2010 10:42 UTC

Hi workgroup,

The Unbound implementation currently behaves as it was originally
written, and I agree that it has to change.  Now, what is the right way?

On 11/22/2010 06:33 PM, Paul Vixie wrote:
>> Date: Mon, 22 Nov 2010 07:58:53 -0500
>> From: Edward Lewis <>
>> ...
>> DNSSEC is designed to protect against cache poisoning.  If there is a chain
>> of trust to an RRSet, then the set validates, it has proven source
>> authenticity and data integrity.
>> Since time began (for DNS), there is always something wrong somewhere.  The
>> goal is a robust system, not a tightly correct system.
> +1.

It would be good to have some text on this in dnssec-bis-updates.  (I
mean on the algorithm topic; the general leniency has been put forward
by Alfred and others, and I wholeheartedly agree with it :-) ).

It is clear that checking the set of algorithms present in the DNSKEY
set is not a good idea, and that checking the set of algorithms from the
DS set is the right, more lenient way to go (this is what Mark was
proposing).  The DS set is what validators use to determine whether they
can support the algorithms present in the zone today (and tomorrow).

However, I see choices here that change the security properties:

* One Signature Is Enough

Check one signature; if it verifies, that is enough.  The added text
would be 'regardless of the algorithms signalled in the DS set'.
+ it is lenient.
  (it would have avoided the operational problems that incited this).
+ if a zone is badly signed, so that it includes algorithms in the DS
set for which proper signatures are not available, validators that
support both algorithms still approve of this situation.
- no algorithm protection (explanation below).
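As a rough sketch of this policy (hypothetical names and data shapes,
not Unbound's actual code; a boolean stands in for real cryptographic
verification):

```python
from dataclasses import dataclass

@dataclass
class Sig:
    algorithm: int   # DNSSEC algorithm number, e.g. 8 = RSA/SHA-256
    valid: bool      # stand-in for the result of real signature verification

def one_signature_is_enough(rrsigs, supported_algorithms):
    """Policy: accept the RRset if any signature made with a supported
    algorithm verifies, regardless of what the DS set signals."""
    return any(sig.algorithm in supported_algorithms and sig.valid
               for sig in rrsigs)
```

Under this policy a broken signature in one algorithm does not matter as
long as some other supported signature verifies.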

* Check the Algorithms

For each of the algorithms signalled in the DS set, check that there is
a valid chain of trust.  The added text would be: the validator SHOULD
or MUST (choice here) check that the algorithms signalled in the DS set
work (but only for algorithms supported by the validator, of course).

+ it is lenient because it uses the DS set, not the DNSKEY set.
  (this allows easier algorithm rollovers; it would have avoided the
operational problems that incited this).
- but not as lenient as 'one signature is enough': for zones that are
badly signed but have signalled the algorithm with their DS, a portion
of validators are certain to fail, and validators that support the other
algorithm now also fail.
+ algorithm protection (explanation below).
  + if SHOULD, local policy decides what protection to apply.
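A sketch of this stricter policy, with the same hypothetical shapes as
above (the empty-intersection case, where the validator supports none of
the signalled algorithms, is assumed to be handled elsewhere as
insecure):

```python
from dataclasses import dataclass

@dataclass
class Sig:
    algorithm: int   # DNSSEC algorithm number
    valid: bool      # stand-in for real signature verification

def check_the_algorithms(rrsigs, ds_algorithms, supported_algorithms):
    """Policy: for every algorithm signalled in the DS set that this
    validator supports, require at least one valid signature."""
    checked = ds_algorithms & supported_algorithms
    return all(
        any(sig.algorithm == alg and sig.valid for sig in rrsigs)
        for alg in checked
    )
```

Note that a validator supporting only one of the signalled algorithms
ignores the other, so it can still validate a zone whose second
algorithm is broken; a validator supporting both will fail it.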

Algorithm compromise protection: this is a security property that the
validation result can have or not.  Security algorithms have different
properties, and by checking all of them you receive their joined
benefits.  Specifically, if one algorithm turns out to be 'broken' (and
you do not yet know which one it will be), you are protected because you
also check the other algorithm.

For zones that perform a single-DS rollover, this never applies, as the
single DS makes validators check only one algorithm anyway (and since
this minimizes communication with the parent, many zones can be expected
to do this).  Other zones may have multiple DSes (or DLVs) and can thus
have multiple crypto algorithms checked, at their option (and subject to
the validator's local policy).

I would like to know what the (seemingly rough) consensus is, so that I
can go forward and implement this correctly.  It would then be a good
idea for Sam to put the conclusions in the dnssec-bis-updates draft.

Best regards,