Re: [dnsext] Clarifying the mandatory algorithm rules

Mark Andrews <marka@isc.org> Fri, 19 November 2010 06:03 UTC

To: Brian Dickson <brian.peter.dickson@gmail.com>
From: Mark Andrews <marka@isc.org>
References: <alpine.BSF.2.00.1011180553250.83352@fledge.watson.org> <4CE53927.9090203@isc.org> <4CE58E90.6030607@nic.cz> <AANLkTin2H7UkP7FVfz3GN74CKtqn2OKo7MmcKMGOkvNY@mail.gmail.com> <4CE59B3D.5020109@nic.cz> <20101119004134.993F26EB749@drugs.dv.isc.org><AANLkTi=Hp6s4xwLQGyWv3BNtvUf5-SDtgUzHbNKtfCV1@mail.gmail.com>
In-reply-to: Your message of "Fri, 19 Nov 2010 01:02:02 EDT." <AANLkTi=Hp6s4xwLQGyWv3BNtvUf5-SDtgUzHbNKtfCV1@mail.gmail.com>
Date: Fri, 19 Nov 2010 17:04:15 +1100
Message-Id: <20101119060416.024456F87CD@drugs.dv.isc.org>
Cc: dnsext@ietf.org
Subject: Re: [dnsext] Clarifying the mandatory algorithm rules

In message <AANLkTi=Hp6s4xwLQGyWv3BNtvUf5-SDtgUzHbNKtfCV1@mail.gmail.com>, Brian Dickson writes:
> On Thu, Nov 18, 2010 at 8:41 PM, Mark Andrews <marka@isc.org> wrote:
> >
> > The intent of this section was to prevent downgrade to insecure due
> > to an attacker dropping RRSIGs.  DNSSEC has always been "if *any*
> > signature validates then the RRset validates".  There is no concept
> > of a "more secure algorithm" in DNSSEC.  There may be outside of
> > DNSSEC but not inside.
> 
> This is likely what the original intent was, and certainly within the
> context of a single algorithm, or of a validator that only understands
> one of the algorithms present, it makes sense that only one RRSIG
> needs to validate.

It was expressed in the context of multiple algorithms and no one rejected
the premise.
 
> However, consider the case of multiple algorithms used (to sign), and
> validators that understand just one of those, vs validators that understand
> more than one, ideally all of them.
> 
> What are the implications of "any RRSIG will do" versus the alternative of
> "one RRSIG of each understood algorithm must validate"?

The working group looked at this when making the decision to accept one
signature as good enough.
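
For clarity, the rule as it stands amounts to something like the
following (illustrative Python only; crypto_verify() and the record
fields are placeholders, not any particular implementation):

    SUPPORTED = {5, 7, 8}              # algorithm numbers this validator implements

    def crypto_verify(key, sig, rrset):
        ...                            # the actual cryptographic check goes here

    def rrset_secure(rrset, rrsigs, dnskeys):
        # Default DNSSEC rule: the RRset is secure if ANY signature the
        # validator supports verifies with a key from the DNSKEY RRset.
        for sig in rrsigs:
            if sig.algorithm not in SUPPORTED:
                continue               # unrecognised algorithms are simply skipped
            for key in dnskeys:
                if (key.algorithm == sig.algorithm and
                        key.key_tag == sig.key_tag and
                        crypto_verify(key, sig, rrset)):
                    return True        # one good signature is enough
        return False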
 
> For brevity, the algorithms are A, B, C, .... and the possible requirements
> are either "A | B | C..." (or any subset thereof), or "A & B & C ..."
> (or any subset thereof).
> 
> It might not be known a priori which of A, B, C, ... is "weaker", or
> which might be broken first.
> 
> The argument is that, if the "&" logic is used, it doesn't matter;
> and in fact, the bad things that result from "|" result regardless
> of which algorithm is broken first.
> 
> If any given algorithm is broken, and only one RRSIG needs to match
> (the "|" case),
> then every zone signed with that algorithm can subsequently be compromised.
> RRSIGs can be produced by an attacker, matching forged data. This
> applies not only
> to zone data, but also to DNSKEYs, and DS records and their RRSIGs.
> 
> The implications for both detection and response are substantial.
> Removal of the bad algorithm would be mandated, and would need to
> happen extremely fast, on the data side.  Zones would need to be
> re-signed completely.  The rest is left as an exercise for the
> paranoid reader.

You don't need to remove the broken algorithm from the zone.  You
just need to stop the validator from using it to validate.
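
That is, the change is entirely on the validator side; the zone keeps
its keys and signatures.  A rough sketch of the effect, assuming the
validator keeps a set of supported algorithm numbers (illustrative
Python, not any particular implementation):

    def usable_algorithms(ds_algorithms, supported):
        # Only DS algorithms the validator still supports count; if none
        # remain, the zone is treated as insecure (unsigned), not bogus.
        return ds_algorithms & supported

    supported = {7, 8}                 # what the validator implemented
    supported.discard(7)               # algorithm 7 found broken: stop using it

    print(usable_algorithms({7, 8}, supported))   # {8}  -> still validates via 8
    print(usable_algorithms({7}, supported))      # set() -> treated as insecure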

> On the other hand, if the requirement is that all understood
> algorithms present, must have a valid matching RRSIG, then having even
> two algorithms (in common between the signer and validator)
> changes everything.

This isn't a requirement of DNSSEC and never has been.
 
> The compromise of one algorithm has no immediate implication, since
> forgeries cannot occur without forging the RRSIGs for all known
> algorithms. At worst, validators that only understand one of the
> algorithms used, and specifically the broken algorithm, are exposed to
> attack. Everyone else can safely respond at their leisure, with no need
> for drastic immediate action.

Which was understood when the decision to accept any signature was made.

> The incremental cost of signing, and of validating, multiple RRSIGs,
> is linear, O(n) where n is the number of algorithms.
> For zone owners who are already signing using multiple algorithms, the
> incremental cost is zero.
> And for validating, caching resolvers, the incremental cost occurs
> only the first time the record is retrieved, and validated.
> Once validated, and cached, there is no incremental cost.
> 
> So, my suggestion is, if we need to revisit the scope of the -bis to
> incorporate this logic, perhaps we should.
> And if it can already be considered in-scope, the benefits arguably
> outweigh the concerns of "changing the meaning" of the existing
> RFC(s).
> 
> Brian
> 
> > If the archives were online you could see that this was the intent
> > of this paragraph.
> >
> > When an operator of a zone publishes a DS (or trust anchor) for a
> > zone with an algorithm, that is a claim that the entire zone is
> > signed with that algorithm.  That claim will continue to hold for
> > the life of the DS records, or until the operator can be assured
> > that the trust anchors for the algorithm have been removed,
> > whichever is longer.
> >
> > Additionally, the MUST is a directive to *zone operators*.  A zone
> > operator can comply with that MUST, but validation will still fail
> > with unbound, as unbound requires that a zone be signed with DNSKEYs
> > which are NOT in the zone prior to adding the DNSKEY to the zone,
> > due to caching.  On this alone it is clear that unbound is in the
> > wrong.
> >
> > If you want to do downgrade protection between algorithms, you can,
> > but only for the algorithms listed in the DS / trust anchors.  They
> > are the only algorithms for which the zone operator has made any
> > claims of existence.
> 
> I'm unclear here. Are you arguing that downgrade protection is signaled
> by the zone owner via DS using multiple algorithms, and if signaled is
> mandatory for implementations to do?
> 
> Or that downgrade protection is optional, and can only be done for
> algorithms present in DS/trust anchors?

If you have multiple algorithms listed in the DS then, as a validator,
you should expect to be able to validate the response using any of
those algorithms.  If you have a local policy that says "only trust
A if A and B are present", then you can only do that reliably when
the DS lists both A and B.  Note that this local policy is NOT the
default policy for DNSSEC and would require extensions to a standard
validator to support.

Personally I don't think there is any real benefit in supporting
such a policy.  If you don't trust B just don't use it.

One could also have a local policy of verifying all algorithms.  This
too would require extensions to the validator.  You would need to
check all the algorithms in the DNSKEY RRset and record which ones
successfully validated, then examine the DS RRset / trust anchors and
check that all the algorithms listed there are in the successful list
produced from the DNSKEY RRset.  You get algorithm downgrade
protection if you do this.

Note this does not require that you check that there is a signature
for every algorithm in the DNSKEY RRset.
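
A rough sketch of that check, given a per-algorithm validation result
for the DNSKEY RRset (illustrative Python, not any particular
implementation):

    def downgrade_protected(validated, ds_algorithms):
        # 'validated' is the set of algorithms for which at least one RRSIG
        # over the DNSKEY RRset verified; 'ds_algorithms' are the algorithms
        # listed in the DS RRset / trust anchors.  Every algorithm the parent
        # claims must have verified.  Algorithms present only in the DNSKEY
        # RRset, and not in the DS, need no signature at all.
        return ds_algorithms <= validated

    # DS lists {8, 13}; both verified; an extra DNSKEY algorithm 10 without
    # any RRSIG does not matter:
    print(downgrade_protected(validated={8, 13}, ds_algorithms={8, 13}))  # True
    # DS lists {8, 13} but only 8 verified -- possible downgrade, reject:
    print(downgrade_protected(validated={8}, ds_algorithms={8, 13}))      # False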

> > I think the paragraph in question should be relaxed to make the MUST
> > only apply to algorithms listed in the DS / published trust anchors.
> > This will allow for gradual introduction of RRSIG for new algorithms.
> > The current wording prevents this and is a real pain for large
> > zones.
> 
> I like gradual introduction of RRSIGs for new algorithms.
> 
> I do not like downgrade protection being optional.
> 
> Downgrade protection between algorithms is every bit as important as
> protection against downgrade to insecure, precisely because a broken
> algorithm *is* insecure, in fact.

If an algorithm is insecure then you should remove it from the list
of acceptable algorithms in your validator.  That way you get
"insecure", not "trusted", out of the validator.

> > It will also resolve this issue, as currently it is the sum of the
> > algorithms listed in the DS (trust anchors) + DNSKEY.
> 
> I'm fine with excluding DNSKEY, so long as it is the sum of
> DS/trust-anchors.
> 
> Brian
> 
> > Mark
> >
> > P.S. We need a better way to publish trust anchors in-band, with
> > lifetimes etc., so that one can be assured that trust anchors expire
> > the way DS records and zones expire, but that should be a different
> > discussion.  RFC 5011 really isn't good enough.
> >
> > --
> > Mark Andrews, ISC
> > 1 Seymour St., Dundas Valley, NSW 2117, Australia
> > PHONE: +61 2 9871 4742                 INTERNET: marka@isc.org
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: marka@isc.org