Re: [dnsext] Clarifying the mandatory algorithm rules

Brian Dickson <brian.peter.dickson@gmail.com> Fri, 19 November 2010 05:01 UTC


On Thu, Nov 18, 2010 at 8:41 PM, Mark Andrews <marka@isc.org> wrote:
>
> The intent of this section was to prevent downgrade to insecure due
> to an attacker dropping RRSIGs.  DNSSEC has always been "if *any*
> signature validates then the RRset validates".  There is no concept
> of a "more secure algorithm" in DNSSEC.  There may be outside of
> DNSSEC but not inside.

This is likely what the original intent was, and within the context of a
single algorithm, or of a validator that only understands one of the
algorithms present, it certainly makes sense that only one RRSIG needs
to validate.

However, consider the case where multiple algorithms are used to sign,
with some validators understanding just one of them and others
understanding more than one, ideally all of them.

What are the implications of "any RRSIG will do" versus the alternative,
"one RRSIG of each understood algorithm must validate"?

For brevity, call the algorithms A, B, C, ...; the possible requirements
are either "A | B | C ..." (or any subset thereof) or "A & B & C ..."
(or any subset thereof).
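For concreteness, the two requirements can be sketched as follows. This
is my own illustration, not anything from the thread: RRSIGs are modeled
as dicts with a precomputed "valid" flag standing in for real
cryptographic verification, and all names are hypothetical.

```python
# Hypothetical sketch of the two validation policies; nothing here is
# real DNSSEC, just the "|" versus "&" logic over per-algorithm RRSIGs.

def validates(rrsig):
    """Stand-in for cryptographically verifying one RRSIG over an RRset."""
    return rrsig["valid"]

def any_policy(rrsigs, understood):
    """Current rule ("A | B | C"): any one valid signature suffices."""
    return any(validates(s) for s in rrsigs if s["alg"] in understood)

def all_policy(rrsigs, understood):
    """Alternative rule ("A & B & C"): each understood algorithm present
    must contribute at least one valid signature."""
    present = {s["alg"] for s in rrsigs} & set(understood)
    return bool(present) and all(
        any(validates(s) for s in rrsigs if s["alg"] == alg)
        for alg in present)

# Algorithm A is broken: the attacker forged an RRSIG that verifies
# under A, but cannot forge one that verifies under B.
rrsigs = [{"alg": "A", "valid": True},    # forged, but verifies
          {"alg": "B", "valid": False}]   # forgery attempt fails
understood = {"A", "B"}

print(any_policy(rrsigs, understood))  # True  -- forgery accepted
print(all_policy(rrsigs, understood))  # False -- forgery rejected
```

A two-algorithm validator accepts the forgery under the "|" rule and
rejects it under the "&" rule, which is the whole point of the argument
that follows.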

It might not be known a priori which of A, B, C, ... is "weaker", or
which might be broken first.

The argument is that, if the "&" logic is used, it doesn't matter: the
bad things that result from "|" occur regardless of which algorithm is
broken first.

If any given algorithm is broken, and only one RRSIG needs to match
(the "|" case), then every zone signed with that algorithm can
subsequently be compromised: an attacker can produce RRSIGs matching
forged data. This applies not only to zone data, but also to DNSKEYs,
DS records, and their RRSIGs.

The implications for both detection and response are substantial.
Removal of the bad algorithm would be mandated, and would need to
happen extremely fast on the data side; zones would need to be
completely re-signed. The rest is left as an exercise for the paranoid
reader.

On the other hand, if the requirement is that every understood algorithm
present must have a valid matching RRSIG, then having even two
algorithms in common between the signer and validator changes
everything.

The compromise of one algorithm has no immediate implication, since
forgeries cannot occur without forging the RRSIGs for all known
algorithms. At worst, validators that understand only one of the
algorithms used, specifically the broken one, are exposed to attack.
Everyone else can respond at their leisure, with no need for drastic
immediate action.

The incremental cost of signing, and of validating, multiple RRSIGs is
linear: O(n), where n is the number of algorithms. For zone owners who
are already signing with multiple algorithms, the incremental cost is
zero. And for validating, caching resolvers, the incremental cost
occurs only the first time the record is retrieved and validated; once
validated and cached, there is no further cost.
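The caching point can be sketched in the same illustrative style (my
own hypothetical names again; a real resolver also tracks TTLs and far
more state):

```python
# Hypothetical sketch: why multi-algorithm validation cost is one-time
# for a caching resolver. Each cache miss costs O(n) signature
# verifications, where n is the number of understood algorithms;
# every subsequent hit is served from cache with no validation at all.

cache = {}
verifications = 0

def resolve(name, algorithms):
    global verifications
    if name not in cache:
        # Cache miss: verify one RRSIG per understood algorithm.
        verifications += len(algorithms)
        cache[name] = "validated RRset"
    return cache[name]  # cache hit: no further validation cost

for _ in range(1000):
    resolve("example.com.", ["A", "B"])

print(verifications)  # 2 -- two algorithms, verified exactly once
```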

So my suggestion is: if we need to revisit the scope of the -bis
documents to incorporate this logic, perhaps we should. And if it can
already be considered in scope, the benefits arguably outweigh the
concerns about "changing the meaning" of the existing RFC(s).

Brian


> If the archives were online you could see that this was the intent
> of this paragraph.
>
> When an operator of a zone publishes DS (or trust-anchors) for a
> zone with an algorithm, that is a claim that the entire zone is signed
> with that algorithm.  That claim will continue to hold for the life
> of the DS records or until the operator can be assured that the
> trust anchors have been removed for the algorithm, whichever is
> longer.
>
> Additionally the MUST is a directive to *zone operators*.  A zone
> operator can comply with that MUST but validation will still fail
> with unbound as unbound requires that a zone be signed with DNSKEYs
> which are NOT in the zone prior to adding the DNSKEY to the zone
> due to caching.  On this alone it is clear that unbound is in the
> wrong.
>
> If you want to do downgrade protection between algorithms, one can,
> but only for the algorithms listed in the DS / trust-anchors.  They
> are the only algorithms that the zone operator has made any claims
> of existence for.

I'm unclear here. Are you arguing that downgrade protection is signaled
by the zone owner via DS records using multiple algorithms, and, if
signaled, is mandatory for implementations to do?

Or that downgrade protection is optional, and can only be done for algorithms
present in DS/trust anchors?

> I think the paragraph in question should be relaxed to make the MUST
> only apply to algorithms listed in the DS / published trust anchors.
> This will allow for gradual introduction of RRSIGs for new algorithms.
> The current wording prevents this and is a real pain for large
> zones.

I like gradual introduction of RRSIGs for new algorithms.

I do not like making downgrade protection optional.

Protection against downgrade between algorithms is every bit as
important as protection against downgrade to insecure, precisely
because a broken algorithm *is* insecure, in fact.

> It will also resolve this issue as now it is the sum of the algorithms
> listed in DS (trust-anchors) + DNSKEY.

I'm fine with excluding DNSKEY, so long as it is the sum of DS/trust-anchors.

Brian

> Mark
>
> P.S. we need a better way to publish trust anchors in-band with
> lifetimes etc. so that one can be assured that trust anchors expire
> like DS and zones expire but that should be a different discussion.
> RFC 5011 really isn't good enough.
>
> --
> Mark Andrews, ISC
> 1 Seymour St., Dundas Valley, NSW 2117, Australia
> PHONE: +61 2 9871 4742                 INTERNET: marka@isc.org
> _______________________________________________
> dnsext mailing list
> dnsext@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsext
>