Re: [dnsext] MAR proposal #1: Algorithm downgrade protection

Brian Dickson <> Sat, 02 April 2011 18:41 UTC

Date: Sat, 02 Apr 2011 14:42:49 -0400
From: Brian Dickson <>
To: Paul Hoffman <>
Cc: DNSEXT Working Group <>
Subject: Re: [dnsext] MAR proposal #1: Algorithm downgrade protection

I think there is one specific place where (a) there is value in stricter
checking, (b) the cost of doing so is reasonably low, and (c) however low the
risk may be, the risk-cost-value equation (low risk, low cost, high value)
favors a SHOULD-level requirement to "check all known algorithms in the DS
RRset, and check for the presence of signatures matching those algorithms".

That place is the signatures on both the apex DNSKEY RRset and the
corresponding DS RRset (i.e., the SEP to the KSK).
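To make the proposed SHOULD-level check concrete, here is a minimal sketch (my own illustration, not from the thread or any real resolver API; the record dictionaries, `algorithms_in`, `strict_sep_check`, and the `verify` callback are all hypothetical stand-ins): instead of accepting the first DNSKEY signature that validates, require a verifiable signature for every algorithm listed in the DS RRset.

```python
# Hypothetical sketch of the stricter SEP check discussed above.
# Records are modeled as plain dicts; a real resolver would use its
# own RR types and cryptographic verification.

def algorithms_in(rrset):
    """Distinct algorithm numbers appearing in an RRset."""
    return {rr["algorithm"] for rr in rrset}

def strict_sep_check(ds_rrset, dnskey_rrsigs, verify):
    """Require a verifiable DNSKEY signature for EVERY algorithm in the
    DS RRset, rather than any one algorithm ("any algorithm, any
    signature"). A missing or unverifiable algorithm makes the zone
    Bogus, which is what defeats the stripped-down-RRset downgrade."""
    for alg in algorithms_in(ds_rrset):
        sigs = [s for s in dnskey_rrsigs if s["algorithm"] == alg]
        if not sigs or not any(verify(s) for s in sigs):
            return "Bogus"
    return "Secure"

# Example: the DS RRset lists algorithms 8 and 13, but the (possibly
# substituted) DNSKEY RRset is signed only with algorithm 8.
ds = [{"algorithm": 8}, {"algorithm": 13}]
sigs = [{"algorithm": 8, "valid": True}]
print(strict_sep_check(ds, sigs, lambda s: s["valid"]))
```

Under the looser "any algorithm, any signature" rule, the single algorithm-8 signature above would suffice; the stricter rule flags the missing algorithm 13 and returns Bogus.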

However unlikely it is that an RRSIG for an RRset could be forged, the
consequences of such a forged signature over a substituted, "stripped down"
DNSKEY RRset and/or DS RRset (one which, instead of a robust set of DNSKEY
algorithms, contains just one algorithm, for which the key has been
compromised or a new key has been installed) are significant.

The impact would be that, in that cache, every other algorithm and every
signature on every RRset in the zone would then be BAD: the algorithms would
not be found, and the RRSIGs would fail to validate (either as bad
signatures, or for lack of a chain of trust back to the root or another
trust anchor). The resolver would then attempt to retrieve valid answers by
querying the forger's server for everything in the zone.

The attacker would thus be able to ensure the downgrade attack succeeded on
the entire zone, and in all likelihood the downgrade would not risk
detection (i.e., the bulk of operators of validating resolvers would either
not recognize that there is a problem, or not understand what the problem
is). Depending on local policy and implementers' choices, it is possible
that no warnings or errors would even be generated.

Any cached data (i.e., from the "good" authority server, as opposed to the
forger) would either be tossed, or be moved to the BAD cache with a new
(short) TTL applied; it would not be returned to the client except when
CD=0, and would fail to validate anyway (where FAIL == BOGUS).

On the other hand, whenever there is more than one algorithm and DS/DNSKEY
checking is stronger than "any algorithm, any signature", the attacker's
options are to look for a weaker spot (individual other RRtypes), dedicate a
lot more resources, or just give up. If more than one algorithm must be
defeated, the risk of any single algorithm being *the* problem goes away.
The attack window on "regular" RRSIGs in a zone will tend to be much smaller
(due to lower TTLs, re-signing, and easy key rollover within any given
algorithm), and the duration of the results of a successful attack
correspondingly short, which also works against a potential attacker. This
presumes the ZSKs are rolled reasonably frequently - an action which
invalidates cached RRSIGs signed by the previous ZSKs, i.e. for any given
algorithm.
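A quick back-of-the-envelope illustration of why requiring all algorithms compounds the attacker's cost (my own sketch with made-up probabilities, assuming breaks of different algorithms within the attack window are independent events):

```python
# Hypothetical per-algorithm probabilities of a successful break within
# the attack window; the numbers are illustrative only.
p_break = {"RSASHA256": 1e-9, "ECDSAP256SHA256": 1e-9}

# Under "any algorithm, any signature", defeating the weakest single
# algorithm suffices.
single_weakest = max(p_break.values())

# Under "all algorithms must validate", every listed algorithm must be
# defeated, so (assuming independence) the probabilities multiply.
all_required = 1.0
for p in p_break.values():
    all_required *= p

print(f"weakest single algorithm: {single_weakest:.1e}")
print(f"all algorithms required:  {all_required:.1e}")
```

Even with generous assumptions, the product is vastly smaller than the weakest single-algorithm probability, which is the point of the paragraph above.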

I can't comment on the likelihood of weaknesses in two or more algorithms
being found and exploited within a given time window, but I suspect it's
rather low.

And having multiple signatures (which multiple algorithms necessarily
imply), all of which need to validate, makes possible the separation of
key-holders, avoiding a single point of exploitation on the signing side as
well. Specifically, key separation for the KSK (the real high-value target).


P.S. I know we're talking about extraordinarily low probabilities to start
with, in terms of an attacker being able to "get lucky" with a brute-force
attack. However, there is benefit on the social-engineering side if key
separation becomes commonplace, even for low-value keys. In both cases,
there is benefit at reasonably low cost to both authority operators and
validator operators. SEPs are everything in DNSSEC, and thus SEPs need
pretty strong protection.

On Sat, Apr 2, 2011 at 2:37 AM, Paul Hoffman <> wrote:

> On Apr 1, 2011, at 4:24 PM, Samuel Weiler wrote:
> > This is a proposed change in DNSSECbis that arguably changes the
> mandatory algorithm rules.  I'm posting this as a summary of what I
> understand some may support, I don't support this change myself. Please post
> in this thread with your support or lack thereof.
> >
> >
> > In order to provide some protection against algorithm downgrade[1], we're
> defining a mechanism for zone signers to signal to validators that a SET of
> algorithms should ALL be checked, when possible, before determining that an
> answer from the zone is Secure.  Specifically, we're overloading the DS
> RRset to do that signalling.
> >
> > Validators SHOULD check signatures from all algorithms present in a
> zone's DS RRset or trust anchors before declaring an answer from the zone to
> be Secure.  If it is impossible to validate an answer with one or more of
> those algorithms, the answer SHOULD be treated as Bogus.
> >
> > This is a subset of the checks unbound was performing that led to the
> discovery of the problems with .cz's algorithm roll process.
> >
> > Please post in this thread with your support or objections.
> I do not support this change. It makes DNSSEC validation both more mushy
> and more brittle by having zones "signal" to relying parties what the zone
> wants the relying parties to do. It is more mushy because a clearer
> statement can be made: just sign with the algorithms you want. It makes it
> more brittle because now there are more ways to make a zone accidentally go
> bogus.
> --Paul Hoffman
> _______________________________________________
> dnsext mailing list