Re: [dnsext] Clarifying the mandatory algorithm rules

Phillip Hallam-Baker <hallam@gmail.com> Thu, 09 December 2010 14:02 UTC

To: Edward Lewis <Ed.Lewis@neustar.biz>
Cc: dnsext@ietf.org

There are two considerations here:

1) Ensure that a service is available as long as there is an
acceptably secure path.
2) Enable reliance on weak algorithms to be discontinued before the
algorithm is broken.

It is very easy to meet the first at the expense of the second. And
where real-life operational requirements are concerned, we have
problems with TLS in this respect as well.


The ability of a relying party to process an algorithm does not mean
that the relying party trusts it as secure.

In particular, an RP may process an algorithm that it does not accept
as secure. This is necessary since, by default, there is no
authentication in DNS at all.
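
To make the distinction concrete, here is a toy sketch in Python. The
algorithm numbers are real DNSSEC code points (5 and 7 are the
RSA/SHA-1 variants, 8 and 10 the RSA/SHA-2 ones), but the sets and
helper names are purely illustrative, not any real validator's API:

from typing import Set

# Hypothetical policy for one relying party (illustrative values only).
SUPPORTED: Set[int] = {5, 7, 8, 10}  # algorithms the RP can run the crypto for
TRUSTED: Set[int] = {8, 10}          # the subset it still accepts as secure

def can_process(alg: int) -> bool:
    return alg in SUPPORTED

def accepts_as_secure(alg: int) -> bool:
    return alg in TRUSTED

# The RP can process algorithm 5 or 7 (e.g. to check the chain for
# consistency) without treating a successful validation under them as
# a security guarantee, since the baseline for unsigned DNS is no
# authentication at all.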


One concern here is that we do not want to force zones to discontinue
signing with an obsolete algorithm prematurely, because doing so
would weaken the security of their site.


In the normal course of events, an RP that sees a site advertise two
algorithms, only one of which it trusts, will process only the
trusted one and simply ignore the other, even if validation under the
trusted algorithm fails.

But the same RP, visiting a site that advertises only the untrusted
algorithm, should probably process the chain and check it for
consistency in any case, since the alternative is to accept the data
without any checks at all.

If an RP sees a site advertise two algorithms and both are trusted,
the RP should probably choose one and accept the records if
validation succeeds. Otherwise it should make a second attempt with
the other algorithm, but the site cannot rely on that.
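
Putting those three cases together, a minimal sketch of the decision
logic in Python; the Result labels and the validate callback are
hypothetical, not any real resolver's API:

from enum import Enum

class Result(Enum):
    SECURE = "secure"      # chain validated under a trusted algorithm
    BOGUS = "bogus"        # trusted algorithm(s) advertised, none validated
    INSECURE = "insecure"  # no trusted algorithm; data taken without trust

def evaluate(advertised, trusted, validate):
    """Hypothetical RP decision logic for the cases above.

    advertised -- algorithms the zone signs with (from its DNSKEY/DS set)
    trusted    -- algorithms this relying party accepts as secure
    validate   -- callable(alg) -> bool, True if the chain verifies
    """
    usable = [alg for alg in advertised if alg in trusted]

    if usable:
        # Trusted algorithm(s) present: ignore the untrusted ones
        # entirely, even if validation under a trusted one fails.
        if validate(usable[0]):
            return Result.SECURE
        # With two trusted algorithms, a second attempt is possible,
        # but the zone cannot rely on every resolver making it.
        for alg in usable[1:]:
            if validate(alg):
                return Result.SECURE
        return Result.BOGUS

    # Only untrusted algorithms: still process the chain as a
    # consistency check, since the alternative is no checks at all.
    for alg in advertised:
        validate(alg)  # result carries no trust either way
    return Result.INSECURE

The point of the sketch is only that the fallback to a second trusted
algorithm is the RP's choice, not something a zone operator can count
on.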


On Thu, Dec 9, 2010 at 8:44 AM, Edward Lewis <Ed.Lewis@neustar.biz> wrote:
> At 10:59 +0100 12/9/10, Matthijs Mekking wrote:
>
>> Although this is local policy, I also think it is bad policy for a
>> validator to allow algorithm compromises and badly signed zones. Thus, I
>> would go for a MUST here.
>
> The reason it can't be a MUST is that a zone may be signed with algorithm
> 42.  If a resolver does not know algorithm 42, how can it determine whether
> the signature using algorithm 42 is valid or invalid?
>
> Let's say a zone is signed by algorithms 5 and 7.  (A choice made
> specifically because the two algorithms have the same mathematical
> properties. ;))  What would you think if you received a set of data and
> signatures and discovered that the algorithm 5 signature validated but
> the algorithm 7 signature did not?  What if the reason were
> cryptographic?  What if the reason was that the algorithm 7 signature had
> simply expired (with or without NTP running)?  What if the reason was that
> the algorithm 7 signature had not yet entered its validity period?
>
> If you can establish a chain of trust to a data set, why withhold it from
> the application requesting it just because you find that there is also a
> signature that fails?
>
> Suppose a set of data has a legitimate signature in algorithm 5 that
> validates, and you see an algorithm 7 signature that does not.  What
> if the cause of this was that an attacker wanted to perform a Denial of
> Service by applying an intentionally bad signature (which is trivial to do)
> to guarantee a failed chain of trust?  It's like allowing someone to say
> "don't trust the authorities, I'll insert some doubt so you won't use the
> information."
>
> When you consider that it is far easier to insert harmful bad signatures
> than harmful bad data (because the latter requires a valid signature), you
> have to build the (or any security) protocol to be somewhat skeptical of
> its own potentially false positives.
>
> --
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Edward Lewis
> NeuStar                    You can leave a voice message at +1-571-434-5468
>
> Ever get the feeling that someday if you google for your own life story,
> you'll find that someone has already written it and it's on sale at Amazon?



-- 
Website: http://hallambaker.com/