Re: [dnsext] Design team report on dnssec-bis-updates and CD bit

Anthony Iliopoulos <ailiop@lsu.edu> Mon, 12 July 2010 16:23 UTC

Date: Mon, 12 Jul 2010 11:15:47 -0500
From: Anthony Iliopoulos <ailiop@lsu.edu>
To: Andrew Sullivan <ajs@shinkuro.com>
Cc: namedroppers@ops.ietf.org
Subject: Re: [dnsext] Design team report on dnssec-bis-updates and CD bit
Message-ID: <20100712161547.GE20290@lsu.edu>
References: <20100709142108.GA68527@shinkuro.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20100709142108.GA68527@shinkuro.com>

Greetings all,

Just some thoughts that might help stir further discussion...

On Fri, Jul 09, 2010 at 10:21:08AM -0400, Andrew Sullivan wrote:

> Model 1: "always set"
> 
> This model is so named because the validating resolver sets the CD bit
> on queries it makes regardless of whether it has a covering trust
> anchor for the query.  The general philosophy represented by this
> table is that only one resolver should be responsible for validation
> irrespective of the possibility that an upstream resolver may be
> present and with TAs that cover different or additional QNAMEs.

This is a sane expectation (one resolver being responsible for
the validation, and only *that* resolver). Given that there has
been some desire to push the entire validation down to the stub,
this model holds no surprises. I assume the counterargument here
is: why would a validating resolver ever set the CD flag on
upstream queries when it has no covering TA for the QNAME? The
answer is the expectation of consistent (validation, resolution)
behavior, and the enforcement of policy: if a stub trusts a
resolver/forwarder, it also trusts that resolver's local policy,
i.e. the set of configured TAs and any kind of validation
restrictions.
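
To make that concrete, the model 1 decision for the outgoing CD bit
would reduce to something like the following sketch (plain python,
purely illustrative; the function and argument names are mine, not
from the report):

    def upstream_cd_model1(incoming_cd, is_validating):
        # Model 1 ("always set"): a validating resolver always queries
        # upstream with CD=1, whether or not it holds a covering trust
        # anchor; a non-validating forwarder simply copies the bit it
        # received from the stub.
        return True if is_validating else incoming_cd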

> Model 2: "never set"
> 
> This model is so named because it sets CD=0 on upstream queries for
> all received CD=0 queries even if it has a covering trust anchor.
> ("Never" is really too strong: obviously, if the query arrives at the
> resolver with CD=1, it must set CD when it performs the query as well.)
> The general philosophy represented by this table is that more than one
> resolver may take responsibility for validating a QNAME and that a
> validation failure for a QNAME by any resolver in the chain is a
> validation failure for the query.

This option places blind trust in all the intermediate validators.
It is a kind of "opportunistic" DNSSEC, in a sense, and it also
pushes the validation away from the stub, in a non-deterministic
manner.
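
For comparison, the model 2 decision would simply be (same
illustrative style as above):

    def upstream_cd_model2(incoming_cd, has_covering_ta):
        # Model 2 ("never set"): copy the received CD bit upstream
        # unchanged.  CD=0 stays CD=0 even when a covering trust
        # anchor is configured, and a received CD=1 must of course
        # remain CD=1.  has_covering_ta plays no role in the decision.
        return incoming_cd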

> Model 3: "sometimes set"
> 
> This model is so named because it sets the CD bit on upstream queries
> triggered by received CD=0 queries based on whether the validator has
> a TA configured that covers the query.  If there is no covering TA,
> the resolver clears the CD bit in the upstream query.  If there is a
> covering TA, it sets CD=1 and performs validation itself.  The general
> philosophy represented by this table is that a resolver should try and
> validate QNAMEs for which it has trust anchors and should not preclude
> validation by other resolvers for QNAMEs for which it does not have
> covering trust anchors.
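
Mechanically, this would amount to roughly the following, in the
same illustrative style as the sketches above:

    def upstream_cd_model3(incoming_cd, has_covering_ta):
        # Model 3 ("sometimes set"): a received CD=1 query is always
        # forwarded with CD=1.  For a received CD=0 query, set CD=1
        # only when a covering trust anchor exists (the resolver then
        # validates the answer itself); otherwise send CD=0 upstream,
        # leaving validation to the resolvers above.
        return True if incoming_cd else has_covering_ta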

It really looks like this is a policy issue. The real question here
is: should the standards offer flexibility, or enforce one of the
above models? What are the implications of not officially settling
on a single model? Under which scenario would it be desirable to
have more than one validator between a stub and the authoritatives,
configured with differing sets of policies (assuming these
validators are under the same domain of control)?

What if the validators are *not* governed by the same policy, and
are under the control of separate parties? Would models 2 and 3
then be acceptable at all (i.e. relying on an unknown or
disagreeing third-party policy for validation), rather than simply
complying with the local policy and failing resolution if the local
policy leads to it? Isn't that the purpose of a policy, especially
for a security-related protocol?

Models 2 and 3 can lead to some "unexpected" conditions. What if
one of the upstream intermediate validators has a stricter policy
than the "local" validator? What about when it has a more liberal
policy?
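
As a concrete (entirely made-up) example of the stricter-upstream
case under model 2: the upstream holds a TA covering the QNAME,
deems the answer bogus under its own policy and returns SERVFAIL,
so the stub never sees an answer that the local validator's policy
would have accepted. A toy simulation (not real DNS; all names and
behavior are hypothetical):

    def upstream_validator(qname, cd):
        # Stricter upstream: it has a TA covering "example." and,
        # under its own policy, answers below it validate as bogus.
        if not cd and qname.endswith("example."):
            return ("SERVFAIL", None)
        return ("NOERROR", "192.0.2.1")

    def local_resolver(qname, stub_cd, model):
        if model == 2:
            # Model 2: forward the stub's CD=0 unchanged, so the
            # upstream policy vetoes the answer before the local
            # validator ever sees the RRsets.
            return upstream_validator(qname, cd=stub_cd)
        # Model 1: always CD=1 upstream, validate locally under the
        # local policy (which, in this scenario, accepts the answer).
        return upstream_validator(qname, cd=True)

    print(local_resolver("www.example.", stub_cd=False, model=2))  # SERVFAIL
    print(local_resolver("www.example.", stub_cd=False, model=1))  # NOERROR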

In general, since there are no protocol means of communicating TA
configuration and validation policies between validators, aren't
models 2 and 3 placing more faith in their upstreams than they
should? Would it even make sense, or would there be any benefit, to
have some kind of trust-relationship protocol between validators?
Models 2 and 3 appear to be implicitly "forcing" stubs to
transitively trust all of the upstream validators (and their
respective policies).

I would generally support Model 1, that is, push all validation as
close to the stub as possible (or to the validator closest to the
querier), and ensure that the DNSSEC security material is
propagated intact from the authoritative back to that first
validator under *all* conditions (models 2 and 3 can obviously
prevent that, since an upstream validation failure strips the data
before it ever reaches the local validator).


Regards,
Anthony