Re: [DNSOP] [Ext] Working Group Last Call for draft-ietf-dnsop-7706bis

Paul Hoffman <> Sat, 16 November 2019 09:34 UTC

From: Paul Hoffman <>
To: Wes Hardaker <>
CC: dnsop <>
Date: Sat, 16 Nov 2019 09:34:12 +0000

On Nov 14, 2019, at 7:49 AM, Wes Hardaker <> wrote:
> Tim Wicinski <> writes:
>> This starts a Working Group Last Call for draft-ietf-dnsop-7706bis
> Up front: as you know, I support this document fully, being the driver
> behind the project that's mentioned in the document.  That being
> said, I do want to raise a few points/suggestions:
> 1. "... because negative answers are sometimes cached for a much shorter
>   period of time."
>   I'd like to suggest amending that with "time and because many queries
>   received at the roots are for leaked or garbage strings, as well as
>   many that are generated algorithmically by applications attempting
>   to detect NXDOMAIN rewriting".  In particular, the vast, vast
>   majority of the junk is not from short negative-caching times, but
>   from the fact that the queries are frequently different random
>   garbage strings, and the vast majority of those are generated by
>   Chromium-based products (and probably others at this point too).

This is correct. Given that, we can eliminate the clause and leave the sentence as "Research shows that the vast majority of queries going to the root are for names that do not exist in the root zone."

> 2. The next paragraph talks about the benefits being weak for true
>   responses, due to caching, but as the root becomes more and more flat
>   as new gTLDs are added (with another round likely "soon"), this
>   statement will become more and more false.

I have seen no research that indicates that this assertion is true to any significant degree.

> 3. Though the document has "relaxed" the specification to not require a
>   loopback address/interface, this is still functionally a no-op since
>   it still requires only serving data from the localhost.  That's not
>   really relaxing it much; it just means functionally the same thing
>   with a slightly wider view of how to implement it.

Please note that the word "localhost" no longer appears in the document.

> 4. In the second to last paragraph in the introduction, it might be
>   worth adding 'Some resolver software supports being both an
>   authoritative server and a resolver but separated by logical "views",
>   allowing a local root to be implemented within a single process'.
>   And you could even list the appendixes, where most of the example
>   configuration actually makes use of this notion.

Good catch; done.

> 5. In section 3, there are broken sentences:
>   "In a system that is using a local authoritative server for the root
>   zone.  if the contents of the root zone cannot be refreshed before
>   the expire time in the SOA, the local root server MUST return a
>   SERVFAIL error response for all queries sent to it until the zone can
>   be successfully be set up again."
>   I think this is meant to be
>   "In a system that is using a local authoritative server for the root
>   zone, if the contents of the root zone cannot be refreshed before
>   the expire time in the SOA, the local root server MUST return a
>   SERVFAIL error response for all queries sent to it until the zone can
>   be successfully be set up again."
> 6. regarding the next paragraph: "In a resolver that is using an
>   internal service for the root zone.  if the contents of the root zone
>   cannot be refreshed before the expire time in the SOA, the resolver
>   MUST immediately switch to using non-local root servers."
>   You're prescribing implementation choices here, not the goal: the
>   goal is to prevent resolvers from returning stale data (though....
>   with serve-stale in effect too.......).  There are two immediately
>   obvious choices for how to do this: 1) switch to non-local root
>   service, as you state.  or 2) return SERVFAIL from the resolver.
>   E.g., I'm not sure existing implementations, including the config in
>   the appendix, follow this MUST.  To aggravate this issue further: I'm
>   not sure how a resolver would *know* that the data is stale from the
>   local root it is querying (there are hints of this later, but I have
>   issues with that too; see below).  Functionally, this text is
>   mandating an architecture that is not likely implemented today and is
>   not likely to be implemented in the future (I'm guessing, but happy
>   to be proven wrong).
>   Wouldn't it be better to state simply "Resolvers MUST NOT return
>   stale data."  But again, I'm not sure how to implement that without
>   requiring a very specific binding between the resolver and the local
>   root server, which is rather special-case.  You have already
>   prescribed that the local root server itself should return SERVFAIL
>   and I think that's really all you need to, or can, prescribe.

Yes, we are being prescriptive, similar to the way that RFC 7706 was. That was a requirement from the WG. We also decided to move away from requiring (or even mentioning) "return SERVFAIL" because that is much less helpful to the resolvers' users than falling back.

You make a good point that we don't know what the configurations in the appendix do. I'm not sure how to address that because they might change in the future in different directions.

Your concern that a resolver cannot tell if its root information is stale seems overblown. The root zone's SOA record says when it expires. Resolvers typically remember when they got a record so that they can make a TTL expiration decision; this is no different.
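That bookkeeping can be sketched as simple arithmetic against the SOA EXPIRE field (function and variable names here are illustrative, not from the draft):

```python
import time

def zone_is_expired(last_refresh, soa_expire, now=None):
    """Return True when a local copy of the root zone is past the SOA
    EXPIRE interval and should no longer be served (the draft says the
    local root server MUST answer SERVFAIL in that case).

    last_refresh -- Unix time of the last successful zone transfer
    soa_expire   -- EXPIRE field of the zone's SOA record, in seconds
    """
    if now is None:
        now = time.time()
    return now - last_refresh > soa_expire

# The root zone's SOA currently carries EXPIRE = 604800 (one week).
ROOT_SOA_EXPIRE = 604800

print(zone_is_expired(time.time() - 8 * 86400, ROOT_SOA_EXPIRE))  # True: copy is 8 days old
print(zone_is_expired(time.time() - 3600, ROOT_SOA_EXPIRE))       # False: refreshed an hour ago
```

The only state the server needs beyond the zone itself is the timestamp of the last successful refresh, which is the same thing a resolver already remembers to make TTL decisions.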

> 7. In the last paragraph in 3, it requires the administrator to check
>   whether or not the SOA is fresh and shows a potential mechanism for
>   doing that.  But even that is subject to failure, unfortunately.

See above.

> Again,
>   you're trying to get the resolver to be intelligent with data that it
>   doesn't have access to.  

If it has the zone, and it remembers when it got it, it has the data. 

> That requirement should be put into the
>   local root server instead, and again is covered by having it return
>   SERVFAIL on out-of-compliance data.  The resolver half of the
>   deployment cannot test this easily, but the (potentially
>   pseudo-)authoritative server can.  That's what the requirement should
>   be aimed at.

We can change the draft to this style if the WG wants, but I personally don't see the value of it. If you want to pursue this, it would be useful to have specific text to compare against.

> 8. The security considerations talk about limiting damage from broken
>   deployments to "any other system that might try to rely on an altered
>   copy of the root.".  I think, however, that the damage may be seen by
>   any client that depends on the resolver making use of a local root
>   server when the local root server becomes problematic.  I.e., if the
>   resolver is serving an enterprise and its localhost-accessible local
>   root has a problem, the entire enterprise will have a problem.

Good catch; updated.

> 9. In appendix A, can you please add "" as a domain
>   from which you can AXFR the zone too?  It's mentioned in the next
>   section.

The sources in the first part of Appendix A are well-known and long-established. As covered in the next section, is experimental.

> 10. For A.1, second paragraph about LocalRoot, can you replace with the
>    following text (assuming you agree with it):
>    The LocalRoot project (<>) is a service
>    that embodies many of the ideas in this document and is operated in
>    cooperation with USC/ISI's root server.  It distributes the root
>    zone by AXFR, but also offers DNS NOTIFY messages when the LocalRoot
>    system sees that the root zone has changed, providing potentially
>    faster updates to local root implementations.  It additionally
>    provides secured AXFR transfers, which helps defeat issues with
>    unsigned glue records being potentially modified in transit (see
>    Section 2).

I'm hesitant to do this because the text you give here differs from the text on the web site with respect to the status of the server, and with respect to the availability of TSIG.
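For concreteness, the TSIG-secured transfer that item 10 describes would look roughly like the following BIND configuration fragment, in the style of the draft's appendixes; the key name, secret, and server address are placeholders, not values published by any service:

```
key "local-root-tsig" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-SECRET==";   // placeholder, not a real key
};

zone "." {
    type slave;
    file "rootzone.db";
    masters {
        192.0.2.1 key "local-root-tsig";    // placeholder address
    };
    notify no;
};
```

Signing the transfer protects the zone contents (including unsigned glue) from modification in transit between the AXFR source and the local server.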

> 11. For B.1, example config for Bind 9.12 - why is this not bind 9.11 (too)?
>    It's still a viable platform and is supported till 2021 Q4.

ISC suggested that this was for BIND 9.12. If they want to assure that it works for BIND 9.11 as well, we can change this.

>    (also, it would be worth adding to that and other
>    similar configs as well)

These examples should probably not list experimental services.

> 12. For B.3 (bind 9.14 with mirror): it's worth adding a note that by
>    using the mirror implementation you're actually using fewer upstream
>    sources, because it won't include the ICANN xfr (or localroot) sources.

That seems like a minor savings for a typical resolver.

> 13. B.3 says "when it is released" referencing 9.14.  It was released
>    early this year I believe.

Yep, and we missed that. Fixed now.

We'll put out a new draft in a few days when the submission window opens. For any of the responses above that you disagree with, sending text against the new version for WG discussion would be useful.

--Paul Hoffman