Re: [DNSOP] New Version Notification for draft-pusateri-dnsop-update-timeout-01.txt

Tom Pusateri <> Fri, 22 February 2019 08:19 UTC

From: Tom Pusateri <>
Date: Thu, 21 Feb 2019 22:19:30 -1000
Cc: Mark Andrews <>, Tony Finch <>, dnsop <>, Paul Wouters <>, Joe Abley <>, Dick Franks <>
To: Ted Lemon <>

> On Feb 21, 2019, at 1:29 PM, Ted Lemon <> wrote:
> On Feb 21, 2019, at 2:24 PM, Mark Andrews <> wrote:
>> Implementation details are beyond the scope of RFCs.
> Indeed they are.  My point is that if you want to be careful of memory usage or disk usage, you can be—there is no need to use a hash.   In essence, requiring us to use a hash is specifying an implementation detail that needn’t be specified: you can in fact implement this using a hash, although I wouldn’t.   It would be nice if I were not required to implement it that way, since I think that’s not actually going to work reliably.
>> Also you mentioned caches which basically will never see these records unless they are queried for.
> I mentioned caches because they are by far the biggest consumers of resources—authoritative name servers have much smaller memory footprints.   I assume the reason you think using hashes is a good idea and not a premature optimization is because you’ve done a lot of work with caching name servers, and are seeing this discussion through that lens.   That’s the wrong lens to be seeing it through.   This is only relevant for authoritative name servers, and in that case, storing the whole RR-to-be-deleted is fine.

I’ve been mostly listening and learning from this discussion, which has been great. Thanks for all the input. Let me summarize what I’m hearing, and we will open issues to adjust the document.

1. We need a motivation section to better explain the purpose of the document.

2. The HASH was my idea to simplify the records by making them all the same. It appears that simplicity in this form was not noticed or not appreciated. :)

3. The HASH algorithm selection was intended to work long term. It was my hope that there would only ever be one algorithm, and that there would never be a case where one implementation supported an algorithm that another implementation did not. The HASH algorithm index was only intended to be used if a vulnerability were found in the ONE selected algorithm and it needed to be replaced. In that case, the old algorithm would be deprecated and everyone would switch to a new single algorithm. I am strongly opposed to having more than one HASH algorithm defined. Not being a security expert, and not being able to find any papers proving that I could take an existing algorithm like SHA-256, whose digest is 32 bytes, and safely shorten it to 16 bytes (using the first 16 bytes, the last 16 bytes, or 16 bytes from the middle), I opted to select an algorithm that already produced a 16-byte digest and had proven collision-resistance properties. Since some RDATA can be very short (A records), there are cases where there’s not a lot of data on which to base the hash value. This was another reason to start with a hash like SHAKE128. But from the sounds of it, people prefer SHA-256, so I will research its applicability in this case further (if a hash is even needed anymore).
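For anyone wanting to compare the two options concretely, both are in the Python standard library. A quick illustrative sketch (the 4-byte A-record RDATA is just a made-up example, not from the draft):

```python
import hashlib

# Example A-record RDATA: the four address bytes of 192.0.2.1.
rdata = bytes([192, 0, 2, 1])

# SHAKE128 is an extendable-output function: a 16-byte digest can be
# requested directly rather than truncating a longer fixed-size digest.
shake_digest = hashlib.shake_128(rdata).digest(16)

# The alternative being discussed: take SHA-256 (32 bytes) and keep
# only the first 16 bytes.
sha_truncated = hashlib.sha256(rdata).digest()[:16]

assert len(shake_digest) == 16
assert len(sha_truncated) == 16
```

Either way the wire size is the same; the open question above is only whether a truncated SHA-256 digest retains the collision-resistance properties we need.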

4. We are open to using RDATA instead of a hash. Or, as Mark suggested, we can define RDATA as one algorithm index and define a hash as another algorithm (now, or later if it ever becomes a problem). By adding the record type to the TIMEOUT instance, we have already eliminated most uses of the hash; it will only be needed in rare cases, so including large RDATA in the TIMEOUT record should also be rare.
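To make the shape of that idea concrete, here is a hedged sketch in Python. The method identifiers, the size threshold, and the function name are all hypothetical placeholders of mine, not anything defined in the draft:

```python
import hashlib

# Hypothetical method identifiers (NOT from the draft): one index for
# verbatim RDATA, another for a SHA-256 hash of the RDATA.
METHOD_RDATA = 0
METHOD_SHA256 = 1

def timeout_payload(rdata: bytes, threshold: int = 64):
    """Store small RDATA verbatim; fall back to a hash only when the
    RDATA is large. The 64-byte threshold is an arbitrary example."""
    if len(rdata) <= threshold:
        return METHOD_RDATA, rdata
    return METHOD_SHA256, hashlib.sha256(rdata).digest()
```

Under a scheme like this, common short records (A, AAAA) are carried verbatim and remain directly comparable, and the hash path only kicks in for unusually large RDATA.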

5. Storing the TIMEOUT information as resource records seemed like a convenient way to use an existing database to persist timeout information across restarts and to synchronize it with secondaries. It can certainly be stored in a proprietary database by each authoritative server vendor, but allowing them to interoperate seemed like a feature, and since they each already have a database that holds resource records, why create another database type? But if the consensus is that the TIMEOUT info shouldn’t be stored in the existing resource record database, and that authoritative servers should instead create a new database for this info, then that is fine. This document itself can TIMEOUT. :)