Re: Transport requirements for DNS-like protocols

John C Klensin <> Fri, 28 June 2002 14:06 UTC

Date: Fri, 28 Jun 2002 10:05:55 -0400
From: John C Klensin <>
Subject: Re: Transport requirements for DNS-like protocols
To: Michael Mealling <>, =?ISO-8859-1?Q?Patrik_F=E4ltstr=F6m?= <>
Cc: Rob Austein <>,
Message-id: <77978829.1025258755@localhost>

--On Friday, 28 June, 2002 08:44 -0400 Michael Mealling
<> wrote:

>> > It's not a big deal for the server to perform the entire
>> > query operation again.
>> ...given that the matching operation is cheap.  I.e., what
>> you don't say explicitly is that we should use as much of the
>> CPU on the client side as possible.
>> This is one of the reasons why, when I came up with IDNA, I
>> wanted the client to do the normalization etc., so the server
>> can do a bitwise comparison, which makes handling of hash
>> tables easier.
> I have to agree. Our experience running .com suggests that,
> due to the scale of queries, any small percentage change in
> processing requirements on the server side results in huge
> swings in operational requirements that affect the overall
> robustness of the system.
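Patrik's normalize-on-the-client point can be sketched in Python. This is a loose illustration only: real IDNA normalization is the nameprep/stringprep profile, not the bare lowercase-plus-NFC stand-in used here, and the zone contents are made up.

```python
import unicodedata

def client_normalize(label: str) -> bytes:
    # Hypothetical client-side canonicalization (NOT full IDNA nameprep):
    # lowercase, then Unicode NFC, then UTF-8 encode.  After this, the
    # server can index and compare labels as opaque byte strings.
    return unicodedata.normalize("NFC", label.lower()).encode("utf-8")

# Server side: a plain hash table keyed on the normalized bytes.
zone = {client_normalize("f\u00e4ltstr\u00f6m"): "192.0.2.1"}

def server_lookup(raw_query_bytes: bytes):
    # Bitwise (exact-byte) comparison only; no Unicode logic on the server.
    return zone.get(raw_query_bytes)
```

Because every client sends pre-canonicalized bytes, the server's hash table needs no case folding or combining-character logic at query time.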

Mumble.  I don't question your data, but I think you have drawn
a seriously wrong inference from it.

I suggest that we have a rather wide range of options for making
databases perform acceptably.  They include:

	* Having them do as little as possible (that would be an
	extreme reading of Patrik's argument).
	* Distributing the data widely.
	* Using extensive replication.
	* Using sophisticated caching so as to keep some load
	out of the database entirely.
	* Designing the servers so that they function as
	efficiently as possible given the characteristics of the
	data and the query load.
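The caching option is the one developed further below, so here is a minimal sketch of it: a TTL cache in front of the database, so repeated queries never reach it at all. All names and numbers are illustrative, not drawn from any real resolver.

```python
import time

class TTLCache:
    # Minimal sketch: answers served from the cache cost the
    # database nothing; only misses and expired entries fall through.
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}      # key -> (expiry_time, value)
        self.db_hits = 0     # how often we fell through to the database

    def lookup(self, key, db_query):
        now = time.monotonic()
        entry = self.store.get(key)
        if entry and entry[0] > now:
            return entry[1]  # cache hit; database untouched
        self.db_hits += 1
        value = db_query(key)
        self.store[key] = (now + self.ttl, value)
        return value
```

With a one-hour TTL, a hundred repeated lookups of the same key cost the database a single query; the other ninety-nine never leave the cache.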

Now, the DNS tries to do some of these things.  But the caching
isn't good enough, the distribution model may be inadequate to
today's load, the most popular implementation may not represent
today's state of the art in high-retrieval, low-update
databases, etc.  And, probably more important than all of these,
it was designed on the assumption of deep hierarchy and
relatively small zones.  COM breaks that assumption entirely.

I suggest that the inference we should draw is that our design
assumptions about database size and organization should match
usage patterns so that we can design and tune for efficiency.
To tune such a system for the experience with COM would likely
be the beginning of another mistake.

Example:  The DNS model permits stub resolvers with no caching
at all under client control or bound to the client.  Names are
also bound to objects whose addresses/definitions change on
short notice, so typical TTLs are a day or less.  Now, contrast
this with where IRNSS seems to be headed.  A faceted name is
used to locate one or more DNS names, with user determination of
which DNS name, or names, are relevant.  We assume that, once
the user makes that choice, we will cache the query and its
results on a per-user basis.  The high-volatility situation
still occurs in the mapping between DNS name and resource, so,
in most cases, IRNSS TTLs should be quite large, keeping the
query rate per user on the IRNSS structure fairly low.  The DNS
query rate might benefit, too, from fewer queries for names that
don't exist, but we will have to see how that develops.   We
certainly need to be careful about the architecture, but it is
well-known how to distribute and scale canonicalization
processes independent of database search, and how to do rapid
database searches on properly-organized and indexed databases of
much larger size than COM.  
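The effect of large IRNSS TTLs on the per-user query rate is easy to put rough numbers on. The figures below are illustrative back-of-envelope assumptions, not measurements of any deployed system.

```python
def irnss_queries_per_day(lookups_per_day: int, ttl_days: float) -> float:
    # With a per-user cache, at most one query per TTL interval
    # reaches the IRNSS server, no matter how often the user
    # consults the name locally.
    if ttl_days <= 0:
        return float(lookups_per_day)   # no caching: every lookup goes out
    return min(float(lookups_per_day), 1.0 / ttl_days)
```

A user consulting a faceted name a thousand times a day generates a thousand server queries with no caching, one per day with a one-day TTL, and about one per week with a seven-day TTL, which is what keeps the load on the IRNSS structure low.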

> I think that's a valuable enough guideline to warrant putting
> it into at least the SLS document I'm working on...

I'm increasingly seeing SLS as a server location mechanism and
don't have a clear sense as to how often (per user, per hour,
per user-level query, per... ?) that will occur.  Intuitively,
it would seem to me that SLS, too, might want to focus at least
as much on getting caching right as on the particular
characteristics of the server-side function.  If one can
eliminate 20% of the queries, that should produce, all other
things being equal, more performance improvement than making the
server-side algorithms 20% more efficient (or more minimal in
function).  And it is probably easier to achieve.
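A toy model makes the comparison concrete, under one assumption not stated above: each query carries some fixed protocol overhead (accept, parse, build the response) that algorithmic tuning cannot touch. The unit costs are invented for illustration.

```python
def server_load(queries: int, overhead: int, search: int) -> int:
    # Total server work: every query pays the fixed protocol
    # overhead plus the cost of the search itself.
    return queries * (overhead + search)

# Illustrative unit costs: 5 units of overhead, 5 of search, per query.
baseline = server_load(1000, overhead=5, search=5)  # 10000 units

# Eliminating 20% of queries via caching removes overhead AND search:
cached = server_load(800, overhead=5, search=5)     # 8000 units

# A 20% faster search algorithm leaves the overhead untouched:
faster = server_load(1000, overhead=5, search=4)    # 9000 units
```

If the per-query overhead were zero the two options would tie; any fixed cost at all tips the balance toward eliminating queries, which is why getting caching right pays at least as well as shaving the server-side algorithm.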

But I don't have a clear enough picture of what you have in mind.