Re: Transport requirements for DNS-like protocols

Michael Mealling <> Fri, 28 June 2002 14:19 UTC

Date: Fri, 28 Jun 2002 10:17:04 -0400
From: Michael Mealling <>
Subject: Re: Transport requirements for DNS-like protocols
In-reply-to: <77978829.1025258755@localhost>
To: John C Klensin <>
Cc: Michael Mealling <>, Patrik Fältström <>, Rob Austein <>,
Reply-to: Michael Mealling <>
Message-id: <>
MIME-version: 1.0
Content-type: text/plain; charset=us-ascii
Content-disposition: inline
User-Agent: Mutt/
References: <> <> <> <15430645.1025273478@localhost> <> <77978829.1025258755@localhost>

On Fri, Jun 28, 2002 at 10:05:55AM -0400, John C Klensin wrote:
> --On Friday, 28 June, 2002 08:44 -0400 Michael Mealling
> <> wrote:
> >> > It's not a big deal for the server to perform the entire
> >> > query operation again.
> >> 
> >> ...given the matching operation is cheap. I.e. what you don't
> >> say explicitly is that we should use as much as possible of
> >> the CPU on the client side.
> >> 
> >> This is one of the reasons why I when I came up with IDNA
> >> want the client to do the normalization etc, so the server
> >> can do bitwise comparison which make handling of hash tables
> >> easier.
> >>...
> > I have to agree. Our experience running .com suggests that,
> > due to  the scale of queries, any small percentage change in
> > processing requirements on the server side results in huge
> > swings in operational requirements that affect the entire
> > robustness of the system. 
> Mumble.  I don't question your data, but I think you have drawn
> a seriously-wrong inference from it.

Let me be clearer: I agree with Patrik's concern. I don't necessarily
agree with his conclusion about where normalization occurs and
when/where you do byte-level comparison, since those conclusions are
specifically concerned with DNS and IDN. Since we're doing something new
here, we have the ability to distribute the costs differently given
these possible optimizations:

> I suggest that we have a rather wide range of options for making
> databases perform acceptably.  They include:
> 	* Having them do as little as possible (that would be an
> 	extreme reading of Patrik's argument).
> 	* Distributing the data widely.
> 	* Using extensive replication.
> 	* Using sophisticated caching so as to keep some load
> 	out of the database entirely.
> 	* Designing the servers so that they function as
> 	efficiently as possible given the characteristics of the
> 	queries.

Completely agree.
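Patrik's point above -- normalize on the client so the server can do a
raw byte comparison -- can be sketched roughly as follows. This is only
an illustration: it uses Unicode NFKC plus case folding as a stand-in
for the actual IDNA nameprep profile, and the record value and names
are made up.

```python
import unicodedata

def client_normalize(name: str) -> bytes:
    # Hypothetical client-side canonicalization, done once before the
    # query leaves the client. (Real IDNA specifies nameprep; NFKC plus
    # casefold() is only a stand-in for illustration.)
    return unicodedata.normalize("NFKC", name).casefold().encode("utf-8")

# Server side: because every client sends pre-normalized bytes, a lookup
# is a plain hash-table probe on the raw key -- no per-query Unicode
# processing on the server, which is exactly what makes hash tables easy.
table = {client_normalize("Exämple"): "some-record"}

def server_lookup(wire_key: bytes):
    return table.get(wire_key)
```

The design choice is that the CPU cost of canonicalization is paid once
per query on each client, rather than millions of times per second on
the handful of servers.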

> I suggest that the inference we should draw is that our design
> assumptions about database size and organization should match
> usage patterns so that we can design and tune for efficiency.
> To tune such a system for the experience with COM would likely
> be the beginning of another mistake.

Yes, I am talking about usage patterns. I'm assuming that since we
are talking about something "above DNS", we have the ability
to design for the usage pattern that ".com" is being pressed into,
not for exactly how it has been done.

> Example:  The DNS model permits stub resolvers with no caching
> at all under client control or bound to the client.  Names are
> also bound to objects whose addresses/ definitions change on
> short notice, so typical TTLs are a day or less.  Now, contrast
> this with where IRNSS seems to be headed.  A faceted name is
> used to locate one or more DNS names, with user determination of
> which DNS name, or names, are relevant.  We assume that, once
> the user makes that choice, we will cache the query and its
> results on a per-user basis.  The high-volatility situation
> still occurs in the mapping between DNS name and resource, so,
> in most cases, IRNSS TTLs should be quite large, keeping the
> query rate per user on the IRNSS structure fairly low.  

Yes. The late binding to the correct host-node is done at the DNS
level with regard to the domain-name in the URI, and not at the
IRNSS (NRS in SLS) level....
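John's contrast -- long IRNSS TTLs over a per-user cache, with the
short-TTL volatility pushed down to the DNS mapping -- amounts to a
TTL-honoring cache on the client side. A minimal sketch (the query
string and record format are invented for illustration, not any real
IRNSS interface):

```python
import time

class TTLCache:
    """Per-user query cache: large TTLs on IRNSS records keep the query
    rate on the IRNSS servers low, independent of the short TTLs on the
    DNS records that the results point at."""

    def __init__(self):
        self._store = {}  # query -> (result, expiry timestamp)

    def get(self, query):
        entry = self._store.get(query)
        if entry is None:
            return None
        result, expiry = entry
        if time.monotonic() >= expiry:
            del self._store[query]  # stale: forces a fresh IRNSS query
            return None
        return result

    def put(self, query, result, ttl_seconds):
        self._store[query] = (result, time.monotonic() + ttl_seconds)

cache = TTLCache()
# A day-long TTL is plausible here because the volatile part (DNS name
# to address) is resolved separately, at DNS TTL granularity.
cache.put("facet:books/author=klensin", ["example.org"], ttl_seconds=86400)
```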

> The DNS query rate might benefit too, from fewer queries for names that
> don't exist, but we will have to see how that develops.   We
> certainly need to be careful about the architecture, but it is
> well-known how to distribute and scale canonicalization
> processes independent of database search, and how to do rapid
> database searches on properly-organized and indexed databases of
> much larger size than COM.  

Yes. I think I misspoke (as I laid out in another message). I think
the concern about the cost of server-side operations is an extremely
valid one as it relates to network performance and scalability.
Would I come to Patrik's same conclusion? No, since it's based on DNS
and not on doing something completely new like IRNSS.

> > I think that's a valuable enough guideline to warrant putting
> > it into at least the SLS document I'm working on...
> I'm increasingly seeing SLS as a server location mechanism and
> don't have a clear sense as to how often (per user, per hour,
> per user-level query, per... ?) that will occur.  Intuitively,
> it would seem to me that SLS, too, might want to focus as much
> or more on getting caching right than on particular
> characteristics of the server-side function.  If one can
> eliminate 20% of the queries, that should produce, all other
> things being equal, more performance improvement than making the
> server-side algorithms 20% more efficient (or more minimal in
> function).  And it is probably easier to achieve.
> But I don't have a clear enough picture of what you have in mind.

I'm simply stating it in the introduction as one of the design criteria.
I haven't really decided whether any parts of the design actually change.
I.e., I keep that design constraint in the back of my mind when I think
about this, but I never state it explicitly....
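John's 20% comparison above can be sanity-checked with back-of-the-
envelope arithmetic. The numbers below are made up; the point is only
that eliminating queries scales down *all* per-query costs, including
the network round trip that server-side tuning can never touch:

```python
# Hypothetical workload: 1000 queries/sec, with per-query cost split
# between server-side work and client-visible network overhead.
queries_per_sec = 1000.0
server_cost = 1.0      # arbitrary work units per query
network_cost = 0.5     # round-trip cost server tuning cannot reduce

baseline = queries_per_sec * (server_cost + network_cost)

# Option A: make the server-side algorithm 20% more efficient.
tuned = queries_per_sec * (server_cost * 0.8 + network_cost)

# Option B: cache away 20% of the queries entirely.
cached = queries_per_sec * 0.8 * (server_cost + network_cost)

print(baseline, tuned, cached)  # caching wins whenever network_cost > 0
```

Under these assumptions the cached system does strictly less total work
than the tuned one, which matches John's intuition that getting caching
right is the better first investment.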


Michael Mealling	|      Vote Libertarian!       | urn:pin:1      |                              |