Transport requirements for DNS-like protocols

Rob Austein <> Fri, 28 June 2002 10:35 UTC

This is the piece on DNS transport requirements to which I referred in
the previous message, written as input to a BOF in 1998 that I wasn't
able to attend in person.  It could be construed as off-topic for
IRNSS per se, but Michael did ask, so for what it's worth....

DNS has somewhat peculiar transport requirements (not news to you
folks, obviously).  There are several interwoven factors at work here:

a) The relatively huge number of DNS clients compared to the relatively
   small number of DNS servers, particularly as one gets close to the
   root of the DNS tree (root zone itself, TLDs, big SLDs, etc).

b) The idempotence of normal DNS queries, and the relatively small
   amount of work that a DNS server has to do in order to process a
   normal query.

c) The relatively low probability that any particular DNS response
   message will be dropped by the network.

Taken together, these factors suggest that DNS as we currently know it
is a classic example of what should be a "stateless" protocol.  The
server is a critical resource and expects to receive *lots* of
queries, a vanishingly small number of which will be from any single
client.  It's not a big deal for the server to perform the entire
query operation again.  The result is that the aggregate cost to the
network of using an unreliable transport and recomputing responses
when retransmission is necessary is less than the cost would be for
the server to maintain *any* kind of state on behalf of its clients,
including the kind of state required for even the most lightweight of
reliable transport protocols.  This has a number of implications,
perhaps the most troubling of which is the relative uselessness of
attempting to do conventional path-MTU discovery.
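The stateless model described above can be sketched in a few lines. This is a hedged illustration, not a real DNS implementation: the point is only that the server treats every datagram as a self-contained transaction and keeps no per-client state, so a lost response costs nothing but a client retransmission.

```python
# Minimal sketch of a stateless UDP responder (invented names, toy
# protocol): each datagram is answered independently, with no
# connection setup and no table of clients.
import socket

def serve_stateless(handle_query, host="127.0.0.1", port=53535,
                    max_queries=None):
    """Answer each datagram on its own; keep no per-client state."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    served = 0
    while max_queries is None or served < max_queries:
        query, client = sock.recvfrom(512)        # classic DNS/UDP limit
        sock.sendto(handle_query(query), client)  # recompute every time
        served += 1
    sock.close()
```

If the response is dropped, the server neither knows nor cares; the client simply asks again and the server recomputes the answer, which is exactly the trade-off argued for above.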

Therefore, horrifying as it may be to all right-thinking engineers,
one can make a very strong case that the correct transport protocol
for normal DNS queries is exactly what we have now, even though this
implies that the correct way of handling bigger DNS response packets
is via IP fragmentation.  This topic comes up regularly in the DNS
working groups, but so far the consensus has been that while UDP is a
terrible transport protocol for DNS, the known alternatives are worse.

There are other factors besides the ones that I've listed, but these
are the most interesting, because they're all subject to change as the
Internet evolves.

Factor (a) seems likely to get worse, not better, at least for the
near term.  For non-technical reasons, the overall trend in the DNS
today is towards a flatter namespace, which increases the load on the
servers close to the root (particularly the big TLD servers).  This is
bad network engineering, but the people making these decisions are not
engineers, and let's not go down that rathole :).

In the long term, if we ever get a real white pages protocol and
people stop caring about having cute DNS names, we're still going to
need an underlying system for associating long-term identifiers with
IP addresses.  At a technical level, such a system will probably look
an awful lot like the DNS, but perhaps it'll be sufficiently
decentralized that a reliable transport protocol wouldn't be such a
burden on the servers.

Factor (b) is probably the most interesting at a technical level.  One
of the design goals of the DNSSEC extensions was to allow RRs to be
signed offline rather than in real time during query processing.
There were several reasons for this, but one of the reasons was that
cryptographic operations are computationally expensive.  However, in
the case of transaction signatures and secure dynamic DNS update, some
cryptographic operations must be done in real time.  At the moment, we
expect these operations to be relatively infrequent, but if these
operations (or any other "expensive" operation which must be performed
on every query) become frequent enough, that could (theoretically)
change the current balance to the point where a reliable transport
protocol for DNS might pay for itself.  But it may never happen, since
we may be able to stay ahead of the game just by putting enough money
into the relatively small number of heavily loaded servers.
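The cost asymmetry between offline signing and per-query cryptography can be made concrete with a toy sketch. This is a hedged illustration only: the keys, names, and data are invented, and an HMAC stands in for both the offline RRset signature and the TSIG-style transaction signature, purely to contrast when the expensive work happens.

```python
# Offline signing (DNSSEC-style): expensive work done once when the
# zone is built.  Transaction signing (TSIG-style stand-in): fresh
# cryptography on every single response.  All names/keys are toy
# assumptions, not real DNSSEC or TSIG.
import hashlib
import hmac

ZONE_KEY = b"offline-zone-key"        # toy key, precomputation phase
SESSION_KEY = b"per-transaction-key"  # toy key, per-query phase

RRSETS = {b"www.example.": b"192.0.2.1", b"ftp.example.": b"192.0.2.2"}

# Precomputed once, offline; serving is then just dictionary lookups.
PRECOMPUTED_SIGS = {name: hmac.new(ZONE_KEY, rdata, hashlib.sha256).digest()
                    for name, rdata in RRSETS.items()}

def answer_offline_signed(name):
    """Per-query cost: two lookups, zero cryptography."""
    return RRSETS[name], PRECOMPUTED_SIGS[name]

def answer_transaction_signed(name):
    """Per-query cost: one fresh HMAC over the response, every time."""
    rdata = RRSETS[name]
    return rdata, hmac.new(SESSION_KEY, name + rdata,
                           hashlib.sha256).digest()
```

The first style keeps the per-query work trivial, which is what preserves the stateless trade-off; the second style is the kind of "expensive operation on every query" that could eventually tip the balance.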

Factor (c) is a little weird.  The current trade-off for DNS works
because the underlying network service is usually pretty good even
though it's "unreliable".  If this were to change significantly for
the worse, it would tip the balance in favor of a reliable transport
protocol for DNS.  A major shift away from hardwired links to noisy
radio links at the physical layer might do it, or a sufficiently bad
episode of widespread network congestion.  DNS is far from the only
thing that would break if unreliable service got significantly worse,
of course, but DNS bears close scrutiny because its transport model
does not degrade very gracefully.  Basic engineering paranoia suggests
that we should continue looking at candidate lightweight transport
protocols for DNS, in the hope of eliminating this risk.

One last observation.  There are a few things that could be done to
improve DNS's transport environment without imposing the burden of
keeping state on the server.  DNS clients usually talk to a relatively
small number of DNS name servers at any given time, so there's no
reason why the client couldn't keep some transport-related state on
the servers it's been talking to recently.  For example, since the
client handles all retransmissions, there's no reason why the client
couldn't use the same variance-based retransmission timeout
calculation that TCP uses.  Some DNS clients may already do this, but
it's not a requirement.  Perhaps it should be.
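The variance-based timeout calculation in question is the smoothed-RTT estimator TCP uses (Jacobson's algorithm, later codified as RFC 6298), and it lives entirely on the client side, so it costs the server nothing. A hedged sketch, using the standard constants but otherwise invented names:

```python
# Client-side retransmission timer in the TCP style: smoothed RTT plus
# a multiple of the smoothed mean deviation.  Constants (1/8, 1/4, 4x)
# are the classic Jacobson/RFC 6298 values; the class name and the
# floor/initial values are illustrative assumptions.
class RetransmitTimer:
    def __init__(self):
        self.srtt = None    # smoothed round-trip time, seconds
        self.rttvar = None  # smoothed mean deviation of the RTT

    def observe(self, rtt):
        """Fold one measured round-trip time into the estimate."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt

    def timeout(self):
        """Current retransmission timeout, with a 1-second floor."""
        if self.srtt is None:
            return 3.0  # conservative value before any measurement
        return max(1.0, self.srtt + 4 * self.rttvar)
```

A resolver would call observe() for each answered query to a given server and timeout() before each retransmission, keeping all of the state on the client, where the aggregate cost is negligible.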

Another form of state that the client could keep would be path-MTU
information.  Since the server isn't keeping any state and the client
can't dictate the server's behavior, this mechanism would necessarily
be in the form of a request from the client to the server as to how
big a response packet the client would like the server to try sending,
and as to whether or not the client wants the server to allow
fragmentation of the response message.  Some recent work in the DNSIND
group is heading in this general direction, but there are some details
of how the client should vary its use of this mechanism during
retransmission that haven't really been thrashed out yet.  If this
approach pans out, it won't completely eliminate DNS's reliance on IP
fragmentation, but it might reduce the associated risks to an
acceptable level.
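The client side of such a mechanism might look like the following sketch, which is hedged throughout: it is one plausible policy in the spirit of the EDNS work then underway in DNSIND, and the specific sizes and fallback ladder are invented for illustration, since exactly how the client should vary its advertisement across retransmissions is the unsettled detail noted above.

```python
# Hypothetical client policy: advertise a large response size first,
# then shrink the advertisement on each retransmission, on the theory
# that a silent timeout may mean the bigger response didn't survive
# the path.  The size ladder and retry count are assumptions.
FALLBACK_SIZES = [4096, 1400, 512]  # bytes; illustrative, not normative

def advertised_size(attempt):
    """Payload size to advertise on the Nth transmission (0-based)."""
    return FALLBACK_SIZES[min(attempt, len(FALLBACK_SIZES) - 1)]

def query_with_fallback(send_query, max_attempts=3):
    """Retry the query, shrinking the advertised size each time.

    send_query(size) sends one query advertising `size` and returns
    the response, or None on timeout.
    """
    for attempt in range(max_attempts):
        response = send_query(advertised_size(attempt))
        if response is not None:
            return response
    return None
```

Note that all of the state again lives in the client; the server merely honors whatever size the current query advertises, so the stateless property argued for earlier is preserved.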