[Doh] Are we missing an architecture? (was Re: DNS Camel thoughts: TC and message size)

Andrew Sullivan <ajs@anvilwalrusden.com> Fri, 08 June 2018 17:08 UTC

Date: Fri, 08 Jun 2018 13:07:44 -0400
From: Andrew Sullivan <ajs@anvilwalrusden.com>
To: doh@ietf.org
Message-ID: <20180608170744.GY11227@mx4.yitter.info>
References: <20180606093212.GA23880@server.ds9a.nl>
In-Reply-To: <20180606093212.GA23880@server.ds9a.nl>
Archived-At: <https://mailarchive.ietf.org/arch/msg/doh/oM5NPzFxTyAosR8qFcTd6AJklqQ>
Subject: [Doh] Are we missing an architecture? (was Re: DNS Camel thoughts: TC and message size)

Dear colleagues,

The discussion of the "DNS Camel" and the worry about max sizes had
led me to think about what exactly we think we're doing with DOH.  I
am not sure we have agreement on a goal, so I thought I'd try to make
this explicit.

The charter discussion went to some lengths, as I understood it, to
constrain the stuff we were doing effectively just to "the DNS we
always had, only over HTTPS".  I thought that's what this language
meant:

  The primary focus of this working group is to develop a mechanism that
  provides confidentiality and connectivity between DNS clients (e.g., operating
  system stub resolvers) and recursive resolvers.

To a person with a long history in the DNS, anything that says "DNS
clients" automatically means, "That vast, trackless wilderness of
half-baked implementations that might send a query for DNS and might
accept inbound answers."  DNS nerds assume, in general, that any
condition that has "always been true" is almost certainly deployed
somewhere, which is why we still have things like upward referrals,
EDNS version numbering failures, and so on.  There's even been a flag
day announced to try to cope with some of this, but it's anyone's
guess whether it'll work.

Part of the problem is that the DNS lives in a sort of ambiguous
place.  In the layer model, it's plainly an application.  But it
functions as a kind of connectivity infrastructure, which is why it
was handled in the INT area for so long.  Because it is so old and
crufty, and contains no proper versioning or negotiation or anything
like that, DNS nerds have always been ultraconservative about
operations and deployment.  The downstream consumers of DNS data (the
OS resolver libraries and so on) are often more than a little
primitive, and often turn out to contain assumptions that are not true
and that cause trouble.  Finally, the DNS is defined in a _lot_ of
RFCs, and not always in language that is entirely clear and unsubtle.
While the best-known implementations are high quality, some
implementers appear to have been given 30 minutes with RFC 1035 and
half a day of allotted coding time.

HTTPS is, of course, much more modern, and HTTP/2 has recently been
gone over with a great deal of care.  It is reasonable, therefore, to
make assumptions about what an HTTP/2-implementing system will do.
So, we might think that the right boundary to be drawing is around the
DNS subsystems in a few code bases, plus the transport to be used.
Anyone who is going to implement a DNS API client is going to follow
the rules and do things in a clean and modern way.  I think the
disconnect, for some people with a lot of experience in DNS, is that
this boundary is too tight: the dependent systems will actually not
expect whatever the RFCs say, but instead what they're used to
expecting based on three things they found in Google plus some code
from 1993 that they inherited and don't understand.

I don't have any trouble saying, "We are actually just going to insist
on assuming a clean, modern architecture; things that make assumptions
not justified by the letter of the RFCs -- including assumptions about
transport -- are going to be broken."  But I think that's the sort of
principle that really does need to be written down somewhere.  At the
moment, the I-D is too cute about this:

   The integration with HTTP provides a transport suitable for both
   existing DNS clients and native web applications seeking access to
   the DNS.

As several posters have already pointed out, this is only true if you
accept a somewhat unusual meaning of "existing DNS clients".  Those
clients are going to have assumptions in them, and they are
assumptions that are based on more than 30 years of deployment -- an
eternity on the Internet.
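
For concreteness, here's a rough sketch (mine, not the draft's, with a
placeholder resolver at https://dns.example/dns-query) of what a DOH
lookup amounts to, assuming the wire-format GET variant the I-D
describes: you base64url-encode an ordinary RFC 1035 query message and
ask for it back over HTTPS.

   # Sketch only: dns.example and build_query are placeholders, not
   # anything defined in the draft.  Standard library, Python 3.
   import base64, struct, urllib.request

   def build_query(name, qtype=1):                      # qtype 1 = A
       # 12-octet header: ID=0, RD=1, one question, nothing else.
       header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
       qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                        for label in name.rstrip(".").split("."))
       return header + qname + b"\x00" + struct.pack("!HH", qtype, 1)

   msg = build_query("www.ietf.org")
   dns_param = base64.urlsafe_b64encode(msg).rstrip(b"=").decode("ascii")
   req = urllib.request.Request(
       "https://dns.example/dns-query?dns=" + dns_param,
       headers={"Accept": "application/dns-message"})
   with urllib.request.urlopen(req) as resp:
       answer = resp.read()   # an RFC 1035 message, same as ever

The bytes inside are exactly the DNS a stub has always produced; only
the framing and the transport change.  That is the "clean, modern"
client.  The question is what every other consumer of those bytes
expects.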

Architecture inheres not only in how the elements of the thing fit
together, but also in how the thing fits with everything else.  In this
case, we have to fit with the deployed Internet systems.  It is
perfectly fine to decide that 64k is not enough for anybody.  But if
that's an assumption we're going to modify, then I think we are in
fact specifying a new architecture for the DNS, and we need to be less
glib about that.
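
To make the 64k point concrete: classic DNS over TCP prefixes every
message with a two-octet length field (RFC 1035, section 4.2.2), and
UDP answers that don't fit come back truncated with TC set.  A toy
sketch of that framing (the function name is mine, not from any spec):

   import struct

   def frame_for_tcp(message: bytes) -> bytes:
       # RFC 1035 4.2.2: a two-octet length precedes the message, so
       # nothing larger than 65535 octets can even be expressed.
       if len(message) > 0xFFFF:
           raise ValueError("message exceeds 65535 octets")
       return struct.pack("!H", len(message)) + message

An HTTP message body carries no such ceiling, which is precisely why
relaxing the limit is an architectural change and not merely a change
of transport.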

Best regards,

A

-- 
Andrew Sullivan
ajs@anvilwalrusden.com