Re: [Doh] Proposal to close off these threads

Dave Lawrence <tale@dd.org> Mon, 11 June 2018 04:50 UTC

Return-Path: <tale@dd.org>
X-Original-To: doh@ietfa.amsl.com
Delivered-To: doh@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 31279130DFA for <doh@ietfa.amsl.com>; Sun, 10 Jun 2018 21:50:25 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.901
X-Spam-Level:
X-Spam-Status: No, score=-1.901 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id whO7YGmaISn4 for <doh@ietfa.amsl.com>; Sun, 10 Jun 2018 21:50:23 -0700 (PDT)
Received: from gro.dd.org (gro.dd.org [207.136.192.136]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 60E76130EDF for <doh@ietf.org>; Sun, 10 Jun 2018 21:50:23 -0700 (PDT)
Received: by gro.dd.org (Postfix, from userid 102) id E073C2AFD3; Mon, 11 Jun 2018 00:50:21 -0400 (EDT)
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <23325.65421.902726.322697@gro.dd.org>
Date: Mon, 11 Jun 2018 00:50:21 -0400
From: Dave Lawrence <tale@dd.org>
To: DoH WG <doh@ietf.org>
In-Reply-To: <1D917C05-2B74-4607-9EE2-55D367FF48B5@icann.org>
References: <1D917C05-2B74-4607-9EE2-55D367FF48B5@icann.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/doh/qfb9Z5dyeocv8EsXYpqWBMpYmSw>
Subject: Re: [Doh] Proposal to close off these threads
X-BeenThere: doh@ietf.org
X-Mailman-Version: 2.1.26
Precedence: list
List-Id: DNS Over HTTPS <doh.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/doh>, <mailto:doh-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/doh/>
List-Post: <mailto:doh@ietf.org>
List-Help: <mailto:doh-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/doh>, <mailto:doh-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 11 Jun 2018 04:50:26 -0000

As should surprise absolutely nobody, I don't think this would be good
to adopt.  It's trying to address a legacy issue that hasn't been
demonstrated and instead introduces a new one.  If it is what is
ultimately adopted I want it to be clear for the archives that the
decision was not made based on technical merits but rather
philosophical grounds that have been externalized when they could be
handled internally by an implementation.  It's kicking the can down
the road; we expect better of our politicians (who consistently fail
to come through) and I believe we can do better.

The legacy issue is fundamentally different from other cases where
we've had to be conservative in the DNS because we were rebuilding the
airplane in flight.  We had to interoperate with existing production
software in a heterogeneous environment using only the tools that were
given to us by RFC 1035, because messages would be pumped directly
into software that had been written under a different understanding of
the standards.

DoH, though, is a new tool that changes nothing at all about what
happens on DNS/UDP or DNS/TCP.  There is no good reason why legacy
code should accidentally end up dealing with wire-format messages
larger than 64k, because it has no standard way of receiving them
that isn't the result of new code.

I totally get that there are these legacy functions that you want to
reuse with DoH.  It makes completely rational sense, as does the fact
that they were coded with an understanding that they wouldn't deal
with messages larger than 64k. Yet even if DoH specifies no limits,
you can still reuse them.  The new code that you have calling those
functions can trivially enforce the limits that you want, either
encoding or decoding.

For encoding, nothing at all changes from your existing code.  DNS
server software already has to deal with the possibility that the data
producer might have asked for more data to be encoded than can
actually fit in the message.  It just keeps filling up the buffer
until it hits the 64k limit and it's done.
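To make that concrete, here's a minimal Python sketch of the encoding
side.  The record format and function names are invented for
illustration, not any real library's API; the point is only that the
caller's buffer-filling loop enforces whatever cap it wants, with no
help needed from the DoH spec:

```python
# Hypothetical sketch of a server assembling a response the way a
# DNS/TCP server already does: append records until the caller's
# chosen cap is reached, then stop.  encode_rr() is a toy encoder,
# not real wire format.

MAX_WIRE = 65535  # the 16-bit ceiling; local policy, not a DoH rule

def encode_rr(name: bytes, rdata: bytes) -> bytes:
    """Toy record encoder: length-prefixed name and rdata."""
    return bytes([len(name)]) + name + len(rdata).to_bytes(4, "big") + rdata

def build_message(header: bytes, records) -> bytes:
    buf = bytearray(header)
    for name, rdata in records:
        wire = encode_rr(name, rdata)
        if len(buf) + len(wire) > MAX_WIRE:
            break  # buffer full: same truncation decision made today
        buf += wire
    return bytes(buf)
```

Whether the cap is 64k or something larger is entirely the encoder's
call; the loop is identical either way.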

For decoding, you're going to have to be sanitizing Content-Length
anyway.  I'm confident that none of you would just blindly pass the
message body into a parser without checking the length, especially if
the draft says there's a 64k max.  You can make the exact same check
even if DoH says messages can be greater than 64k.  Your end result is
the same.  
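A sketch of that decoding-side check, again in Python with an invented
legacy_parse() standing in for whatever existing parser gets reused;
the 64k cap here is purely the server's own policy:

```python
# Hypothetical sketch: the new DoH-facing code validates the declared
# and actual body size before the legacy wire-format parser ever runs.

MAX_WIRE = 65535  # local policy; nothing in DoH itself need impose it

def legacy_parse(wire: bytes) -> dict:
    """Placeholder for an existing parser written to RFC 1035 sizes."""
    return {"length": len(wire)}

def handle_doh_body(content_length: int, body: bytes) -> dict:
    # Sanitize Content-Length, as any HTTP handler must do anyway.
    if content_length != len(body):
        raise ValueError("Content-Length does not match body")
    if len(body) > MAX_WIRE:
        raise ValueError("message exceeds this server's 64k policy")
    return legacy_parse(body)
```

The check is the same two lines whether the spec says 64k is the
maximum or says nothing at all about size.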

Without a clear technical objection, the other issues raised are
philosophical.

You don't think DNS messages should ever be greater than 64k, because
who needs messages that big anyway?  Fair enough, I respect that point
of view even despite having previously stated that I could make use of
messages that big.  I don't think email messages should ever be bigger
than a couple of megabytes either, but I can just set my server to
reject those messages.  As noted above, a DoH implementation can
easily do similar on either side of the transaction.

You don't want your resources used up on potentially large messages?
I get that, I hate when my resources get used up too.  Of course
that's not a new risk of unlimited-DoH; if you're telling me you have
insufficient cache control tools to manage that, I've just been told
of an attack vector exploitable by non-DoH DNS.

A message greater than 64k wouldn't be representable on DNS/UDP or
DNS/TCP?  This is an issue that goes back to the founding of the DNS;
there's always been the potential for a message to only be fully
available on one transport.  Not a new problem.

Setting a limit on DoH, even constrained to one particular media type,
creates a new legacy problem because if larger messages are eventually
deemed acceptable then you'll have an installed base that has all baked
in a limit per this standard.  I posit that even despite Paul's attempt to
make more clear the media type versus the substrate, the chances that
the limit is incorrectly associated as a property of https versus a
property of the media type are high.

Putting in the limit on this media type sends a signal that, despite
the ability of a DNS wire format to handle messages greater than 64k
and a new transport that is not hamstrung by the three decade old
design decisions, people should continue to write software that uses
this arbitrary limit in all contexts.

We already expect a new media type, JSON, that will not natively have
any sense of the size of a DNS wire-format encoding by which to
adjudge the propriety of its own encoding.  Are we going to hamstring
that too?  If not, why not?

The limits you need for your antique network of T1s and PDP-11s
shouldn't be foisted on people writing software for the 21st century.