Re: [DNSOP] More complete review of draft-grothoff-iesg-special-use-p2p-names-01

Andrew Sullivan <ajs@anvilwalrusden.com> Thu, 02 January 2014 11:42 UTC

Date: Thu, 02 Jan 2014 06:41:46 -0500
From: Andrew Sullivan <ajs@anvilwalrusden.com>
To: dnsop@ietf.org
Message-ID: <20140102114146.GA6913@mx1.yitter.info>
References: <20131231000412.GV4291@mx1.yitter.info> <52C323CE.3090909@grothoff.org> <20131231234421.GA5732@mx1.yitter.info> <52C48A4A.6090303@in.tum.de>
In-Reply-To: <52C48A4A.6090303@in.tum.de>
Subject: Re: [DNSOP] More complete review of draft-grothoff-iesg-special-use-p2p-names-01

On Wed, Jan 01, 2014 at 10:36:10PM +0100, Christian Grothoff wrote:
> 
> Well, my point is that if you expect everybody to first get an RFC through
> to document everything they are doing, expect squatting.

As I suggested at the start of this thread, that's actually a
different question, and one the IETF needs to engage broadly.  I am
not sure what to think.

On the one hand, I agree with you on the practical point: people might
just do this anyway.

On the other hand, the reason people historically felt this was ok was
that the root zone was (as zones go) small and stable.  That is, if
you stepped on a namespace at the top level, you had reasons to
suppose you weren't getting in anyone's way and precedent to expect
that you wouldn't face many negative effects in the future.  In the
present régime, however, those assumptions are both false.  So this
might actually just be an education problem for the rest of the world:
"You used to think this was safe.  It never was, but it used to work.
In future, you should expect it to break."  

> Nobody working on maintaining the software and/or no user community
> would be a strong indicator, and that's something one can find out
> by looking / asking people.

Which people?  We have learned (the hard way) on the Internet that any
assumption that something will go away is extremely brittle.  In
practice, once you've started using a namespace, it's there for good,
or at least a very long time.  Just for instance, A6 records have long
since been deprecated, but one continues to see queries for them.

> It is unclear to me how we even _could_ provide such a stable
> document, especially for Namecoin and I2P where I don't even
> see academic publications documenting the system on the horizon.

That's going to be awkward, then.  It's kind of hard to justify the
allocation of a TLD by this special procedure if it's for some sort of
protocol that might never go anywhere and might not even use that TLD
in the long run.

> For Tor/GNS, we might be able to point to such references, but
> of course then we have the problem that the protocols may still
> evolve (there are ongoing discussions about changes to ".onion"),
> so the "stable" links might then be outdated, whereas the links
> we provided might remain current.

That's an excellent way to document software systems but a poor way to
document specifications.  A specification document needs to say, "This
is what $x is for."  If the purpose of $x changes, you deprecate the
old and publish something new in order that someone else can come
along and code against whatever the spec is.  Should people coding
know about the changes that are happening while they're looking at the
"stale" spec?  Yes, certainly.  But at the very least, if one wants to
evaluate interoperability, it becomes necessary to point at something
stable.

Now, you might argue that this is a case of no interoperability.  But
that's not quite true: the specification is saying (for instance) that
an intermediate resolver SHOULD intercept these names and generate
RCODE=3.  That's a testable question of interoperation.  Could it
change?  Who knows?  We need a stable reference in order to know what
the behaviour is supposed to be at a given time.
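
To make that concrete, here's a minimal sketch of the interception
behaviour, in Python with the dnspython library.  The suffix list is
just the six names the draft asks for; the function name and the exact
policy are my own illustration, not anything the draft specifies.

    import dns.message
    import dns.rcode

    # The six names requested by draft-grothoff-iesg-special-use-p2p-names.
    SPECIAL_USE_SUFFIXES = ("gnu.", "zkey.", "onion.", "exit.", "i2p.", "bit.")

    def maybe_synthesize_nxdomain(query):
        """If the query name falls under one of the reserved suffixes,
        return a synthesized response with RCODE=3 (NXDOMAIN); otherwise
        return None and let normal resolution proceed."""
        qname = query.question[0].name.to_text().lower()
        for suffix in SPECIAL_USE_SUFFIXES:
            if qname == suffix or qname.endswith("." + suffix):
                response = dns.message.make_response(query)
                response.set_rcode(dns.rcode.NXDOMAIN)
                return response
        return None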

I'm willing to help write stubby I-Ds, if need be, to document these
things.  They wouldn't need to describe everything about the various
systems, I think; just to describe the rules about the names in each
case.  I'll see if I can work up an example today or tomorrow and send
it off-list.

> Well, you're running a DNS operation doing this, whereas these
> are names that should not escape to DNS and users are supposed
> to be aware that using those names gives them additional privacy.
> Adding some DNS name as the suffix suggests to users that they're
> using the "traditional" Internet, and here it is important to
> make it clear that they are not.  And as it is not just a few
> names, just using the label also is not nice, hence "label.i2p".

I think this missed the point.  It could just as easily say
"label.i2p.invalid", for instance, which would guarantee that the
whole thing would already be covered by the .invalid rules.  The i2p
system would need to look for a name ending in "i2p.invalid", of
course, which is very slightly more work.  But once you're looking at
labels, peeling one off is nothing.

Now, I suppose some applications would check for names under .invalid
and stop immediately, knowing they're not allowed to be looked up.
For that reason, we might want to create a special space in (say)
.arpa for this.  Call it localdef.arpa.  Inside localdef.arpa is a
completely locally-defined namespace, and applications would be free
to do whatever they wanted there.  Then you could have
i2p.localdef.arpa.  Applications that don't know about localdef.arpa
still won't block because .arpa is a perfectly good domain.
Applications that do will know to strip off localdef.arpa and do stuff
according to their magic label.  This is _still_ not using the DNS.
It's just living comfortably inside the DNS namespace, which is not
what setting up random TLDs does.
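
To show how little work the peeling amounts to, here's a sketch that
would handle either variant -- "i2p.invalid" or "i2p.localdef.arpa".
Both suffixes are hypothetical (neither is reserved today), and the
function is purely illustrative.

    # Hypothetical suffixes: .invalid is real, but "i2p.invalid" and
    # "localdef.arpa" are only the proposals discussed above.
    LOCAL_SUFFIXES = (".i2p.invalid", ".i2p.localdef.arpa")

    def split_local_name(name):
        """If the name ends in one of the local suffixes, strip the
        suffix and return the remaining labels for the I2P code to
        resolve; return None if the name belongs to ordinary DNS."""
        n = name.rstrip(".").lower()
        for suffix in LOCAL_SUFFIXES:
            if n.endswith(suffix):
                return n[:-len(suffix)]  # e.g. "somepeer" from "somepeer.i2p.invalid"
        return None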

> Given that the code is GPL, why do you then allude to the fact
> that it might make lots of money for someone else? 

It seems pretty clear that most of the alternative-currency systems
tend to attract speculators at the beginning.  I note that namecoin
attempts to blunt this compared to bitcoin, but new currencies are
_always_ the domain of speculators at the beginning.  To someone
interested in the new-TLD business (and I assure you that neither my
employer nor I am), that could look like someone trying to make money
without "paying their dues".

I don't know how far I buy this argument; I'm just trying to point out
the collision here between the IETF allocation procedure and the
regular TLD allocation procedure at ICANN.  (I freely admit that the
problem is caused at least partly by the somewhat dubious "cost
recovery" amount of US$185,000.  If I were boss of the world, there'd
be a lot of things I'd change.  You may imagine that ICANN's rules
would not make my first pass :-). )

> get the value they (presumably) wanted.  However, you're right that
> this turns the "recursive DNS resolver" into a "recursive resolver".
> I'm not exactly sure why you think this would be so terrible, but
> I'm happy to debate this issue (I don't even have a particularly
> strong opinion).

Unfortunately, many people have had the bright idea of injecting
things into "the recursive server" in order to "satisfy user
expectations".  This causes a number of (widely-documented) problems,
and in any case relies on a faulty picture of the real data paths for
DNS data.  It's true that the model is mostly ok for >90% of all
traffic, but the problems show up in the minority.  Even a very small
minority of problems turns into a major headache for help desks --
usually not the help desk of whoever had the bright idea.  So, the IETF
cannot responsibly suggest yet another class of positive answer be
added to these intermediate resolvers.  NXDOMAIN seems to be harmless.
Everything else seems to cause trouble.

> > Quite apart from what happens in the presence of DNSSEC (I think it's
> > "fails completely"), I think it is in general bad advice to tell
> > people to populate their DNS caches with data from outside the DNS.
> 
> Well, it'd be a "resolver cache", not a "DNS cache" at this point ;-).

So what happens when that resolver gets a query from an end system
that is doing its own validation?  What if the CD bit is set on that
query?  Now, what if different applications on the same end system
have different resolver libraries, and one is asking for DNSSEC and
the other isn't?  Won't that be kinda confusing?  All this would need
to be spelled out in detail for an intermediate multimode resolver
that might synthesize DNS responses.  Yes, these are corner cases.
Those furry blobs you see in the corners of the DNS are not dust
bunnies.  They're monsters, and they bite.
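
To make just one of those corner cases concrete: an intermediate
resolver that synthesizes answers would, at a minimum, have to look at
the DO and CD bits before making anything up, because a validating
client will reject an unsigned invention.  A rough dnspython sketch --
the policy here is my guess, not something any document specifies:

    import dns.flags
    import dns.message

    def safe_to_synthesize(query):
        """Return False if the client signalled that it intends to do
        its own DNSSEC validation (DO bit set, or CD set), since a
        made-up answer carries no signatures and will fail that
        validation."""
        wants_dnssec = bool(query.ednsflags & dns.flags.DO)
        checking_disabled = bool(query.flags & dns.flags.CD)
        return not (wants_dnssec or checking_disabled)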

Best regards,

A

-- 
Andrew Sullivan
ajs@anvilwalrusden.com