Re: [DNSOP] More complete review of draft-grothoff-iesg-special-use-p2p-names-01

Christian Grothoff <> Wed, 01 January 2014 21:36 UTC

Date: Wed, 01 Jan 2014 22:36:10 +0100
From: Christian Grothoff <>
Organization: TU Munich

On 01/01/2014 12:44 AM, Andrew Sullivan wrote:
> On Tue, Dec 31, 2013 at 09:06:38PM +0100, Christian Grothoff wrote:
>> What kind of metric do you propose, and how do you propose to acquire it?
> From what you're saying about the different cases, it sounds like
> different metrics are going to be relevant.  For instance, it seems
> you could just count for .bit, but presumably the Tor examples could
> be worked out from rough estimates of the size of the Tor network.  

Yes, but to what end?

>> And again, a key question for me is, if you really want to _encourage_
>> people to _first_ deploy at large scale and _then_ reserve the name.
> No, I don't.  What has happened here is, frankly, that the namespace
> was squatted on, and you're trying to normalize.  I think that's
> valuable and I want to support it.  But what would have been _better_
> is to have a discussion early when the names were being built into
> protocols.  How to do that is also a problem, but maybe if we can make
> it relatively painless in this case it will be more obvious that it
> could be less hard in future when it's needed.

Well, my point is that if you expect everybody to first get an RFC
published documenting everything they are doing, you should expect squatting.

>> so a better question might be if there should be a process
>> to "unreserve" names should they fall into disuse.
> How would you know?  If you can't count users of .gnu now (don't even
> know what "users" means, indeed), how would you know when it was safe?

Having nobody maintaining the software and/or no user community
would be a strong indicator, and that is something one can find out
by looking around and asking people.

>>> _Many_ of the references seem to be works in progress or unstable web
>>> pages.  That needs to be fixed for this document to be a useful
>>> specification.
>> I was told that recently moved to  Which other
>> pages are unstable in your experience? ("Worked for me...").
> Oh, sorry.  I meant the other "stable": normally, we want the
> references in documents (and require the normative references) to be
> "stable references" in the sense that the target can't change.  So a
> URI isn't enough -- we need a permanent reference to a versioned
> document.  (This is the reason RFCs can't be changed once they're
> published.  They're an archival series.  If you want to "update" an
> RFC, you replace it with a new one.)

It is unclear to me how we even _could_ provide such a stable
document, especially for Namecoin and I2P where I don't even
see academic publications documenting the system on the horizon.
For Tor/GNS, we might be able to point to such references, but
of course then we have the problem that the protocols may still
evolve (there are ongoing discussions about changes to ".onion"),
so the "stable" links might then be outdated, whereas the links
we provided might remain current.

So even in the cases where "stable" links may exist, I'm
not convinced that only providing such "stable" links is
desirable.  But we could try to add additional references
to more stable design documents in cases where these exist.

>>>     […] Note that ".gnu"
>>>    names SHOULD follow the naming conventions of DNS.
>>> What does this mean exactly? 
>>> Is this the LDH rule?
>> Yes, in the sense of "SHOULD".
> Ok.  This is probably the right interpretation of the STD 13
> "preferred syntax" (== LDH rule).  What STD 13 says, in effect, is
> that you can put anything in there, but things will tend to work
> better if you follow LDH.  (Note that RFC 1123 actually relaxed the
> rule, because originally labels couldn't start with a number either.)
> What is the reason for continuing this tradition in GNS?  (That's
> probably out of scope for this document and even this draft, but the
> LDH rule has been a PITA and it's one of the things that would be nice
> to ditch if we could.)

Just backwards-compatibility with applications.
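For illustration, the STD 13 "preferred syntax" (the LDH rule) discussed above can be checked in a few lines of C. This is a sketch under the STD 13 / RFC 1123 rules, not actual GNS code, and the function name is made up:

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative sketch (not GNS source): check a single label against
 * the STD 13 "preferred syntax", as relaxed by RFC 1123 to allow a
 * leading digit: only letters, digits and hyphens, no leading or
 * trailing hyphen, and 1..63 octets. */
static bool
is_ldh_label (const char *label)
{
  size_t len = strlen (label);

  if ((0 == len) || (len > 63))
    return false;
  if (('-' == label[0]) || ('-' == label[len - 1]))
    return false;
  for (size_t i = 0; i < len; i++)
    if (! isalnum ((unsigned char) label[i]) && ('-' != label[i]))
      return false;
  return true;
}
```

A "SHOULD"-level rule like the one in the draft would log or warn on a failed check rather than reject the label outright.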

>>> Is GNS subject to IDNA?
>> Yes.
> Wow.  Are you sure you want to buy that much pain?

Well, again it's about backwards-compatibility with applications.
Browsers give us IDNA, so we deal with that.

>>> Or are labels just bags
>>> of octets?  
>> No.
> Why not?  If the LDH rule is only SHOULD, then what do you do with
> octets outside that range?  

Here's the short answer:

    idna_to_ascii_8z (label, &output, IDNA_ALLOW_UNASSIGNED)

Everything else goes.

>>> Are they only 63 octets each, with a maximum 255 limit?
>> Yes, in the sense of "SHOULD". However, if the DNS protocol is not
>> used ("pure" GNS resolution with GNS-enabled applications where
>> a DNS packet is never created) it is theoretically permitted for
>> an application to not enforce these constraints.
> This advice is worth what you're paying for it :) but I think that's
> going to hurt you in the long run.  You're going to get
> interoperability problems between people who actually enforce 63/255
> and people who don't.  Again, out of scope for this.

Well, right now our code enforces the 255 limit on everything, but
I wanted to leave a loophole in case that limit ever becomes a
problem in the future, for example if we ever need to resist
attacks by quantum computers and need much longer public keys.

>>  Also, yes, in theory appending
>> "f97f3b153fed6604230cd497a3d1e9815b007637.exit"
>> even once can cause one to run over the 255 limit on DNS names,
>> in which case the answer is "tough luck, won't work".
> Ok.  It would be good to note that.
> For the whole issue, then, I'd suggest something along these lines:
>     Because every name ending in .exit is at least possibly a valid
>     hostname in itself, it should be noted that it is theoretically
>     possible for those names to be picked up and used in the construction
>     of .exit names again.  There is no reason to do this, but it is
>     possible.  In constructing such a name, it is possible to exceed
>     the total length limit for DNS names.  Such names may fail or
>     behave unpredictably when used with DNS resolvers.

Fine with me.
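For illustration, the 63/255 limits and the ".exit" overflow case discussed above can be sketched as follows. This is hypothetical checking code, not GNUnet or Tor source; for simplicity it applies the 255-octet wire-format limit to the dotted presentation string:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* DNS limits from STD 13: each label at most 63 octets, the whole
 * name at most 255 octets (wire form; the sketch applies it to the
 * dotted string for simplicity). */
#define DNS_MAX_LABEL 63
#define DNS_MAX_NAME  255

/* Hypothetical check (not GNUnet or Tor source): does a
 * presentation-form name fit within the DNS limits? */
static bool
fits_dns_limits (const char *name)
{
  if (strlen (name) > DNS_MAX_NAME)
    return false;
  const char *start = name;
  for (const char *p = name; ; p++)
    {
      if (('.' == *p) || ('\0' == *p))
        {
          if ((size_t) (p - start) > DNS_MAX_LABEL)
            return false;
          if ('\0' == *p)
            break;
          start = p + 1;
        }
    }
  return true;
}
```

Appending the roughly 46 octets of ".f97f3b153fed6604230cd497a3d1e9815b007637.exit" to a name that is already near the limit is exactly the overflow case where "tough luck, won't work" applies.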

>> Well, if I had a 4,000-entry /etc/hosts file shared with tens of
>> thousands of other users, I think I would also try to put all of
>> those entries under some special TLD to ensure everyone knew that
>> I was using names from that list, as opposed to ordinary DNS names
> Why?  My employer runs, among other names.  We have way
> more than 4,000 names under there.  Why wouldn't that work just as
> well in this case?

Well, you're running a DNS operation doing this, whereas these
are names that should not escape to DNS and users are supposed
to be aware that using those names gives them additional privacy.
Adding some DNS name as the suffix suggests to users that they're
using the "traditional" Internet, and here it is important to
make it clear that they are not.  And since it is not just a few
names, using the bare label alone is not nice either, hence "label.i2p".

>> Well, I always thought of Namecoin and bitcoin as money-burning
>> activities, but feel free to invest in Namecoin then ;-). 
> Snort.  I didn't say it'd make money _for me_!

Given that the code is GPL, why do you then suggest that it
might make lots of money for someone else?  My point was that
if you really believe it to be that profitable, you should
make it work for you.  If you believe that there is not much money
to be made, then suggesting that the users should pay 150k for
using some GPL'ed P2P software makes little sense.

>> I agree with you that the _technical_ reasons for a TLD are likely
>> the weakest for Namecoin, and I suspect the marketing/usability
>> concerns were the dominating reason.  But again, we're documenting
>> how it was deployed.
> Yes.  But this is one of those joints between mere protocol and
> policy, because some time ago ICANN became responsible for the policy
> of root zone operation.  (The analogy with Apple is a good one -- the
> deployment of .local was, frankly, a similar case of
> namespace-squatting.)  Nobody cared about this before ICANN adopted an
> in-principle indefinite expansion of the root zone, but now we have a
> problem.
> Getting evidence of the amount of actual use (however we work that
> out) will go a long way in the argument, though.  On the other hand,
> suppose the registration doesn't happen.  The reason that .local (or,
> probably more directly comparable in this case, .corp) should possibly
> be on the special-use registration list is because we know that there
> are services using those names and that there will be security and
> user-confusion side effects if those names are also registered in the
> DNS.  Similar arguments for this case wouldn't hurt.

Well, that is obviously the main argument for RFC 6761, and merely
by referencing that document this should be clear. But, as I think
you already suggested, we can stress that point more in the abstract.

>>> ajs@[]?  I bet
>>> not.  It's simply not true that names and numbers can everywhere be
>>> used interchangeably.
>> I don't quite see why this would not work in principle, assuming your
>> mail client is configured to route SMTP over Tor and the specified
>> exit supports exiting on port 25 to
> Because the [ ] notation signals that what's in there is an IP
> address, not a name.

Ah, got it. We'll change the wording to try to address this.

>> I guess the question is what you define as "DNS resolver".  For example,
>> is the "dns2gns" proxy --- which speaks DNS (to applications) and then
>> forwards
>> ".gnu" and ".zkey" to GNS and everything else to DNS --- a "DNS resolver"?
> Only partly.  This is similar to the confusion we had about recursive
> resolvers and caching name servers.  In the DNSSEC documents, you can
> see discussion about "the resolver side of a recursive name server"
> and "the name server side of a recursive name server" (or something
> similar).  Your example is of a general-purpose resolver system that
> does both DNS and GNS.  For the purposes of protocol, then, you have
> something like (ASCII art, sorry):
>     resolver ----> DNS ----> DNS resolver ---> DNS
>        |
>        +---------> GNS ----> GNS resolver ---> GNS
> For the purposes of implementation, these may well all be the same
> code.  But for the purposes of the protocol, we need to distinguish
> them.

Agreed.  We should probably add your ASCII art and adopt this kind
of terminology to clarify resolver vs. DNS resolver vs. GNS resolver.
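To illustrate that terminology, here is a rough C sketch of the dispatch a combined resolver like dns2gns performs at its front end. The enum and function names are made up for this example; this is not dns2gns source:

```c
#include <string.h>
#include <strings.h>

/* Illustrative sketch of the split described above: a general
 * "resolver" front end hands ".gnu"/".zkey" names to a GNS resolver
 * and everything else to a DNS resolver. */
enum backend { BACKEND_DNS, BACKEND_GNS };

/* Case-insensitive suffix match on a presentation-form name. */
static int
has_suffix (const char *name, const char *suffix)
{
  size_t nlen = strlen (name);
  size_t slen = strlen (suffix);
  return (nlen >= slen) && (0 == strcasecmp (name + nlen - slen, suffix));
}

static enum backend
pick_backend (const char *name)
{
  if (has_suffix (name, ".gnu") || has_suffix (name, ".zkey"))
    return BACKEND_GNS;
  return BACKEND_DNS;
}
```

In an implementation both branches may live in the same process, but the protocol-level distinction between the resolver, the DNS resolver, and the GNS resolver stays visible at exactly this dispatch point.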

>> I think you're mis-reading the point here, as the Namecoin block chain
>> is not obtained using the usual DNS protocol from some authority. So
>> DNS server operators that wanted to do this would NOT go out to some
>> IP address of a TLD operator that was granted ".bit" operation from
>> ICANN to fetch the block chain; instead, they'd have to participate
>> in the Namecoin network to observe block chain updates (a rather
>> daunting process, so this "MAY" is one that I personally don't expect
>> to see happening anytime soon by any major ISP).
> This is an even worse idea.  What you're suggesting there is that
> recursive name server operators should add the .bit name to the
> answers they give out; but they don't get that answer from the DNS.

I think we wrote this as "MAY", not "SHOULD".  The point was that
if you do this, you still don't break user expectations --- they
get the value they (presumably) wanted.  However, you're right that
this turns the "recursive DNS resolver" into a "recursive resolver".
I'm not exactly sure why you think this would be so terrible, but
I'm happy to debate this issue (I don't even have a particularly
strong opinion).

> Quite apart from what happens in the presence of DNSSEC (I think it's
> "fails completely"), I think it is in general bad advice to tell
> people to populate their DNS caches with data from outside the DNS.

Well, it'd be a "resolver cache", not a "DNS cache" at this point ;-).

>> I expect them to be handed around by end-users and within applications.
>> I do not expect them to be handed around within UDP packets on port 53
>> (with the exception of ".bit").
> But they will be:
>> I expect that this MAY happen, but if the draft is accepted, one
>> of our goals is to explicitly authorize DNS operators to prevent
>> this.
> You can't "prevent it".  You can just have them swallowed somewhere
> else in the DNS system.  They _will_ get passed around on port 53.
> Therefore,

Well, I mean that DNS operators can "prevent it" in some cases,
but of course not categorically.

>>> "resolved or did not".  But perhaps what you are saying is that these
>>> ought to be added to the list in RFC 6303, and that they should always
>>> answer NXDOMAIN?  I could certainly live with that.
>> Yes, exactly.  The formulation did end up a bit unclear and should
>> be fixed.  Our intention was exactly to allow servers to immediately
>> return NXDOMAIN without going to the root.
> what I'd do is take a page out of RFC 6303, or follow the text in
> question 4 in the RFC 6761 discussion of .test:
>    4.  Caching DNS servers SHOULD recognize test names as special and
>        SHOULD NOT, by default, attempt to look up NS records for them,
>        or otherwise query authoritative DNS servers in an attempt to
>        resolve test names.  Instead, caching DNS servers SHOULD, by
>        default, generate immediate negative responses for all such
>        queries. […]

Yep, that's fine.
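The RFC 6761-style behaviour quoted above could be sketched like this. The TLD list matches the six names the draft reserves, but the function name and table layout are purely illustrative, not code from any resolver:

```c
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/* Sketch of the rule quoted above: a caching server recognizes the
 * six special-use P2P TLDs from the draft and, by default, generates
 * an immediate negative response instead of querying the root. */
static const char *special_p2p_tlds[] = {
  "gnu", "zkey", "onion", "exit", "i2p", "bit"
};

static bool
warrants_immediate_nxdomain (const char *name)
{
  const char *dot = strrchr (name, '.');
  const char *tld = (NULL != dot) ? dot + 1 : name;
  size_t n = sizeof special_p2p_tlds / sizeof *special_p2p_tlds;

  for (size_t i = 0; i < n; i++)
    if (0 == strcasecmp (tld, special_p2p_tlds[i]))
      return true;  /* generate negative response, do not recurse */
  return false;
}
```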

>> Again, thanks for your comments, we'll try to address those in the
>> next iteration, many of these were really helpful.
> Glad to be of help.  
> Best regards (and happy 2014),

Happy 2014 to you, too.