Re: [DNSOP] Some thoughts on special-use names, from an application standpoint

Mark Nottingham <mnot@mnot.net> Sun, 29 November 2015 10:11 UTC

From: Mark Nottingham <mnot@mnot.net>
Date: Sun, 29 Nov 2015 21:11:31 +1100
Message-Id: <80FD8D43-1552-4E10-97CD-9781FED204F2@mnot.net>
To: dnsop@ietf.org
Archived-At: <http://mailarchive.ietf.org/arch/msg/dnsop/EsLihD1NKlTw-VV9aFWIPjdM5lE>
Cc: George Michaelson <ggm@algebras.org>
Subject: Re: [DNSOP] Some thoughts on special-use names, from an application standpoint

Hi George,

> I have a different perspective on this question Mark.
> 
> Firstly, I find use of .magic as the extreme RHS of a name, to force
> special behaviour architecturally disquieting.
> 
> I really do worry about what we think we're building when we encode this
> behaviour into name strings. It leads to all kinds of bad places. Some of
> them, like the homoglyph problems John Klensin has raised, simply don't
> have good answers (the assumption that the string .onion is the literal
> ASCII 'o' 'n' 'i' 'o' 'n' is not well founded).

On the face of it, it sounds like that's a problem shared by any application of DNS names. 
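To make the homoglyph point concrete: a label that renders like "onion" but contains a confusable code point is a different string entirely, and IDNA maps it to a punycode ("xn--") form rather than to the ASCII label. A minimal Python sketch (the Cyrillic small 'о', U+043E, stands in for any confusable character):

```python
# A lookalike label is not the ASCII label: the strings compare unequal,
# and IDNA encodes the non-ASCII form with the "xn--" ACE prefix.
ascii_label = "onion"
lookalike = "\u043Enion"  # Cyrillic SMALL LETTER O followed by "nion"

print(ascii_label == lookalike)        # the strings differ
print(lookalike.encode("idna"))        # ACE form starts with b"xn--"
```

This is exactly why the problem is shared by every application of DNS names, not just special-use ones: the confusion happens before any resolution policy applies.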


> We were here a long time ago, when we had pre-Internet mail and used things
> like .UUCP as magic break-out signals in email. This rapidly becomes the
> problem: it's bound to application-level decisions about when to honour
> magic, and when not, and it certainly doesn't avoid lower level
> gethostbyname() calls everywhere. So the .magic label winds up being
> half-true, depending.

.onion was the chosen approach precisely because nothing else but lookup and subsequent routing has to change; there are no other application-level decisions about .onion, and that's a feature. HTTP still works, TLS still works (once you can get a cert), links still work, HTML still works. Same-origin policy still works. 

The one difference is that .onion asks that applications and resolvers not leak requests, to avoid privacy issues with misconfigured or misused clients. This is defence in depth, not a hard requirement (after all, .onion has been running for several years without the benefit of that requirement).
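The behaviour being asked of resolvers here (per RFC 7686) amounts to a suffix check before any query leaves the host. A rough sketch, assuming a hypothetical pre-resolution hook (`should_resolve` is an illustrative name, not a real API):

```python
# Sketch of the RFC 7686 ask: recognise names under .onion before any
# DNS query is issued, and refuse the lookup instead of leaking it
# to upstream resolvers.
def should_resolve(name: str) -> bool:
    """Return False for names the DNS must never be asked about."""
    labels = name.rstrip(".").lower().split(".")
    return labels[-1] != "onion"

assert should_resolve("www.ietf.org")
assert not should_resolve("example.onion")
assert not should_resolve("EXAMPLE.ONION.")  # trailing dot, any case
```

Nothing above the lookup layer has to change, which is the point being made: the application sees either a successful Tor-routed connection or an ordinary resolution failure.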

This doesn't seem like "magic" to me; it's just fencing off part of the name space and asking others not to play there.


> Secondly, while I think I now understand some of the problems you have in
> web/apps layer (from talking to Wendy Seltzer) and I have sympathies about
> the syntactic constructs welded into code around URL forms, I think these
> problems are different to the architectural/layer violation explicit in
> forcing .magic names into the namespace.

"Architectural/layer violation" is not in and of itself a knock-down blow; what's important is whether the harm that's caused is greater than any other alternative approach available. I haven't seen much detail on that yet.

What is the actual harm, discounting aesthetics? I make that qualification because I very much acknowledge that this is messy. I don't yet see how it's creating technical problems, though. What you bring up is more akin to limitations of the approach -- limitations which the folks defining .onion chose to work within. Presumably, future approaches to distributed naming will make the same tradeoffs. 


> What really got me floored was the qualities of cryptographic protection
> which a project like TOR needs, and the implication a public/commercial CA
> service embedded in the browser TA set is the right path. I'm frankly
> horrified, even under certificate pinning, that we've gone to a space where
> any TA can claim to sign over .onion, and excluding the pinned
> applications, lead people into paths where their assumptions of TLS backed
> security are simply not true.

Again, that's not unique to Tor and .onion; it's a problem shared by the whole Web. This is not new; it's unfortunately the result of many choices over the years, and there are many efforts to improve it (e.g., CT/TRANS, pinning, etc.). Considering what's happening on the "normal" Web (e.g., banking), it's just as bad there, I'd say.


> As I understand things, TOR *wanted* .onion to get X509 PKI over the label
> in a browser, and the CA community refused unless its TLD status was
> confirmed. Is this the kind of rigour in technical process we expect, to
> make technical calls to pre-empt the namespace? (which btw, we passed
> otherwise to another body, reserving an RFC backed process to get names,
> but I think that was a hugely unwise decision)

Some would call this pragmatism. 


> To protect .onion certs, the TOR developers are going to have to code in
> cert pinning behaviour, all kinds of things, which frankly sound to me a
> lot like the cost of not having the name, or having a name buried under a
> 2LD instead.

Not necessarily. CT, pinning and similar approaches can help as well, and these are already getting deployed on the Web overall. Regardless, putting .onion into a 2LD doesn't help avoid these problems.


> So I come to a different place. I come to a place where requests for magic
> names look like violations of any spirit of an architectural view of the
> network, and where retaining some technical basis to reserve them looks
> like violations of the separation of functional roles between ICANN and
> IETF, absent very very clear, strong reasons to have the name held back.
> 
> I don't entirely see these reasons emerging. I see the opposite. I see
> expediency from apps communities, seeking to use .magic tricks to avoid
> cost explicitly in their layer, but at a cost of polluting the public
> commons.

What *exactly* is the cost? Everything you have brought up above (excepting "architecture") is regarding the potential impact upon applications -- impact that was considered and accepted by the people doing the registration of .onion. How does this affect DNS itself? 

And, are you really saying that anything that hasn't gone through ICANN is by definition "pollution"? 

Very thought-provoking analogy between Internet naming and enclosure there, BTW. 


> I am pretty firmly in the camp which says the revision of the RFC should be
> complete: we stop doing this, and people who want names go into ICANN
> process to establish them.

That's certainly a choice that can be made, but it seems pretty unilateral -- just as much as (for example) the W3C deciding that HTTP URLs in browsers don't always use DNS as a root of naming and setting up an overlay registry (as is seemingly already contemplated by the HTTP and URL specs). 

I'd hope that we could work together as a community to find some agreement here, rather than going straight to absolutist positions.

Cheers,

--
Mark Nottingham   https://www.mnot.net/