Re: [dnsext] Re: I-D ACTION:draft-vandergaast-edns-client-ip-00.txt

Alex Bligh <alex@alex.org.uk> Thu, 28 January 2010 20:00 UTC

Date: Thu, 28 Jan 2010 19:51:27 +0000
From: Alex Bligh <alex@alex.org.uk>
Reply-To: Alex Bligh <alex@alex.org.uk>
To: Nicholas Weaver <nweaver@ICSI.Berkeley.EDU>
cc: Nicholas Weaver <nweaver@ICSI.Berkeley.EDU>, Paul Vixie <vixie@isc.org>, namedroppers@ops.ietf.org, Alex Bligh <alex@alex.org.uk>
Subject: Re: [dnsext] Re: I-D ACTION:draft-vandergaast-edns-client-ip-00.txt
Message-ID: <64E75C1F63E69611DE870231@Ximines.local>
In-Reply-To: <EEAAE4BF-BBA9-4141-BECC-A8440715597F@icsi.berkeley.edu>
References: <7c31c8cc1001271556w4918093er6e94e07cb92c4dc4@mail.gmail.com> <6184.1264657589@nsa.vix.com> <4966825a1001280807i768a33ccs98f809366bce33d8@mail.gmail.com> <48894.1264695230@nsa.vix.com> <50A91B20-5AC1-4819-91ED-E5141F068D48@wiggum.com> <52065.1264699087@nsa.vix.com> <FDD5D1103B8EA4D13C4A2C4C@Ximines.local> <EEAAE4BF-BBA9-4141-BECC-A8440715597F@icsi.berkeley.edu>

Nicholas,

>> * I am concerned that the IP included is the transport layer IP
>> visible to the client/resolver. That might have little to do
>> with the IP address from which a subsequent (e.g.) http connection
>> appears to originate.
>
> Only VERY rarely.  Very few users in our observations with Netalyzr are
> behind manually or automatically configured proxies which route traffic
> through a different address from the remainder of their network traffic
> (<3%), with 1/4 of those having the proxy in the same /24 subnet.

I am surprised if that is the case, and wonder whether you are reading
my sentence in a different way. So, e.g., take this setup:

                                 192.168.0.2
            192.0.2.0/24       |------------ Resolver
  Internet <-----------[NAT]---|
                               |------------ Client
                                 192.168.0.3

Now, the client IP address in the DNS query is going to be 192.168.0.3.
The resolver has no (reliable) way of finding out its external
IP address. It may heuristically determine that it is (say) 192.0.2.1,
but that presumes it doesn't have (e.g.) a forwarder configured.
However, the current draft says it should use 192.168.0.3, except
that that is RFC1918 space, so it should discard it. Best case,
the subsequent http connection (or whatever) appears to come from
somewhere in 192.0.2.0/24.

If you think my using NAT is cheating, then consider the example
of (say) a university with no NAT but a Squid proxy. What's
to say its internal resolvers are on the same /24 as the
Squid proxy?
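
To make that concrete, here is a rough sketch (Python, purely my own
illustration; the function name and behaviour are not taken from the
draft text) of the choice the resolver is left with in the NAT diagram
above: the only address it reliably sees is the stub's transport-layer
source, and once that turns out to be RFC1918 it has nothing
trustworthy to put in the option.

  import ipaddress

  def address_for_option(stub_source_ip, guessed_external_ip=None):
      # Illustrative only: what could a resolver put in an
      # edns-client-ip option, given the stub's source address?
      addr = ipaddress.ip_address(stub_source_ip)
      if not addr.is_private:
          return addr                  # globally routable: usable as-is
      # An RFC1918 (or otherwise non-global) source must be discarded;
      # the resolver's own external address is at best a heuristic
      # guess, and unknowable behind a forwarder or another NAT layer.
      if guessed_external_ip is not None:
          return ipaddress.ip_address(guessed_external_ip)
      return None                      # nothing trustworthy to send

  # In the diagram above:
  #   address_for_option("192.168.0.3")               -> None
  #   address_for_option("192.168.0.3", "192.0.2.1")  -> 192.0.2.1 (a guess)
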

>> * I am concerned about section 4.3 causing algorithmic
>> complexity, and an explosion in cache size. If NETMASK
>> is set small, you will get a ton of records for
>> www.google.com cached. That's going to be unworkable,
>> which will encourage large settings for NETMASK.
>
> Memory is cheap.  Seriously, who should worry when any resolver big
> enough to have this issue is going to be a cluster:  Lets say I have a
> open recursive resolver for EVERYONE, and 1000 names uses this and are
> queried by EVERYONE, and I have netmask set to /24.
>
> The total caching space required is 16 Giga-entries.

Fair point re memory, but cache misses cost more than memory. They result
in higher load upstream.
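
For scale, the 16 Giga-entries figure is just the 2^24 possible /24
prefixes in IPv4 space multiplied by 1000 names; the toy calculation
below (Python, and the per-entry size is purely a guess of mine, not a
number from this thread) spells that multiplication out.

  prefixes = 2 ** 24      # distinct /24 prefixes in IPv4 space
  names = 1000            # popular names, per the example quoted above
  entries = prefixes * names
  print(f"{entries:,} entries")    # 16,777,216,000 -- the "16 Giga-entries"

  bytes_per_entry = 100   # my guess, not a measured figure
  print(f"~{entries * bytes_per_entry / 2**40:.1f} TiB")   # roughly 1.5 TiB
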

>> I have to wonder whether the simpler solution to fix this is
>> not just to deploy more resolvers closer to the edge.
>
> Actually, that IS the fix:  Use the ISP's recursive resolver.
>
> This is necessary if you want to use something OTHER than the ISP's
> recursive resolver to work well on today's Internet.

I was presuming one problem is that the ISP's recursive resolver
is not topologically close to the source, given the draft authors
say:
:    When the Recursive Resolver does not use an IP address that appears
:    to be topologically close to the end user, the results returned by
:    those Authoritative Nameservers will be at best sub-optimal.

but if the whole problem is people using a DNS provider other than
the one they are "intended" to use ...

> Do you want third-party DNS providers to not suck for youtube, akamai,
> Google, and a good fraction of the net people use every day?  If so, you
> need this.

... my answer would be "don't do that then". Either:
a) run your own recursive resolver on your local box (which is easy
   enough to do); or
b) run one on your own subnet; or
c) accept that if you use someone else's local resolver you will
   suffer

I suspect the reasons for using third-party DNS providers are:

1. DHCP results in a dysfunctional resolver (e.g. a broken middlebox
   resolver). See draft-bellis-dns-recursive-discovery for
   a better way around that.

2. Their ISP's nameserver is broken/evil. I'm happy to leave that
   for the market to fix, rather than standardize a behavioural
   change in every caching resolver.

-- 
Alex Bligh