Re: [radext] Fwd: RE: Fwd: RE: Fwd: RE: Mail reguarding draft-ietf-radext-dynamic-discovery

Stefan Winter <stefan.winter@restena.lu> Thu, 25 July 2013 07:23 UTC


Hi,

>> It also just occured to me that your attack is too complex to be useful
>> for an attacker; it would require actual humans lured into trying to log
>> into realmA in masses, so that their login attempts overload proxy B. It
>> would require hundreds or thousands of humans trying to log in per
>> second so that proxy B feels the load.
> 
>   I suppose there are no companies with 10M customers, and imperfect
> administrators?

Even 10M customers distribute their login attempts over the course of the
day; any reasonably dimensioned server should be able to cope with that load.
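
Back-of-the-envelope (my own numbers, assuming each of those 10M customers
authenticates once a day and the attempts are spread evenly):

    10,000,000 logins / 86,400 s  =  ~116 logins per second

which a reasonably dimensioned RADIUS server handles comfortably; only a
pathological synchronisation of attempts changes that picture.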

And to be honest, I'd expect an ISP of that size not to make such
mistakes, or at least to find out about them *very* shortly after they
happen, because lots of customers will be unable to authenticate once
they do.

>   I'm not trying to make perfect security.  I'm trying to make it
> cheaper to catch mistakes.
> 
>> In short... once your server is up in the open, it can be contacted by
>> anyone, like it or not. An implementation needs to cope with that.
> 
>   Yes.  There's no question there.
> 
>   I would like to discover *misconfiguration* quickly.  A reverse IP
> check is cheap.  It's more expensive to have 100K customers hit your
> proxy because an admin mis-configured something.

Reverse IP checks are also "something completely different", and the
semantics of identifying a host by its PTR records are murky. Imagine a
proxy with IP 1.2.3.4 which is the AAA proxy for "cnn.com",
"microsoft.com", "yahoo.com" and "telekom.de" and has corresponding PTRs
to satisfy your proposed reverse-matching. Now someone looks up the
reverse of that IP address for entirely non-AAA reasons and finds that
the IP address claims to be all four domains. Huh?
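
To make the ambiguity concrete, here is a minimal sketch (Python with
dnspython; my own illustration, not anything from the draft) of what such
a reverse check would actually see for a proxy that publishes one PTR per
realm it serves:

    # Illustration only: a proxy at 1.2.3.4 with one PTR per served realm
    # answers a reverse lookup with all of them; a non-AAA observer has no
    # way to tell what those names are supposed to mean.
    import dns.resolver
    import dns.reversename

    def reverse_names(ip):
        """Return all PTR targets for an IP address (there may be several)."""
        query_name = dns.reversename.from_address(ip)
        try:
            answer = dns.resolver.resolve(query_name, "PTR")
        except dns.resolver.NXDOMAIN:
            return []
        return sorted(rdata.target.to_text() for rdata in answer)

    # My reading of the proposed reverse-matching check (hypothetical):
    def realm_matches_reverse(realm, ip):
        return any(name.rstrip(".").endswith(realm) for name in reverse_names(ip))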

Unless we invent a new PTR-like construct solely for the purpose of
catching AAA misconfigurations, I'm against using such a mechanism. And
frankly, inventing a dedicated DNS RR just for that seems like enormous
overkill to me.

I take your point on hammering the server with misconfigured discovery
entries though. I see it as a deficiency in the current spec,
particularly section 2.1.1.2: Definition of Conditions for Retry/Failure

That section basically states: if the client and server find out they
don't like each other, ignore the entry and try the next one.

That is fine in the moment, but it says nothing about how long the
target should stay ignored.

I've added a new last sentence to my working copy to ensure that there
is some backoff time:

"  If the TLS session setup to a discovered target does not succeed,
   that target (as identified by IP address and port number) SHOULD be
   ignored from the result set of any subsequent executions of the
   discovery algorithm at least until the target's Effective TTL has
   expired or until the entity which executes the algorithm changes its
   TLS context to either send a new client certificate or expect a
   different server certificate."


This provides an extra level of rate-limiting; well-behaved clients will
stop hammering the target for a while, independently of the NAI realm
that led them to that server. That is, one hotspot generates one TLS
session attempt and will then be quiet for a significant amount of time.
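
As a rough sketch of what a well-behaved client could do with that
sentence (my own pseudo-implementation, not normative text; all names are
illustrative), the client keeps a small negative cache keyed by (IP, port)
together with an identifier of its current TLS context:

    # Sketch of the proposed backoff: remember failed targets until their
    # Effective TTL expires or the local TLS context changes.
    import time

    class DiscoveryBackoff:
        def __init__(self):
            # (ip, port) -> (expiry time, TLS context identifier)
            self._ignored = {}

        def record_failure(self, ip, port, effective_ttl, tls_context_id):
            self._ignored[(ip, port)] = (time.monotonic() + effective_ttl,
                                         tls_context_id)

        def is_ignored(self, ip, port, tls_context_id):
            entry = self._ignored.get((ip, port))
            if entry is None:
                return False
            expiry, ctx = entry
            # Retry once the Effective TTL has expired or our TLS context
            # changed (new client cert or different expected server cert).
            if time.monotonic() >= expiry or ctx != tls_context_id:
                del self._ignored[(ip, port)]
                return False
            return True

Targets flagged by is_ignored() would simply be dropped from the result
set of any subsequent discovery run.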

I believe this takes the heat out of such pathological misconfigurations.

I know this conflates the Effective TTL timer with something that is
unrelated to DNS: the retry should really happen when the server side has
changed its TLS certificate, because then the connection might work. But
that condition cannot be signalled, so the timeout that DNS provides is
the best we have. The alternative would be to make that particular
backoff time a configuration variable.

Greetings,

Stefan Winter

-- 
Stefan WINTER
Ingenieur de Recherche
Fondation RESTENA - Réseau Téléinformatique de l'Education Nationale et
de la Recherche
6, rue Richard Coudenhove-Kalergi
L-1359 Luxembourg

Tel: +352 424409 1
Fax: +352 422473