Re: [v6ops] Please review draft-donley-behave-deterministic-cgn

Chris Grundemann <C.Grundemann@cablelabs.com> Wed, 11 January 2012 17:45 UTC

From: Chris Grundemann <C.Grundemann@cablelabs.com>
To: "Joel M. Halpern" <jmh@joelhalpern.com>, Dave Thaler <dthaler@microsoft.com>
Date: Wed, 11 Jan 2012 10:45:41 -0700
Message-ID: <CB331433.4200%c.grundemann@cablelabs.com>
In-Reply-To: <4F04D370.7030403@joelhalpern.com>
Cc: v6ops v6ops WG <v6ops@ietf.org>, Behave Chairs <behave-chairs@tools.ietf.org>, "draft-donley-behave-deterministic-cgn@tools.ietf.org" <draft-donley-behave-deterministic-cgn@tools.ietf.org>
Subject: Re: [v6ops] Please review draft-donley-behave-deterministic-cgn

On 1/4/12 3:32 PM, "Joel M. Halpern" <jmh@joelhalpern.com> wrote:

>When I asked an operator about the port distribution, their answer was
>somewhat different.
>What they said was that when the users are idle, they don't use many
>ports.  But when the user is busy, they use a lot of ports.
>This makes repeated, largeish, block allocation sensible, but makes
>pre-allocation of a small number, and then individual allocation
>thereafter, a bad strategy, as it will basically require as much logging
>as the current approach.

We (the authors) actually agree that very small port allocations are a bad
idea with this methodology. Deterministic CGN should be used with low
compression ratios for the greatest gains over dynamic block allocations.

Remember that with just a 2:1 compression ratio (two subscribers sharing
each public IPv4 address), you double the number of customers supported by
your existing IPv4 allocation(s), and each of those customers still has
access to over 30,000 ports. However, you probably don't want to subject
ALL of your customers to CGN, so higher compression ratios are likely
needed. If you go all the way up to 64:1, you can put roughly 16,000
subscribers behind a single /24 and still dedicate about 1,000 ports to
each of them, with zero need for logging. You then add the ability to
overflow into a shared, block-allocated pool for extreme bursts (because,
again, we agree that some users are port hogs and are often bursty in
their usage). We feel that this approach truly is the best strategy, as
long as you keep the compression ratios in your network low enough that
the static/deterministic allocation covers the majority of your customers'
peak port utilization.
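
To make the arithmetic concrete, here is a rough Python sketch of the kind
of deterministic mapping being described. This is illustrative only, not
text from the draft: the sequential subscriber-to-address mapping, the
1,024 reserved well-known ports, and the deterministic_port_range() name
are my own assumptions.

    TOTAL_PORTS = 65536       # ports per public IPv4 address
    RESERVED_PORTS = 1024     # assume the well-known ports are never assigned

    def deterministic_port_range(subscriber_index, compression_ratio):
        """Return (public_ip_offset, first_port, last_port) for a subscriber.

        subscriber_index is the subscriber's 0-based position within the
        CGN's inside pool; compression_ratio is the number of subscribers
        sharing each public IPv4 address.
        """
        ports_per_sub = (TOTAL_PORTS - RESERVED_PORTS) // compression_ratio
        ip_offset = subscriber_index // compression_ratio  # which public address
        slot = subscriber_index % compression_ratio        # position on that address
        first_port = RESERVED_PORTS + slot * ports_per_sub
        return ip_offset, first_port, first_port + ports_per_sub - 1

    # 2:1 ratio: each subscriber keeps over 30,000 dedicated ports.
    print(deterministic_port_range(0, 2))       # (0, 1024, 33279)

    # 64:1 ratio: 64 * 256 = 16,384 subscribers behind a /24, each with
    # roughly 1,000 dedicated ports and no per-session logging required.
    print(deterministic_port_range(16383, 64))  # (255, 64528, 65535)

Anything a subscriber uses beyond their dedicated range would spill into
the shared overflow pool mentioned above, which is the only part of the
scheme that would still require logging.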

Cheers,
~Chris


>
>I do not have access to any research results to disambiguate these two
>reports.
>
>Yours,
>Joel