Re: [v6ops] Interesting problems with using IPv6

Nick Hilliard <> Mon, 15 September 2014 23:40 UTC


On 15/09/2014 21:20, Brian E Carpenter wrote:
>> How else is L2 equipment going to make intelligent decisions about
>> multicast forwarding within an L2?
> They're not. That's why DEC Gigaswitches had ARP throttling, for example,
> to control broadcast storms. It's unavoidable if you build big L2
> networks. I learnt not to do that.

Let's be careful about what we're talking about here.  The problems of yesteryear
were related to large broadcast domains.  The problems we're discussing
here relate to modest-sized broadcast domains, but potentially with a large
number of them traversing a single switch.  These are different problem
spaces and it is not helpful to confuse the two.

This original issue brings up a number of awkward questions about
scalability, which at heart is a protocol design issue and not - as stated
by others - a vendor implementation issue.  The IETF created a protocol
dependency mechanism to assist scalability by allowing large numbers of v6
addresses to exist on the same l2 network (whether as a single broadcast
domain or as multiple broadcast domains).

The mechanism operates along the lines of:

- ND depends on multicast for basic functionality

- ND multicast addressing uses multicast groups to assist scalability

- MLD snooping is implemented on l2 devices to help them decide how and
where to prune v6 multicast forwarding

- this mechanism pushes state into the l2 forwarding control plane (SP =
switch processor)

- privacy addresses increase the number of v6 addresses on any broadcast
domain (single or multiple) by an order of magnitude, give or take.

- anecdotally, it seems that the continued existence of large-ish but
segmented multiple vlan l2 networks and the advent of privacy addresses
means that switches are seeing a problematic quantity of state being pushed
to the SP on some networks.
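
To make the first two points concrete, here is a short Python sketch of the
mapping the protocol defines: each v6 address resolves to a solicited-node
multicast group (RFC 4291), which in turn maps to an ethernet multicast MAC
(RFC 2464).  These are the per-vlan groups that MLD snooping ends up tracking
on the SP.  The function names are mine, for illustration only:

```python
import ipaddress

def solicited_node_group(addr: str) -> ipaddress.IPv6Address:
    """RFC 4291: ff02::1:ff00:0/104 plus the low 24 bits of the address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

def multicast_mac(group: ipaddress.IPv6Address) -> str:
    """RFC 2464: 33:33 followed by the low 32 bits of the group address."""
    low32 = int(group) & 0xFFFFFFFF
    return "33:33:" + ":".join(f"{(low32 >> s) & 0xFF:02x}" for s in (24, 16, 8, 0))

g = solicited_node_group("2001:db8::dead:beef")
print(g)                 # ff02::1:ffad:beef
print(multicast_mac(g))  # 33:33:ff:ad:be:ef
```

Note that because only the low 24 bits feed the group, privacy addresses on
the same host almost always land in distinct solicited-node groups, each of
which is separate snooping state.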

Stepping back a little to broadcast vs multicast: the v6 protocol was
designed this way to work around the rampant broadcast storm problems of
the 1990s, when large broadcast domains were all the rage.  Those who were
around at the time will remember Windows 3.1 CPUs pegging due to 10 megs of
broadcast traffic on the campus /16.  In that sort of situation, MLD
snooping might well have worked nicely to stop problems at the network edge.

The problem set has changed - modern deployments rarely use large flat
networks (*) but instead segment themselves into large numbers of vlans on
a shared physical infrastructure.  This means that today, the protocol that
was designed to help at the network edge is hurting at the network core,
and we are applying a solution to a problem which largely no longer exists
in its original context.
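
A hypothetical back-of-envelope calculation shows how the state multiplies
in this topology (all numbers invented for illustration, not measurements
from any real network):

```python
# Illustrative, invented numbers -- not measurements from any real network.
vlans = 500              # vlans trunked through one core switch
hosts_per_vlan = 100     # modest-sized broadcast domains
addrs_per_host = 5       # link-local + GUA + a few privacy/temporary addresses

# Worst case, every address lands in its own solicited-node group, and MLD
# snooping pushes one (vlan, group) entry to the switch processor for each.
snooping_entries = vlans * hosts_per_vlan * addrs_per_host
print(snooping_entries)  # 250000
```

Even with generous aggregation, numbers in this region sit uncomfortably
close to the multicast table limits of a lot of l2 silicon.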

The irony is that the solution which was created to allow ipv6 to scale on
large layer 2 domains is restricting scalability on large layer 2 domains.
We have merely swapped one scaling problem for another.


(*) there is an exception in large scale virtualised infrastructure, where
VM networking people have not learned the lessons of the past and there is
a massive push to create underlay protocols to allow enormous l2 domains
which span multiple locations.