Re: [v6ops] Interesting problems with using IPv6

Owen DeLong <> Fri, 12 September 2014 22:34 UTC

From: Owen DeLong <>
To: Chuck Anderson <cra@WPI.EDU>
Date: Fri, 12 Sep 2014 15:33:52 -0700

On Sep 9, 2014, at 7:22 AM, Chuck Anderson <cra@WPI.EDU> wrote:

> Disagree.  
> In common practice, ARP doesn't need to scale to hundreds of IPv4
> addresses per host.  Unfortunately, IPv6 SLAAC with Privacy Addressing
> means that ND and hence MLD does need to scale much larger than ARP
> ever did.  The cure of using Solicited Node multicast for ND is worse
> than the disease of using broadcast for ARP.

Privacy addressing is the true disease here, not ND. Solicited Node multicast actually isn’t all that bad in any sane implementation. ARP has certainly created its share of problems over the years.

However, I’ve never seen privacy addressing result in hundreds of addresses per host. That would have to be a pretty broken implementation or some really strange values for preferred/valid.

If you use the (relatively standard) 3600 s preferred / 86400 s valid lifetimes, then any address more than a day old gets removed from the interface.
Assuming you generate a new privacy address every hour, that shouldn’t be more than ~25 addresses per interface at any given time.
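The arithmetic above is easy to check. A minimal sketch (the function name and the simplification of ignoring regeneration jitter are mine, not from any implementation):

```python
def max_concurrent_temp_addrs(preferred: int, valid: int) -> int:
    """Upper bound on temporary addresses alive at once under SLAAC
    privacy extensions: one new address per `preferred` interval,
    each surviving until its `valid` lifetime (seconds) expires."""
    return valid // preferred + 1  # +1 for the freshly generated address

# 1-hour preferred, 1-day valid => ~25 addresses per interface
print(max_concurrent_temp_addrs(3600, 86400))  # 25
```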

> It is not reasonable to expect a switch to scale to maintaining state
> for 300 * (# of hosts connected) Solicited-Node multicast groups.

Nor does any rational use of SLAAC+Privacy addresses result in anything near that.

> 15,000 MLD groups for 48 hosts on a 48 port switch is unreasonable.

25*48 = 1,200, not 15,000. Even if we double that, it’s still 2,400 max.

One potential improvement: privacy addressing could keep a persistent random lower 24 bits and rehash only the upper 40 bits of the interface identifier on each rotation. Since the solicited-node group is derived from the low 24 bits of the address, all of a host’s privacy addresses would then map to a single group, leaving only ~2 ND multicast groups per host.
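To see why that collapses the group count, recall the RFC 4291 mapping from a unicast address to its solicited-node group. A quick sketch (the example addresses are hypothetical, chosen to share their low 24 bits as the proposal suggests):

```python
import ipaddress

SNMG_PREFIX = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node_group(addr: str) -> ipaddress.IPv6Address:
    """RFC 4291: ff02::1:ffXX:XXXX, where XX:XXXX is the low 24 bits
    of the unicast address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return ipaddress.IPv6Address(SNMG_PREFIX | low24)

# Two privacy addresses with different upper-40-bit suffixes but the
# same persistent low 24 bits land in the same group:
a = solicited_node_group("2001:db8::aaaa:bbbb:12:3456")
b = solicited_node_group("2001:db8::cccc:dddd:12:3456")
print(a, a == b)  # ff02::1:ff12:3456 True
```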

> 90,000 MLD groups for a 300 port switch stack is also unreasonable.
> Most switches cannot handle that many IPv4 multicast groups--expecting
> them to handle that many IPv6 multicast groups is unreasonable.

Again, this is nowhere near real-world scale for the situation. Using hyperbole-derived numbers to make things sound bad enough to blame the protocol, when the situation is much more directly the result of bad network design, is absurd.

> How many MLD reports per second have to be processed by the switch CPU
> given reports every 150 seconds by each of the many Solicited Node
> multicast groups on every connected host?  100 pps?  600 pps?

Let’s assume a 300-node subnet (rather large by most standards). Let’s further assume 50 SNMGs per host with zero overlap.
That’s 15,000 SNMGs network-wide which, divided by the 150-second query interval, works out to 100 pps. Note that this is double the anticipated number of SNMGs per host, and zero overlap is a worst-case scenario. A switch stack supporting 300 nodes shouldn’t have any problem at all forwarding 100 pps.
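The worst-case estimate above, spelled out (all figures are the assumptions stated in the paragraph, not measurements):

```python
# Rough MLD-report rate for the worst case sketched above:
# 300 hosts, 50 solicited-node groups each with zero overlap,
# one report per group per 150-second query interval.
hosts = 300
groups_per_host = 50          # double the ~25 actually expected
query_interval_s = 150

total_groups = hosts * groups_per_host
reports_per_second = total_groups / query_interval_s
print(total_groups, reports_per_second)  # 15000 100.0
```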

MLD snooping of ND traffic is a pretty bad idea and there’s really no good reason for the switch to not simply flood ND traffic.

> What are the scaling numbers for IGMP groups and MLD groups and how
> many IGMP/MLD reports can be processed per second on commonly
> available access-layer switches?

You probably aren’t running a 300-node subnet on common low-end access-layer switch hardware; it’s going to require something higher-end than that.

> Having designed a ubiquitous protocol 16 years ago that can't be
> implemented reasonably cheaply on current hardware is an operational
> problem that can and should be dealt with by tweaking the protocol in
> some cases.  I think the protocol change proposals put forward in the
> article are a quite reasonable starting point for discussing how to
> mitigate this issue and shouldn't be dismissed out of hand.

I think the protocol can be implemented reasonably cheaply on current hardware if you don’t design your network so poorly that it’s a minor miracle it hasn’t already collapsed in on itself under IPv4. Lots of people are running very large IPv6 networks just fine on a pretty wide variety of commodity hardware at this point.

The OP makes clear a number of places where his network design is an utter failure waiting to happen, then goes on to blame IPv6 based on little more than speculation about the cause.

Smaller layer 2 zones would greatly improve his situation. Turning off privacy addressing would also help. (This is, as has been pointed out, under the operator’s control: he can set the M bit, turn off the A bit, and hand out addresses via DHCPv6, for example.)
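For reference, the effect of those two flags on host behavior (RFC 4861/4862). The function below is purely illustrative shorthand, not any implementation’s API; M comes from the RA header, A from each Prefix Information Option:

```python
def addressing_mode(m_flag: bool, a_flag: bool) -> str:
    """Summarize host address configuration for a given RA M flag
    (Managed: use DHCPv6) and prefix A flag (Autonomous: SLAAC)."""
    if m_flag and not a_flag:
        return "DHCPv6 only (no SLAAC, so no privacy addresses)"
    if m_flag and a_flag:
        return "DHCPv6 plus SLAAC"
    if a_flag:
        return "SLAAC only"
    return "no automatic global addresses"

# The combination suggested above: M=1, A=0
print(addressing_mode(True, False))
```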

I suspect there are a number of other viable solutions. OTOH, modifying IPv6 to support such an environment seems a fool’s errand to me.


> On Tue, Sep 09, 2014 at 10:43:07AM +0000, Hemant Singh (shemant) wrote:
>> Agreed.  Just because one vendor’s switch melted an ipv6 network is not enough justification to change protocols.
>> Hemant
>> From: v6ops [] On Behalf Of Lorenzo Colitti
>> Sent: Tuesday, September 09, 2014 6:29 AM
>> To: Nick Hilliard
>> Cc:; IPv6 Operations
>> Subject: Re: [v6ops] Interesting problems with using IPv6
>> On Tue, Sep 9, 2014 at 6:42 PM, Nick Hilliard <> wrote:
>> This happened because the switch CPUs were overloaded with mld report packets due to end hosts on the extended L2 network replying to MLD all-groups queries every 150 seconds.
>> So the switch was configured to send all-groups queries to all hosts, but did not have the CPU power to process them, and could not, or was not configured, to rate-limit them.
>> News at 11: building a network beyond the capabilities of the gear that runs it will result in failure.
>> That does not mean the protocol is flawed. ARP could have done the same thing, and in fact that was a common problem many years ago... except that these days ARP is usually processed on the fast path and it doesn't matter.