Re: [v6ops] Interesting problems with using IPv6

Ray Hunter <> Fri, 12 September 2014 08:09 UTC

Message-ID: <>
Date: Fri, 12 Sep 2014 10:07:55 +0200
From: Ray Hunter <>
User-Agent: Postbox 3.0.11 (Macintosh/20140602)
MIME-Version: 1.0
To: Chuck Anderson <cra@WPI.EDU>
In-Reply-To: <20140909142226.GP15839@angus.ind.WPI.EDU>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Subject: Re: [v6ops] Interesting problems with using IPv6

Chuck Anderson wrote:
> Disagree.
> In common practice, ARP doesn't need to scale to hundreds of IPv4
> addresses per host.  Unfortunately, IPv6 SLAAC with Privacy Addressing
> means that ND and hence MLD does need to scale much larger than ARP
> ever did.  The cure of using Solicited Node multicast for ND is worse
> than the disease of using broadcast for ARP.
The obvious history here is that Ethernet was a big yellow cable and 
multicast was "for free": an (ARP) broadcast disturbed every end node, 
causing large L2 nets to grind to a halt, whereas a multicast only 
interrupted a subset of the end nodes.
Now Ethernet is effectively a point-to-point protocol (wired), and 
multicast is certainly not free (it costs silicon) nor reliable (on 
wireless).
> It is not reasonable to expect a switch to scale to maintaining state
> for 300 * (# of hosts connected) Solicited-Node multicast groups.
> 15,000 MLD groups for 48 hosts on a 48 port switch is unreasonable.
> 90,000 MLD groups for a 300 port switch stack is also unreasonable.
> Most switches cannot handle that many IPv4 multicast groups--expecting
> them to handle that many IPv6 multicast groups is unreasonable.
> How many MLD reports per second have to be processed by the switch CPU
> given reports every 150 seconds by each of the many Solicited Node
> multicast groups on every connected host?  100 pps?  600 pps?
> What are the scaling numbers for IGMP groups and MLD groups and how
> many IGMP/MLD reports can be processed per second on commonly
> available access-layer switches?
> Having designed a ubiquitous protocol 16 years ago that can't be
> implemented reasonably cheaply on current hardware is an operational
> problem that can and should be dealt with by tweaking the protocol in
> some cases.  I think the protocol change proposals put forward in the
> article are a quite reasonable starting point for discussing how to
> mitigate this issue and shouldn't be dismissed out of hand.

I read the article but did not see the specific recommended tweaks.

Can you enumerate?

One thing that might be an operational workaround (via OS patch) would 
be to update the RFC 4941 RID generation procedure (Section 3.2.1) to 
add a Step 4a, so that the candidate privacy addresses all hash to a 
limited subset of solicited-node multicast groups per host (RFC 4291 
Section 2.7.1), e.g. a maximum of 16 active MLD groups per host out of 
the range FF02:0:0:0:0:1:FFXX:XXXX. As privacy addresses expire, new 
solicited-node multicast groups could be activated.
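To make the idea concrete, here is a rough sketch of what such a "Step 4a" could look like. This is purely illustrative and not any standardized algorithm: the derivation of the per-host suffix set, the use of SHA-256, and the cap of 16 groups are all assumptions of the sketch. The key point is that the low 24 bits of every temporary address (which determine its solicited-node multicast group under RFC 4291 Section 2.7.1) are forced into a small, stable per-host set.

```python
import hashlib
import os

GROUPS_PER_HOST = 16  # illustrative cap; the text above suggests "e.g. max of 16"

def allowed_suffixes(seed: bytes, n: int = GROUPS_PER_HOST) -> list:
    """Derive a stable per-host set of n distinct 24-bit suffixes from a
    seed (e.g. some stable per-interface identifier -- an assumption here)."""
    suffixes = []
    counter = 0
    while len(suffixes) < n:
        h = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        s = int.from_bytes(h[:3], "big")  # take 24 bits
        if s not in suffixes:
            suffixes.append(s)
        counter += 1
    return suffixes

def generate_constrained_iid(seed: bytes) -> bytes:
    """Hypothetical 'Step 4a': generate a random 64-bit interface ID, then
    overwrite its low 24 bits with one of the host's allowed suffixes, so
    all temporary addresses land in a bounded set of solicited-node groups."""
    iid = bytearray(os.urandom(8))
    iid[0] &= 0xFD  # clear the universal/local ('u') bit, as RFC 4941 does
    suffix = allowed_suffixes(seed)[iid[5] % GROUPS_PER_HOST]  # random pick
    iid[5] = (suffix >> 16) & 0xFF
    iid[6] = (suffix >> 8) & 0xFF
    iid[7] = suffix & 0xFF
    return bytes(iid)

def solicited_node_group(iid: bytes) -> str:
    """Solicited-node multicast address per RFC 4291 Section 2.7.1:
    FF02::1:FFXX:XXXX, where XX:XXXX are the low 24 bits of the address."""
    return "ff02::1:ff%02x:%02x%02x" % (iid[5], iid[6], iid[7])
```

However many temporary addresses the host cycles through, they all resolve to at most 16 solicited-node groups, so the host's MLD membership set stays constant across address regeneration.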

That would avoid the large number of multicast update messages per host. 
It would also cap the number of multicast groups an L2 switch has to 
support at n * (number of hosts in the L2 domain): with n = 16, the 
48-host example above needs at most 768 groups rather than 15,000.

The downside is that there is obviously a smaller privacy address space 
for an attacker to track by brute force at any one time.

The more fundamental question is whether multicast is the correct way to 
resolve addresses on modern L2 networks, or whether, on certain link 
technologies, a protocol that registers hosts over unicast with one or 
more L3<->L2 nameservers might be more appropriate (as per 6lo), but 
that is beyond v6ops.
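A registration-based scheme of that kind (compare the Address Registration Option that RFC 6775 defines for 6LoWPAN) might look roughly like the toy model below. The class, method names, and duplicate-detection rule are all invented here for illustration; the point is only that resolution becomes a unicast lookup against registered state instead of a multicast solicitation.

```python
class AddressRegistry:
    """Toy per-link L3<->L2 'nameserver'. Hosts register their bindings
    over unicast; resolvers query it instead of multicasting an NS.
    An illustrative sketch only, not any standardized protocol."""

    def __init__(self):
        self._bindings = {}  # ipv6 address -> (mac, lifetime in seconds)

    def register(self, ipv6: str, mac: str, lifetime: int) -> bool:
        """A host registers (or refreshes) a binding. Refuse a duplicate
        address claimed by a different MAC, which also gives duplicate
        address detection without multicast."""
        current = self._bindings.get(ipv6)
        if current is not None and current[0] != mac:
            return False  # duplicate address: registration refused
        self._bindings[ipv6] = (mac, lifetime)
        return True

    def resolve(self, ipv6: str):
        """Unicast lookup replacing the solicited-node multicast NS;
        returns the MAC address, or None if nothing is registered."""
        entry = self._bindings.get(ipv6)
        return entry[0] if entry else None
```

In such a model the switch never needs to track per-host solicited-node multicast groups at all; the state lives in the registry, and hosts refresh it with periodic unicast re-registrations.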

> On Tue, Sep 09, 2014 at 10:43:07AM +0000, Hemant Singh (shemant) wrote:
>> Agreed.  Just because one vendor’s switch melted an ipv6 network is not enough justification to change protocols.
>> Hemant
>> From: v6ops [] On Behalf Of Lorenzo Colitti
>> Sent: Tuesday, September 09, 2014 6:29 AM
>> To: Nick Hilliard
>> Cc:; IPv6 Operations
>> Subject: Re: [v6ops] Interesting problems with using IPv6
>> On Tue, Sep 9, 2014 at 6:42 PM, Nick Hilliard<<>>  wrote:
>> This happened because the switch CPUs were overloaded with mld report packets due to end hosts on the extended L2 network replying to MLD all-groups queries every 150 seconds.
>> So the switch was configured to send all-groups queries to all hosts, but did not have the CPU power to process the resulting reports, and could not, or was not configured to, rate-limit them.
>> News at 11: building a network beyond the capabilities of the gear that runs it will result in failure.
>> That does not mean the protocol is flawed. ARP could have done the same thing, and in fact that was a common problem many years ago... except that these days ARP is usually processed on the fast path and it doesn't matter.