Re: [v6ops] Interesting problems with using IPv6

"Hemant Singh (shemant)" <> Tue, 09 September 2014 15:17 UTC

From: "Hemant Singh (shemant)" <>
To: Chuck Anderson <cra@WPI.EDU>, "" <>
Date: Tue, 9 Sep 2014 15:17:22 +0000
Subject: Re: [v6ops] Interesting problems with using IPv6

Several mitigations exist.

a. The switch should be designed to rate-limit MLD reports beyond a certain number of reports/sec.
b. Use DHCPv6 in the network to dispense with privacy addresses and SLAAC.
c. ARP could be processed in the fast path because ARP uses a different EtherType than IP packets; the distinct EtherType lets the fast path filter ARP frames with relative ease. One could consider defining a new EtherType for IPv6 ND and/or MLD and look into moving certain IPv6 processing to the fast path.
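Point (c) rests on a fixed-offset field: the EtherType sits at bytes 12-13 of an Ethernet II frame, so ARP (0x0806) can be recognized without parsing anything above layer 2, whereas ND and MLD are ICMPv6 messages buried inside 0x86DD frames. A minimal sketch of that classification decision (the function name and return labels are illustrative, not from any switch implementation):

```python
import struct

# IEEE-assigned EtherType values
ETHERTYPE_ARP = 0x0806
ETHERTYPE_IPV6 = 0x86DD

def classify_frame(frame: bytes) -> str:
    """Classify an Ethernet II frame by its EtherType field.

    The EtherType sits at a fixed offset (bytes 12-13), so a fast path
    can branch on it without touching higher-layer headers.  ND and MLD,
    by contrast, are ICMPv6 messages inside 0x86DD frames, so spotting
    them requires parsing the IPv6 header and any extension headers.
    """
    if len(frame) < 14:
        return "runt"
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype == ETHERTYPE_ARP:
        return "fastpath-arp"      # trivially filterable at layer 2
    if ethertype == ETHERTYPE_IPV6:
        return "slowpath-ipv6"     # ND/MLD hidden deeper in the packet
    return "other"
```

A dedicated EtherType for ND/MLD would let the same two-byte comparison pull those packets into the fast path as well.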

I do agree that a 48-port switch should not have to deal with 15k MLD groups.  ND uses multicast, and multicast uses MLD, so changing multicast and/or ND needs more discussion.


-----Original Message-----
From: v6ops [] On Behalf Of Chuck Anderson
Sent: Tuesday, September 09, 2014 10:22 AM
Subject: Re: [v6ops] Interesting problems with using IPv6


In common practice, ARP doesn't need to scale to hundreds of IPv4 addresses per host.  Unfortunately, IPv6 SLAAC with Privacy Addressing means that ND, and hence MLD, does need to scale much larger than ARP ever did.  The cure of using Solicited-Node multicast for ND is worse than the disease of using broadcast for ARP.

It is not reasonable to expect a switch to scale to maintaining state for 300 * (# of hosts connected) Solicited-Node multicast groups.
15,000 MLD groups for 48 hosts on a 48 port switch is unreasonable.
90,000 MLD groups for a 300 port switch stack is also unreasonable.
Most switches cannot handle that many IPv4 multicast groups--expecting them to handle that many IPv6 multicast groups is unreasonable.
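The group counts in the paragraph above follow from one multiplication; assuming roughly 300 addresses (hence ~300 solicited-node groups) per host, as the message does:

```python
# Back-of-the-envelope group-state math from the paragraph above.
# ~300 addresses per host is the message's working assumption;
# real hosts vary with privacy-address churn.
ADDRS_PER_HOST = 300

def mld_groups(hosts: int) -> int:
    """Worst-case solicited-node multicast groups the switch must track."""
    return ADDRS_PER_HOST * hosts

print(mld_groups(48))   # 14400 -- roughly the 15,000 cited for a 48-port switch
print(mld_groups(300))  # 90000 for a 300-port stack
```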

How many MLD reports per second have to be processed by the switch CPU, given that every connected host sends a report every 150 seconds for each of its many Solicited-Node multicast groups?  100 pps?  600 pps?
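Those per-second figures fall directly out of the 150-second query interval:

```python
# MLD report rate implied by a 150-second all-groups query interval:
# one report per group per interval, spread across the interval.
QUERY_INTERVAL_S = 150

def reports_per_second(total_groups: int) -> float:
    return total_groups / QUERY_INTERVAL_S

print(reports_per_second(15_000))  # 100.0 pps for the 48-port case
print(reports_per_second(90_000))  # 600.0 pps for the 300-port stack
```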

What are the scaling numbers for IGMP groups and MLD groups and how many IGMP/MLD reports can be processed per second on commonly available access-layer switches?

A ubiquitous protocol designed 16 years ago that can't be implemented reasonably cheaply on current hardware is an operational problem, and one that can and should be dealt with by tweaking the protocol in some cases.  I think the protocol change proposals put forward in the article are a quite reasonable starting point for discussing how to mitigate this issue and shouldn't be dismissed out of hand.

On Tue, Sep 09, 2014 at 10:43:07AM +0000, Hemant Singh (shemant) wrote:
> Agreed.  The fact that one vendor’s switch melted an IPv6 network is not enough justification to change protocols.
> Hemant
> From: v6ops [] On Behalf Of Lorenzo 
> Colitti
> Sent: Tuesday, September 09, 2014 6:29 AM
> To: Nick Hilliard
> Cc:; IPv6 Operations
> Subject: Re: [v6ops] Interesting problems with using IPv6
> On Tue, Sep 9, 2014 at 6:42 PM, Nick Hilliard wrote:
> This happened because the switch CPUs were overloaded with mld report packets due to end hosts on the extended L2 network replying to MLD all-groups queries every 150 seconds.
> So the switch was configured to send all-groups queries to all hosts, but did not have the CPU power to process them, and could not, or was not configured, to rate-limit them.
> News at 11: building a network beyond the capabilities of the gear that runs it will result in failure.
> That does not mean the protocol is flawed. ARP could have done the same thing, and in fact that was a common problem many years ago... except that these days ARP is usually processed on the fast path and it doesn't matter.

v6ops mailing list