Re: [v6ops] Interesting problems with using IPv6

Chuck Anderson <cra@WPI.EDU> Tue, 09 September 2014 14:22 UTC

Date: Tue, 9 Sep 2014 10:22:27 -0400
From: Chuck Anderson <cra@WPI.EDU>
To: v6ops@ietf.org
Message-ID: <20140909142226.GP15839@angus.ind.WPI.EDU>
References: <1410082125488.85722@surrey.ac.uk> <540CB702.3000605@gmail.com> <20140908183339.GB98785@ricotta.doit.wisc.edu> <540E26D9.3070907@gmail.com> <1410227735.13436.YahooMailNeo@web162204.mail.bf1.yahoo.com> <540ECB9E.9000102@foobar.org> <CAKD1Yr1_sCLHv=D3MeCe47Fa0dxXTXH5B+=wOKpvmEDFkJFiZw@mail.gmail.com> <75B6FA9F576969419E42BECB86CB1B89155AF364@xmb-rcd-x06.cisco.com>
In-Reply-To: <75B6FA9F576969419E42BECB86CB1B89155AF364@xmb-rcd-x06.cisco.com>
Archived-At: http://mailarchive.ietf.org/arch/msg/v6ops/vTcPkf0z0q8pqJPFhxpVZvhmOrU
Subject: Re: [v6ops] Interesting problems with using IPv6

Disagree.  

In common practice, ARP doesn't need to scale to hundreds of IPv4
addresses per host.  Unfortunately, IPv6 SLAAC with Privacy Addressing
means that ND, and hence MLD, does need to scale much larger than ARP
ever did.  The cure of using Solicited-Node multicast for ND is worse
than the disease of using broadcast for ARP.
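(To see why every extra privacy address tends to add MLD state, here is
a short sketch, using Python's standard ipaddress module, of the
RFC 4291 mapping from a unicast address to its Solicited-Node group:
ff02::1:ff00:0/104 plus the low-order 24 bits of the address.  The two
example addresses are illustrative, not from any real host.)

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Map an IPv6 unicast address to its Solicited-Node multicast
    group (RFC 4291 sec. 2.7.1): ff02::1:ff00:0/104 | low 24 bits."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

# Two addresses on the same interface almost always differ in their
# low 24 bits, so each lands in its own group:
print(solicited_node("2001:db8::8a2e:370:7334"))  # ff02::1:ff70:7334
print(solicited_node("2001:db8::1428:57ab"))      # ff02::1:ff28:57ab
```

With one stable address a host joins one such group; with hundreds of
accumulated temporary addresses it joins hundreds.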

It is not reasonable to expect a switch to scale to maintain state
for 300 * (# of hosts connected) Solicited-Node multicast groups.
15,000 MLD groups for 48 hosts on a 48-port switch is unreasonable.
90,000 MLD groups for a 300-port switch stack is also unreasonable.
Most switches cannot handle that many IPv4 multicast groups--expecting
them to handle that many IPv6 multicast groups is unreasonable.

How many MLD reports per second have to be processed by the switch CPU
given a report every 150 seconds for each of the many Solicited-Node
multicast groups on every connected host?  100 pps?  600 pps?
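(The group counts and pps figures above fall out of simple arithmetic.
A sketch, assuming roughly 300 addresses per host, every address in a
distinct group, and one report per group per 150-second query
interval:)

```python
# Back-of-the-envelope MLD load estimate.
ADDRS_PER_HOST = 300   # assumed accumulated privacy addresses per host
QUERY_INTERVAL = 150   # seconds between MLD all-groups queries

for hosts in (48, 300):
    groups = ADDRS_PER_HOST * hosts     # worst case: no group overlap
    pps = groups / QUERY_INTERVAL       # reports/sec hitting the CPU
    print(f"{hosts} hosts: {groups} groups, ~{pps:.0f} reports/sec")
```

48 hosts gives 14,400 groups (~15,000 as rounded above) and about
100 reports/sec; a 300-port stack gives 90,000 groups and 600
reports/sec.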

What are the scaling numbers for IGMP groups and MLD groups and how
many IGMP/MLD reports can be processed per second on commonly
available access-layer switches?

A ubiquitous protocol that was designed 16 years ago and can't be
implemented reasonably cheaply on current hardware is an operational
problem that can and should be dealt with by tweaking the protocol in
some cases.  I think the protocol change proposals put forward in the
article are a quite reasonable starting point for discussing how to
mitigate this issue and shouldn't be dismissed out of hand.

On Tue, Sep 09, 2014 at 10:43:07AM +0000, Hemant Singh (shemant) wrote:
> Agreed.  Just because one vendor’s switch melted an IPv6 network is not enough justification to change protocols.
> 
> Hemant
> 
> From: v6ops [mailto:v6ops-bounces@ietf.org] On Behalf Of Lorenzo Colitti
> Sent: Tuesday, September 09, 2014 6:29 AM
> To: Nick Hilliard
> Cc: l.wood@surrey.ac.uk; IPv6 Operations
> Subject: Re: [v6ops] Interesting problems with using IPv6
> 
> On Tue, Sep 9, 2014 at 6:42 PM, Nick Hilliard <nick@foobar.org<mailto:nick@foobar.org>> wrote:
> This happened because the switch CPUs were overloaded with mld report packets due to end hosts on the extended L2 network replying to MLD all-groups queries every 150 seconds.
> 
> So the switch was configured to send all-groups queries to all hosts, but did not have the CPU power to process them, and could not, or was not configured, to rate-limit them.
> 
> News at 11: building a network beyond the capabilities of the gear that runs it will result in failure.
> 
> That does not mean the protocol is flawed. ARP could have done the same thing, and in fact that was a common problem many years ago... except that these days ARP is usually processed on the fast path and it doesn't matter.