Re: [pim] IGP based mutlicast arch RE: pim minutes from IETF 92 in Dallas

Mike Davison <mike.davison.tech@gmail.com> Thu, 30 April 2015 21:43 UTC

Date: Thu, 30 Apr 2015 14:43:31 -0700
From: Mike Davison <mike.davison.tech@gmail.com>
To: pim@ietf.org
Archived-At: <http://mailarchive.ietf.org/arch/msg/pim/TKPEvKIySSGhJi1h3s9j5lQjga0>
Subject: Re: [pim] IGP based mutlicast arch RE: pim minutes from IETF 92 in Dallas

Disclosure: I'm with Brocade, as is one of the authors of this draft. We've
not collaborated on this; I just find it an interesting option.

It's important to note that this mechanism is proposed for NVO3
environments, where there can be many overlay virtual networks. The
potential for excessive multicast state or excessive PIM messaging in this
environment warrants evaluation of both the current approaches and
possible new ones. It's not clear that existing multicast mechanisms
work well in this case.
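
To make the state concern concrete, here is a hypothetical back-of-envelope
sketch (the function, names, and numbers below are illustrative assumptions,
not from the draft): with a 1:1 mapping of overlay virtual networks (VNIs) to
underlay multicast groups, per-router group state grows with the number of
VNIs; aggregating many VNIs onto fewer groups caps state, at the cost of
delivering traffic to tunnel endpoints that didn't ask for it.

```python
# Hypothetical back-of-envelope model of underlay multicast state in an
# NVO3 data center. All names and numbers are illustrative assumptions.

def underlay_state(num_vnis, groups):
    """Map num_vnis overlay networks onto `groups` underlay multicast
    groups and report the per-router group state a core router may hold
    and how many VNIs share each group (>1 means over-delivery)."""
    return {
        "underlay_groups": groups,              # (*,G) entries per core router
        "vnis_per_group": num_vnis / groups,    # over-delivery factor
    }

# 1:1 mapping: maximal state, no over-delivery.
print(underlay_state(num_vnis=10_000, groups=10_000))
# Aggregated mapping: 100x less state, but ~100 VNIs share each tree.
print(underlay_state(num_vnis=10_000, groups=100))
```

The trade-off is the usual one: whichever way the mapping is tuned, either
the underlay routers or the tunnel endpoints absorb the cost.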

PIM-Bidir looks workable for the NVO3 underlay, but some operators prefer
not to run PIM, so perhaps an IGP option would be more palatable. It would
be interesting to evaluate the differences between PIM-Bidir and the IGP
multicast method with an eye toward data centers and/or NVO3: how each
affects control-plane and forwarding-plane state, and what the protocol
overhead is. This seems quite worthy of evaluation.
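
As a rough way to frame that state comparison, here is a toy sketch (the
topology and helper functions are illustrative assumptions, not either
protocol's actual machinery): a Bidir-style shared tree keeps (*,G) state
only on routers along the member-to-RP paths, whereas flooding group
membership through the IGP leaves every router in the flooding domain
holding an entry for every group.

```python
from collections import deque

# Illustrative 8-router topology: r1 is the RP, r2-r4 form the core chain,
# r5-r8 are edge routers.
TOPO = {
    "r1": ["r2"],
    "r2": ["r1", "r3", "r5"],
    "r3": ["r2", "r4", "r6"],
    "r4": ["r3", "r7", "r8"],
    "r5": ["r2"], "r6": ["r3"], "r7": ["r4"], "r8": ["r4"],
}

def shortest_path(topo, src, dst):
    """BFS shortest path from src to dst (hop-count metric)."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in topo[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def bidir_on_tree(topo, rp, members):
    """Routers holding (*,G) state under a Bidir-style shared tree:
    the union of shortest paths from each member router toward the RP."""
    on_tree = set()
    for m in members:
        on_tree.update(shortest_path(topo, m, rp))
    return on_tree

members = {"r5", "r7"}          # routers with attached sources/receivers
tree = bidir_on_tree(TOPO, rp="r1", members=members)
print(sorted(tree))             # only on-tree routers carry group state
# Under IGP membership flooding, every router in the domain stores the
# membership advertisement, on-tree or not.
print(len(tree), "on-tree vs", len(TOPO), "flooded")
```

On this toy topology the shared tree touches 6 of 8 routers; flooding
touches all 8. The gap widens as the fraction of routers with members
shrinks, which is exactly the dimension worth measuring for a DC fabric.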

cheers,
Mike


On Mon, Apr 27, 2015 at 1:18 PM, Lucy yong <lucy.yong@huawei.com> wrote:

>  Hi,
>
>
>
> During the Dallas meeting, the presentation of
> draft-yong-pim-igp-mutlicast-arch triggered a lot of discussion and debate,
> as shown in the meeting minutes. The co-authors would like to address these
> concerns on the mailing list so we can continue the discussion with a
> broader community. Since several questions are similar but were asked in
> different ways, we summarize them into the following questions:
>
>
>
> 1.  What is the relationship between IGP based multicast and M-OSPF
> (RFC1584)?
>
> ·         IGP based multicast and M-OSPF have one thing in common: both use
> one protocol to support unicast and multicast routing.
>
> ·         M-OSPF (RFC1584) specified one solution, a source-rooted
> distribution tree for multicast delivery, 20 years ago, when there was
> little multicast demand due to a lack of applications.
>
> ·         The M-OSPF solution, i.e., (S,G) based distribution, does not
> apply to Data Centers that support NVO3. We'll document the multicast
> requirements for Data Centers in the next draft revision.
>
> ·         IGP based multicast has a primary use case for NVO3 Data
> Centers. Each multicast group uses a bidirectional rooted tree for
> multicast packet delivery, and both sources and receivers are members of
> the group. This solution meets NVO3 DC multicast requirements.
>
> ·         The solution draft differs from RFC1584 in terms of algorithm
> and protocol extension. Since we implemented this in IS-IS first, we
> prefer to work on an IS-IS based solution first, then OSPF.
>
> ·         M-OSPF was done 20 years ago. Since then, IGPs have gained many
> properties that could benefit multicast delivery as well, such as traffic
> engineering, fast re-route, and loop-free convergence. This is a new area
> to explore. The IGP multicast architecture draft intends to describe the
> general architecture and its components, rather than a specific solution.
> A solution will be a separate draft.
>
>
>
> 2.  Is this just like PIM but implemented by IGP protocol? Why do it
> again?
>
>
>
> ·         The answer is yes and no. There are several PIM based multicast
> delivery methods that apply intra-IGP and inter-IGP. The IGP based
> multicast solution we are working on is like the PIM bidir method
> (RFC5015) in theory, but works only intra-IGP. This solution targets Data
> Centers where NVO3 is deployed. For this use case, our solution is better
> than PIM bidir: 1) it enables infrastructure network self-establishment;
> 2) it meets the requirement of supporting both unicast and multicast
> routing in the DC network; 3) it reduces multicast convergence time. Using
> one protocol is highly desired by Data Center operators: it reduces
> operating cost, simplifies network management and provisioning, and
> provides "automation", i.e., when the IP network is brought up, both
> unicast and multicast routing engines are running.
>
> ·         There are already similar deployments in DCs, such as TRILL
> (RFC5556) and IP FabricPath. Both use a single protocol to handle unicast
> and multicast routing, and the operating experience and benefits have
> greatly encouraged the single-protocol vision, especially in the Data
> Center networking environment.
>
> ·         The IGP bidir tree solution differs from the PIM bidir tree
> solution because the IGP and PIM protocols themselves differ, e.g., in the
> group membership announcement mechanism. Our IGP multicast solution draft
> will have the details.
>
> ·         Improving and simplifying the infrastructure network is a key
> requirement for Data Center networks. We need to think of it from the
> network perspective, not from the view of an individual protocol.
>
>
>
> 3.  Concern on IGP Group Membership Flooding:
>
>
>
> ·         Yes, IGP floods the group membership while PIM does not. We
> believe that for some use cases this behavior is acceptable, and the
> other benefits compared to PIM remain. In addition, in the DC use case,
> edge routers can apply optimizations to minimize the flooding.
>
> ·         The flooding makes IGP multicast convergence faster than the
> PIM solution's.
>
> ·         In the Data Center environment, membership subscription is
> static in nature, i.e., re-flooding due to a new member joining or an
> existing member leaving is rare. Therefore, the flooding concern is
> overstated.
>
> ·         We're currently conducting a comparison of IGP-based multicast
> routing and PIM-based multicast routing in terms of performance,
> convergence, volume of protocol messages, etc. We'll share the results
> with the community once the data is ready.
>
>
>
> 4.  Inter-AS support?
>
>
>
> ·         IGP based multicast focuses on intra-AS use cases, which we
> believe have a huge market given the trend of network virtualization.
>
> ·         Our solution can support a single area and multiple areas,
> which covers the current Data Center networking environment well.
> Inter-AS support can be added at a later time if there is a requirement.
>
>
>
> 5.  Need to see and evaluate the solution.
>
>
>
> ·         We will upload a bidir tree solution based on the IS-IS
> protocol first, similar to draft-yong-isis-ext-4-distribution-tree-02,
> since we have an implementation.
>
>
>
> We'd like to hear people's thoughts/suggestions on these points and to
> get support for the PIM WG to take on this development work.
>
>
>
> Thanks,
>
> Lucy on behalf of the co-authors of draft-yong-pim-igp-mutlicast-arch
>
>
>
> -----Original Message-----
> From: pim [mailto:pim-bounces@ietf.org] On Behalf Of Stig Venaas
> Sent: Monday, April 13, 2015 12:37 PM
> To: pim@ietf.org
> Subject: [pim] pim minutes from IETF 92 in Dallas
>
>
>
> Hi
>
>
>
> The minutes have finally been posted. They are available at
> http://www.ietf.org/proceedings/92/minutes/minutes-92-pim
>
> Stig
>
>
>
> _______________________________________________
>
> pim mailing list
>
> pim@ietf.org
>
> https://www.ietf.org/mailman/listinfo/pim
>