Re: [nvo3] Draft NVO3 WG Charter

<david.black@emc.com> Fri, 17 February 2012 16:31 UTC

From: david.black@emc.com
To: jdrake@juniper.net, narten@us.ibm.com, nvo3@ietf.org
Date: Fri, 17 Feb 2012 11:31:03 -0500
Cc: rbonica@juniper.net, nitinb@juniper.net, afarrel@juniper.net
Subject: Re: [nvo3] Draft NVO3 WG Charter

Hi John,

> > BGP and MPLS are non-starters for a lot of datacenter-internal
> > networks.
>
> [JD]  This is an assertion.  It also misses the fact that MPLS
> is only required to mux/demux packets at the edges of the VPN network.

Indeed it is, but I stand by it.  The interesting "edges of the VPN
network" for NVO include datacenter ToR switches, datacenter access
switches, and hypervisor softswitches - there are plenty of examples of
these for which MPLS and BGP are non-starters.

I suggest reading the NVGRE and VXLAN drafts for more context:

   http://tools.ietf.org/html/draft-sridharan-virtualization-nvgre-00
   http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-00

Thanks,
--David
----------------------------------------------------
David L. Black, Distinguished Engineer
EMC Corporation, 176 South St., Hopkinton, MA  01748
+1 (508) 293-7953             FAX: +1 (508) 293-7786
david.black@emc.com        Mobile: +1 (978) 394-7754
----------------------------------------------------
________________________________
From: John E Drake [jdrake@juniper.net]
Sent: Friday, February 17, 2012 11:13 AM
To: Black, David; narten@us.ibm.com; nvo3@ietf.org
Cc: Ronald Bonica; Nitin Bahadur; Adrian Farrel
Subject: RE: [nvo3] Draft NVO3 WG Charter

Comments inline

> -----Original Message-----
> From: david.black@emc.com [mailto:david.black@emc.com]
> Sent: Friday, February 17, 2012 8:04 AM
> To: John E Drake; narten@us.ibm.com; nvo3@ietf.org
> Cc: Ronald Bonica; Nitin Bahadur; Adrian Farrel
> Subject: RE: [nvo3] Draft NVO3 WG Charter
>
> John,
>
> > This basically is a re-statement of what is done by L3/L2 VPNs.  It
> > might be useful to do a gap analysis of these existing technologies,
> > in particular E-VPNs
> > (http://tools.ietf.org/html/draft-raggarwa-sajassi-l2vpn-evpn-04),
> > before asserting that something new is required.
> BGP and MPLS are non-starters for a lot of datacenter-internal
> networks.

[JD]  This is an assertion.  It also misses the fact that MPLS is only required to mux/demux packets at the edges of the VPN network.

> Some of the more important NVO deployment scenarios involve
> map-and-encap in a hypervisor software network switch.

[JD]  Your point eludes me.

>
> Thanks,
> --David
> ----------------------------------------------------
> David L. Black, Distinguished Engineer
> EMC Corporation, 176 South St., Hopkinton, MA  01748
> +1 (508) 293-7953             FAX: +1 (508) 293-7786
> david.black@emc.com        Mobile: +1 (978) 394-7754
> ----------------------------------------------------
> ________________________________
> From: nvo3-bounces@ietf.org [nvo3-bounces@ietf.org] On Behalf Of John E
> Drake [jdrake@juniper.net]
> Sent: Friday, February 17, 2012 10:00 AM
> To: Thomas Narten; nvo3@ietf.org
> Cc: Ronald Bonica; Nitin Bahadur; Adrian Farrel
> Subject: Re: [nvo3] Draft NVO3 WG Charter
>
> Thomas,
>
> This basically is a re-statement of what is done by L3/L2 VPNs.  It
> might be useful to do a gap analysis of these existing technologies, in
> particular E-VPNs (http://tools.ietf.org/html/draft-raggarwa-sajassi-l2vpn-evpn-04),
> before asserting that something new is required.
>
> Thanks,
>
> John
>
> > -----Original Message-----
> > From: nvo3-bounces@ietf.org [mailto:nvo3-bounces@ietf.org] On Behalf Of
> > Thomas Narten
> > Sent: Friday, February 17, 2012 6:52 AM
> > To: nvo3@ietf.org
> > Subject: [nvo3] Draft NVO3 WG Charter
> >
> > Below is a draft charter for this effort. One detail is that we
> > started out calling this effort NVO3 (Network Virtualization Over L3),
> > but have subsequently realized that we should not focus on just "over
> > L3". One goal of this effort is to develop an overlay standard that
> > works over L3, but we do not want to restrict ourselves only to "over
> > L3". The framework and architecture that we are proposing to work on
> > should be applicable to other overlays as well (e.g., L2 over
> > L2). This is (hopefully) captured in the proposed charter.
> >
> > Comments?
> >
> > Thomas
> >
> > NVO: Network Virtualization Overlays
> >
> > Support for multi-tenancy has become a core requirement of data
> > centers, especially in the context of data centers which include
> > virtualized servers known as virtual machines (VMs).  With
> > multi-tenancy, a data center can support the needs of many thousands
> > of individual tenants, ranging from individual groups or departments
> > within a single organization all the way up to supporting thousands of
> > individual customers.  A key multi-tenancy requirement is traffic
> > isolation, so that a tenant's traffic (and internal address usage) is
> > not visible to any other tenant and does not collide with addresses
> > used within the data center itself.  Such isolation can be achieved by
> > creating and assigning one or more virtual networks to each tenant
> > such that traffic within a virtual network is isolated from traffic in
> > other virtual networks.
> >
> > Tenant isolation is primarily achieved today within data centers using
> > Ethernet VLANs. But the 12-bit VLAN tag field isn't large enough to
> > support existing and future needs. A number of approaches to extending
> > VLANs and scaling L2s have been proposed or developed, including IEEE
> > 802.1aq Shortest Path Bridging (SPB) and TRILL (with the proposed
> > fine-grained labeling extension).  At the L3 (IP) level, VXLAN and
> > NVGRE have also been proposed. As outlined in
> > draft-narten-nvo3-overlay-problem-statement-01.txt, however, existing
> > L2 approaches are not satisfactory for all data center operators,
> > e.g., larger data centers that desire to keep L2 domains small or push
> > L3 further into the data center (e.g., all the way to top-of-rack
> > switches). Furthermore, there is a desire to decouple the
> > configuration of the data center network from the configuration
> > associated with individual tenant applications and to seamlessly and
> > rapidly update the network state to handle live VM migrations or fast
> > spin-up and spin-down of new tenant VMs (or servers). Such tasks are
> > complicated by the need to simultaneously reconfigure and update data
> > center network state (e.g., VLAN settings on individual switches).
> >
> > This WG will develop an approach to multi-tenancy that does not rely
> > on any underlying L2 mechanisms to support multi-tenancy. In
> > particular, the WG will develop an approach where multitenancy is
> > provided at the IP layer using an encapsulation header that resides
> > above IP. This effort is explicitly intended to leverage the interest
> > in L3 overlay approaches as exemplified by VXLAN
> > (draft-mahalingam-dutt-dcops-vxlan-00.txt) and NVGRE
> > (draft-sridharan-virtualization-nvgre-00.txt).
> >
> > Overlays are a form of "map and encap", where an ingress node maps the
> > destination address of an arriving packet (e.g., from a source tenant
> > VM) into the address of an egress node to which the packet can be
> > tunneled. The ingress node then encapsulates the packet in an outer
> > header and tunnels it to the egress node, which decapsulates the
> > packet and forwards the original (unmodified) packet to its ultimate
> > destination (e.g., a destination tenant VM). All map-and-encap
> > approaches must address two issues: the encapsulation format (i.e.,
> > the contents of the outer header) and how to distribute and manage the
> > mapping tables used by the tunnel end points.
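
As a concrete (and purely illustrative) reading of the map-and-encap step
described above, here is a minimal Python sketch; the mapping-table layout,
names, and outer-header fields are hypothetical and are not taken from the
VXLAN or NVGRE drafts:

    # Hypothetical ingress-side map-and-encap, for illustration only.
    # Mapping table: (virtual network ID, inner destination address) -> egress node IP.
    mapping_table = {
        (5001, "00:11:22:33:44:55"): "192.0.2.10",
    }

    def map_and_encap(vn_id, inner_dst, inner_frame):
        egress_ip = mapping_table.get((vn_id, inner_dst))
        if egress_ip is None:
            # No mapping yet: flood, query the control plane, or drop.
            return None
        # Encapsulate the unmodified inner frame in an outer header and
        # tunnel it to the egress node, which decapsulates and forwards it.
        outer_header = {"outer_dst_ip": egress_ip, "vn_id": vn_id}
        return (outer_header, inner_frame)
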
> >
> > The first area of work concerns encapsulation formats. This WG will
> > develop requirements and desirable properties for any encapsulation
> > format. Given the number of already existing encapsulation formats,
> > it is not an explicit goal of this effort to choose exactly one format
> > or to develop yet another new one.
> >
> > A second work area is in the control plane, which allows an ingress
> > node to map the "inner" (tenant VM) address into an "outer"
> > (underlying transport network) address in order to tunnel a packet
> > across the data center. We propose to develop two control planes. One
> > control plane will use a learning mechanism similar to IEEE 802.1D
> > learning, and could be appropriate for smaller data centers. A second,
> > more scalable control plane would be aimed at large sites, capable of
> > scaling to hundreds of thousands of nodes. Both control planes will
> > need to handle the case of VMs moving around the network in a dynamic
> > fashion, meaning that they will need to support tunnel endpoints
> > registering and deregistering mappings as VMs change location and
> > ensuring that out-of-date mapping tables are only used for short
> > periods of time. Finally, the second control plane must also be
> > applicable to geographically dispersed data centers.
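
A minimal sketch of the registration, deregistration, and age-out behavior
such a control plane would have to support as VMs move follows; all names
and data structures here are hypothetical:

    import time

    # Hypothetical mapping store:
    # (virtual network ID, inner address) -> (egress node IP, time registered).
    mappings = {}
    MAX_AGE = 30.0  # seconds; bounds how long an out-of-date mapping can be used

    def register(vn_id, inner_addr, egress_ip):
        mappings[(vn_id, inner_addr)] = (egress_ip, time.time())

    def deregister(vn_id, inner_addr):
        mappings.pop((vn_id, inner_addr), None)

    def lookup(vn_id, inner_addr):
        entry = mappings.get((vn_id, inner_addr))
        if entry is None:
            return None
        egress_ip, registered_at = entry
        if time.time() - registered_at > MAX_AGE:
            del mappings[(vn_id, inner_addr)]  # age out a stale mapping
            return None
        return egress_ip
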
> >
> > Although a key objective of this WG is to produce a solution that
> > supports an L2 over L3 overlay, an important goal is to develop a
> > "layer agnostic" framework and architecture, so that any specific
> > overlay approach can reuse the output of this working group. For
> > example, there is no inherent reason why the same framework could not
> > be used to provide for L2 over L2 or L3 over L3. The main difference
> > would be in the address formats of the inner and outer headers and the
> > encapsulation header itself.
> >
> > Finally, some work may be needed in connecting an overlay network with
> > traditional L2 or L3 VPNs (e.g., VPLS). One approach appears
> > straightforward, in that there is a clear boundary between a VPN device and
> > the edge of an overlay network. Packets forwarded across the boundary
> > would simply need to have the tenant identifier on the overlay side
> > mapped into a corresponding VPN identifier on the VPN
> > side. Conceptually, this would appear to be analogous to what is done
> > already today when interfacing between L2 VLANs and VPNs.
> >
> > The specific deliverables for this group include:
> >
> > 1) Finalize and publish the overall problem statement as an
> > Informational RFC (basis:
> > draft-narten-nvo3-overlay-problem-statement-01.txt)
> >
> > 2) Develop requirements and desirable properties for any encapsulation
> > format, and identify suitable encapsulations. Given the number of
> > already existing encapsulation formats, it is not an explicit goal of
> > this effort to choose exactly one format or to develop a new one.
> >
> > 3) Produce a Standards Track control plane document that specifies how
> > to build mapping tables using a "learning" approach. This document is
> > expected to be short, as the algorithm itself will use a mechanism
> > similar to IEEE 802.1D learning.
> >
> > 4) Develop requirements (and later a Standards Track protocol) for a
> > more scalable control plane for managing and distributing the mappings
> > of "inner" to "outer" addresses. We will develop a reusable framework
> > suitable for use by any mapping function in which there is a need to
> > map "inner" to outer addresses. Starting point:
> > draft-kreeger-nvo3-overlay-cp-00.txt
> >
> > _______________________________________________
> > nvo3 mailing list
> > nvo3@ietf.org
> > https://www.ietf.org/mailman/listinfo/nvo3
> _______________________________________________
> nvo3 mailing list
> nvo3@ietf.org
> https://www.ietf.org/mailman/listinfo/nvo3