Re: [nvo3] Draft NVO3 WG Charter

Paul Unbehagen <paul@unbehagen.net> Fri, 17 February 2012 23:58 UTC

From: Paul Unbehagen <paul@unbehagen.net>
Date: Fri, 17 Feb 2012 16:58:29 -0700
To: Igor Gashinsky <igor@yahoo-inc.com>
Cc: Thomas Narten <narten@us.ibm.com>, "nvo3@ietf.org" <nvo3@ietf.org>
Subject: Re: [nvo3] Draft NVO3 WG Charter

L2-over-L2 overlays already exist in some DCs via SPBM.

--
Paul Unbehagen


Sent from my iPhone

On Feb 17, 2012, at 4:26 PM, Igor Gashinsky <igor@yahoo-inc.com> wrote:

> I think this is a great charter, but do we really need to bother with
> L2-over-L2 overlays here? Does anybody actually plan on building
> something like that?
> 
> -igor
> 
> On Fri, 17 Feb 2012, Thomas Narten wrote:
> 
> :: Below is a draft charter for this effort. One detail is that we
> :: started out calling this effort NVO3 (Network Virtualization Over L3),
> :: but have subsequently realized that we should not focus on just "over
> :: L3". One goal of this effort is to develop an overlay standard that
> :: works over L3, but we do not want to restrict ourselves only to "over
> :: L3". The framework and architecture that we are proposing to work on
> :: should be applicable to other overlays as well (e.g., L2 over
> :: L2). This is (hopefully) captured in the proposed charter.
> :: 
> :: Comments?
> :: 
> :: Thomas
> :: 
> :: NVO: Network Virtualization Overlays 
> :: 
> :: Support for multi-tenancy has become a core requirement of data
> :: centers, especially in the context of data centers which include
> :: virtualized servers known as virtual machines (VMs).  With
> :: multi-tenancy, a data center can support the needs of many thousands
> :: of individual tenants, ranging from individual groups or departments
> :: within a single organization all the way up to supporting thousands of
> :: individual customers.  A key multi-tenancy requirement is traffic
> :: isolation, so that a tenant's traffic (and internal address usage) is
> :: not visible to any other tenant and does not collide with addresses
> :: used within the data center itself.  Such isolation can be achieved by
> :: creating and assigning one or more virtual networks to each tenant
> :: such that traffic within a virtual network is isolated from traffic in
> :: other virtual networks.
> :: 
> :: Tenant isolation is primarily achieved today within data centers using
> :: Ethernet VLANs. But the 12-bit VLAN tag field isn't large enough to
> :: support existing and future needs. A number of approaches to extending
> :: VLANs and scaling L2 networks have been proposed or developed, including
> :: IEEE 802.1aq Shortest Path Bridging (SPB) and TRILL (with the proposed
> :: fine-grained labeling extension). At the L3 (IP) level, VXLAN and
> :: NVGRE have also been proposed. As outlined in
> :: draft-narten-nvo3-overlay-problem-statement-01.txt, however, existing
> :: L2 approaches are not satisfactory for all data center operators,
> :: e.g., larger data centers that desire to keep L2 domains small or push
> :: L3 further into the data center (e.g., all the way to top-of-rack
> :: switches). Furthermore, there is a desire to decouple the
> :: configuration of the data center network from the configuration
> :: associated with individual tenant applications and to seamlessly and
> :: rapidly update the network state to handle live VM migrations or fast
> :: spin-up and spin-down of new tenant VMs (or servers). Such tasks are
> :: complicated by the need to simultaneously reconfigure and update data
> :: center network state (e.g., VLAN settings on individual switches).
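
To put the scale gap in numbers (the 24-bit figure assumes a
VXLAN/NVGRE-style virtual network identifier), a quick sketch in Python:

    # 12-bit VLAN tag vs. a 24-bit virtual network identifier (VNI).
    vlan_ids = 2 ** 12        # 4,096 possible VLAN IDs (a few are reserved)
    vni_ids = 2 ** 24         # 16,777,216 possible virtual networks
    print(vlan_ids, vni_ids)  # 4096 16777216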
> :: 
> :: This WG will develop an approach to multi-tenancy that does not rely
> :: on any underlying L2 mechanisms. In particular, the WG will develop an
> :: approach where multi-tenancy is
> :: provided at the IP layer using an encapsulation header that resides
> :: above IP. This effort is explicitly intended to leverage the interest
> :: in L3 overlay approaches as exemplified by VXLAN
> :: (draft-mahalingam-dutt-dcops-vxlan-00.txt) and NVGRE
> :: (draft-sridharan-virtualization-nvgre-00.txt).
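
As a rough illustration of "an encapsulation header that resides above
IP", the sketch below packs a VXLAN-style 8-byte header carrying a
24-bit VNI. The layout follows the general shape described in the VXLAN
draft, but treat it as illustrative rather than a wire-format reference:

    import struct

    def vxlan_style_header(vni: int) -> bytes:
        """Pack an 8-byte VXLAN-style header: a flags byte with the
        'VNI present' bit set, reserved bits zero, and a 24-bit VNI."""
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        flags_word = 0x08 << 24   # I flag set; remaining bits reserved
        vni_word = vni << 8       # VNI in the upper 24 bits; low byte reserved
        return struct.pack(">II", flags_word, vni_word)

    # The outer IP/UDP headers (addressed to the egress tunnel endpoint)
    # would be prepended by the ingress node before transmission.
    print(vxlan_style_header(5000).hex())  # 0800000000138800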
> :: 
> :: Overlays are a form of "map and encap", where an ingress node maps the
> :: destination address of an arriving packet (e.g., from a source tenant
> :: VM) into the address of an egress node to which the packet can be
> :: tunneled. The ingress node then encapsulates the packet in an outer
> :: header and tunnels it to the egress node, which decapsulates the
> :: packet and forwards the original (unmodified) packet to its ultimate
> :: destination (e.g., a destination tenant VM). All map-and-encap
> :: approaches must address two issues: the encapsulation format (i.e.,
> :: the contents of the outer header) and how to distribute and manage the
> :: mapping tables used by the tunnel end points.
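
A minimal sketch of that map-and-encap data path, assuming a simple
in-memory mapping table and hypothetical send/deliver callbacks (none
of these names come from any draft):

    # (virtual network, inner VM address) -> outer tunnel endpoint address
    mapping_table = {
        ("vn-100", "10.1.1.5"): "192.0.2.20",
    }

    def ingress(vn_id, inner_dst, inner_packet, underlay_send):
        """Map the inner destination to an egress endpoint and tunnel to it."""
        egress_addr = mapping_table[(vn_id, inner_dst)]
        outer_packet = {"outer_dst": egress_addr, "vn_id": vn_id,
                        "inner": inner_packet}
        underlay_send(egress_addr, outer_packet)

    def egress(outer_packet, deliver_to_vm):
        """Strip the outer header; the inner packet is forwarded unmodified."""
        deliver_to_vm(outer_packet["vn_id"], outer_packet["inner"])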
> :: 
> :: The first area of work concerns encapsulation formats. This WG will
> :: develop requirements and desirable properties for any encapsulation
> :: format. Given the number of already existing encapsulation formats,
> :: it is not an explicit goal of this effort to choose exactly one format
> :: or to develop yet another new one.
> :: 
> :: A second work area is in the control plane, which allows an ingress
> :: node to map the "inner" (tenant VM) address into an "outer"
> :: (underlying transport network) address in order to tunnel a packet
> :: across the data center. We propose to develop two control planes. One
> :: control plane will use a learning mechanism similar to IEEE 802.1D
> :: learning, and could be appropriate for smaller data centers. A second,
> :: more scalable control plane would be aimed at large sites, capable of
> :: scaling to hundreds of thousands of nodes. Both control planes will
> :: need to handle the case of VMs moving around the network in a dynamic
> :: fashion, meaning that they will need to support tunnel endpoints
> :: registering and deregistering mappings as VMs change location, and to
> :: ensure that out-of-date mapping tables are used only for short
> :: periods of time. Finally, the second control plane must also be
> :: applicable to geographically dispersed data centers.
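
A sketch of the first ("learning") flavour of control plane, assuming
802.1D-like behaviour: mappings are learned from decapsulated traffic
and aged out, so a stale entry is only used for a short period after a
VM moves (the timeout value is arbitrary):

    import time

    class LearningMappingTable:
        def __init__(self, max_age_seconds=300.0):
            self.max_age = max_age_seconds
            self.entries = {}  # (vn_id, inner_addr) -> (outer_addr, last_seen)

        def learn(self, vn_id, inner_src, outer_src):
            # Called on decapsulation: the inner source address is reachable
            # via the tunnel endpoint the packet arrived from.
            self.entries[(vn_id, inner_src)] = (outer_src, time.monotonic())

        def lookup(self, vn_id, inner_dst):
            entry = self.entries.get((vn_id, inner_dst))
            if entry is None:
                return None  # unknown: flood or query, depending on design
            outer_addr, last_seen = entry
            if time.monotonic() - last_seen > self.max_age:
                del self.entries[(vn_id, inner_dst)]  # aged out (e.g., VM moved)
                return None
            return outer_addr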
> :: 
> :: Although a key objective of this WG is to produce a solution that
> :: supports an L2 over L3 overlay, an important goal is to develop a
> :: "layer agnostic" framework and architecture, so that any specific
> :: overlay approach can reuse the output of this working group. For
> :: example, there is no inherent reason why the same framework could not
> :: be used to provide for L2 over L2 or L3 over L3. The main difference
> :: would be in the address formats of the inner and outer headers and the
> :: encapsulation header itself.
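
One way to read "layer agnostic": the mapping machinery only needs
opaque inner and outer address types, so the same framework can cover
L2 over L3, L2 over L2, or L3 over L3 just by swapping address formats.
A sketch (type and class names are made up):

    from typing import Generic, TypeVar

    InnerAddr = TypeVar("InnerAddr")  # e.g., tenant MAC or tenant IP address
    OuterAddr = TypeVar("OuterAddr")  # e.g., underlay IP or backbone MAC

    class OverlayMap(Generic[InnerAddr, OuterAddr]):
        """Mapping table indifferent to what the addresses actually are."""

        def __init__(self):
            self._table = {}  # (vn_id, inner) -> outer

        def register(self, vn_id: str, inner: InnerAddr, outer: OuterAddr):
            self._table[(vn_id, inner)] = outer

        def resolve(self, vn_id: str, inner: InnerAddr):
            return self._table.get((vn_id, inner))

    # L2 over L3: OverlayMap keyed by MAC addresses, resolving to VTEP IPs.
    # L3 over L3: the same class keyed by tenant IPs, resolving to router IPs.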
> :: 
> :: Finally, some work may be needed in connecting an overlay network with
> :: traditional L2 or L3 VPNs (e.g., VPLS). One approach appears
> :: straightforward, in that there is a clear boundary between a VPN device and
> :: the edge of an overlay network. Packets forwarded across the boundary
> :: would simply need to have the tenant identifier on the overlay side
> :: mapped into a corresponding VPN identifier on the VPN
> :: side. Conceptually, this would appear to be analogous to what is done
> :: already today when interfacing between L2 VLANs and VPNs.
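
The gateway function described here amounts to a one-to-one identifier
translation at the boundary; a toy sketch with made-up identifiers:

    # Overlay virtual-network identifier <-> VPN identifier (e.g., a VPLS
    # instance), configured per gateway.
    overlay_to_vpn = {"vn-100": "vpls-42", "vn-200": "vpls-77"}
    vpn_to_overlay = {v: k for k, v in overlay_to_vpn.items()}

    def to_vpn_side(vn_id, inner_frame):
        # Only the identifier changes; the tenant frame itself is untouched.
        return {"vpn_id": overlay_to_vpn[vn_id], "frame": inner_frame}

    def to_overlay_side(vpn_id, inner_frame):
        return {"vn_id": vpn_to_overlay[vpn_id], "frame": inner_frame}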
> :: 
> :: The specific deliverables for this group include:
> :: 
> :: 1) Finalize and publish the overall problem statement as an
> :: Informational RFC (basis:
> :: draft-narten-nvo3-overlay-problem-statement-01.txt)
> :: 
> :: 2) Develop requirements and desirable properties for any encapsulation
> :: format, and identify suitable encapsulations. Given the number of
> :: already existing encapsulation formats, it is not an explicit goal of
> :: this effort to choose exactly one format or to develop a new one.
> :: 
> :: 3) Produce a Standards Track control plane document that specifies how
> :: to build mapping tables using a "learning" approach. This document is
> :: expected to be short, as the algorithm itself will use a mechanism
> :: similar to IEEE 802.1D learning.
> :: 
> :: 4) Develop requirements (and later a Standards Track protocol) for a
> :: more scalable control plane for managing and distributing the mappings
> :: of "inner" to "outer" addresses. We will develop a reusable framework
> :: suitable for use by any mapping function in which there is a need to
> :: map "inner" to outer addresses. Starting point:
> :: draft-kreeger-nvo3-overlay-cp-00.txt
> :: 
> :: 
> 
> --------------------+----------------------+------------------
>   Igor Gashinsky   | Network Architecture | Yahoo! Inc.
> igor@yahoo-inc.com |  cell 917.807.2213   | Do You... Yahoo?
> --------------------+----------------------+------------------