[nvo3] Draft NVO3 WG Charter

Thomas Narten <narten@us.ibm.com> Fri, 17 February 2012 14:53 UTC

Below is a draft charter for this effort. One detail is that we
started out calling this effort NVO3 (Network Virtualization Over L3),
but have subsequently realized that we should not focus on just "over
L3". One goal of this effort is to develop an overlay standard that
works over L3, but we do not want to restrict ourselves only to "over
L3". The framework and architecture that we are proposing to work on
should be applicable to other overlays as well (e.g., L2 over
L2). This is (hopefully) captured in the proposed charter.

Comments?

Thomas

NVO: Network Virtualization Overlays 

Support for multi-tenancy has become a core requirement of data
centers, especially in the context of data centers which include
virtualized servers known as virtual machines (VMs).  With
multi-tenancy, a data center can support the needs of many thousands
of individual tenants, ranging from individual groups or departments
within a single organization all the way up to supporting thousands of
individual customers.  A key multi-tenancy requirement is traffic
isolation, so that a tenant's traffic (and internal address usage) is
not visible to any other tenant and does not collide with addresses
used within the data center itself.  Such isolation can be achieved by
creating and assigning one or more virtual networks to each tenant
such that traffic within a virtual network is isolated from traffic in
other virtual networks.

Tenant isolation is primarily achieved today within data centers using
Ethernet VLANs, but the 12-bit VLAN tag field is not large enough to
support existing and future needs. A number of approaches to extending
VLANs and scaling L2 networks have been proposed or developed,
including IEEE 802.1aq Shortest Path Bridging (SPB) and TRILL (with
the proposed fine-grained labeling extension).  At the L3 (IP) level,
VXLAN and
NVGRE have also been proposed. As outlined in
draft-narten-nvo3-overlay-problem-statement-01.txt, however, existing
L2 approaches are not satisfactory for all data center operators,
e.g., larger data centers that desire to keep L2 domains small or push
L3 further into the data center (e.g., all the way to top-of-rack
switches). Furthermore, there is a desire to decouple the
configuration of the data center network from the configuration
associated with individual tenant applications and to seamlessly and
rapidly update the network state to handle live VM migrations or fast
spin-up and spin-down of new tenant VMs (or servers). Such tasks are
complicated by the need to simultaneously reconfigure and update data
center network state (e.g., VLAN settings on individual switches).

This WG will develop an approach to multi-tenancy that does not rely
on any underlying L2 mechanisms. In particular, the WG will develop an
approach in which multi-tenancy is provided at the IP layer, using an
encapsulation header that resides above IP. This effort is explicitly
intended to leverage the interest
in L3 overlay approaches as exemplified by VXLAN
(draft-mahalingam-dutt-dcops-vxlan-00.txt) and NVGRE
(draft-sridharan-virtualization-nvgre-00.txt).

Overlays are a form of "map and encap", where an ingress node maps the
destination address of an arriving packet (e.g., from a source tenant
VM) into the address of an egress node to which the packet can be
tunneled. The ingress node then encapsulates the packet in an outer
header and tunnels it to the egress node, which decapsulates the
packet and forwards the original (unmodified) packet to its ultimate
destination (e.g., a destination tenant VM). All map-and-encap
approaches must address two issues: the encapsulation format (i.e.,
the contents of the outer header) and how to distribute and manage the
mapping tables used by the tunnel end points.
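As a rough illustration of the two steps just described (not a proposed API; all names and the 4-byte outer header are hypothetical), the ingress side of any map-and-encap scheme reduces to a table lookup followed by encapsulation:

```python
# Hypothetical sketch of ingress-side map-and-encap; the names and the
# 4-byte outer header are illustrative only, not a proposed format.
import struct

# Mapping table: inner (tenant VM) address -> outer (egress node) address.
mapping_table = {
    "10.0.1.5": "192.0.2.10",   # tenant VM reachable via egress 192.0.2.10
    "10.0.1.6": "192.0.2.11",
}

def encapsulate(inner_packet: bytes, vn_id: int) -> bytes:
    """Prepend an illustrative outer header carrying a 24-bit VN ID."""
    return struct.pack("!I", vn_id & 0xFFFFFF) + inner_packet

def ingress_forward(inner_dst: str, inner_packet: bytes, vn_id: int):
    """Map the inner destination to an egress node, then encapsulate."""
    egress = mapping_table[inner_dst]                # the "map" step
    return egress, encapsulate(inner_packet, vn_id)  # the "encap" step

# The egress node strips the outer header and forwards the original,
# unmodified inner packet to the destination tenant VM.
```

The egress side is the mirror image: decapsulate and deliver, with no state beyond the same mapping table.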

The first area of work concerns encapsulation formats. This WG will
develop requirements and desirable properties for any encapsulation
format. Given the number of already existing encapsulation formats,
it is not an explicit goal of this effort to choose exactly one format
or to develop yet another new one.
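For concreteness, the VXLAN draft cited above defines an 8-byte outer header whose main payload is a 24-bit virtual network identifier (VNI). A simplified sketch of packing and unpacking such a header (only the VNI-valid flag is handled; everything else is reserved):

```python
# Simplified sketch of a VXLAN-style 8-byte header: 8 flag bits (with
# the VNI-valid "I" bit set), 24 reserved bits, a 24-bit VNI, and 8
# reserved bits. Illustrative only; see the VXLAN draft for the
# authoritative layout.
import struct

def pack_vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags byte (0x08 = VNI-valid) followed by 24 reserved bits.
    # Word 2: VNI in the top 24 bits, trailing reserved byte zero.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def unpack_vxlan_header(header: bytes) -> int:
    word1, word2 = struct.unpack("!II", header[:8])
    if not (word1 >> 24) & 0x08:
        raise ValueError("VNI-valid flag not set")
    return word2 >> 8  # drop the trailing reserved byte
```

The point of the exercise is how little the format carries: essentially just the virtual network identifier, which is why several competing formats can coexist.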

A second work area is in the control plane, which allows an ingress
node to map the "inner" (tenant VM) address into an "outer"
(underlying transport network) address in order to tunnel a packet
across the data center. We propose to develop two control planes. One
control plane will use a learning mechanism similar to IEEE 802.1D
learning, and could be appropriate for smaller data centers. A second,
more scalable control plane would be aimed at large sites, capable of
scaling to hundreds of thousands of nodes. Both control planes will
need to handle the case of VMs moving around the network in a dynamic
fashion, meaning that they will need to support tunnel endpoints
registering and deregistering mappings as VMs change location, and to
ensure that out-of-date mapping entries are used only for short
periods of time. Finally, the second control plane must also be
applicable to geographically dispersed data centers.
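The two behaviors the charter asks of any control plane can be sketched in a few lines (a hypothetical, illustrative API, not a protocol proposal): passive learning with aging on one hand, and explicit registration and deregistration as VMs move on the other.

```python
# Illustrative sketch (hypothetical API) of the mapping-table behaviors
# described above: 802.1D-style learning with aging, plus explicit
# register/deregister as VMs change location.
import time

class MappingTable:
    def __init__(self, max_age: float = 300.0):
        self.entries = {}        # inner addr -> (outer addr, timestamp)
        self.max_age = max_age   # bounds how long stale state is usable

    def learn(self, inner: str, outer: str) -> None:
        """Learning approach: record source mappings seen in traffic."""
        self.entries[inner] = (outer, time.monotonic())

    def register(self, inner: str, outer: str) -> None:
        """Scalable approach: a tunnel endpoint explicitly registers."""
        self.entries[inner] = (outer, time.monotonic())

    def deregister(self, inner: str) -> None:
        """Remove a mapping when a VM leaves this endpoint."""
        self.entries.pop(inner, None)

    def lookup(self, inner: str):
        """Return the outer address, treating aged-out entries as gone."""
        entry = self.entries.get(inner)
        if entry is None:
            return None
        outer, stamp = entry
        if time.monotonic() - stamp > self.max_age:
            del self.entries[inner]
            return None
        return outer
```

The difference between the two proposed control planes lies less in this table than in how its contents are distributed: flooding-and-learning in the small case, an explicit registration protocol in the scalable case.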

Although a key objective of this WG is to produce a solution that
supports an L2 over L3 overlay, an important goal is to develop a
"layer agnostic" framework and architecture, so that any specific
overlay approach can reuse the output of this working group. For
example, there is no inherent reason why the same framework could not
be used to provide for L2 over L2 or L3 over L3. The main difference
would be in the address formats of the inner and outer headers and the
encapsulation header itself.

Finally, some work may be needed in connecting an overlay network with
traditional L2 or L3 VPNs (e.g., VPLS). One approach appears
straightforward, in that there is a clear boundary between a VPN
device and
the edge of an overlay network. Packets forwarded across the boundary
would simply need to have the tenant identifier on the overlay side
mapped into a corresponding VPN identifier on the VPN
side. Conceptually, this would appear to be analogous to what is done
already today when interfacing between L2 VLANs and VPNs.
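In other words, the gateway between the two worlds needs only a static binding table; a minimal sketch (the tenant-ID/VPN-ID values are invented for illustration):

```python
# Hypothetical sketch of the overlay/VPN boundary described above: the
# payload crosses unmodified, and only the tenant identifier is
# translated to the corresponding VPN identifier. All IDs are invented.
overlay_to_vpn = {
    1001: "vpn-red",    # illustrative tenant-ID -> VPN-ID bindings
    1002: "vpn-blue",
}

def to_vpn_side(tenant_id: int, payload: bytes):
    """Forward a packet from the overlay side to the VPN side."""
    vpn_id = overlay_to_vpn[tenant_id]  # the only mapping state needed
    return vpn_id, payload              # payload forwarded unmodified
```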

The specific deliverables for this group include:

1) Finalize and publish the overall problem statement as an
Informational RFC (basis:
draft-narten-nvo3-overlay-problem-statement-01.txt)

2) Develop requirements and desirable properties for any encapsulation
format, and identify suitable encapsulations. Given the number of
already existing encapsulation formats, it is not an explicit goal of
this effort to choose exactly one format or to develop a new one.

3) Produce a Standards Track control plane document that specifies how
to build mapping tables using a "learning" approach. This document is
expected to be short, as the algorithm itself will use a mechanism
similar to IEEE 802.1D learning.

4) Develop requirements (and later a Standards Track protocol) for a
more scalable control plane for managing and distributing the mappings
of "inner" to "outer" addresses. We will develop a reusable framework
suitable for use by any mapping function in which there is a need to
map "inner" to "outer" addresses. Starting point:
draft-kreeger-nvo3-overlay-cp-00.txt