Re: [nvo3] Draft NVO3 WG Charter

Larry Kreeger <kreeger@cisco.com> Fri, 17 February 2012 22:10 UTC

Return-Path: <kreeger@cisco.com>
X-Original-To: nvo3@ietfa.amsl.com
Delivered-To: nvo3@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id B058A21F865D for <nvo3@ietfa.amsl.com>; Fri, 17 Feb 2012 14:10:45 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -8.532
X-Spam-Level:
X-Spam-Status: No, score=-8.532 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, RCVD_NUMERIC_HELO=2.067]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id OyFoph7AFlQr for <nvo3@ietfa.amsl.com>; Fri, 17 Feb 2012 14:10:44 -0800 (PST)
Received: from rcdn-iport-6.cisco.com (rcdn-iport-6.cisco.com [173.37.86.77]) by ietfa.amsl.com (Postfix) with ESMTP id DA74A21F865C for <nvo3@ietf.org>; Fri, 17 Feb 2012 14:10:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=kreeger@cisco.com; l=9561; q=dns/txt; s=iport; t=1329516644; x=1330726244; h=date:subject:from:to:message-id:in-reply-to:mime-version: content-transfer-encoding; bh=Sa94xB6jxBG3pGcN29LrPYcchw8IZPR/8w71i1kqkDE=; b=M6QDKnZHKjA3KrkeH5FBICqT/AbTEYMtKJCIcfug+e5jsOwo9+rZ6ToY oFW6i1RS4iq5jyw5Kc35YeAHfnFNloRCeWLsJPSIFtmcwLIYtyM8uNPWj iy4z7mbRiDz2+Y/oRz86fjKWMVT6lrcv/VBc/1J/BrBCaXjyvHi3iqy+A s=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Aj8JAMLPPk+tJXHA/2dsb2JhbAA6CrIeAoEHgXUBAQEDAQEBAQ8BJwIBMAEQDQEIElsiDgEBBAESGweHXgmaCwGeUgSJN4JACQwBDgISAgIsAQiDcwgHBgiDLASIToxpjkeEQg
X-IronPort-AV: E=Sophos;i="4.73,440,1325462400"; d="scan'208";a="59886855"
Received: from rcdn-core2-5.cisco.com ([173.37.113.192]) by rcdn-iport-6.cisco.com with ESMTP; 17 Feb 2012 22:10:43 +0000
Received: from xbh-sjc-231.amer.cisco.com (xbh-sjc-231.cisco.com [128.107.191.100]) by rcdn-core2-5.cisco.com (8.14.3/8.14.3) with ESMTP id q1HMAgeS007428; Fri, 17 Feb 2012 22:10:43 GMT
Received: from xmb-sjc-21e.amer.cisco.com ([171.70.151.156]) by xbh-sjc-231.amer.cisco.com with Microsoft SMTPSVC(6.0.3790.4675); Fri, 17 Feb 2012 14:10:42 -0800
Received: from 171.71.13.147 ([171.71.13.147]) by xmb-sjc-21e.amer.cisco.com ([171.70.151.156]) via Exchange Front-End Server email.cisco.com ([128.107.191.87]) with Microsoft Exchange Server HTTP-DAV ; Fri, 17 Feb 2012 22:10:42 +0000
User-Agent: Microsoft-Entourage/12.20.0.090605
Date: Fri, 17 Feb 2012 14:10:41 -0800
From: Larry Kreeger <kreeger@cisco.com>
To: Thomas Narten <narten@us.ibm.com>, nvo3@ietf.org
Message-ID: <CB641061.58AF2%kreeger@cisco.com>
Thread-Topic: [nvo3] Draft NVO3 WG Charter
Thread-Index: AcztwPxXg3naQFX3YEmqK6Vi9/VPsQ==
In-Reply-To: <201202171451.q1HEptR3027370@cichlid.raleigh.ibm.com>
Mime-version: 1.0
Content-type: text/plain; charset="US-ASCII"
Content-transfer-encoding: 7bit
X-OriginalArrivalTime: 17 Feb 2012 22:10:42.0779 (UTC) FILETIME=[FD66FAB0:01CCEDC0]
Subject: Re: [nvo3] Draft NVO3 WG Charter
X-BeenThere: nvo3@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: "L2 \"Network Virtualization Over l3\" overlay discussion list \(nvo3\)" <nvo3.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/nvo3>, <mailto:nvo3-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/nvo3>
List-Post: <mailto:nvo3@ietf.org>
List-Help: <mailto:nvo3-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/nvo3>, <mailto:nvo3-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 17 Feb 2012 22:10:45 -0000

Hi Thomas,

Thanks for getting the ball rolling on this.  I do have a few comments which
are inline below.

Thanks, Larry


On 2/17/12 6:51 AM, "Thomas Narten" <narten@us.ibm.com> wrote:

> Below is a draft charter for this effort. One detail is that we
> started out calling this effort NVO3 (Network Virtualization Over L3),
> but have subsequently realized that we should not focus on just "over
> L3". One goal of this effort is to develop an overlay standard that
> works over L3, but we do not want to restrict ourselves only to "over
> L3". The framework and architecture that we are proposing to work on
> should be applicable to other overlays as well (e.g., L2 over
> L2). This is (hopefully) captured in the proposed charter.
> 
> Comments?
> 
> Thomas
> 
> NVO: Network Virtualization Overlays
> 
> Support for multi-tenancy has become a core requirement of data
> centers, especially in the context of data centers which include
> virtualized servers known as virtual machines (VMs).  With
> multi-tenancy, a data center can support the needs of many thousands
> of individual tenants, ranging from individual groups or departments
> within a single organization all the way up to supporting thousands of
> individual customers.  A key multi-tenancy requirement is traffic
> isolation, so that a tenant's traffic (and internal address usage) is
> not visible to any other tenant and does not collide with addresses
> used within the data center itself.  Such isolation can be achieved by
> creating and assigning one or more virtual networks to each tenant
> such that traffic within a virtual network is isolated from traffic in
> other virtual networks.
> 
> Tenant isolation is primarily achieved today within data centers using
> Ethernet VLANs. But the 12-bit VLAN tag field isn't large enough to
> support existing and future needs. A number of approaches to extending
> VLANs and scaling L2s have been proposed or developed, including IEEE
> 802.1ah Shortest Path Bridging (SPB) and TRILL (with the proposed
> fine-grained labeling extension).  At the L3 (IP) level, VXLAN and
> NVGRE have also been proposed. As outlined in
> draft-narten-nvo3-overlay-problem-statement-01.txt, however, existing
> L2 approaches are not satisfactory for all data center operators,
> e.g., larger data centers that desire to keep L2 domains small or push
> L3 further into the data center (e.g., all the way to top-of-rack
> switches). Furthermore, there is a desire to decouple the
> configuration of the data center network from the configuration
> associated with individual tenant applications and to seamlessly and
> rapidly update the network state to handle live VM migrations or fast
> spin-up and spin-down of new tenant VMs (or servers). Such tasks are
> complicated by the need to simultaneously reconfigure and update data
> center network state (e.g., VLAN settings on individual switches).

Regarding the last two sentences above, I just want to be clear that there
is a difference between data center network "configuration" (typically
meaning manual human action), and dynamic network state that is established
using a protocol exchange.  I bring this up because I don't think the goals
of this effort are to necessarily remove all knowledge/state/visibility of
the virtual networks from the data center networking equipment (although
this could be done).  To me, the goals of the protocols/protocol reqs in
this charter must allow for the ability of the data center networking
equipment to terminate the overlay tunnels on the behalf of attached devices
that either cannot perform the encap/decap function, or if they can, do so
at significantly reduced performance.  These protocols are needed to
eliminate the manual configuration of the data center network equipment as
the virtual network requirements dynamically change.  This ties to my next
comment further below.

> This WG will develop an approach to multi-tenancy that does not rely
> on any underlying L2 mechanisms to support multi-tenancy. In
> particular, the WG will develop an approach where multitenancy is
> provided at the IP layer using an encapsulation header that resides
> above IP. This effort is explicitly intended to leverage the interest
> in L3 overlay approaches as exemplified by VXLAN
> (draft-mahalingam-dutt-dcops-vxlan-00.txt) and NVGRE
> (draft-sridharan-virtualization-nvgre-00.txt).
> 
> Overlays are a form of "map and encap", where an ingress node maps the
> destination address of an arriving packet (e.g., from a source tenant
> VM) into the address of an egress node to which the packet can be
> tunneled to. The ingress node then encapsulates the packet in an outer
> header and tunnels it to the egress node, which decapsulates the
> packet and forwards the original (unmodified) packet to its ultimate
> destination (e.g., a destination tenant VM). All map-and-encap
> approaches must address two issues: the encapsulation format (i.e.,
> the contents of the outer header) and how to distribute and manage the
> mapping tables used by the tunnel end points.
> 
> The first area of work concerns encapsulation formats. This WG will
> develop requirements and desirable properties for any encapsulation
> format. Given the number of already existing encapsulation formats,
> it is not an explicit goal of this effort to choose exactly one format
> or to develop yet another new one.
> 
> A second work area is in the control plane, which allows an ingress
> node to map the "inner" (tenant VM) address into an "outer"
> (underlying transport network) address in order to tunnel a packet
> across the data center. We propose to develop two control planes. One
> control plane will use a learning mechanism similar to IEEE 802.1D
> learning, and could be appropriate for smaller data centers. A second,
> more scalable control plane would be aimed at large sites, capable of
> scaling to hundreds of thousands of nodes. Both control planes will
> need to handle the case of VMs moving around the network in a dynamic
> fashion, meaning that they will need to support tunnel endpoints
> registering and deregistering mappings as VMs change location and
> ensuring that out-of-date mapping tables are only used for short
> periods of time. Finally, the second control plane must also be
> applicable to geographically dispersed data centers.
> 
> Although a key objective of this WG is to produce a solution that
> supports an L2 over L3 overlay, an important goal is to develop a
> "layer agnostic" framework and architecture, so that any specific
> overlay approach can reuse the output of this working group. For
> example, there is no inherent reason why the same framework could not
> be used to provide for L2 over L2 or L3 over L3. The main difference
> would be in the address formats of the inner and outer headers and the
> encapsulation header itself.
> 
> Finally, some work may be needed in connecting an overlay network with
> traditional L2 or L3 VPNs (e.g., VPLS). One approach appears straight
> forward, in that there is a clear boundary between a VPN device and
> the edge of an overlay network. Packets forwarded across the boundary
> would simply need to have the tenant identifier on the overlay side
> mapped into a corresponding VPN identifier on the VPN
> side. Conceptually, this would appear to be analogous to what is done
> already today when interfacing between L2 VLANs and VPNs.
> 
> The specific deliverables for this group include:
> 
> 1) Finalize and publish the overall problem statement as an
> Informational RFC (basis:
> draft-narten-nvo3-overlay-problem-statement-01.txt)
> 
> 2) Develop requirements and desirable properties for any encapsulation
> format, and identify suitable encapsulations. Given the number of
> already existing encapsulation formats, it is not an explicit goal of
> this effort to choose exactly one format or to develop a new one.
> 
> 3) Produce a Standards Track control plane document that specifies how
> to build mapping tables using a "learning" approach. This document is
> expected to be short, as the algorithm itself will use a mechanism
> similar to IEEE 802.1D learning.
> 
> 4) Develop requirements (and later a Standards Track protocol) for a
> more scalable control plane for managing and distributing the mappings
> of "inner" to "outer" addresses. We will develop a reusable framework
> suitable for use by any mapping function in which there is a need to
> map "inner" to outer addresses. Starting point:
> draft-kreeger-nvo3-overlay-cp-00.txt

This starting point draft lists protocol requirements beyond the "inner" to
"outer" address mappings mentioned in point 4 above.  The remainder of the
protocol functions in the referenced draft are important for allowing the
data center networking equipment to be aware of the dynamically changing
virtual network requirements and other information needed to allow the
networking equipment to perform the encap/decap function on the behalf of
the end devices which need access the virtual networks - without resorting
to manual configuration.  I think the other protocol functions mentioned in
the draft should be explicitly called out in the WG charter in addition to
the important "inner" to "outer" mapping protocol.

> _______________________________________________
> nvo3 mailing list
> nvo3@ietf.org
> https://www.ietf.org/mailman/listinfo/nvo3