Re: [nvo3] inter-CUG traffic [was Re: call for adoption: draft-narten-nvo3-overlay-problem-statement-02]

"Stiliadis, Dimitrios (Dimitri)" <dimitri.stiliadis@alcatel-lucent.com> Sat, 30 June 2012 03:53 UTC

From: "Stiliadis, Dimitrios (Dimitri)" <dimitri.stiliadis@alcatel-lucent.com>
To: Xuxiaohu <xuxiaohu@huawei.com>, Pedro Roque Marques <pedro.r.marques@gmail.com>, "Joel M. Halpern" <jmh@joelhalpern.com>
Date: Fri, 29 Jun 2012 22:53:09 -0500
Thread-Topic: [nvo3] inter-CUG traffic [was Re: call for adoption: draft-narten-nvo3-overlay-problem-statement-02]
Thread-Index: AQHNVlcPezjsL4jLvU6tLh467m22+ZcRj/iAgACXweCAABGhUA==
Message-ID: <F5EF891E30B2AE46ACA20EB848689C21253A38C937@USNAVSXCHMBSA3.ndc.alcatel-lucent.com>
References: <3657FA59-508C-4B18-88E8-00109F56A61E@cisco.com> <10DD265B-C45B-4DED-B0C2-8A642D3C32F5@gmail.com> <201206292156.q5TLu0cn020234@cichlid.raleigh.ibm.com> <8D3D17ACE214DC429325B2B98F3AE71208D3A9F0@MX15A.corp.emc.com> <B38441C5-A193-4531-BCF3-B27F771D15A9@gmail.com> <8D3D17ACE214DC429325B2B98F3AE71208D3A9F9@MX15A.corp.emc.com> <4FEE47C5.9020202@joelhalpern.com> <9DFF0AC1-5C29-4365-BC0B-DFFA0050A6ED@gmail.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE07525282@szxeml525-mbx.china.huawei.com>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE07525282@szxeml525-mbx.china.huawei.com>
Cc: "narten@us.ibm.com" <narten@us.ibm.com>, "david.black@emc.com" <david.black@emc.com>, "nvo3@ietf.org" <nvo3@ietf.org>
Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption: draft-narten-nvo3-overlay-problem-statement-02]

> 
> Still taking the 3-tier app as an example: if the app servers could each
> be configured with two interfaces, one located in the VLAN/subnet of the
> web servers and the other located in the VLAN/subnet of the DB servers,
> the network would only need to care about the optimization of intra-subnet
> traffic (e.g., web<->app intra-subnet traffic and app<->DB intra-subnet
> traffic).


This would violate several security best practices. It would make the VM
an easy entry point into the sensitive business-logic and/or DB networks:
a vulnerability in the VM's OS or applications would let an attacker gain
access to the VM and, from there, to the internal network. That is why
people use DMZs.

A router/firewall has a much smaller code footprint and is most likely
more resilient to such vulnerabilities. So not only does one need L3
isolation, but most often a real firewall between the web servers and
the rest of the application tiers.
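To make the point concrete, here is a minimal sketch (Python, purely
illustrative; the tier names and port numbers are my own assumptions, not
taken from any real deployment) of the default-deny inter-tier policy such
a firewall would enforce between the web tier and the rest of the
application:

```python
# A minimal, illustrative model of an inter-tier firewall policy.
# Tier names and port numbers are hypothetical assumptions.
ALLOWED_FLOWS = {
    ("web", "app", 8080),   # web tier -> business-logic tier, service port
    ("app", "db", 5432),    # business-logic tier -> database tier
}

def permit(src_tier: str, dst_tier: str, dst_port: int) -> bool:
    """Default-deny: forward a flow only if it is explicitly allowed."""
    return (src_tier, dst_tier, dst_port) in ALLOWED_FLOWS

print(permit("web", "app", 8080))  # True  - allowed hop
print(permit("web", "db", 5432))   # False - web must go through the app tier
```

The point is that the web tier can never reach the DB tier directly; a
dual-homed app server would bypass exactly this enforcement point.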

> 
> Of course, if the above demand can't be met for some reason, the network
> should consider the optimization of inter-subnet traffic, and, as has
> been pointed out by someone before, one practical solution is to deploy
> the default gateway functions as close as possible to the servers, e.g.,
> inside the NVEs.
> 
> Best regards,
> Xiaohu
> 
> > -----Original Message-----
> > From: nvo3-bounces@ietf.org [mailto:nvo3-bounces@ietf.org] On Behalf Of
> > Pedro Roque Marques
> > Sent: Saturday, June 30, 2012 9:41 AM
> > To: Joel M. Halpern
> > Cc: narten@us.ibm.com; david.black@emc.com; nvo3@ietf.org
> > Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > draft-narten-nvo3-overlay-problem-statement-02]
> >
> > Joel,
> > A very common model currently is to have a 3-tier app where each tier is
> > in its own VLAN. You will find that web servers, for instance, don't
> > actually talk much to each other… although they are on the same VLAN,
> > 100% of their traffic goes outside the VLAN. A very similar story applies
> > to the app-logic tier. The database tier may have some replication
> > traffic within its VLAN, but hopefully that is less than the requests
> > that it serves.
> >
> > There isn't a whole lot of intra-CUG/subnet traffic under that deployment
> > model. A problem statement that assumes (implicitly) that most or a
> > significant part of the traffic stays local to a VLAN/subnet/CUG is not a
> > good match for the common 3-tier application model. Even if you assume
> > that the web and app tiers use a VLAN/subnet/CUG per tenant (which really
> > is an application in the enterprise), the database is typically common to
> > a large number of apps/tenants.
> >
> >   Pedro.
> >
> > On Jun 29, 2012, at 5:26 PM, Joel M. Halpern wrote:
> >
> > > Depending upon what portion of the traffic needs inter-region handling
> > > (inter-vpn, inter-vlan, ...), it is not obvious that "optimal" is an
> > > important goal. As a general rule, perfect is the enemy of good.
> > >
> > > Yours,
> > > Joel
> > >
> > > On 6/29/2012 7:54 PM, david.black@emc.com wrote:
> > >> Pedro,
> > >>
> > >>> Can you please describe an example of how you could set up such
> > >>> straightforward routing, assuming two Hosts belong to different
> > >>> "CUGs" such that these can be randomly spread across the DC? My
> > >>> question is where is the "gateway", how is it provisioned, and how
> > >>> can traffic paths be guaranteed to be optimal.
> > >>
> > >> Ok, I see your point - the routing functionality is straightforward
> > >> to move over, but ensuring optimal pathing is significantly more
> > >> work, as noted in another one of your messages:
> > >>
> > >>> Conceptually, that means that the functionality of the "gateway"
> > >>> should be implemented at the overlay ingress and egress points,
> > >>> rather than requiring a mid-box.
> > >>
> > >> Thanks,
> > >> --David
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Pedro Roque Marques [mailto:pedro.r.marques@gmail.com]
> > >>> Sent: Friday, June 29, 2012 7:38 PM
> > >>> To: Black, David
> > >>> Cc: narten@us.ibm.com; nvo3@ietf.org
> > >>> Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > >>> draft-narten-nvo3-overlay-problem-statement-02]
> > >>>
> > >>>
> > >>> On Jun 29, 2012, at 4:02 PM, <david.black@emc.com> wrote:
> > >>>
> > >>>>> There is an underlying assumption in NVO3 that isolating tenants
> > >>>>> from each other is a key reason to use overlays. If 90% of the
> > >>>>> traffic is actually between different tenants, it is not
> > >>>>> immediately clear to me why one has set up a system with a lot of
> > >>>>> "inter-tenant" traffic. Is this a case we need to focus on
> > >>>>> optimizing?
> > >>>>
> > >>>> A single tenant may have multiple virtual networks, with routing
> > >>>> used to provide/control access among them.  The crucial thing is to
> > >>>> avoid assuming that a tenant or other administrative entity has a
> > >>>> single virtual network (or CUG in Pedro's email).  For example,
> > >>>> consider moving a portion of a single data center that uses
> > >>>> multiple VLANs and routers to selectively connect them into an nvo3
> > >>>> environment - each VLAN gets turned into a virtual network, and the
> > >>>> routers now route among virtual networks instead of VLANs.
> > >>>>
> > >>>> One of the things that's been pointed out to me in private is that
> > >>>> the level of importance that one places on routing across virtual
> > >>>> networks may depend on one's background.  If one is familiar with
> > >>>> VLANs and views nvo3 overlays as providing VLAN-like functionality,
> > >>>> IP routing among virtual networks is a straightforward application
> > >>>> of IP routing among VLANs (e.g., the previous mention of L2/L3 IRB
> > >>>> functionality that is common in data center network switches).
> > >>>
> > >>> Can you please describe an example of how you could set up such
> > >>> straightforward routing, assuming two Hosts belong to different
> > >>> "CUGs" such that these can be randomly spread across the DC? My
> > >>> question is where is the "gateway", how is it provisioned, and how
> > >>> can traffic paths be guaranteed to be optimal.
> > >>>
> > >>>> OTOH, if one is familiar with VPNs, where access among
> > >>>> otherwise-closed groups has to be explicitly configured
> > >>>> (particularly L3 VPNs, where one cannot look to L2 to help with
> > >>>> grouping the end systems), this sort of cross-group access can be a
> > >>>> significant area of functionality.
> > >>>
> > >>> Considering that in a VPN one can achieve inter-CUG traffic exchange
> > >>> without a gateway in the middle via policy, it is unclear why you
> > >>> suggest that "look to L2" would help.
> > >>>
> > >>>>
> > >>>> Thanks,
> > >>>> --David
> > >>>>
> > >>>>> -----Original Message-----
> > >>>>> From: nvo3-bounces@ietf.org [mailto:nvo3-bounces@ietf.org] On
> > >>>>> Behalf Of Thomas Narten
> > >>>>> Sent: Friday, June 29, 2012 5:56 PM
> > >>>>> To: Pedro Roque Marques
> > >>>>> Cc: nvo3@ietf.org
> > >>>>> Subject: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > >>>>> draft-narten-nvo3-overlay-problem-statement-02]
> > >>>>>
> > >>>>> Pedro Roque Marques <pedro.r.marques@gmail.com> writes:
> > >>>>>
> > >>>>>> I object to the document on the following points:
> > >>>>>>
> > >>>>>> 3) Does not discuss the requirements for inter-CUG traffic.
> > >>>>>
> > >>>>> Given that the problem statement is not supposed to be the
> > >>>>> requirements document, what exactly should the problem statement
> > >>>>> say about this topic?
> > >>>>>
> > >>>>> <david.black@emc.com> writes:
> > >>>>>
> > >>>>>> Inter-VN traffic (what you refer to as inter-CUG traffic) is handled
> > >>>>>> by a straightforward application of IP routing to the inner IP
> > >>>>>> headers; this is similar to the well-understood application of IP
> > >>>>>> routing to forward traffic across VLANs.  We should talk about VRFs
> > >>>>>> as something other than a limitation of current approaches - for
> > >>>>>> VLANs, VRFs (separate instances of routing) are definitely a
> > >>>>>> feature, and I expect this to carry forward to nvo3 VNs.  In
> > >>>>>> addition, we need to make changes to address Dimitri's comments
> > >>>>>> about problems with the current VRF text.
> > >>>>>
> > >>>>> Pedro Roque Marques <pedro.r.marques@gmail.com> writes:
> > >>>>>
> > >>>>>> That is where, again, the differences between different types of
> > >>>>>> data centers do play in. If, for instance, 90% of a VM's traffic
> > >>>>>> happens to be between the Host OS and a network-attached storage
> > >>>>>> file system run as-a-Service (with the appropriate multi-tenant
> > >>>>>> support), then the question of where the routers are becomes a
> > >>>>>> very important issue. In a large-scale data center, where the
> > >>>>>> Host VM and the CPU that hosts the filesystem block can be
> > >>>>>> randomly spread, where is the router?
> > >>>>>
> > >>>>> Where is what router? Are you assuming the Host OS and NAS are in
> > >>>>> different VNs, and hence that traffic has to (at least
> > >>>>> conceptually) exit one VN and re-enter another whenever there is
> > >>>>> Host OS - NAS traffic?
> > >>>>>
> > >>>>>> Is every switch a router? Does it have all the CUGs present?
> > >>>>>
> > >>>>> The underlay can be a mixture of switches and routers... that is not
> > >>>>> our concern. So long as the underlay delivers traffic sourced by an
> > >>>>> ingress NVE to the appropriate egress NVE, we are good.
> > >>>>>
> > >>>>> If there are issues with the actual path taken being suboptimal in
> > >>>>> some sense, that is an underlay problem to solve, not for the overlay.
> > >>>>>
> > >>>>>> In some DC designs the problem to solve is the inter-CUG traffic,
> > >>>>>> with L2 headers being totally irrelevant.
> > >>>>>
> > >>>>> There is an underlying assumption in NVO3 that isolating tenants
> > >>>>> from each other is a key reason to use overlays. If 90% of the
> > >>>>> traffic is actually between different tenants, it is not
> > >>>>> immediately clear to me why one has set up a system with a lot of
> > >>>>> "inter-tenant" traffic. Is this a case we need to focus on
> > >>>>> optimizing?
> > >>>>>
> > >>>>> But in any case, if one does have inter-VN traffic, that will have
> > >>>>> to get funneled through a "gateway" between VNs, at least
> > >>>>> conceptually. I would assume that an implementation of overlays
> > >>>>> would provide at least one, and likely more, such gateways on each
> > >>>>> VN. How many and where to place them will presumably depend on
> > >>>>> many factors, but would be done based on traffic patterns and
> > >>>>> network layout. I would not think every NVE has to provide such
> > >>>>> functionality.
> > >>>>>
> > >>>>> What do you propose needs saying in the problem statement about
> > >>>>> that?
> > >>>>>
> > >>>>> Thomas
> > >>>>>
> > >>>>> _______________________________________________
> > >>>>> nvo3 mailing list
> > >>>>> nvo3@ietf.org
> > >>>>> https://www.ietf.org/mailman/listinfo/nvo3
> > >>>>
> > >>>
> > >>
> > >>
> > >
> >