Re: [armd] soliciting typical network designs for ARMD

"Balus, Florin Stelian (Florin)" <florin.balus@alcatel-lucent.com> Mon, 19 September 2011 23:34 UTC

Return-Path: <florin.balus@alcatel-lucent.com>
X-Original-To: armd@ietfa.amsl.com
Delivered-To: armd@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9DEA921F8B2F for <armd@ietfa.amsl.com>; Mon, 19 Sep 2011 16:34:22 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.599
X-Spam-Level:
X-Spam-Status: No, score=-6.599 tagged_above=-999 required=5 tests=[AWL=0.000, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id K5yFjzD68mYs for <armd@ietfa.amsl.com>; Mon, 19 Sep 2011 16:34:21 -0700 (PDT)
Received: from ihemail4.lucent.com (ihemail4.lucent.com [135.245.0.39]) by ietfa.amsl.com (Postfix) with ESMTP id 8425221F8B2B for <armd@ietf.org>; Mon, 19 Sep 2011 16:34:21 -0700 (PDT)
Received: from usnavsmail2.ndc.alcatel-lucent.com (usnavsmail2.ndc.alcatel-lucent.com [135.3.39.10]) by ihemail4.lucent.com (8.13.8/IER-o) with ESMTP id p8JNaene026924 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Mon, 19 Sep 2011 18:36:41 -0500 (CDT)
Received: from USNAVSXCHHUB01.ndc.alcatel-lucent.com (usnavsxchhub01.ndc.alcatel-lucent.com [135.3.39.110]) by usnavsmail2.ndc.alcatel-lucent.com (8.14.3/8.14.3/GMO) with ESMTP id p8JNadAr018614 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Mon, 19 Sep 2011 18:36:40 -0500
Received: from USNAVSXCHMBSC3.ndc.alcatel-lucent.com ([135.3.39.139]) by USNAVSXCHHUB01.ndc.alcatel-lucent.com ([135.3.39.110]) with mapi; Mon, 19 Sep 2011 18:36:39 -0500
From: "Balus, Florin Stelian (Florin)" <florin.balus@alcatel-lucent.com>
To: Murari Sridharan <muraris@microsoft.com>, "david.black@emc.com" <david.black@emc.com>, "armd@ietf.org" <armd@ietf.org>
Date: Mon, 19 Sep 2011 18:36:37 -0500
Thread-Topic: [armd] soliciting typical network designs for ARMD
Thread-Index: AQHMVt0qWVZI0cjSpkywQ6H6jnPxdJUVhDmAgAOGoACAHuGmgIAYwDCAgAGOU4CAA0PKgA==
Message-ID: <2073A6C5467C99478898544C6EBA3F4602BBFE512F@USNAVSXCHMBSC3.ndc.alcatel-lucent.com>
References: <CAP_bo1b_2D=fbJJ8uGb8LPWb-6+sTQn1Gsh9YAp8pFs3JY_rrw@mail.gmail.com> <CAOyVPHTLYv=-GbjimpDr5NsxMUeWKtVKzStY9yxQO7s4YD2Ywg@mail.gmail.com> <CAP_bo1Ya7p+OS7fS40jE4+UZuhmeO+MAroC=CZK5sMEE625z8Q@mail.gmail.com> <CAOyVPHTcFr7F4ymQyXyECtS6f8z1XyZn40a_5WcpcjF9y0hZvQ@mail.gmail.com> <CA+-tSzx6DGPptGdtx5awzhnPPJgRHow2SWfuwRP4rwjdN1MXmw@mail.gmail.com> <CAOyVPHRUFrm2xqwrd4OVQbRotae+3+E8xhOF4n1dmWERVdLPEg@mail.gmail.com> <CA+-tSzzvj=eUYT4ZOKiy9yGssmrx71eby2f1xkKKh4NkXL5-Vg@mail.gmail.com> <CAOyVPHS-OF8+GRpmcAxbCj5_HEvgVSOvRMA2hC66v1pxs526Nw@mail.gmail.com> <35BAFA1F-25E8-442E-8FE6-2D5691DCBEAC@kumari.net> <7C4DFCE962635144B8FAE8CA11D0BF1E058CCE4D4C@MX14A.corp.emc.com> <EF5EF2B13ED09B4F871D9A0DBCA463C216C1E72D@TK5EX14MBXC300.redmond.corp.microsoft.com>
In-Reply-To: <EF5EF2B13ED09B4F871D9A0DBCA463C216C1E72D@TK5EX14MBXC300.redmond.corp.microsoft.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.57 on 135.245.2.39
X-Scanned-By: MIMEDefang 2.64 on 135.3.39.10
Subject: Re: [armd] soliciting typical network designs for ARMD
X-BeenThere: armd@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: "Discussion of issues associated with large amount of virtual machines being introduced in data centers and virtual hosts introduced by Cloud Computing." <armd.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/armd>, <mailto:armd-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/armd>
List-Post: <mailto:armd@ietf.org>
List-Help: <mailto:armd-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/armd>, <mailto:armd-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 19 Sep 2011 23:34:22 -0000

These two drafts and draft-wkumari-dcops-l3-vmmobility-00.txt seem to be submitted as either informational or experimental. Can the authors of these two drafts clarify what they plan to do with them? If relevant feedback is required, I think they fit the scope of the L2VPN WG. At least two of them describe a new encapsulation used to provide L2 multi-tenancy in the data center over some sort of overlay and UDP/IP tunnels. A new L2VPN charter was released recently that includes requirements and solutions targeted at the data center space.

Florin
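
For a rough idea of the kind of encapsulation these drafts describe (an L2
frame carried over UDP/IP with a tenant/segment identifier), here is an
illustrative Python sketch. It is not taken from either draft; the header
layout, flag value, and names are placeholders only.

    # Illustrative only: an 8-byte overlay header carrying a 24-bit
    # segment/tenant ID in front of the original Ethernet frame, which
    # would then be sent as a UDP payload between tunnel endpoints.
    import struct

    def encap_l2_frame(inner_frame: bytes, segment_id: int) -> bytes:
        flags = 0x08  # assumed "segment ID present" flag
        overlay_hdr = struct.pack("!II",
                                  flags << 24,
                                  (segment_id & 0xFFFFFF) << 8)
        return overlay_hdr + inner_frame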

-----Original Message-----
From: armd-bounces@ietf.org [mailto:armd-bounces@ietf.org] On Behalf Of Murari Sridharan
Sent: Saturday, September 17, 2011 1:02 PM
To: david.black@emc.com; armd@ietf.org
Subject: Re: [armd] soliciting typical network designs for ARMD

FYI, here is a talk that I gave last week in relation to the nvgre draft below. 
http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-442T

Thanks
Murari 

-----Original Message-----
From: armd-bounces@ietf.org [mailto:armd-bounces@ietf.org] On Behalf Of david.black@emc.com
Sent: Friday, September 16, 2011 6:14 AM
To: armd@ietf.org
Subject: Re: [armd] soliciting typical network designs for ARMD

And two more drafts on this topic:

http://www.ietf.org/id/draft-mahalingam-dutt-dcops-vxlan-00.txt
http://www.ietf.org/id/draft-sridharan-virtualization-nvgre-00.txt

The edge switches could be the software switches in hypervisors. 

Thanks,
--David


> -----Original Message-----
> From: armd-bounces@ietf.org [mailto:armd-bounces@ietf.org] On Behalf 
> Of Warren Kumari
> Sent: Wednesday, August 31, 2011 3:16 PM
> To: Vishwas Manral
> Cc: armd@ietf.org
> Subject: Re: [armd] soliciting typical network designs for ARMD
> 
> 
> On Aug 11, 2011, at 11:40 PM, Vishwas Manral wrote:
> 
> > Hi Linda/ Anoop,
> >
> > Here is the example of the design I was talking about, as defined by google.
> 
> Just a clarification -- s/as defined by google/as described by someone 
> who happens to work for google/
> 
> W
> 
> > http://www.ietf.org/id/draft-wkumari-dcops-l3-vmmobility-00.txt
> >
> > Thanks,
> > Vishwas
> > On Tue, Aug 9, 2011 at 2:50 PM, Anoop Ghanwani <anoop@alumni.duke.edu> wrote:
> >
> > >>>>
> > (though I think if there were a standard way to map Multicast MAC to
> > Multicast IP, they could probably use such a standard mechanism).
> > >>>>
> >
> > They can do that, but then this imposes requirements on the 
> > equipment to be able to do multicast forwarding, and even if it does, 
> > because of pruning requirements the number of groups would be very 
> > large.  The average data center switch probably won't handle that 
> > many groups.
> >
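
As a side note, the standard mapping does exist in the IP-to-MAC direction
(RFC 1112 for IPv4: the low 23 bits of the group address are copied into
the 01:00:5E MAC prefix); it is the MAC-to-IP direction that is ambiguous,
since 32 IPv4 groups share one MAC address. A small sketch:

    # IPv4 multicast group address -> multicast MAC address (RFC 1112).
    import ipaddress

    def ipv4_mcast_to_mac(group: str) -> str:
        low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
        return "01:00:5e:%02x:%02x:%02x" % (
            low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

    # ipv4_mcast_to_mac("239.1.1.1") -> "01:00:5e:01:01:01"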
> > On Tue, Aug 9, 2011 at 2:41 PM, Vishwas Manral <vishwas.ietf@gmail.com> wrote:
> > Hi Anoop,
> >
> > From what I know they do not use Multicast GRE (I hear the extra 4
> > bytes in the GRE header are a proprietary extension).
> >
> > I think a directory-based mechanism is what is used (though I think
> > if there were a standard way to map Multicast MAC to Multicast IP,
> > they could probably use such a standard mechanism).
> >
> > Thanks,
> > Vishwas
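
For illustration, the "directory based mechanism" mentioned above generally
amounts to a lookup service that maps a tenant VM's address to the tunnel
endpoint (ToR or hypervisor) behind which it sits, instead of flooding
ARP/ND through the fabric. A minimal sketch with invented names:

    # Purely illustrative: resolve which tunnel endpoint hosts a given
    # destination MAC within a tenant, instead of flooding an ARP request.
    from typing import Optional

    DIRECTORY = {
        # (tenant_id, vm_mac) -> tunnel endpoint IP
        (10, "00:11:22:33:44:55"): "192.0.2.11",
        (10, "00:11:22:33:44:66"): "192.0.2.12",
    }

    def resolve_endpoint(tenant_id: int, dst_mac: str) -> Optional[str]:
        return DIRECTORY.get((tenant_id, dst_mac.lower()))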
> > On Tue, Aug 9, 2011 at 2:03 PM, Anoop Ghanwani <anoop@alumni.duke.edu> wrote:
> > Hi Vishwas,
> >
> > How do they get multicast through the network in that case?
> > Are they planning to use multicast GRE, or just use directory based 
> > lookups and not worry about multicast applications for now?
> >
> > Anoop
> >
> > On Tue, Aug 9, 2011 at 1:27 PM, Vishwas Manral <vishwas.ietf@gmail.com> wrote:
> > Hi Linda,
> >
> > The data packets can be tunnelled at the ToR over, say, a GRE packet,
> > and the core is a Layer-3 core (except for the downstream ports). So we
> > could have encapsulation/decapsulation of L2 over GRE at the ToR.
> >
> > The very same thing can be done at the hypervisor layer too, in which
> > case the entire DC network would look like a flat Layer-3 network,
> > including the ToR-to-server link, and the hypervisor would do the
> > tunneling.
> >
> > I am not sure whether you got the points above. I know cloud OS
> > companies that provide the service and have big announced customers.
> >
> > Thanks,
> > Vishwas
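
A rough sketch of the L2-over-GRE encapsulation described above, using
scapy (assumed to be available); the addresses and MACs are made up, and a
real deployment would add whatever keying/tenant field it needs:

    # The ToR (or hypervisor vSwitch) wraps the tenant's Ethernet frame in
    # GRE over IP, so the Layer-3 core only ever routes the outer header.
    from scapy.all import Ether, IP, GRE

    inner = Ether(src="00:11:22:33:44:55", dst="00:11:22:33:44:66") / \
            IP(src="10.0.0.1", dst="10.0.0.2")

    outer = IP(src="192.0.2.11", dst="192.0.2.12") / \
            GRE(proto=0x6558) / inner  # 0x6558 = transparent Ethernet bridging

    # Decapsulation at the remote ToR/hypervisor is just stripping the
    # outer IP/GRE headers and handing the inner frame to the tenant port.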
> > On Tue, Aug 9, 2011 at 11:51 AM, Linda Dunbar <dunbar.ll@gmail.com> wrote:
> > Vishwas,
> >
> > In my mind, bullet 1) in the list refers to ToR switches' downstream
> > ports (facing servers) running Layer 2 and ToR uplink ports running
> > Layer 3 (IP).
> >
> > Have you seen data center networks with ToR switch downstream ports
> > (i.e. facing servers) enabling IP routing, even though the physical
> > links are Ethernet?
> > If yes, we should definitely include it in the ARMD draft.
> >
> > Thanks,
> > Linda
> > On Tue, Aug 9, 2011 at 12:58 PM, Vishwas Manral <vishwas.ietf@gmail.com> wrote:
> > Hi Linda,
> > I am unsure what you mean by this, but:
> > 	* layer 3 all the way to TOR (Top of Rack switches): we can also
> > have a hierarchical network, with the core totally Layer-3 (and having
> > separate routing), with the hosts still in a large Layer-3 subnet.
> > Another option could be to have a totally Layer-3 network.
> >
> > The difference between them is the link between the servers and the ToR.
> >
> > Thanks,
> > Vishwas
> > On Tue, Aug 9, 2011 at 10:22 AM, Linda Dunbar <dunbar.ll@gmail.com> wrote:
> > During the 81st IETF ARMD WG discussion, it was suggested that it is
> > necessary to document typical data center network designs so that
> > address resolution scaling issues can be properly described. Many data
> > center operators have expressed that they can't openly reveal their
> > detailed network designs. Therefore, we only want to document anonymous
> > designs without too much detail. In the course of establishing ARMD, we
> > have come across the following typical data center network designs:
> > 	* layer 3 all the way to TOR (Top of Rack switches),
> > 	* large layer 2 with hundreds (or thousands) of ToRs being
> > interconnected by Layer 2. This design will have thousands of hosts
> > under the L2/L3 boundary router(s),
> > 	* CLOS design with thousands of switches. This design will have
> > thousands of hosts under the L2/L3 boundary router(s).
> > We have heard that each of the designs above has its own problems. ARMD
> > problem statements might need to document DC problems under each
> > typical design.
> > Please send feedback to us (either to the armd email list or to the
> > ARMD chairs, Benson & Linda) to indicate if we have missed any typical
> > Data Center network designs.
> >
> > Your contribution can greatly accelerate the progress of ARMD WG.
> >
> > Thank you very much.
> >
> > Linda & Benson
> >
> >
> 
> _______________________________________________
> armd mailing list
> armd@ietf.org
> https://www.ietf.org/mailman/listinfo/armd

_______________________________________________
armd mailing list
armd@ietf.org
https://www.ietf.org/mailman/listinfo/armd

_______________________________________________
armd mailing list
armd@ietf.org
https://www.ietf.org/mailman/listinfo/armd