Re: [armd] datacenter reference architecture draft

Manish Karir <mkarir@merit.edu> Fri, 04 November 2011 14:59 UTC

To: Joel jaeggli <joelja@bogus.com>
Cc: armd@ietf.org

Hi Joel,

The goal of 3.4.1-3.4.4 was to illustrate how L2/L3 topologies can
vary from one data center to the next, even within the generic data
center logical model.  These variations came out of discussions at
previous IETF/NANOG meetings as well as offline conversations.  Some
may not make sense to us, but they were the perfect solution for
others who needed to solve a particular problem.  I think what we
wanted to do with this draft is put these discussions down in one
place, to make sure we are all aware of the architectural decisions
(tricks?) different people use to meet their application requirements.
I hope this helps clarify things.
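
To make the kind of variation concrete, here is a rough sketch (my own
illustration, not text from the draft) of a generic three-tier model
with the L2/L3 boundary placed at different layers.  Moving the
boundary up enlarges the L2 domain (and the range over which a VM can
move without renumbering) at the cost of a larger failure domain:

# Hypothetical sketch: L2/L3 boundary placement in a generic
# access -> aggregation -> core model (Python just for illustration).
TIERS = ["access", "aggregation", "core"]

def l2_domain(boundary):
    # Tiers that share one L2 (ARP/flooding) domain when routing
    # first happens at `boundary`.
    return TIERS[:TIERS.index(boundary) + 1]

for boundary in TIERS:
    print(f"L3 at {boundary:<11} -> L2 spans: {' + '.join(l2_domain(boundary))}")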

Also, is the basic description here consistent with what you have seen 
in terms of data center architectures?  What do you think is missing here?
Are there any other particularly interesting designs that you have come across
that should be included here?

Thanks for your help.

-manish



On Oct 31, 2011, at 2:28 PM, Joel jaeggli wrote:

> so, I looked at it for a while...
> 
> I'm a bit mystified by 3.4.1-3.4.4
> 
> We've already arrived, I guess, at what we conclude is the ideal
> topology, and a particular model of mobility.
> 
> When faced with this choice, and a desire to constrain both the
> complexity and the diameter of the failure domain, one response is to
> not make the network the arbiter of mobility, e.g. to move that to
> provisioning or the application layer. The result is a lot closer to
> 3.4.1 than it is to the others.
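>
> As a toy sketch of what that looks like (hypothetical names; Python
> just for illustration): clients bind to a service name, so the address
> is free to change whenever the VM is re-provisioned somewhere else,
> and no L2 domain has to stretch to follow it.
>
> registry = {}  # service name -> current address
>
> def place_vm(service, rack_subnet, host_id):
>     # Provisioning-layer "mobility": the address comes from wherever
>     # the VM lands, and the registry is simply updated.
>     addr = f"{rack_subnet}.{host_id}"
>     registry[service] = addr
>     return addr
>
> place_vm("web-frontend", "10.1.1", 20)  # initially in rack 1
> place_vm("web-frontend", "10.2.7", 5)   # "moved": re-provisioned in rack 2
> print(registry["web-frontend"])         # clients now resolve 10.2.7.5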
> 
> One of the problems I have with managing a large L2 domain,
> particularly one constructed as an overlay, is that there's
> effectively no upper bound, apart from physics and good taste, on how
> far it can spread: first they want it across the rack, then the
> module, then across the whole datacenter, then to the adjacent
> datacenter, across the country, to other cloud providers, etc. If you
> constrain it to be sufficiently small that availability is not a
> design consideration (an L2 domain of 1 or 2 switches), then it
> doesn't become a dependency.
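>
> A back-of-envelope sketch of why (made-up rates; the point is just
> that every host in one L2 domain sees every ARP broadcast, so the
> background load grows linearly with domain size):
>
> ARPS_PER_HOST_PER_SEC = 0.5  # assumed steady-state ARP rate per host
>
> for hosts in (40, 2000, 100000):  # rack, module, whole datacenter
>     total = hosts * ARPS_PER_HOST_PER_SEC
>     print(f"{hosts:>7} hosts -> ~{total:,.0f} broadcasts/sec at every host")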
> 
> Insisting that IP addresses move around with virtualized machines is
> one way to view the world, but it is not the only way. Nor is
> constraining tenants to common L2 buckets the only way to segment
> applications from each other; the hosts for the virtual machines can
> just as well be (and in practice are) policy enforcement points.
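>
> A rough sketch of that alternative (hypothetical names; Python for
> illustration): the tenant policy travels with the VM's metadata and is
> enforced at the host edge, instead of being encoded in the wire
> topology as per-tenant L2 segments.
>
> vm_tenant = {"vm-a": "tenant-1", "vm-b": "tenant-2", "vm-c": "tenant-1"}
>
> def permit(src_vm, dst_vm):
>     # Host-level enforcement: allow traffic only within one tenant.
>     return vm_tenant[src_vm] == vm_tenant[dst_vm]
>
> print(permit("vm-a", "vm-c"))  # True: same tenant
> print(permit("vm-a", "vm-b"))  # False: dropped at the host edge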
> 
> On 10/28/11 09:01 , Manish Karir wrote:
>> 
>> The following draft was submitted to hopefully help focus the ARMD discussion around a common architecture.
>> 
>> Comments and feedback are welcome.  The goal of the writeup is to abstract away from specific datacenter designs,
>> each of which focuses on solving a particular application/traffic pattern, and instead to talk about what is common
>> between the various designs.  Hopefully this will help focus the very varied discussion that has taken place in this WG so far.
>> 
>> Thanks.
>> -manish
>> 
>> http://www.ietf.org/id/draft-armd-datacenter-reference-arch-01.txt
>> -----------------------------------------
>> Filename:	 draft-karir-armd-datacenter-reference-arch
>> Revision:	 00
>> Title:		 Data Center Reference Architectures
>> Creation date:	 2011-10-24
>> WG ID:		 Individual Submission
>> Number of pages: 11
>> 
>> Abstract:
>>  The continued growth of large-scale data centers has resulted in a
>>  wide range of architectures and designs.  Each design is tuned to
>>  address the challenges and requirements of the specific applications
>>  and workloads that the data center is being built for.  Each design
>>  evolves as engineering solutions are developed to work around
>>  limitations of existing protocols, hardware, and software
>>  implementations.
>> 
>>  The goal of this document is to characterize this problem space in
>>  detail in order to better understand whether there are any gaps in
>>  making address resolution scale in various network designs for data
>>  centers.  In particular, it is our goal to peel back the various
>>  optimizations and engineering solutions in order to develop
>>  generalized reference architectures for data centers.  We also
>>  discuss the factors that influence design choices in different
>>  data center designs.
>> --------------------------------------------------------------------------