RE: FW: [Ips] Recent comments about FCoE and iSCSI

"Robert Snively" <rsnively@Brocade.COM> Thu, 26 April 2007 13:12 UTC

Return-path: <ips-bounces@ietf.org>
Received: from [127.0.0.1] (helo=stiedprmman1.va.neustar.com) by megatron.ietf.org with esmtp (Exim 4.43) id 1Hh3lr-0008CX-Jr; Thu, 26 Apr 2007 09:12:27 -0400
Received: from ips by megatron.ietf.org with local (Exim 4.43) id 1Hgqaj-0005sB-GK for ips-confirm+ok@megatron.ietf.org; Wed, 25 Apr 2007 19:08:05 -0400
Received: from [10.91.34.44] (helo=ietf-mx.ietf.org) by megatron.ietf.org with esmtp (Exim 4.43) id 1Hgqai-0005lU-To for ips@ietf.org; Wed, 25 Apr 2007 19:08:04 -0400
Received: from mx20.brocade.com ([66.243.153.19]) by ietf-mx.ietf.org with esmtp (Exim 4.43) id 1HgqX1-00009e-KM for ips@ietf.org; Wed, 25 Apr 2007 19:04:17 -0400
Received: from mailhost.brocade.com (HELO discus.brocade.com) ([192.168.126.240]) by mx20.brocade.com with ESMTP; 25 Apr 2007 16:04:14 -0700
X-IronPort-AV: i="4.14,452,1170662400"; d="scan'208,217"; a="8627162:sNHT39266199"
Received: from HQ-EXCHFE-2.corp.brocade.com (hq-vipexchfe-2.brocade.com [192.168.126.214]) by discus.brocade.com (Postfix) with ESMTP id 5F93423836B; Wed, 25 Apr 2007 16:03:04 -0700 (PDT)
Received: from hq-exch-1.corp.brocade.com ([10.3.8.21]) by HQ-EXCHFE-2.corp.brocade.com with Microsoft SMTPSVC(6.0.3790.1830); Wed, 25 Apr 2007 16:04:14 -0700
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Subject: RE: FW: [Ips] Recent comments about FCoE and iSCSI
Date: Wed, 25 Apr 2007 16:04:13 -0700
Message-ID: <6002A63FDB393D4F9ADB36DE70C4847502C80970@hq-exch-1.corp.brocade.com>
In-Reply-To: <007701c78782$6dd93c40$05faa8c0@ivivity.com>
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Thread-Topic: FW: [Ips] Recent comments about FCoE and iSCSI
Thread-Index: AceHgq4ypt5zILt2TKKPjQpNbkHJHgACxghQ
References: <OF4C56325B.537CF828-ON852572C8.00570F7B-852572C8.0057295E@il.ibm.com> <007701c78782$6dd93c40$05faa8c0@ivivity.com>
From: Robert Snively <rsnively@Brocade.COM>
To: Eddy Quicksall <Quicksall_iSCSI@Bellsouth.net>, Julian Satran <Julian_Satran@il.ibm.com>
X-OriginalArrivalTime: 25 Apr 2007 23:04:14.0818 (UTC) FILETIME=[0B4EFC20:01C7878E]
X-Spam-Score: 0.5 (/)
X-Scan-Signature: 6bce69840c70b11602de7eeb3bb2b5f5
X-Mailman-Approved-At: Thu, 26 Apr 2007 09:12:25 -0400
Cc: ips@ietf.org, John Hufferd <jhufferd@Brocade.COM>
X-BeenThere: ips@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: IP Storage <ips.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/ips>, <mailto:ips-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:ips@ietf.org>
List-Help: <mailto:ips-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/ips>, <mailto:ips-request@ietf.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0090503334=="
Errors-To: ips-bounces@ietf.org

The standardization process is just beginning on FCoE.  Brocade and 
at least two other organizations have done significant work on the 
subject.  I expect content similar to the Cisco patent to be
a significant contribution to the FCoE standard.  As you know,
standards development often involves negotiation of details of
a technology.
 
Bob

________________________________

From: Eddy Quicksall [mailto:Quicksall_iSCSI@Bellsouth.net] 
Sent: Wednesday, April 25, 2007 2:41 PM
To: Julian Satran
Cc: ips@ietf.org; John Hufferd
Subject: Re: FW: [Ips] Recent comments about FCoE and iSCSI


I notice that Cisco has a patent on Fibre Channel over Ethernet. Is FCoE
something different?

	----- Original Message ----- 
	From: Julian Satran <Julian_Satran@il.ibm.com>
	To: Eddy Quicksall <Quicksall_iSCSI@Bellsouth.net>
	Cc: ips@ietf.org; John Hufferd <jhufferd@Brocade.COM>
	Sent: Wednesday, April 25, 2007 11:52 AM
	Subject: Re: FW: [Ips] Recent comments about FCoE and iSCSI


	Sorry Eddy, by sizable I meant even at the size of a modern data center. Julo 
	
	
	
"Eddy Quicksall" <Quicksall_iSCSI@Bellsouth.net> 

25/04/07 11:08 

To
Julian Satran/Haifa/IBM@IBMIL 
cc
ips@ietf.org, John Hufferd <jhufferd@Brocade.COM> 
Subject
Re: FW: [Ips] Recent comments about FCoE and iSCSI	

		




	I basically said that in the summary line by saying "it will not route on the "global" scale like TCP/IP would". 
	----- Original Message ----- 
	From: Julian Satran <Julian_Satran@il.ibm.com>
	To: Eddy Quicksall <Quicksall_iSCSI@Bellsouth.net>
	Cc: ips@ietf.org; John Hufferd <jhufferd@Brocade.COM>
	Sent: Wednesday, April 25, 2007 11:04 AM 
	Subject: Re: FW: [Ips] Recent comments about FCoE and iSCSI 
	
	
	Eddy, 
	
	That is oversimplified and ignores the drop-rate and error-rate assumptions made in FCP (FCP has no transport layer). Getting there on a sizable network requires more than PAUSE. 
	
	Julo 
	
	
"Eddy Quicksall" <Quicksall_iSCSI@Bellsouth.net
<mailto:Quicksall_iSCSI@Bellsouth.net> > 

25/04/07 10:07 



To
"John Hufferd" <jhufferd@Brocade.COM <mailto:jhufferd@Brocade.COM> >,
Julian Satran/Haifa/IBM@IBMIL <mailto:Satran/Haifa/IBM@IBMIL>  
cc
<ips@ietf.org <mailto:ips@ietf.org> > 
Subject
Re: FW: [Ips] Recent comments about FCoE and iSCSI	


		


	
	
	
	Basically, it is sending FC frames over Ethernet. This localizes the traffic unless you route based on MAC addresses. So you send 2146 bytes of FC frame plus 18 bytes of Ethernet overhead as the FCoE "standard" packet. The 18 bytes of Ethernet get stripped and you have a straight FC frame that can go through any FC network. Now you can have 10G Ethernet pipes into existing FC SANs. Limited market potential as far as I can see. The key argument is that it is much easier to implement than iSCSI, has less overhead, and uses all the benefits of FC. End-to-end credits are simulated using the PAUSE command on Ethernet, and MAC addresses are mapped to WWNs. 
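
	As a quick sanity check on the arithmetic above, here is a minimal Python sketch of the encapsulation being described: a raw FC frame of up to 2146 bytes wrapped in 18 bytes of Ethernet framing (14-byte header plus 4-byte FCS). The EtherType value and the WWN-to-MAC mapping below are placeholders, not taken from any spec, since neither had been settled when this was written.

    # Illustrative sketch only: wrap an FC frame in an Ethernet frame as described above.
    import struct

    FCOE_ETHERTYPE = 0xFFFF              # placeholder value, not a real assignment

    def mac_from_wwn(wwn: bytes) -> bytes:
        """Hypothetical mapping: use the low-order 6 bytes of an 8-byte WWN as the MAC."""
        assert len(wwn) == 8
        return wwn[2:]

    def encapsulate(fc_frame: bytes, src_wwn: bytes, dst_wwn: bytes) -> bytes:
        """Prepend a 14-byte Ethernet header; the NIC appends the 4-byte FCS on the wire."""
        assert len(fc_frame) <= 2146     # maximum FC frame size quoted above
        header = (mac_from_wwn(dst_wwn) + mac_from_wwn(src_wwn)
                  + struct.pack("!H", FCOE_ETHERTYPE))
        return header + fc_frame         # 14 bytes here + 4-byte FCS = the 18 bytes of overhead

    frame = encapsulate(bytes(2146),
                        bytes.fromhex("2000000c29abcdef"),   # made-up WWNs for illustration
                        bytes.fromhex("2000000c29123456"))
    print(len(frame) + 4)                # 2164 bytes on the wire: 2146 + 18

	Stripping those 18 bytes at the other end leaves an unmodified FC frame that can enter an FC fabric, which is the point being made above.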

	Biggest knock is that it will not route on the "global" scale
like TCP/IP would. 

	Eddy 

	 ----- Original Message ----- 

	From: Julian Satran <Julian_Satran@il.ibm.com>
	To: John Hufferd <jhufferd@Brocade.COM>
	Cc: ips@ietf.org
	Sent: Wednesday, April 25, 2007 8:09 AM 
	Subject: Re: FW: [Ips] Recent comments about FCoE and iSCSI 
	
	
	
	"John Hufferd" <jhufferd@Brocade.COM
<mailto:jhufferd@Brocade.COM> > wrote on 25/04/2007 02:45:51:
	
	> Julian, 
	> To be sure you understand our position; Brocade is pushing
iSCSI as an
	> outreach protocol from the Data Center. We also believe iSCSI
is very
	> useful for installations that do not have a Fibre Channel
	> infrastructure, and in that case we will be able to sell them
our new
	> iSCSI and TOE offload HBAs.  
	> 
	> When I say iSCSI is an outreach protocol, this is a statement
that iSCSI
	> is very important to connect "stranded" servers to the Fibre
Channel
	> Fabric.  That is, we sell iSCSI-to-FC Gateway devices which
will permit
	> iSCSI Servers (software or HBA iSCSI initiators) to connect to
the
	> Enterprise "Bet Your Business" FC Storage.  This of course
also applies
	> to Desktops and Laptop systems, and systems at distance.
	> 
	
	You make it sound like: 
	1. most of the servers in the world have their storage on the network - and that is not the case 
	2. FCP is basically better performing than iSCSI - and that is not true either 
	3. Gatewaying is expensive - and perhaps it is, but only if you are completely relying on FCP storage (and there are plenty of good iSCSI storage vendors), and pushing the price onto the servers is not cheap either - at least not for the server buyer 
	
	> Now with that positioning, it is important to understand the
limitation
	> to this strategy.  The primary problem is that iSCSI to FC
Bridging
	> (Gatewaying) is relatively expensive (compared to simple FC
	> connections).  Though we have some of the best priced Gateways
on the
	> market, it is not cost feasible to replace all the server
connectivity
	> to FC storage with iSCSI for the hundred to thousands of
servers in the
	> Data Center.  And since, if there is to be a consolidated
Network
	> connection to the servers in the Data Center, there must be an
	> evolutionary replacement of Server Connections to Storage.
That means
	> there must be a bridge/Gateway approach.  And as I mentioned
before,
	> there is just too much cost in the iSCSI to FC Gateway.
	> 
	> The issue is the server requirement to have a single
connection type to
	> handle cluster messaging, general messaging, and storage.
iSCSI is
	> clearly an option for the storage, however, the gateway costs
are too
	> high for iSCSI to be used as the "normal" server connect into
a FC based
	> Fabric. That is true for the current 1GE; and for 10GE the
cost is just
	> out of sight.  The reason for this is the requirement for
TCP/IP
	> termination and re-initiation with FC at the Gateway.
	> 
	> Now with respect to FC over Ethernet the important thing to
understand
	> is that it is not Ethernet as we have known it up to now.
The Ethernet
	> we are talking about is a type of Ethernet that can only be
deployed in
	> a constrained environment such as a Data Center.  This form of
Ethernet
	> is called DCE (Data Center Ethernet) or CEE (Convergence
Enhanced
	> Ethernet).  This form of Ethernet is a Loss-less type
Ethernet, with
	> multi-priority and Flow Control.  This is NOT an Internet or
Intranet
	> type of Ethernet.
	> 
	> FCoE is all about using the DCE (CEE) to carry FC frames.  The
rest of
	> the Host and storage stack remain the same, the functions and
features
	> of the switches also remain the same and add the capability to
provide
	> Cluster Message Switching which has latency close to
InfiniBand speeds.
	> 
	> 
	> Because the FC frames are transported to the switches intact
via a DCE
	> frame, the Bridging, if you want to call it that, is virtually
non
	> existent.  Hence you can deliver the FC frames to FC devices,
or send FC
	> frames to DCE FCoE devices, just like one would do if it was
all FC. And
	> all this is done while performing Cluster message switching
and general
	> message trunking to the IP outfacing network. 
	> 
	
	The rosy future of the yet-to-appear DCE/CEE and a layer-2-only world. 
	First, you have some terms confused: 
	
	Bridging is the term commonly used for layer-2 switching, and routing is the term used for layer-3 (switching). 
	
	Bridging has some advantages (less management) that have created a movement towards an enterprise-wide LAN. But this has a long way to go and will require significant equipment and protocol changes. Even its proponents do not call for transportless networks, lossless networks, etc. 
	The second trouble with your argument is that there are no known large-scale networking technologies that really work at full (high) speed and are lossless (flow-controlled) and errorless, as FCoE assumes. TCP/IP has solved this issue for every generation using the proven end-to-end principle (and is doing so now). That is not by chance, and it is why all networking applications are built above layer-3 instead of dropping layer-3 (as FCoE does). 
	
	Although I can understand the DCE arguments as a management statement, I would prefer, like any rational engineer, to base my building blocks on structures that are proven and long lasting. And those are still the end-to-end TCP/IP that can accommodate even your FCP addicts. The IPS TWG has developed iFCP, which does exactly what FCoE claims to do on a better base. 
	
	> This means an evolutionary process is possible to the solution
of
	> getting a single Fabric connection for all networks connected
to a
	> server, further, the process has very low interconnection cost
on the
	> Data Center Fabric. And it maintains all the FC Fabric
Services, and all
	> the same Storage Management processes. 
	> 
	> By the way, this is primarily a Server driven value statement,
there
	> seems to be little value in having FCoE on the storage
controller.
	> Therefore FC storage controllers (and FICON) will be the very
last
	> things that connect using FCoE and that evolution will take at
least a
	> decade or more.
	> 
	
	It is a server cost statement. It costs nothing to connect a modern server to Ethernet; it will cost a bundle to connect to FCoE, and it will force users into short-lived bad solutions. 
	
	
	> We see value in offering switches and Directors that can
support DCE
	> switching, FC switching as well as iSCSI interconnect, and the
	> "Trunking" of general messaging to the Outfacing IP network.
That said;
	> we do not see FCoE going beyond the constraints of the Data
Center. 
	
	Data centers now grow to tens of thousands of nodes. There is no layer-2 technology for errorless/lossless operation at this scale, and there is no good reason to pursue one. The only possible (good) reason is the bridging infrastructure, but that infrastructure has a completely different rationale than the flow control. 
	
	> This issue and message is quite different from the issues and
messages
	> we struggled with when we started iSCSI.  There is a
consortium of folks
	> both working on the DCE (CEE) and the FCoE.  Without the DCE
the FCoE
	> will not happen.  
	> 
	> None of the above cancels out the value of iSCSI in numerous
	> environments.
	> 
	> 
	
	iSCSI is good for all environments. Business considerations (and some politics) keep it from "exploding", and large storage vendors are completely indifferent to the network connection they are using. 
	You and I also have slightly different views of DCE. I expect DCE (which still has a way to go) to improve the QoS in the data center (and for storage too). You expect it to bring the loss rates down to the levels that FCP assumes (FCP has no transport layer), and that is probably a pipe dream. Today's transport solutions for loss mitigation are far more cost effective - that is why iFCP is a better proposition as a transition technology than FCoE, and iSCSI with gateways is probably better in the long run. 
	
	> 
	> .
	> .
	> .
	> John L Hufferd
	> Sr. Executive Director of Technology
	> jhufferd@brocade.com
	> Office Phone: (408) 333-5244; eFAX: (408) 904-4688
	> Alt Office Phone: (408) 997-6136; Cell: (408) 627-9606
	>   
	> -----Original Message-----
	> From: John Hufferd 
	> Sent: Tuesday, April 24, 2007 12:57 PM
	> To: 'Julian_Satran@il.ibm.com'
	> Subject: Re: [Ips] Recent comments about FCoE and iSCSI
	> 
	> Julian,
	> I think you are wrong on this one.  The arguments are quite
different
	> than the ones we had in pre-iSCSI days.  (By the way I missed
you on
	> today's Renato meeting/conf call where Brocade took the IBM
technology
	> group through FCoE as it is being placed in our plans).  
	> I will send you more info when I get to my computer.  But you
probably
	> were sent the Brocade charts.  Please review them and I will
follow up
	> with more information.
	> This does NOT replace iSCSI; it applies only to a Data Center environment
	> with lossless DCE Ethernet.
	> --------------------------
	> John L. Hufferd
	> Sr. Ex. Director of Technology
	> Brocade Communications Systems, Inc.
	> Phone: (408) 333-5244
	> Mobile: (408) 627-9606
	> eMail: jhufferd@brocade.com
	> (Sent from my BlackBerry Wireless)
	>  
	> 
	> ----- Original Message -----
	> From: Julian Satran <Julian_Satran@il.ibm.com>
	> To: ips@ietf.org <ips@ietf.org>
	> Sent: Tue Apr 24 12:10:29 2007
	> Subject: [Ips] Recent comments about FCoE and iSCSI
	> 
	> 
	> Dear All, 
	> 
	> The trade press is lately full of comments about the latest and
	> greatest reincarnation of Fibre Channel over Ethernet. 
	> It made me try and summarize all the long and hot debates that
preceded
	> the advent of iSCSI. 
	> Although FCoE proponents make it look like no debate preceded iSCSI, that
	> was not so - FCoE was considered even then and was dropped as
a dumb
	> idea. 
	> 
	> Here is a summary (as far as I can remember) of the main
arguments.
	> They are not bad arguments even in retrospect and technically
FCoE
	> doesn't look better than it did then. 
	> 
	> Feel free to use this material in any form. I expect this group to
	> seriously expand my arguments and make them public - in
personal or
	> collective form. 
	> 
	> And do not forget - it is a technical dispute - although we
all must
	> have some doubts about the way it is pursued. 
	> 
	> Regards, 
	> Julo 
	> 
	>
--------------------------------------------------------------------- 
	> 
	> What a piece of nostalgia :-) 
	> 
	> Around 1997 when a team at IBM Research (Haifa and Almaden)
started
	> looking at connecting storage to servers using the "regular
network"
	> (the ubiquitous LAN) we considered many alternatives (another
team even
	> had a look at ATM - still a computer network candidate at the
time). I
	> won't take you through all of our rationale (and we went over some
of them
	> again at the end of 1999 with a team from CISCO before we
convened the
	> first IETF BOF in 2000 at Adelaide that resulted in iSCSI and
all the
	> rest) but some of the reasons we chose to drop Fibre Channel over raw
	> Ethernet were multiple: 
	> 
	> 
	> *   Fiber Channel Protocol (SCSI over Fiber Channel Link) is
	> "mildly" effective because: 
	> 
	>    *   it implements endpoints in a dedicated engine (Offload)

	>    *   it has no transport layer (recovery is done at the
	> application layer under the assumption that the error rate
will be very
	> low) 
	>    *   the network is limited in physical span and logical
span
	> (number of switches) 
	>    *   flow-control/congestion control is achieved with a
	> mechanism adequate for a limited span network (credits). The
packet loss
	> rate is almost nil and that allows FCP to avoid using a
transport
	> (end-to-end) layer
	> 
	>    *   FCP switches are simple (addresses are local and the
	> memory requirements can be limited through the credit mechanism) 
	>    *   However FCP endpoints are inherently costlier than
	> simple NICs - the cost argument (initiators are more
expensive) 
	>    *   The credit mechanism is highly unstable for large
	> networks (check switch vendors' planning docs for the network diameter
	> limits) - the scaling argument (see the sketch after this list) 
	>    *   The assumption of low losses due to errors might
	> radically change when moving from 1 to 10 Gb/s - the scaling
argument 
	>    *   Ethernet has no credit mechanism and any mechanism with
	> a similar effect increases the end point cost. Building a
transport
	> layer in the protocol stack has always been the preferred
choice of the
	> networking community - the community argument 
	>    *   The "performance penalty" of a complete protocol stack
	> has always been overstated (and overrated). Advances in
protocol stack
	> implementation and finer tuning of the congestion control
mechanisms
	> make conventional TCP/IP performing well even at 10 Gb/s and
over.
	> Moreover the multicore processors that become dominant on the
computing
	> scene have enough compute cycles available to make any
"offloading"
	> possible as a mere code restructuring exercise (see the stack
reports
	> from Intel, IBM etc.) 
	>    *   Building on a complete stack makes available a wealth
of
	> operational and management mechanisms built over the years by
the
	> networking community (routing, provisioning, security, service
location
	> etc.) - the community argument 
	>    *   Higher level storage access over an IP network is
widely
	> available and having both block and file served over the same
connection
	> with the same support and management structure is compelling -
the
	> community argument 
	>    *   Highly efficient networks are easy to build over IP
with
	> optimal (shortest path) routing while Layer 2 networks use
bridging and
	> are limited by the logical tree structure that bridges must
follow. The
	> effort to combine routers and bridges (rbridges) is promising
to change
	> that but it will take some time to finalize (and we don't know
exactly
	> how it will operate). Until then the scale of a Layer 2 network is going
	> to be seriously limited - the scaling argument
	> 
	> 
	> 
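
	The credit-mechanism and scaling bullets above are easy to put rough numbers on. The sketch below is illustrative Python only, assuming roughly 5 us/km of propagation delay in fibre and full-size 2148-byte FC frames; it estimates how many buffer-to-buffer credits a lossless, credit-paced link needs to keep the pipe full, and shows why the buffer (and switch memory) requirement grows with both speed and distance.

    # Back-of-the-envelope look at the credit/scaling argument (illustrative only).
    def credits_needed(speed_gbps: float, distance_km: float, frame_bytes: int = 2148) -> int:
        """Credits needed to cover the round-trip bandwidth-delay product (~5 us/km in fibre)."""
        rtt_s = 2 * distance_km * 5e-6                            # round-trip propagation delay
        bytes_in_flight = speed_gbps * 1e9 / 8 * rtt_s            # bytes outstanding at full rate
        return max(1, int(-(-bytes_in_flight // frame_bytes)))    # ceiling division

    for speed in (1, 10):
        for distance in (0.1, 10, 100):
            print(f"{speed:>2} Gb/s, {distance:>5} km -> {credits_needed(speed, distance)} credits")

	At 1 Gb/s and data-center distances a handful of credits keeps the link busy; at 10 Gb/s and longer spans the requirement runs into the hundreds, which is the scaling concern raised in the list.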
	>       As a side argument - a performance comparison made in
	> 1998 showed SCSI over TCP (a predecessor of the later iSCSI)
to perform
	> better than FCP at 1Gbs for block sizes typical for OLTP
(4-8KB). That
	> was what convinced us to take the path that lead to iSCSI -
and we used
	> plain vanilla x86 servers with plain-vanilla NICs and Linux
(with
	> similar measurements conducted on Windows). 
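
	Purely as a back-of-the-envelope cross-check (this is not the 1998 measurement; the header sizes are the usual textbook values and are assumed here), the wire overhead for a 4 KB block is in the low single digits of percent under either framing, which suggests the "performance penalty" debate above is mostly about endpoint CPU cost rather than bytes on the wire:

    # Illustrative only: wire bytes needed to move one 4 KB block under each framing.
    BLOCK = 4096

    # FCP: FC frames carry up to 2112 bytes of payload; assume ~36 bytes of framing
    # per frame (SOF + 24-byte header + CRC + EOF).
    fc_frames = -(-BLOCK // 2112)                        # ceiling division -> 2 frames
    fcp_wire = BLOCK + fc_frames * 36

    # iSCSI over TCP/IPv4/Ethernet with a 1500-byte MTU: 58 bytes of Ethernet+IP+TCP
    # headers per segment, plus one 48-byte iSCSI basic header for the data-in PDU.
    mss = 1500 - 40
    tcp_segments = -(-(BLOCK + 48) // mss)               # -> 3 segments
    iscsi_wire = BLOCK + 48 + tcp_segments * 58

    print(f"FCP  : {fcp_wire} bytes ({100 * (fcp_wire - BLOCK) / BLOCK:.1f}% overhead)")
    print(f"iSCSI: {iscsi_wire} bytes ({100 * (iscsi_wire - BLOCK) / BLOCK:.1f}% overhead)")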
	>    The networking and storage community acknowledged those
	> arguments and developed iSCSI and the companion protocols for
service
	> discovery, boot etc. 
	>    
	>    The community also acknowledged the need to support existing
	> infrastructure and extend it in a reasonable fashion, and developed two
	> protocols: iFCP (to support hosts with FCP drivers and IP connections to
	> connect to storage by a simple conversion from FCP to TCP packets) and
	> FCIP (to extend the reach of FCP through IP, connecting FCP islands
	> through TCP links). Both have been
	>    implemented and their foundation is solid. 
	>    
	>    The current attempt to develop a "new-age" FCP over an
	> Ethernet link is going against most of the arguments that have
given us
	> iSCSI etc. 
	>    
	>    It ignores networking layering practice, builds an
	> application protocol directly above a link and thus limits
scaling,
	> mandates elements at the link layer and application layer that
make
	> applications more expensive and leaves aside the whole
"ecosystem" that
	> accompanies TCP/IP (and not Ethernet). 
	>    
	>    In some related effort (and at a point also when developing
	> iSCSI) we also considered moving away from SCSI (like some
	> non-standardized but popular-in-some-circles software did - e.g.,
NBP) but
	> decided against. SCSI is a mature and well understood access
	> architecture for block storage and is implemented by many
device
	> vendors. Moving away from it would not have been justified at
the time. 
	>     

	
________________________________


	_______________________________________________
	Ips mailing list
	Ips@ietf.org
	https://www1.ietf.org/mailman/listinfo/ips 

	

_______________________________________________
Ips mailing list
Ips@ietf.org
https://www1.ietf.org/mailman/listinfo/ips