Re: [arch-d] HTTP 2.0 + Performance improvement Idea

Joe Touch <touch@isi.edu> Wed, 26 March 2014 05:17 UTC

From: Joe Touch <touch@isi.edu>
In-Reply-To: <CANw0z+Vy1imt-HmZdqVpNzq4-Gd7cKC5B=PmAe7bBYrcHCD2mQ@mail.gmail.com>
Date: Tue, 25 Mar 2014 22:16:42 -0700
Message-Id: <5EE5585A-19E4-4940-B21A-4BA208F08B78@isi.edu>
References: <CANw0z+Wy09iGvwL2DgzkMLdNxcwxOHmd38yxGz0H6v=FGpzEJw@mail.gmail.com> <5331F25C.20803@isi.edu> <CANw0z+Vy1imt-HmZdqVpNzq4-Gd7cKC5B=PmAe7bBYrcHCD2mQ@mail.gmail.com>
To: Rakshith Venkatesh <vrock28@gmail.com>
Archived-At: http://mailarchive.ietf.org/arch/msg/architecture-discuss/GmozKDPeOFdeMGJ7Xr8V4KRIKEs
Cc: architecture-discuss@ietf.org
Subject: Re: [arch-d] HTTP 2.0 + Performance improvement Idea

On Mar 25, 2014, at 6:01 PM, Rakshith Venkatesh <vrock28@gmail.com> wrote:

> I am not sure if an NFS client can accept blocks out of order from a server. I need to check on that.

NFS should already be matching read responses to requests, so order should not matter, especially because over UDP the requests and responses can be reordered by the network anyway.
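A rough sketch of what I mean, in the style of ONC RPC transaction IDs (XIDs): the client tags each outstanding request, so a reply can be matched to its request no matter what order replies arrive in. The class and method names here are illustrative, not any real NFS client API.

```python
# Sketch: matching replies to requests by transaction ID (XID), as an
# RPC-based protocol like NFS does. Names are illustrative only.

class RpcClient:
    def __init__(self):
        self.next_xid = 0
        self.pending = {}          # xid -> (offset, length) of the READ

    def send_read(self, offset, length):
        xid = self.next_xid
        self.next_xid += 1
        self.pending[xid] = (offset, length)
        return xid                 # would be carried in the RPC header

    def on_reply(self, xid, data):
        # Replies may arrive in any order; the XID identifies which
        # outstanding READ this data answers.
        offset, _length = self.pending.pop(xid)
        return offset, data

client = RpcClient()
first = client.send_read(0, 4096)
second = client.send_read(4096, 4096)
# The reply to the second request arrives first -- still matched correctly:
offset, _ = client.on_reply(second, b"...")
assert offset == 4096
```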

> If it does, then NFS over HTTP sounds good. The way I see it, any file service protocol such as NFS, SMB (large MTUs), HTTP, or FTP should have a way to let the server achieve true parallelism in reading a file, without worrying about sending the data/blocks in order.

FTP has a block mode that should support responses in any order, but if they all go over the same TCP connection, it can help only with the access order at the source, not with network reordering.

HTTP has chunking that should help with this (e.g., using a chunk extension field that indicates chunk offset).
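To illustrate, here's a sketch of client-side reassembly of HTTP/1.1 chunks carrying a hypothetical "offset" chunk extension (chunk extensions are standard; an "offset" extension is not, it's the kind of field this idea would need):

```python
# Sketch: reassembling HTTP chunks delivered out of order, using a
# hypothetical "offset" chunk extension, e.g. a chunk header line like
#   1000;offset=A000
# (sizes and offsets in hex, as in the chunked coding). The "offset"
# extension is NOT standard; it stands in for the tag being proposed.

def reassemble(chunks):
    """chunks: iterable of (chunk_header, data) pairs, in any order."""
    pieces = []
    for header, data in chunks:
        offset = 0
        # Everything after the first ";" is a chunk extension.
        for ext in header.split(";")[1:]:
            name, _, value = ext.partition("=")
            if name.strip() == "offset":
                offset = int(value, 16)
        pieces.append((offset, data))
    pieces.sort()                          # restore file order
    return b"".join(data for _, data in pieces)

# Chunks arriving out of order still reassemble correctly:
out = reassemble([("3;offset=3", b"def"), ("3;offset=0", b"abc")])
assert out == b"abcdef"
```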

However, most file systems are engineered for in-order access, even when parallelized, so I'm not sure how much any of this will help. That is, I don't know whether your underlying assumption (that overriding serial access will be faster) is valid.

It might be useful to first show that you can actually read the components of a file faster, as you assume.

Joe


> 
> Rakshith 
> 
> 
> On Wed, Mar 26, 2014 at 2:47 AM, Joe Touch <touch@isi.edu> wrote:
> You sound like you're looking for NFS; maybe you should propose an "NFS over HTTP" variant where responses can be issued out of order?
> 
> Joe
> 
> 
> On 3/25/2014 4:56 AM, Rakshith Venkatesh wrote:
> Hi,
> 
> I was going through SPDY draft or the HTTP 2.0 initial draft. I had an
> idea which I think would require a change in the architecture of the
> same and so I am dropping this mail. Here is the idea:
> 
> An HTTP client expects the server to send data in order. (NOTE: When I
> say a server, I am referring to an appliance with disks attached, where
> the file resides on disk.) If the client asks the server for a file of,
> say, 10 GB, the data has to be delivered in order from the first byte
> to the last. Now suppose I implement an engine on the server side to do
> parallel reads on this huge file, fetching data at various offsets
> within it. I will be able to fetch the data faster, for sure, but I
> will not be able to send the data immediately as I fetch it; I am
> expected to finish all the parallel reads on the file until I have read
> all the bytes, and only then send them across the wire to the client.
> 
> Now, if HTTP 2.0 introduced some tag or header that can help reorder
> the byte stream at the session layer, or at a layer between the
> application and session layers, we could potentially improve the
> performance of file reads over HTTP: a new module would look at this
> tag, rearrange the data based on it, and eventually present the data to
> HTTP so that it all looks seamless.
> 
> The server could then just do parallel reads on the same file at
> various offsets, without worrying about ordering, and send the chunks
> as they are read; the new module sitting on the client side would
> inspect the tag/header, put the data back in order, wait until all the
> data has been received, and then present it to the application
> protocol.
> 
> NOTE: I am not referring to packet reordering at the TCP layer.
> 
> By having this, servers can attain true parallelism in reading the file
> and can effectively improve file transfer rates.
> 
> 
> Thanks,
> 
> Rakshith
> 