Re: [hybi] SPDY protocol from google frame

Jamie Lokier <jamie@shareable.org> Fri, 13 November 2009 04:36 UTC

Date: Fri, 13 Nov 2009 04:37:19 +0000
From: Jamie Lokier <jamie@shareable.org>
To: "Thomson, Martin" <Martin.Thomson@andrew.com>
Message-ID: <20091113043719.GG19405@shareable.org>
References: <4AFC869C.9090403@webtide.com> <F4C6CDAD-1ABE-4A2A-A65B-0C8EEA95D90B@surrey.ac.uk> <4AFC9936.8090007@webtide.com> <803EA6E6-F94E-4F8D-9026-86C6EB33422A@icesoft.com> <3a880e2c0911121751q22b3929bwc5d7dbcaa0731226@mail.gmail.com> <bbeaa26f0911121800m6ee5e014n327b1dee77dafd54@mail.gmail.com> <8B0A9FCBB9832F43971E38010638454F0F35CBBC@SISPE7MB1.commscope.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <8B0A9FCBB9832F43971E38010638454F0F35CBBC@SISPE7MB1.commscope.com>
User-Agent: Mutt/1.5.13 (2006-08-11)
Cc: "hybi@ietf.org" <hybi@ietf.org>
Subject: Re: [hybi] SPDY protocol from google frame
X-BeenThere: hybi@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Server-Initiated HTTP <hybi.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/hybi>, <mailto:hybi-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/hybi>
List-Post: <mailto:hybi@ietf.org>
List-Help: <mailto:hybi-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/hybi>, <mailto:hybi-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Nov 2009 04:36:56 -0000

Thomson, Martin wrote:
>    I'm also increasingly of the view that channels-based protocols aren't
>    necessarily the right way to solve head-of-line blocking problems.
>    There are other methods that do not demand the same overheads.  For
>    instance, a simple request identifier can be used for request-response
>    correlation.

Fwiw, when I've talked about multiplexing on the HyBi list before,
I've always had in mind request-response correlation identifiers, not
opening long-lived channels.
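
To make that concrete, here's a rough sketch in Python (illustrative
only: the header layout, field names and sizes are made up for the
example, not a proposed wire format) of request-response correlation
with a sender-chosen id:

    import struct

    # Hypothetical 8-byte frame header: type (1), flags (1),
    # request id (2), payload length (4).  The id is chosen freely by
    # the sender; a response, if the application wants one at all,
    # simply echoes the same id back.
    HEADER = struct.Struct("!BBHI")

    MSG, RESPONSE = 0, 1

    def encode(frame_type, request_id, payload, flags=0):
        return HEADER.pack(frame_type, flags, request_id, len(payload)) + payload

    def decode(data):
        frame_type, flags, request_id, length = HEADER.unpack_from(data)
        return frame_type, flags, request_id, data[HEADER.size:HEADER.size + length]

    # A fire-and-forget message simply never gets a RESPONSE frame
    # echoing its id.
    frame = encode(MSG, 42, b"hello")
    assert decode(frame) == (MSG, 0, 42, b"hello")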

In general, request-response is a bit limiting: Some applications just
want messages, without responses, because they correlate them in other
ways unknown to the transport.  They may not be 1:1, as with pub/sub
architectures for example.  So let's say request-response is good, but
it's better if the response is optional, because redundant responses
are pure overhead.

I've advocated that multiplexing protocols should handle small and
large message sizes equally well, for the same reason as HTTP:
Separation of concerns, and robustly good performance over a range of uses.

To send *large* messages without head-of-line blocking, you need a way
to split them (to allow others to go through), and a message identifier
to correlate the fragments.
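
Something like this, say (again purely illustrative framing with
made-up field sizes): each fragment carries the message id, a FIN flag
marks the last one, and fragments of different messages can be freely
interleaved and reassembled at the far end:

    import struct
    from collections import defaultdict

    # Hypothetical fragment header: message id (4), flags (1), length (2).
    FRAG_HEADER = struct.Struct("!IBH")
    FIN = 0x01

    def fragment(msg_id, payload, max_frag=1200):
        frames = []
        for off in range(0, len(payload), max_frag):
            chunk = payload[off:off + max_frag]
            flags = FIN if off + max_frag >= len(payload) else 0
            frames.append(FRAG_HEADER.pack(msg_id, flags, len(chunk)) + chunk)
        # An empty message is still one (empty) final fragment.
        return frames or [FRAG_HEADER.pack(msg_id, FIN, 0)]

    class Reassembler:
        def __init__(self):
            self.partial = defaultdict(bytearray)

        def feed(self, frame):
            msg_id, flags, length = FRAG_HEADER.unpack_from(frame)
            self.partial[msg_id] += frame[FRAG_HEADER.size:FRAG_HEADER.size + length]
            if flags & FIN:
                return msg_id, bytes(self.partial.pop(msg_id))
            return None        # more fragments of this message to come

A huge message then costs anything queued behind it at most one
max_frag-sized delay, instead of blocking the connection for its whole
length.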

One bit of baggage associated with channels is that they need to be
"set up" before use.  BEEP does this, and the overhead (a round trip
of latency) is way too high.  It should not require a round trip
before using new channels: just start using them, allocated by the
sender!  There are receiver overflow issues, but those are best
handled by flow control.
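
A rough sketch of what I mean (illustrative only, not BEEP and not a
concrete proposal): each side allocates channel ids from its own
space, starts sending on them immediately, and a simple credit window
deals with receiver overflow:

    class Sender:
        # Sender-allocated channels: pick the next unused id locally
        # and start sending on it at once; no round trip to "open" it.
        # Each side allocates from a disjoint space (odd vs. even ids)
        # so the two ends never collide.  Names and numbers here are
        # made up for the example.
        def __init__(self, odd=True, initial_credit=65536):
            self.next_id = 1 if odd else 2
            self.initial_credit = initial_credit
            self.credit = {}   # channel id -> bytes we may still send

        def open_channel(self):
            cid = self.next_id
            self.next_id += 2
            self.credit[cid] = self.initial_credit
            return cid         # usable immediately, no handshake

        def can_send(self, cid, nbytes):
            # Receiver overflow is handled by flow control: only send
            # what the receiver has granted credit for.
            return self.credit.get(cid, 0) >= nbytes

        def on_window_update(self, cid, nbytes):
            # The receiver grants more credit as it drains its buffers.
            self.credit[cid] = self.credit.get(cid, 0) + nbytes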

When you can use new channels immediately, that's technically *the
same thing* as request or message identifiers with ability to split
messages.  The difference is just terminology.

It's been argued that a basic transport protocol doesn't need to split
large messages, because applications (or shared middle layers) can do
it themselves, over a basic transport optimised for short messages
only.  The argument is that if it can be done at a higher level, the
transport should be kept simple.

That probably is good protocol design, to keep the problems separate.

However, for *implementation* there are technical arguments in favour
of doing splitting and multiplexing together, close to the transport
framing: Network performance, latency, fairness among message sources,
and flow control to different sinks are all likely to work better with
knowledge fed back from the transport; for instance, aligning frames
with TCP MSS boundaries to reduce average message latency.
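
For example (made-up numbers and names), a mux layer that knows the
path MSS can size fragment payloads so that frames fill whole TCP
segments instead of spilling a few bytes into an extra segment that
delays the tail of a message:

    # Illustrative only.
    def fragment_payload_size(mss, header_len, segments_per_frame=1):
        return segments_per_frame * mss - header_len

    # e.g. a common MSS of 1460 bytes and a 7-byte fragment header
    assert fragment_payload_size(1460, 7) == 1453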

(Perhaps some of the debate/conflict over this comes from it being
good to specify them as separate protocol layers, but good to
implement them in a slightly more unified way.)

I hope I've been able to explain why "channels" means different things
and can be a *low-overhead, performance-enhancing* concept which arises
naturally from robust, multiplexed message transport.

-- Jamie