Re: Multi-GET, extreme compression?

Willy Tarreau <w@1wt.eu> Mon, 18 February 2013 07:09 UTC

Date: Mon, 18 Feb 2013 08:06:35 +0100
From: Willy Tarreau <w@1wt.eu>
To: "William Chan (?????????)" <willchan@chromium.org>
Cc: Helge Hess <helge.hess@opengroupware.org>, James M Snell <jasnell@gmail.com>, Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Phillip Hallam-Baker <hallam@gmail.com>, Cyrus Daboo <cyrus@daboo.name>
Message-ID: <20130218070635.GH13100@1wt.eu>
In-Reply-To: <CAA4WUYh6hNKtiJHuuVxPzki+BQf=2YouAxuYc5Ea3tkmdhxDdg@mail.gmail.com>
Subject: Re: Multi-GET, extreme compression?
Archived-At: <http://www.w3.org/mid/20130218070635.GH13100@1wt.eu>

Hi William,

On Sun, Feb 17, 2013 at 07:00:20PM -0800, William Chan (陈智昌) wrote:
> > Yes, you might want to wait n (3?) milliseconds before sending out
> > additional requests and batch what you get within that timeframe. You don't
> > really send out requests in realtime while parsing, do you? ;-)
> >
> 
> If you had read the previous email thread I linked to at the very
> beginning, you would realize that contrary to Willy's expectation, I
> demonstrated that we do indeed send out requests ASAP (putting aside some
> very low-latency batching). We disable Nagle in order to prevent kernel
> level delays in this manner, since we do indeed want to get requests out
> ASAP.

As you told me that it is not possible to guess URIs in advance given the
way HTML is parsed nowadays, I realized that for the protocol to succeed,
we must make it easy to implement on all sides (browsers, intermediaries,
servers). We must not optimize the protocol for situations that do not
exist, nor force any party to do complex or undesired things (such as
waiting a few milliseconds). In any case, nobody should ever sleep for
some amount of time; everything must be event-driven, because there is no
way to recover lost time.

Having thought about this for a while, I think we must keep in mind that
the enemy is the RTT and that we want to avoid stacking RTTs. Thus I think
a reasonable solution would be to set a limit on the number of concurrent
connections or streams to a given server, and to batch requests only when
too many of them are still awaiting a response.
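
Here is a minimal sketch of what such a client-side policy could look
like, in C; the limit, queue and helper names (MAX_CONCURRENT,
send_single, enqueue, send_batch) are all hypothetical, only meant to
illustrate the idea:

    /* Requests leave immediately while under the per-server limit,
     * and are queued then flushed as one batch once too many of them
     * are awaiting a response. Event-driven throughout: no timers. */
    #define MAX_CONCURRENT 6                /* illustrative limit */

    struct request;                         /* opaque, hypothetical */
    void send_single(struct request *r);    /* hypothetical helpers */
    void enqueue(struct request **q, struct request *r);
    int  send_batch(struct request **q, int max); /* returns #sent */

    struct client {
        int in_flight;                      /* sent, not yet replied */
        struct request *queue;              /* waiting for a slot */
    };

    void on_new_request(struct client *c, struct request *r)
    {
        if (c->in_flight < MAX_CONCURRENT) {
            send_single(r);                 /* leaves immediately */
            c->in_flight++;
        } else {
            enqueue(&c->queue, r);          /* too many unreplied */
        }
    }

    void on_response(struct client *c)
    {
        c->in_flight--;
        /* a slot freed up: flush the queue as a single batch */
        if (c->queue)
            c->in_flight += send_batch(&c->queue,
                                       MAX_CONCURRENT - c->in_flight);
    }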

This means that when DNS+RTT is shorter than the HTML parsing time,
requests leave one at a time. When DNS+RTT is longer than the parsing
time, requests leave in batches.

Also, the behaviour you described to me means that requests should
probably be merged much like in pipelining, and less like in a pack of
requests prepared in advance. By this, I mean that we should preserve the
ability to add requests very late at low cost (e.g. by appending something
to the packet being built). For this reason, I don't think an MGET method
would be the best fit for the task because, as you said, you don't know
the full set of requests in advance. And if we always prepare MGET
requests anyway, then MGET becomes a de-facto replacement for GET even for
single objects, which means we failed somewhere. One solution could be to
have structured requests ordered approximately like this (a rough encoding
sketch follows the list):

     1) length (covering items 1 to 6)
     2) Host
     3) METH
     4) URI for req 1
     5) Header fields
     6) Data
     7) *(length + URI)
     8) END
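
To make the layout concrete, here is a rough sketch of how such a batch
could be serialized, in C. The field encodings are purely my assumptions
(NUL-delimited strings, a 4-byte network-order length); the point is only
that each follow-up request costs a small (length + URI) append:

    /* Illustrative serialization of the structure above. Encoding
     * choices (NUL-delimited fields, 4-byte network-order length)
     * are assumptions, not a proposal from this thread. */
    #include <string.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    static size_t put_str(char *buf, const char *s)
    {
        size_t n = strlen(s) + 1;      /* copy including the NUL */
        memcpy(buf, s, n);
        return n;
    }

    /* Items 1-6: length, Host, METH, URI for req 1, headers, data. */
    size_t encode_first(char *buf, const char *host, const char *meth,
                        const char *uri, const char *hdrs)
    {
        size_t pos = 4;                /* reserve room for item 1 */
        pos += put_str(buf + pos, host);
        pos += put_str(buf + pos, meth);
        pos += put_str(buf + pos, uri);
        pos += put_str(buf + pos, hdrs);
        /* a GET carries no data, so item 6 is empty here */
        uint32_t len = htonl((uint32_t)pos);
        memcpy(buf, &len, 4);          /* backpatch length over 1-6 */
        return pos;
    }

    /* Item 7: each extra request is just (length + URI); the header
     * fields of the first request implicitly apply to it as well. */
    size_t append_uri(char *buf, const char *uri)
    {
        uint32_t len = htonl((uint32_t)(4 + strlen(uri) + 1));
        memcpy(buf, &len, 4);
        return 4 + put_str(buf + 4, uri);
    }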

That way, requests may be aggregated until the send() is performed. And we
may even go further with a better frame encoding, by allowing a request to
reuse the headers from the previous one: then, even if you already did the
send(), segments may still be merged in kernel buffers. This is what
pipelining does in 1.1 (except that pipelining offers no provision for
reusing elements from the previous request).
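
Continuing the sketch above, aggregation until send() could then look like
this (the host, the URIs and the zero-length END marker are again made up
for illustration):

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* URIs discovered while parsing are appended at low cost, and
     * the whole batch leaves in a single send(). */
    ssize_t send_example_batch(int fd)
    {
        char out[4096];
        size_t pos = encode_first(out, "example.org", "GET",
                                  "/index.html", "accept: text/html");
        pos += append_uri(out + pos, "/style.css"); /* found later */
        pos += append_uri(out + pos, "/logo.png");  /* appended late */
        memset(out + pos, 0, 4);                    /* item 8: END */
        pos += 4;
        return send(fd, out, pos, 0);               /* one segment */
    }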

And after all, that's also how objects are linked in an HTML document: you
discover requests in the middle of the stream until you reach the </HTML>
tag. Here it would be the same for the server: it would process all
requests with the same header fields until it sees the END tag.

For intermediaries and servers, this would be almost stateless. I say
"almost" because reusing data means a state is needed, but this state only
lasts until the end of the request batch, so in practice it's just like
today, where we have to keep some request information while processing it
(at least for logging).

Regards,
Willy