Re: [hybi] Websocket success rates and TLS extension.

Mike Belshe <> Sun, 18 April 2010 02:19 UTC

Return-Path: <>
Received: from localhost (localhost []) by (Postfix) with ESMTP id 9F5963A6AAE for <>; Sat, 17 Apr 2010 19:19:14 -0700 (PDT)
X-Virus-Scanned: amavisd-new at
X-Spam-Flag: NO
X-Spam-Score: 1.387
X-Spam-Level: *
X-Spam-Status: No, score=1.387 tagged_above=-999 required=5 tests=[AWL=0.763, BAYES_50=0.001, FM_FORGED_GMAIL=0.622, HTML_MESSAGE=0.001]
Received: from ([]) by localhost ( []) (amavisd-new, port 10024) with ESMTP id MShdVQ5kA2zC for <>; Sat, 17 Apr 2010 19:19:13 -0700 (PDT)
Received: from ( []) by (Postfix) with ESMTP id 7690D3A6AAF for <>; Sat, 17 Apr 2010 19:19:05 -0700 (PDT)
Received: by pzk31 with SMTP id 31so2053758pzk.31 for <>; Sat, 17 Apr 2010 19:18:55 -0700 (PDT)
MIME-Version: 1.0
Received: by with HTTP; Sat, 17 Apr 2010 19:18:53 -0700 (PDT)
In-Reply-To: <>
References: <> <> <> <>
Date: Sat, 17 Apr 2010 19:18:53 -0700
Received: by with SMTP id c27mr1416842wfj.65.1271557133223; Sat, 17 Apr 2010 19:18:53 -0700 (PDT)
Message-ID: <>
From: Mike Belshe <>
To: Jamie Lokier <>
Content-Type: multipart/alternative; boundary="001636e1f7ac3cc77004847977fd"
Cc: "" <>
Subject: Re: [hybi] Websocket success rates and TLS extension.
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Server-Initiated HTTP <>
List-Unsubscribe: <>, <>
List-Archive: <>
List-Post: <>
List-Help: <>
List-Subscribe: <>, <>
X-List-Received-Date: Sun, 18 Apr 2010 02:19:15 -0000

On Sat, Apr 17, 2010 at 4:29 AM, Jamie Lokier <> wrote:

> Mike Belshe wrote:
> >    I think I know what Greg was referring to.
> >    With SPDY, we've observed that the key to minimizing latency is
> >    removing round-trips.  While bandwidth continues to go up (5Mbps,
> >    10Mbps, more!), RTTs are not going down.  The average RTT to Google
> >    today is over 100ms.  Even in the US, where broadband is prevalent,
> >    ~90ms.
> >    Today, each layer of the stack introduces a Round Trip:   DNS, TCP,
> >    HTTP
> >    With WebSockets, as spec'd:  DNS, TCP, HTTP, WebSocket
> >    So, any protocol which induces an extra RT (e.g. the websocket upgrade
> >    handshake) faces a 90ms-per-connection tax on the critical path which
> >    current HTTP does not have.  From our study of web pages, web pages
> >    typically have about 2.5 connections to unique domains that are on the
> >    critical path of the web page download.
> Let me add a data point: mobile browsing.
> I find typical 3.5G (HSDPA) RTTs to vary from 150ms to about 6000ms
> (6 seconds) while remaining functional.  It depends on time of day as
> much as location - it gives every appearance of depending on network load.
> With GSM, the minimum latency is more like 400ms, which is similar to
> land-line dialup.
> That is then multiplied by the number of round trips which involve the
> handset.
> As you can see, in the worst case, the loading time can be very long.
> Anything we can do to reduce the number of round trips, parallelise
> operations, and move round trips to upstream proxies that don't
> involve the handset would be a huge improvement.


> >    This number can use more research, but I believe it is roughly
> >    right. (e.g. [3] loads subresources from
> >    [4] - so adding an extra handshake RT to the protocol
> >    means 2 extra RTs to load the page; the average site on the net
> >    today touches 8 domains).
> Why aren't those domains usually already in the user's DNS cache, with
> long TTLs?  Well, I already know, most users don't have a DNS cache :-)
> But that's a technical possibility...

Even if the DNS is cached, TCP and HTTP each add an extra round trip for
each subdomain.
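A back-of-envelope sketch of the round-trip accounting above. The 90 ms RTT and 2.5 critical-path connections are the figures quoted in this thread; they are illustrative inputs, not new measurements:

```python
# Round trips to first usable byte on a fresh connection, using the
# layers named above: DNS, TCP, HTTP, and (for WebSockets) an extra
# upgrade handshake.

RTT_MS = 90                      # approximate US broadband RTT, per above
CRITICAL_PATH_CONNECTIONS = 2.5  # unique domains on the critical path

def connection_setup_rtts(dns_cached, extra_handshake_rtts=0):
    rtts = 0 if dns_cached else 1   # DNS lookup
    rtts += 1                       # TCP three-way handshake
    rtts += 1                       # HTTP request/response
    return rtts + extra_handshake_rtts

# Plain HTTP with DNS cached: still 2 RTTs per new subdomain.
http_ms = connection_setup_rtts(dns_cached=True) * RTT_MS
# WebSockets as spec'd add one more handshake round trip.
ws_ms = connection_setup_rtts(dns_cached=True, extra_handshake_rtts=1) * RTT_MS

print(http_ms)                                        # 180
print(ws_ms)                                          # 270
print(CRITICAL_PATH_CONNECTIONS * (ws_ms - http_ms))  # 225.0 ms extra
```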

> >    which it has that.  We're pretty sure that a low-latency protocol
> >    needs to remove more RTs.  But perhaps there is something genius yet
> >    to be discovered :-)
> Well now you mention it :-)
> There is a mechanism to remove handshaking RTs.
> Send "conditional action" chunks along with negotiation headers.
> That would be a blocks of data, in a pre-agreed format, which the
> server will ignore if negotiation didn't succeed or didn't agree to
> use that data.

Agree with the concept, but the mechanisms to implement it are not obvious;
TCP can carry data in the SYN and SYN-ACK, but this is seldom used, is not
exposed via the API on most platforms, and gets dropped at some percentage
rate around the internet.  And don't forget that the TCP handshake is often
part of how sites protect against DoS attacks.  Solvable?  Maybe, but not
trivial ;-)
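Arithmetic sketch of what data-in-SYN would buy. "RTTs" here count round trips from the first packet to the first byte of the reply; the mechanism itself is hypothetical, since, as noted, most socket APIs don't expose it and middleboxes may drop such segments:

```python
# Compare round trips to first reply with and without piggybacking the
# request on the TCP SYN.

def rtts_to_first_reply(data_in_syn):
    if data_in_syn:
        # Request rides the SYN; the reply can ride the SYN-ACK: one RTT.
        return 1
    # Otherwise: SYN/SYN-ACK (1 RTT), then request/reply (1 RTT).
    return 2

print(rtts_to_first_reply(False) - rtts_to_first_reply(True))  # 1 RTT saved
```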

> If that is defined that from the very first protocol version, clients
> can safely send messages without waiting for negotiation to complete.
> When it does complete, they will know which already sent messages are
> acted upon, and which are not.  It can then resend any in a different
> format if necessary.
> The above mechanism can be designed to be forward compatible to new
> extensions and protocol elements, provided a basic conditional chunk
> type is defined from the beginning.
> Basic labelling can enable several different protocol features to be
> negotiated independently, including transport-level features for
> intermediary use.
> It is enough to simply say "this chunk type must be ignored" in
> the first protocol version.
> For negotiation up from HTTP to something else, it's necessary to find
> some way to embed conditional chunks in the HTTP negotiation which
> will be ignored by an HTTP server.

Seems likely doable for WebSockets.
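A sketch of the "conditional chunk" framing Jamie describes, where a must-ignore chunk type is defined from the first protocol version so clients can send optimistic data before negotiation completes. The chunk type values, the 1-byte-type/2-byte-length wire layout, and all names here are invented for illustration:

```python
import struct

COND_CHUNK = 0x00    # "ignore unless negotiation agreed" (hypothetical)
NORMAL_CHUNK = 0x01  # ordinary data (hypothetical)

def encode_chunk(chunk_type, payload):
    # 1-byte type, 2-byte big-endian length, then the payload.
    return struct.pack("!BH", chunk_type, len(payload)) + payload

def decode_chunks(buf, negotiated):
    """Return payloads; drop conditional chunks if negotiation failed."""
    out = []
    off = 0
    while off < len(buf):
        ctype, length = struct.unpack_from("!BH", buf, off)
        off += 3
        payload = buf[off:off + length]
        off += length
        if ctype == COND_CHUNK and not negotiated:
            continue  # server ignores the optimistic data
        out.append(payload)
    return out

wire = encode_chunk(COND_CHUNK, b"optimistic") + encode_chunk(NORMAL_CHUNK, b"normal")
print(decode_chunks(wire, negotiated=False))  # [b'normal']
print(decode_chunks(wire, negotiated=True))   # [b'optimistic', b'normal']
```

Once negotiation completes, the client knows which already-sent conditional chunks were acted on and can resend the rest in the agreed format.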

> I don't know if SSL/TLS has a place to put these type of messages
> to reduce TLS negotiation time.

SSL has an additional constraint in that it authenticates the server and
encrypts the channel.  Sending data before authentication completes
requires some additional way to encrypt the data you're sending (since you
haven't authenticated the peer or negotiated keys yet).  Not impossible -
just not trivial.
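Rough RTT accounting for TLS on top of TCP, illustrating why the handshake cost matters: keys are only agreed partway through. Counts are for TLS 1.0-era handshakes (full handshake vs. abbreviated session resumption) and are approximate:

```python
# Round trips before the client can send application data, counting the
# TCP handshake as 1 RTT, a full TLS handshake as 2 more, and an
# abbreviated (resumed-session) handshake as 1 more.

def rtts_before_app_data(tls, resumed=False):
    rtts = 1                          # TCP three-way handshake
    if tls:
        rtts += 1 if resumed else 2   # abbreviated vs. full TLS handshake
    return rtts

print(rtts_before_app_data(tls=False))               # 1
print(rtts_before_app_data(tls=True))                # 3
print(rtts_before_app_data(tls=True, resumed=True))  # 2
```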


> -- Jamie