Re: [hybi] Handshake was: The WebSocket protocol issues.

Eric Rescorla <ekr@rtfm.com> Mon, 11 October 2010 14:06 UTC

To: Willy Tarreau <w@1wt.eu>
Cc: hybi <hybi@ietf.org>, Bjoern Hoehrmann <derhoermi@gmx.net>

On Sun, Oct 10, 2010 at 10:33 PM, Willy Tarreau <w@1wt.eu> wrote:

> On Sun, Oct 10, 2010 at 09:17:21PM -0700, Eric Rescorla wrote:
> (...)
> > Thus, it's quite possible to implement an HTTP server which does not
> > deadlock without looking at the Connection header at all, simply by
> > having a short timeout.
>
> That's what I meant by the "deadlock": we can only end the transfer on
> a timeout if the client expects a close and the server does not close.
> Even if the timeout is short, it makes the situation very uncomfortable
> for the user.


I don't understand what you mean here. There are two issues:

(1) when the response is finished
(2) when the connection can be closed.

Only the first affects the user experience, and the two don't really
interact. Even if the client is using persistent connections, the server
still needs to either include a length indication or close the connection
after sending the response. In neither case does the user experience a stall.
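
For illustration, here's a rough Python sketch of the two framing options
available to the server (the handler name and details are mine, not
anything specified in this thread):

    import socket

    def send_response(conn: socket.socket, body: bytes, persistent: bool) -> None:
        if persistent:
            # Length-delimited framing: the client knows the response
            # ends after Content-Length bytes, so the connection can be
            # reused for further requests.
            head = b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body)
            conn.sendall(head + body)
        else:
            # Close-delimited framing: no length is sent; the end of the
            # response is signalled by closing the connection.
            head = b"HTTP/1.1 200 OK\r\nConnection: close\r\n\r\n"
            conn.sendall(head + body)
            conn.shutdown(socket.SHUT_WR)

Either way the client can render the response as soon as it arrives, so
there is no stall for the user.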



> >    in either the request or the response header fields indicates that
> >    the connection SHOULD NOT be considered `persistent' (section 8.1)
> >    after the current request/response is complete.
>
> Indeed; however, two lines below, the requirement is a MUST for the client:
>
>   An HTTP/1.1 client that does not support persistent connections MUST
>   include the "close" connection option in every request message.


I don't see how this is relevant. The question is whether servers
universally treat the field's presence as an indication of no future
requests.
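
Concretely, the header in question rides on an ordinary request, e.g.:

    GET /chat HTTP/1.1
    Host: server.example.com
    Connection: close

The open question is whether every deployed server treats that last line
as a promise that no further requests will follow on the connection.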




> > Accordingly, it's not at all clear to me that it's safe to rely on
> > Connection: close
>
> Well, the best way to remove any doubt would be to test it on a variety
> of servers.
>

No, I don't agree on this. Once you're reduced to this kind of survey you
no longer have any reasonable assurance of security. Surveys are sometimes
OK for resolving interoperability questions, but in this case you are
relying on this property for a security purpose.



> BTW, I believe that Adam's example was that he could write a program on
> a shared server that could return a valid handshake to the CONNECT request.
> But since the valid response is a 200, by definition it's the establishment
> of a tunnel between both sides, which ends only when the connection closes.
> So once again there is no other request on the wire after the handshake
> (RFC 2817, section 5.3).
>

Yes, that's why using CONNECT is a desirable feature, since for
interoperability reasons servers/proxies cannot treat data that appears in
the tunnel as if it were HTTP traffic.
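
A rough client-side sketch of that tunnel setup, in Python (the proxy and
target names are invented for illustration):

    import socket

    proxy = socket.create_connection(("proxy.example.net", 3128))
    proxy.sendall(b"CONNECT chat.example.com:80 HTTP/1.1\r\n"
                  b"Host: chat.example.com:80\r\n\r\n")
    reply = proxy.recv(4096)
    assert reply.startswith(b"HTTP/1.1 200")  # 200 = tunnel established
    # From here on, the socket carries an opaque byte stream between the
    # client and chat.example.com; per RFC 2817 the proxy just relays
    # bytes until close and does not interpret them as HTTP.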


> I don't agree with that point at all. We're making the same mistake
> again that we made with the -76 handshake: intermediaries should not
> have to wait for the connection to be completely established before
> making a routing decision. Look at this very common example:
>
>                       <--- hosting provider's infrastructure --->
>
>                                   /---- server farm A
>  client --- internet --- content <----- server farm B
>                          switch   \---- server farm C
>
> Some server farms are shared and others are dedicated to particular
> customers. This is the typical scenario at almost every hosting
> provider, because customers with very poor code, high traffic or nasty
> reputations can cause negative side effects on other sites if they
> share the same farms. Here, an HTTP content switch (reverse proxy
> and/or load balancer) will simply look at the Host header and forward
> the request to the proper farm accordingly.
>
> With Adam's proposed handshake, this is no longer possible with
> currently deployed components. We would have to implement WebSocket in
> all front components just so that they can decrypt the Host header and
> see which farm is supposed to process the request, if any at all.


Yes, I agree that this would be required. I don't agree that it's a
dealbreaker.
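
For reference, the Host-based dispatch Willy describes can be sketched in
a few lines of Python (the farm addresses are invented, and a real content
switch would of course splice traffic in both directions):

    import socket

    FARMS = {
        "dedicated-customer.example": ("10.0.1.10", 80),  # dedicated farm
        "shared-site.example":        ("10.0.2.10", 80),  # shared farm
    }

    def route(client: socket.socket) -> None:
        head = client.recv(8192)  # the request head arrives in cleartext
        for line in head.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                host = line.split(b":", 1)[1].strip().decode()
                backend = socket.create_connection(FARMS[host])
                backend.sendall(head)  # then relay bytes both ways
                return
        client.close()  # no Host header: nothing to route on

This only works because the Host header is readable at the switch; an
encrypted handshake pushes the routing decision down into WebSocket-aware
components, which is Willy's objection.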




> Not only is this incompatible with existing HTTP infrastructure, but
> doing so makes the frontend component sensitive to new DoS attacks,
> because it has to maintain a context before even knowing whether it has
> to handle the request.
>
>
I don't see how this creates any meaningful increase in the state the
infrastructure element must maintain; it already has to maintain TCP
buffers, which are far larger.
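
(For rough scale, under common defaults: two 64 kB kernel socket buffers
per connection come to 128 kB, against well under 1 kB of parsed-header
context, so the buffers dominate by roughly two orders of magnitude.)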

-Ekr