Re: p2: Expect: 100-continue and "final" status codes

Willy Tarreau <w@1wt.eu> Wed, 24 April 2013 06:33 UTC

Hi Adrien,

On Wed, Apr 24, 2013 at 04:39:16AM +0000, Adrien W. de Croy wrote:
> I'm really struggling to see what benefit can be derived by a client in 
> knowing whether a server supports 100 continue or not.  So to me 
> Expect: 100-continue is a complete waste of space.  I've never seen one 
> so I guess implementors by and large agree.

The first place I saw lots of them (100% of the requests) was between
applications using web services. All the requests were POSTs and all of
them used 100-continue. That's how I discovered that it was a non-final
status code and that haproxy didn't handle it properly at the time...

> Regardless of 100 continue being transmitted, the client has to send the 
> payload if it wants to reuse the connection.  The only early-out options 
> involve closing the connection.

... or using chunked encoding.

> There was quite a lot of discussion about this in the past, and my 
> understanding was that 100 continue couldn't be used to negotiate 
> whether or not the payload would be sent.

But this can be quite useful with a webmail, for example, where you don't
want to upload your mail with attached documents only to discover that your
session has expired and that you must upload everything again!

> The outcome of this 
> discussion was not satisfactory IMO, since the "answer" was for the 
> client to send request bodies always chunked, and send a 0 chunk if it 
> needed to abort early.

Yes indeed, this is the only reliable way of using it.
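
As a sketch of what that abort looks like on the wire (chunk framing per
the chunked transfer coding; the function names are mine):

```python
def encode_chunk(data: bytes) -> bytes:
    # One chunk: hex size, CRLF, payload, CRLF.
    return f"{len(data):X}\r\n".encode() + data + b"\r\n"

# The terminating zero-size chunk (with empty trailer). Sending this early
# "aborts" the body while keeping the connection reusable -- but, as noted
# above, a server or intermediary cannot tell this apart from a body that
# simply ended here.
LAST_CHUNK = b"0\r\n\r\n"
```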

> This IMO is unsatisfactory because it does not indicate that the client 
> didn't send the payload, and a whole heap of intermediary agents may act 
> on that as if it were complete.
> 
> So for me therefore there's still a hole in the spec around this - 
> chunking doesn't have a way to indicate aborting the body.  And there's 
> no way to pre-authorize transmission of a request body.

It's not a big problem: if the server says it rejects the request, it
will simply drop the payload, so the body can safely be transmitted and
truncated.

> I don't see how a server can return a success status code to a message 
> it didn't even receive yet.

It will base its decision only on credentials or anything found in the
headers (e.g. authentication, cookies, the advertised Content-Length, ...).
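
That header-only decision might look like this (a hypothetical policy with
an assumed upload limit, just to make the idea concrete):

```python
MAX_BODY = 10 * 1024 * 1024  # assumed server-side upload limit

def interim_response(headers: dict) -> str:
    # Decide from the headers alone whether to invite the body:
    # reject unauthenticated or oversized requests before any upload.
    if "authorization" not in headers and "cookie" not in headers:
        return "401 Unauthorized"
    if int(headers.get("content-length", 0)) > MAX_BODY:
        return "413 Payload Too Large"
    return "100 Continue"
```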

> Returning a 417 due to expectation not met 
> is just extra noise and RTT, and the connection needs to be closed 
> anyway or the payload sent.

Except that it's sometimes hard for the client to stop an upload that has
already started.

> So, what would we really lose if 100-continue were deprecated?  and what 
> would we gain.

First, it's the only way for the client to send non-idempotent requests
over existing connections without the risk that the connection expires
during the upload, leaving the client unsure whether the server could
process them. If you want to use a connection pool, you have no other
choice.

Second, it's true that it's annoying on high-latency networks as it adds
an RTT. I think clients could use a threshold on the amount of data
below which they don't send it (unless they're reusing an existing
connection).
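
That client heuristic is a one-liner; the threshold value below is an
arbitrary assumption, not from any spec:

```python
SMALL_BODY = 8 * 1024  # assumed cutoff; the extra RTT isn't worth it below this

def use_expect_continue(body_len: int, reused_connection: bool) -> bool:
    # On a reused (pooled) connection the server may close at any moment,
    # so always probe first; on a fresh connection, only for large bodies.
    if reused_connection:
        return True
    return body_len > SMALL_BODY
```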

Regards,
Willy