Re: WGLC p1: Tear-down

Willy Tarreau <> Tue, 30 April 2013 06:14 UTC

Date: Tue, 30 Apr 2013 08:12:54 +0200
From: Willy Tarreau <>
To: "Adrien W. de Croy" <>
Cc: Mark Nottingham <>, Zhong Yu <>, Ben Niven-Jenkins <>, HTTP Working Group <>
Message-ID: <>
References: <> <em8250d35a-13c2-4842-a118-d69ff0da24b1@bombed>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <em8250d35a-13c2-4842-a118-d69ff0da24b1@bombed>
User-Agent: Mutt/
Subject: Re: WGLC p1: Tear-down
X-Mailing-List: <> archive/latest/17716

Hi Adrien,

On Tue, Apr 30, 2013 at 02:52:49AM +0000, Adrien W. de Croy wrote:
> >> Do we need a way for a server to communicate which requests may be 
> >>made with impunity multiple times, and which should only be made once? 
> >>e.g. safe to retry or not. then only pipeline requests that are safe 
> >>to retry according to the server (rather than according to some 
> >>assumption or heuristic at the client, as such things are inevitably 
> >>wrong on occasion).
> >
> >That's built into the method of the request...
> that's what I meant by assume.
> UA authors might assume GET is idempotent.

UAs are best placed to know where the information they send comes from.
I suspect that when they send a form using GET, they do not trust
idempotence. However, if a link has an embedded query string, they may
consider the request idempotent precisely because it came from a link.
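The per-method assumption being discussed can be sketched as follows. This is a hypothetical illustration (not code from any UA or from haproxy): a client that decides retry safety purely from the method, as HTTP/1.1 defines idempotence per method, regardless of what a given server actually does on GET.

```python
# Methods HTTP/1.1 defines as idempotent; OPTIONS and TRACE are also
# idempotent per the spec, POST and PATCH are not.
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def safe_to_retry(method: str) -> bool:
    """Return True if the method is idempotent *by specification* --
    the assumption a pipelining client makes, whether or not the
    origin server respects it."""
    return method.upper() in IDEMPOTENT_METHODS

# A naive pipelining client would only queue requests passing this check:
assert safe_to_retry("GET")
assert not safe_to_retry("POST")
```

The point of the thread is exactly that this check encodes an assumption about the server, not knowledge of it.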

> It doesn't stop web 
> developers from writing sites that have significant side-effects on GET. 

We'll always get such things from clueless people, but it is also the
spec's job to insist on the risks of not respecting the standard. If it
is clearly written that GET/HEAD/PUT/DELETE are idempotent and that
browsers will treat this statement as true, then web developers will
have some guidance about the risk of doing stupid things.

> Getting these people to indicate safety of retrying is another problem. 

If they already use the wrong method and don't understand idempotence,
we can't expect them to advertise it correctly.

> I guess this is one reason why pipelining isn't that widespread yet.  
> Lots of problems with it.

No, it's really because many intermediaries and servers have had issues
with it, causing pipelined requests to frequently stall or be dropped.
It's not always easy to get right, despite appearing obvious at first. I
recently managed to break it in haproxy without noticing until a user
reported some abnormal errors. To give you a rough idea, I believe the
issue was not caused by the request itself but by a lack of space in the
response buffer when haproxy had to emit a redirect for the second
request: it forgot to wait for free space in the *response* buffer
before starting to parse the *request* buffer. So pipelining is easier
to break than to keep in good shape.
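The ordering rule behind that bug can be sketched in a few lines. This is purely illustrative (not haproxy code, and `MAX_LOCAL_REPLY` is an assumed bound): a proxy handling pipelined requests must not start parsing the next request until it can buffer whatever response that request may force it to generate locally, such as a redirect.

```python
MAX_LOCAL_REPLY = 512  # assumed upper bound on a locally generated reply

def handle(req):
    # Stand-in for request processing; here every request triggers a
    # locally generated redirect, as in the haproxy case above.
    return b"HTTP/1.1 302 Found\r\nLocation: /\r\n\r\n"

def process_pipeline(requests, response_buf, buf_size):
    """Serve pipelined requests, stopping when the response buffer
    cannot be guaranteed to hold the next reply."""
    served = []
    for req in requests:
        # The fix: check *response* buffer space before parsing the
        # next *request*. Skipping this check is the class of bug
        # described above.
        if buf_size - len(response_buf) < MAX_LOCAL_REPLY:
            break  # wait for the client to drain the response buffer
        response_buf += handle(req)
        served.append(req)
    return served
```

With a large buffer all queued requests are served; with a nearly full one, processing correctly pauses instead of mangling the second request.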

Clearly, pipelining opens a new class of bugs, but there is no excuse
for not fixing them. If the spec provides some guidance on this, we'll
manage to slowly fix the web.