Re: "Timeout" request header to tell server to wait for resource to become available

Martin Thomson <> Thu, 09 April 2015 16:55 UTC

From: Martin Thomson <>
To: Benjamin Carlyle <>
Cc: Brendan Long <>, HTTP Working Group <>

That sounds like exactly the case Prefer: wait=x was designed for.

Note that with HTTP/2 you can set the header field to the actual time
that you are willing to wait, and use PING frames to test (and
maintain) connectivity.
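As a minimal sketch (my own illustration, not code from this thread) of such a long-poll GET: the client sends "Prefer: wait=4" and keeps its own socket timeout slightly longer, so the server gets first chance to answer (e.g. with 204) before the client gives up. The host, port, path, and timing values are hypothetical, and the HTTP/2 PING-based liveness check is not shown (this uses plain HTTP/1.1).

```python
# Hypothetical long-poll client: "Prefer: wait" tells the server how long
# we are willing to wait; the socket timeout is kept a little longer so a
# timely server response (including 204) wins over a client-side abort.
import http.client

def long_poll(host, port, path, wait_s=4, slack_s=1):
    conn = http.client.HTTPConnection(host, port, timeout=wait_s + slack_s)
    try:
        conn.request("GET", path, headers={"Prefer": "wait=%d" % wait_s})
        resp = conn.getresponse()
        return resp.status, resp.read(), dict(resp.getheaders())
    finally:
        conn.close()
```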

On 9 April 2015 at 06:39, Benjamin Carlyle
<> wrote:
> On 28 March 2015 at 11:46, Martin Thomson <> wrote:
>> I believe that what you want is accomplished by RFC 7240:
>> Prefer: wait=5
>> The units are perhaps suboptimal for your use case (seconds instead of
>> milliseconds), but we might be able to make a change to support finer
>> grained timing.
> I thought I would write in to describe a use case for combining
> Prefer: wait with GET requests. I'm not sure if my case is completely
> compatible with Brendan's. My main use case for HTTP, SPDY, and soon
> h2 is within highly available safety-related (not safety-critical)
> SCADA systems. Within these systems there is often a requirement for
> soft realtime transfer of data, that is, delivery of information
> within one to three orders of magnitude of the effective latency of
> the network. The current preferred way to do this with HTTP is to
> have a "main" URL for a given collection of data, plus a series of
> "delta" URLs. Issuing GET to the main URL returns immediately, and
> includes a Link header to the next delta URL. A client will issue GET
> to the delta URL, which includes a time-like identifier for the most
> recent main resource representation. The delta response will include
> a Link header to the next delta.
> A crude example:
> -> GET /main
> <- 200 OK
> <- Link: </delta/5>; rel="delta"
> <- (current state)
> -> GET /delta/5
> <- 200 OK
> <- Link: </delta/7>; rel="next"
> <- (changes from t=5 to t=7)
> The request to the delta URL is a "long poll" where the client is
> willing to wait until content is available at the delta URL it has
> been given. There are two main alternatives to a success response for
> the delta URL:
> 4xx - the delta is invalid for some reason, say a circular buffer is
> keeping track of recent changes on behalf of all clients and the index
> into that buffer that the client holds is no longer valid
> 204 - the delta is valid but no changes have occurred yet. The server
> can validly return 204 to a delta request at any time, so it can shed
> clients it no longer wants to serve, etc.
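The two alternatives above could be sketched server-side roughly as follows; this is my own illustration, not code from the post, and the DeltaLog class, window size, and the choice of 410 Gone for an expired delta are all assumptions.

```python
# Hypothetical delta resource backed by a fixed-size circular buffer of
# recent changes. Indexes that have fallen out of the window get 410
# (the delta is invalid); a valid index with nothing new yet gets 204,
# so the response doubles as a heartbeat.
from collections import deque

class DeltaLog:
    def __init__(self, window=8):
        self.changes = deque(maxlen=window)  # recent changes only
        self.next_index = 0                  # index the next change will get

    def append(self, change):
        self.changes.append(change)
        self.next_index += 1

    def respond(self, index):
        """Return (status, body) for GET /delta/<index>."""
        oldest = self.next_index - len(self.changes)
        if index < oldest or index > self.next_index:
            return 410, None  # index outside the buffered window
        if index == self.next_index:
            return 204, None  # nothing new yet: heartbeat
        # Changes from `index` to the head, plus the next index to poll.
        return 200, (list(self.changes)[index - oldest:], self.next_index)
```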
> I haven't been using the RFC7240 code but I may start using it when we
> deprecate an h2-like internal protocol dating back many years and
> switch to official h2. Currently I'm using a custom header sent in the
> GET request to indicate how long the client is willing to wait for a
> response. Typically this might be around 4s, after which the client
> will expect a response - otherwise it may be that the server or TCP
> connection is dead. In this way the 204 response acts as a heartbeat
> message to the client when the change rate is low. I refer to the
> technique as long poll delta encoding, and for synchronisation of data
> across a network between control system components with
> well-controlled failure modes I think it's actually hard to beat -
> partially because this particular interaction is stateless. A client
> only has to make one request at any time to come back into sync and
> the server can drop clients at will without loss of synchronisation
> state. A header like this can also be a hint to layers that do not
> understand the full request semantics to allocate resources to the
> request differently, for example by shifting workload onto a
> different thread pool.
> I wrote about the mechanism back in 2012 in case anyone is interested
> in a somewhat more complete though slightly out-of-date treatment of
> the subject:
> Obviously there is a lot more to the story of ensuring good responses
> to failure for HTTP requests. As a blanket rule for these systems
> there exists a time limit T on the order of a few seconds such that,
> should any kind of network failure occur, clients detect the failure
> and re-establish comms with a new server. Doing a long poll with a
> timeout is one part of that.
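Putting the description above together, a client-side sketch of the long-poll delta loop might look as follows. This is my own illustration, not code from the post; the URLs, the wait value, and the simplified parse_link helper are assumptions (real Link-header parsing is more involved), and a 204 heartbeat re-polls the same delta URL while a 4xx triggers a full resync from /main.

```python
# Hypothetical "long poll delta encoding" client: GET /main once, then
# repeatedly long-poll the delta URL taken from the Link header. 204 is
# a heartbeat (poll the same delta URL again); an HTTP error means the
# delta window was lost, so fall back to a full resync.
import re
import urllib.error
import urllib.request

def parse_link(value, rel):
    """Pull the target of a Link header with the given rel (simplified)."""
    for part in (value or "").split(","):
        m = re.search(r'<([^>]+)>\s*;\s*rel="?%s"?' % re.escape(rel), part)
        if m:
            return m.group(1)
    return None

def fetch(url, wait_s=4):
    req = urllib.request.Request(url, headers={"Prefer": "wait=%d" % wait_s})
    return urllib.request.urlopen(req, timeout=wait_s + 1)

def sync(base, apply_state, apply_delta, polls=10):
    # Initial full state; the server links to the first delta resource.
    with fetch(base + "/main") as resp:
        apply_state(resp.read())
        delta = parse_link(resp.headers.get("Link"), "delta")
    for _ in range(polls):
        try:
            with fetch(base + delta) as resp:
                if resp.status == 204:
                    continue  # heartbeat: no changes yet, poll again
                apply_delta(resp.read())
                delta = parse_link(resp.headers.get("Link"), "next") or delta
        except urllib.error.HTTPError:
            # Delta no longer valid: resync from the main resource.
            with fetch(base + "/main") as resp:
                apply_state(resp.read())
                delta = parse_link(resp.headers.get("Link"), "delta")
```

Because each poll carries everything the server needs, the client only ever has one request in flight and the server can drop it at any point without losing synchronisation state, matching the statelessness property described above.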