Re: [httpapi] Server push - use cases

Evert Pot <me@evertpot.com> Thu, 24 August 2023 20:05 UTC

Message-ID: <be438fce-5f27-945a-11fd-4832cd3e5d8e@evertpot.com>
Date: Thu, 24 Aug 2023 16:05:14 -0400
To: httpapi@ietf.org
References: <e79370cf-b870-eb7d-f61f-e217c2a21819@apache.org> <cee049f0-3028-43ab-a153-beed06f34aea@Spark> <20230814133948.GF18837@1wt.eu>
From: Evert Pot <me@evertpot.com>
In-Reply-To: <20230814133948.GF18837@1wt.eu>
Archived-At: <https://mailarchive.ietf.org/arch/msg/httpapi/igYXtIM7v2gQ_C_QW19yPMBhuUg>
Subject: Re: [httpapi] Server push - use cases

On 2023-08-14 09:39, Willy Tarreau wrote:
> Hi Asbjørn,
>
> On Mon, Aug 14, 2023 at 12:46:42PM +0200, Asbjørn Ulsberg wrote:
>> HTTP Server Push has been debated much in the API community, although
>> widespread adoption has yet to materialise. I presented some of the use-cases
>> at the 2019 HTTP Workshop, which can be viewed here:
>>
>> https://asbjornu.github.io/prefer-push-presentation/
> Thanks for sharing. Sadly without the explanations, it's not obvious
> where you're seeing a gain in the example, because the provided example
> turns a moderately sized responses into 4 slightly smaller ones which
> together are twice as large.
>
>> Kévin Dunglas has been a pioneer in the space, with his implementation of Server Push called "Vulcain":
>>
>> https://github.com/dunglas/vulcain
>>
>> I would love to see more, not less, adoption of Server Push going forward.
>> The web isn't browser-only; if it was, the HTTP specification would have
>> dropped support for other methods than GET + POST long ago (as an
>> example). Thankfully, it hasn't, and I think Server Push deserves a similar
>> treatment.
> For sure the web is not just browsers, but likewise it could be said
> that whenever a server decides to deliver multiple responses it could
> also coalesce them into a composite one that the client would get at
> once. And since push is useful in this case when the requester has a
> cache, populating a cache from multiple responses or from collected
> elements is not different.
>
> The real benefit of push *was* for browsers, on high-latency links,
> where the server could decide to push some objects the client had not
> yet requested. We all know the pros and cons (saving a round-trip vs
> sending objects already present there, who's responsible for that
> object's presence in the cache, etc). But here unless I'm not getting
> it from the presentation, the server did not really use push for what
> it provides, it used it as a way to split the response, just as if it
> had sent a multi-part response of some sort.
The main benefit of using push to send representations, rather than 
compound responses, was the future possibility that those representations 
would work better with HTTP caches.

Since browsers are not aware of the identity of documents embedded in 
compound documents, they also don't know that a compound response 
carrying a new representation should replace a cache entry for something 
they already received as an individual response.

This applies to both browsers and intermediaries of course.
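
To make that concrete, here's a rough sketch using Node's http2 module 
(the paths, payloads and cache lifetimes are invented for illustration): 
the related representation is pushed under its own URL with its own 
Cache-Control, so a cache can store, reuse and replace it independently 
of the response that referenced it:

    import * as http2 from 'node:http2';
    import { readFileSync } from 'node:fs';

    // Hypothetical key/cert paths; browsers only speak HTTP/2 over TLS.
    const server = http2.createSecureServer({
      key: readFileSync('server.key'),
      cert: readFileSync('server.crt'),
    });

    server.on('stream', (stream, headers) => {
      if (headers[':path'] !== '/articles/1') return;

      // Push the related representation as its own response, addressable
      // by its own URL and carrying its own caching metadata.
      stream.pushStream({ ':path': '/articles/1/author' }, (err, push) => {
        if (err) return;
        push.respond({
          ':status': 200,
          'content-type': 'application/json',
          'cache-control': 'max-age=3600',
        });
        push.end(JSON.stringify({ name: 'Evert' }));
      });

      // Then answer the original request, linking to (not embedding) it.
      stream.respond({
        ':status': 200,
        'content-type': 'application/json',
        'cache-control': 'max-age=60',
      });
      stream.end(JSON.stringify({ title: 'Hello', author: '/articles/1/author' }));
    });

    server.listen(8443);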

The result is that many frontend frameworks and protocols like GraphQL 
end up building their own caching layers from scratch, often with fewer 
features than HTTP implementations, and turning off browser cache 
facilities because they're not easy to control.
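
In practice that tends to look something like the following sketch (no 
particular framework, and the response shape is made up): the compound 
response is fetched with HTTP caching switched off, pulled apart on the 
client, and filed into a private in-memory cache keyed by the URL each 
piece *would* have had as a standalone resource:

    // A private, in-memory cache keyed by resource URL, reimplementing
    // a small slice of what the HTTP cache already does.
    const localCache = new Map<string, unknown>();

    async function loadArticle(id: string): Promise<unknown> {
      // The HTTP cache is bypassed, because a compound response can't be
      // mapped onto the individual cache entries it actually represents.
      const res = await fetch(`/articles/${id}?embed=author,comments`, {
        cache: 'no-store',
      });
      const compound = await res.json();

      // Hand-rolled "normalization": split out the embedded documents and
      // store them under the URLs they would have had as real resources.
      localCache.set(`/articles/${id}`, compound.article);
      localCache.set(compound.article.author, compound.author);
      for (const comment of compound.comments ?? []) {
        localCache.set(comment.url, comment);
      }
      return compound.article;
    }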

Push itself (in the form it was released) would not have been the whole 
story for making this work really well, but Push combined with Cache 
Digests[1], Cache Groups[2] and HTTP/2 frames for cache invalidation 
could have gotten us closer to HTTP being used as a state 
synchronization engine.
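
Very roughly, and purely as a sketch of the idea rather than of any of 
those drafts' actual APIs (digestContains below is a hypothetical 
stand-in for checking the digest of cache contents a client would send): 
the client advertises what it already holds, and the server only pushes 
representations it believes are missing:

    import type { ServerHttp2Stream } from 'node:http2';

    // Hypothetical helper: membership test against the digest of the
    // client's cache contents that a CACHE_DIGEST-style frame would carry.
    declare function digestContains(digest: Uint8Array, url: string): boolean;

    function pushIfMissing(
      stream: ServerHttp2Stream,
      clientDigest: Uint8Array,
      url: string,
      body: string,
    ): void {
      if (digestContains(clientDigest, url)) {
        return; // the client likely has a fresh copy; don't spend the bytes
      }
      stream.pushStream({ ':path': url }, (err, push) => {
        if (err) return;
        push.respond({ ':status': 200, 'cache-control': 'max-age=3600' });
        push.end(body);
      });
    }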

So count me as one of the people who would have *liked* to see this 
explored more.

[1]: https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-cache-digest-05
[2]: https://datatracker.ietf.org/doc/draft-nottingham-http-cache-groups/