Re: Proposal: Adopt State Synchronization into HTTPbis

Michael Toomim <toomim@gmail.com> Fri, 11 October 2024 00:41 UTC

Message-ID: <de383c38-5031-49c1-8ba7-cc9ea00551e5@gmail.com>
Date: Thu, 10 Oct 2024 17:40:39 -0700
To: Mark Nottingham <mnot@mnot.net>
Cc: Watson Ladd <watsonbladd@gmail.com>, ietf-http-wg@w3.org
References: <2F6DB48A-D17C-47DF-B1BC-EAC0791D23AE@gmail.com> <CACsn0cmEbb2XF=HCFo7UqKeQCfy8Smkm1cYoqf4MBWu5=Rbs3A@mail.gmail.com> <75592854-1dcc-409b-a33c-a45c3cbd716e@gmail.com> <CACsn0ckj4V+NLp413uVd21nv4Cah63dE9vtNib3TfWXu3DRsOA@mail.gmail.com> <5573afb1-23b1-4f2e-9d99-2f82dbcd4987@gmail.com> <F92CA811-9809-4746-95CC-CCDC1B50B80F@mnot.net>
From: Michael Toomim <toomim@gmail.com>
In-Reply-To: <F92CA811-9809-4746-95CC-CCDC1B50B80F@mnot.net>
Subject: Re: Proposal: Adopt State Synchronization into HTTPbis
Archived-At: <https://www.w3.org/mid/de383c38-5031-49c1-8ba7-cc9ea00551e5@gmail.com>

Thanks, Mark!

I fully agree with you that intermediaries are a great place to start,
along with web applications.

I appreciate you pointing out the buffering problem with SSE-style 
responses over H1, and agree that everything will be much better with 
H2-native framing 
<https://lists.w3.org/Archives/Public/ietf-http-wg/2024OctDec/0018.html>.

I'll also point out that the Braid draft supports polling for updates!
The client can just repeatedly ask for the latest updates like this:

    Request 1:

       GET /foo
       Parents: "1"

    Response 1:

       HTTP/1.1 200 OK
       Version: "2"

       <body>

    ...

    Request 2:

       GET /foo
       Parents: "2"

    Response 2:

       HTTP/1.1 200 OK
       Version: "5"

       <body>

    ...

    Request 3:

       GET /foo
       Parents: "5"

    Response 3:

       HTTP/1.1 200 OK
       Version: "10"

       <body>
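
For concreteness, here's a rough sketch of what that polling loop could
look like in client-side code. This is just my illustration, using plain
fetch (not any particular library); the Parents and Version header names
follow the example above, while the onUpdate callback and the 5-second
interval are made up:

    // Sketch only: poll a resource for versions newer than the one we have.
    async function pollForUpdates(
      url: string,
      onUpdate: (version: string, body: string) => void
    ) {
      let current: string | null = null;               // last Version we saw
      while (true) {
        const headers: Record<string, string> = {};
        if (current) headers["Parents"] = current;      // e.g. Parents: "2"
        const res = await fetch(url, { headers });
        const version = res.headers.get("Version");     // e.g. Version: "5"
        const body = await res.text();
        if (version && version !== current) {
          current = version;                            // remember the new version
          onUpdate(version, body);                      // hand the update to the app
        }
        await new Promise((r) => setTimeout(r, 5000));  // wait, then ask again
      }
    }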

In the big picture, we can support the full range of synchronization 
fidelity—from polling, to H1 SSE, to H2 native frames—over the same 
basic model. There is plenty of room for implementations to fall 
forward, or back, to get around quirks we discover in middleboxes.

Some issues are solved with H2 and H3. For instance, isn't the problem
with long-lived connections solved by H3's 0-RTT or 1-RTT Connection
Migration? Doesn't this let a mobile device drop a connection ... and
pick it up later, at a different IP address?

As for browser support: I don't think that's needed yet. SSE started
with a polyfill; browsers added native support later. Braid-HTTP has a
full-featured JavaScript polyfill:
https://www.npmjs.com/package/braid-http. Developers can get devtools
support via a browser extension:
https://github.com/braid-org/braid-chrome, which adds a "Braid" tab for
viewing and inspecting resource history. This all works today! We're
building apps.

Browser support will help with two things:

  * Performance (a native HTTP parser will outperform our JavaScript
    polyfill)
  * H2 framing (our polyfill library can only implement H1-style framing,
    even over H2 and H3 connections; see the sketch below)
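
To make that framing point concrete, here's a rough sketch of what
"H1-style framing" means for a polyfill. This is my own illustration, not
the braid-http package's actual code: it sends a Subscribe header (as in
the examples quoted later in this message), reads the streamed response
body, and splits it into updates using each update's Content-Length. For
brevity it counts characters instead of bytes (so it assumes ASCII
bodies) and skips error handling:

    // Sketch only: consume a subscription as a stream of response-like blocks.
    async function subscribe(
      url: string,
      onUpdate: (headers: Record<string, string>, body: string) => void
    ) {
      const res = await fetch(url, { headers: { "Subscribe": "timeout=10s" } });
      const reader = res.body!.getReader();
      const decoder = new TextDecoder();
      let buf = "";

      while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        buf += decoder.decode(value, { stream: true });

        // Pull out every complete update: header lines, a blank line, then
        // Content-Length characters of body.
        while (true) {
          while (buf.startsWith("\r\n")) buf = buf.slice(2);
          const split = buf.indexOf("\r\n\r\n");
          if (split === -1) break;                       // headers incomplete
          const headers: Record<string, string> = {};
          for (const line of buf.slice(0, split).split("\r\n")) {
            const i = line.indexOf(":");
            if (i > 0) headers[line.slice(0, i)] = line.slice(i + 1).trim();
          }
          const len = parseInt(headers["Content-Length"] ?? "0", 10);
          const start = split + 4;
          if (buf.length < start + len) break;           // body incomplete
          onUpdate(headers, buf.slice(start, start + len));
          buf = buf.slice(start + len);
        }
      }
    }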

But perhaps I'm missing something, because you wrote that continuing
without browser support is "not a great path to start on." It seems
like a great path to me! Am I missing something?

You also wrote "I don't think we're going to see all changes on Web 
sites pushed over these mechanisms, for a variety of reasons." What are 
those reasons? I've been working on this for a while, and anticipate a 
world where most websites *do* push updates over these mechanisms, so 
I'd love to learn where you see things differently.

Thanks!

Michael

On 10/9/24 10:25 PM, Mark Nottingham wrote:
> Hi Michael,
>
> Just a few thoughts.
>
> Experience with SSE shows that it works... most of the time. Some intermediaries (eg virus scanners, corporate firewalls) tend to buffer HTTP response bodies, disrupting the streaming nature of these responses. This leads to poor user experience and is maddening to debug.
>
> It also causes very specific problems with HTTP/1.1, since it effectively consumes a connection there. Given the extensibility of H2 and H3's framing layers, it may make sense to only deliver a solution over those protocols.
>
> Additionally, deploying a new SSE-like thing (or anything else with similar properties) in browsers requires browser vendor support, and they've shown active disinterest in working on such things in the past, as they seem firmly committed to WebSockets/WebTransport.
>
> While it's possible to work on something for HTTP that doesn't get implemented in browsers (at all, or at the beginning), that's not a great path to start on. I think the best argument for a standard in this space is building shared infrastructure -- e.g., intermediaries (whether CDN or not) that want to support the new thing, and who can't add much value to WebSockets/WebTransport because it's so "low level."
>
> However, there are a few complicating factors. One is finding the right use cases -- pub/sub mechanisms are always a tradeoff between scalability and immediacy; I don't think we're going to see all changes on Web sites pushed over these mechanisms, for a variety of reasons, so we need to figure out what the sweet spot is. Another is the power requirements of keeping connections alive (a constraint that led to the design of WebPush).
>
> Cheers,
>
>
>> On 10 Oct 2024, at 12:10 pm, Michael Toomim <toomim@gmail.com> wrote:
>>
>> Thanks, Watson, for these excellent questions!
>> Let's address them:
>> On 10/9/24 9:50 AM, Watson Ladd wrote:
>>> On Tue, Oct 8, 2024 at 4:16 PM Michael Toomim <toomim@gmail.com> wrote:
>>>
>>>> Servers—which *authoritatively know* when resources change—will promise to tell clients, automatically, and optimally. Terabytes of bandwidth will be saved. Millions of lines of cache invalidation logic will be eliminated. Quadrillions of dirty-cache bugs will disappear. In time, web browser "reload" buttons will become obsolete, across the face of earth.
>>>>
>>> That assumes deployment and that this works pretty universally. I'm
>>> less sanguine about the odds of success.
>>>
>>> This happy state holds after every single intermediary and HTTP
>>> library is modified to change a basic request-response invariant.
>> No, the "happy state" does not require universal adoption. The benefits (of performance, bug elimination, code elimination, and interoperability) are accrued to whichever subnetworks of HTTP adopt them. Let's say I have a simple client/server app, currently using SSE or a WebSocket. If I switch to this RESS standard, my app's state will become more interoperable, more performant, require less code, have fewer bugs, and will have libraries to provide extra features (e.g. offline mode) for free.
>> This doesn't require other apps to adopt it. And (AFAIK) it doesn't require intermediaries to support it.
>> We are confirming in tests right now that we can run transparently through legacy intermediaries, but we already have production apps working fine, so the track record is already great.
>>> Servers have a deeply ingrained idea that they don't need to
>>> hold long lived resources for a request. It's going to be hard to
>>> change that
>> Actually we do this already today. SSE holds responses open for long periods of time, and works great.
>> When a connection dies, the client just reconnects. It's fine.
>>> and some assets will change meaningfully for clients
>>> outside of the duration of a TCP connection (think e.g. NAT, etc).
>> This is a different problem, and is solved by the other Braid extensions— specifically versioning and merge-types. These extensions enable offline edits, with consistency guarantees upon reconnection.
>> Not all apps will need this. The apps that just need subscriptions still get value from subscriptions. Apps that need offline edits can use the other Braid extensions and add OT/CRDT support.
>>> Subscriptions are push based, HTTP requests are pull based. Pulls
>>> scale better: clients can do a distributed backoff, understand that
>>> they are missing information, recover from losing it. Push might be
>>> faster in the happy case, but it is complex to do right. The cache
>>> invalidation logic remains: determining a new version must be pushed
>>> to clients is the same as saying "oh, we must clear caches because
>>> front.jpg changed". We already have a lot of cache control and HEAD to
>>> try to prevent large transfers of unchanged information. A
>>> subscription might reduce some of this, but when the subscription
>>> stops, the client has to check back in, which is just as expensive as
>>> a HEAD.
>> It almost sounds like you're arguing that programmers should only write pull-based apps, and should never write push-based apps?
>> Pull-based apps usually have some polling interval, which wastes bandwidth with redundant requests, and incurs a delay before updates can be seen. Is that what you're talking about? You can't do realtime that way.
>> Realtime apps like Figma push updates in realtime. So do Facebook, Google Search (with instant search suggestions), and basically every app that uses a WebSocket. Yes, this architecture is more sophisticated— Figma implements CRDTs! But it's awesome, and the web is going in this direction. Programmers are writing apps that push updates in realtime, and they need a standard.
>>> I don't really understand the class of applications for which this is
>>> useful. Some like chat programs/multiuser editors I get: this would be
>>> a neat way to get the state of the room.
>> I'll make a strong statement here— this is useful for any website with dynamic state.
>> Yes, chats and collaborative editors have dynamic state, where realtime updates are particularly important. But dynamic state exists everywhere. Facebook and Twitter push live updates to clients. Gmail shows you new mail without you having to click "reload." News sites update their pages automatically with new headlines. The whole web has dynamic state, now. Instead of writing custom protocols over WebSockets, these sites can get back to HTTP and REST — except now it will be RESS, and powerful enough to handle synchronization within the standard infrastructure, in an interoperable, performant, and featureful way.
>>> It also isn't clear to me
>>> that intermediaries can do anything on seeing a PATCH propagating up
>>> or a PUT: still has to go to the application to determine what the
>>> impact of the change to the state is.
>> Yes, they can't today, but we will solve this when we need to — this is the issue of Validating and Interpreting a mutation outside of the origin server. Today, you have to rely on the server to validate and interpret a PUT or PATCH. But when we're ready, we can write specs for how any peer can validate and interpret a PUT or PATCH independently.
>> This will be a beautiful contribution, but again not all apps need it yet, and there's a lot of value to be gained with just a basic subscription mechanism. We can solve the big problems one piece at a time, and different subnets of HTTP can adopt these solutions at their own pace, and for their own incentives.
>>>> Request:
>>>>
>>>> GET /chat
>>>> Subscribe: timeout=10s
>>>>
>>>> Response:
>>>>
>>>> HTTP/1.1 104 Multiresponse
>>>> Subscribe: timeout=10s
>>>> Current-Version: "3"
>>>>
>>>> HTTP/1.1 200 OK
>>>> Version: "2"
>>>> Parents: "1a", "1b"
>>>> Content-Type: application/json
>>>> Content-Length: 64
>>>>
>>>> [{"text": "Hi, everyone!",
>>>> "author": {"link": "/user/tommy"}}]
>>>>
>>>> HTTP/1.1 200 OK
>>>> Version: "3"
>>>> Parents: "2"
>>>> Content-Type: application/json
>>>> Merge-Type: sync9
>>>> Content-Length: 117
>>>>
>>>> [{"text": "Hi, everyone!",
>>>> "author": {"link": "/user/tommy"}}
>>>> {"text": "Yo!",
>>>> "author": {"link": "/user/yobot"}]
>>>>
>>> *every security analyst snaps around like hungry dogs to a steak*
>>> Another request smuggling vector?
>> Request Smuggling is a strong claim! Can you back it up with an example of how you'd smuggle a request through a Multiresponse?
>> I don't think it's possible. Usually Request Smuggling involves some form of "Response Splitting" that behaves differently on upgraded vs. legacy implementations. But there's no ambiguity here. Legacy implementations just see an opaque Response Body. Upgraded implementations see a set of Multi-responses, each distinguished unambiguously via Content-Length.
>> I'd love to see an example attack.
>>> How does a busy proxy with lots of internal connection reuse distinguish updates
>>> as it passes them around on a multiplexed connection? What does this
>>> look like for QUIC and H/3?
>> That's simple. Each Multiresponse—just like a normal response—exists on its own stream within the multiplexed TCP or QUIC connection. The Proxy just forwards all the stream's frames from upstream to downstream, on the same stream.
>> Each Multiresponse corresponds to a single Request, just like regular HTTP Responses.
>>>> This will (a) eliminate bugs and code complexity; while simultaneously (b) improving performance across the internet, and (c) giving end-users the functionality of a realtime web by default.
>>>>
>>> We have (c): it's called WebSockets. What isn't it doing that it
>>> should be?
>> Ah, the limitation of WebSockets is addressed in the third paragraph of the Braid-HTTP draft:
>> https://datatracker.ietf.org/doc/html/draft-toomim-httpbis-braid-http#section-1.1
>> 1. Introduction
>>
>> 1.1. HTTP applications need state Synchronization, not just Transfer
>>
>> HTTP [RFC9110] transfers a static version of state within a single
>> request and response. If the state changes, HTTP does not
>> automatically update clients with the new versions. This design
>> satisficed when webpages were mostly static and written by hand;
>> however today's websites are dynamic, generated from layers of state
>> in databases, and provide realtime updates across multiple clients
>> and servers. Programmers today need to *synchronize*, not just
>> *transfer* state, and to do this, they must work around HTTP.
>>
>> The web has a long history of such workarounds. The original web
>> required users to click reload when a page changed. Javascript and
>> XMLHttpRequest [XHR] made it possible to update just part of a page,
>> running a GET request behind the scenes. However, a GET request
>> still could not push server-initiated updates. To work around this,
>> web programmers would poll the resource with repeated GETs, which was
>> inefficient. Long-polling was invented to reduce redundant requests,
>> but still requires the client to initiate a round-trip for each
>> update. Server-Sent Events [SSE] finally created a standard for the
>> server to push events, but SSE provides semantics of an event-stream,
>> not an update-stream, and SSE programmers must encode the semantics
>> of updating a resource within the event stream. Today there is still
>> no standard to push updates to a resource's state.
>>
>> In practice, web programmers today often give up on using standards
>> for "data that changes", and instead send custom messages over a
>> WebSocket -- a hand-rolled synchronization protocol. Unfortunately,
>> this forfeits the benefits of HTTP and ReST, such as caching and a
>> uniform interface [REST]. As the web becomes increasingly dynamic,
>> web applications are forced to implement additional layers of
>> non-standard Javascript frameworks to synchronize changes to state.
>>
>> Does that answer your question? WebSockets give up on using HTTP. Every programmer builds a different subprotocol over their WebSocket. Then the (increasingly dynamic) state of websites ends up inaccessible and obscured behind proprietary protocols. As a result, websites turn into walled gardens. They can openly link to each other's *pages*, but they cannot reliably interoperate with each other's internal *state*.
>> This will change when the easiest way to build a website is the interoperable way again. We get there by adding Subscription & Synchronization features into HTTP. This is the missing feature from HTTP that drives people to WebSockets today. Programmers use HTTP for static assets, but to get realtime updates they give up and open a WebSocket with a new custom protocol to subscribe to and publish state over it. We end up with yet another piece of web state that's proprietary, hidden behind some programmer's weird API. We can't build common infrastructure for that. CDNs can't optimize WebSocket traffic.
>> We solve this by extending HTTP with support for *dynamic* state, not just *static*. Then programmers don't need WebSockets. They use HTTP for all state, static *and* dynamic. They don't have to design their own sync protocol; they just use HTTP. The easiest way to build a website becomes the interoperable way again. CDNs get to cache stuff again.
>> Thank you very much for your questions. I hope I have addressed them here.
>> Michael
> --
> Mark Nottingham   https://www.mnot.net/
>