Re: [Moq] Exploring HTTP/3
"Ali C. Begen" <ali.begen@networked.media> Thu, 09 February 2023 02:31 UTC
Return-Path: <ali.begen@networked.media>
X-Original-To: moq@ietfa.amsl.com
Delivered-To: moq@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2F4A4C14CE2D for <moq@ietfa.amsl.com>; Wed, 8 Feb 2023 18:31:20 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.994
X-Spam-Level:
X-Spam-Status: No, score=-6.994 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, HTML_FONT_LOW_CONTRAST=0.001, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_HI=-5, RCVD_IN_ZEN_BLOCKED_OPENDNS=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, TRACKER_ID=0.1, URIBL_BLOCKED=0.001, URIBL_DBL_BLOCKED_OPENDNS=0.001, URIBL_ZEN_BLOCKED_OPENDNS=0.001] autolearn=ham autolearn_force=no
Authentication-Results: ietfa.amsl.com (amavisd-new); dkim=pass (2048-bit key) header.d=networked.media
Received: from mail.ietf.org ([50.223.129.194]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id wNtW554h_9QW for <moq@ietfa.amsl.com>; Wed, 8 Feb 2023 18:31:16 -0800 (PST)
Received: from mail-pg1-x52e.google.com (mail-pg1-x52e.google.com [IPv6:2607:f8b0:4864:20::52e]) (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by ietfa.amsl.com (Postfix) with ESMTPS id 0E1FCC14CF05 for <moq@ietf.org>; Wed, 8 Feb 2023 18:31:15 -0800 (PST)
Received: by mail-pg1-x52e.google.com with SMTP id s8so667568pgg.11 for <moq@ietf.org>; Wed, 08 Feb 2023 18:31:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networked.media; s=google; h=cc:to:subject:message-id:date:from:in-reply-to:references :mime-version:from:to:cc:subject:date:message-id:reply-to; bh=L33BUIO6ZLABLhVKp/GcgOnvIplzXIVV1P3Mc9QiAek=; b=PJPnsQrdhoKU4q88b/ua1PrnArjq4CZ2ZXDsqH7w/fLYl7sHOmtBq5AbD3EqkOObqo SVImFSWZXa6Lkwrl8B11nUYoEbXjnYRe0/XyYITmrS6VkPOSxkYd2dokrIlCcoQSOF3q sioCTUNaaHP6T46JGFjYQHGdQKjfrLEkeYj4rDMqbdBD7/LjCiq3mINHkzgTbrn89/mf TQ8USLYOuGaDShl64jFh08CpRnzTcQOtWKmGeeys1CRGEFOaeVt4nV6VQ+gEpesMicJY 7fwn+E6mYEmZgEal+QUK5NZmBGIC0mbHLC0himVA2UMBjQoAJ7F/45NkzTTdk+oWvAV7 3H0g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:subject:message-id:date:from:in-reply-to:references :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=L33BUIO6ZLABLhVKp/GcgOnvIplzXIVV1P3Mc9QiAek=; b=ShqkcLJlrNJDU64qYiP14Mfg2MCjs49/5hDlyZkVzO8+RCXKqPbLejOtLs6KKvbnXN xmfQD61YLTFYq4qC8sR/m92P447OH1NtP1T62ZHwCnDvt8VAoS7bPCAV47P08Fesn/EF fudXSEzYfAnsn7scXn3J7JPsnbpyOMe2otTlNTO3FZESlcvhgrZ6W7QOo31njtpcrlEP ftOxuo4RjpvcWWBfAlvQzTZ+u4o8ksoa4A7bi7HUtoHw6AI9+xfb5kblKCItkEUe7YWN hPw3IskkSIGMp6mX2svlvMBgENPcidLKQU0NtTIFoAr4JTOT5IGupgDxS3vTW7AYvihW xN5g==
X-Gm-Message-State: AO0yUKVc4lXQM+7b9IVctm6JfoEGKvwG2M5JFJe7O+8zyzFI/eo3Najf V5dlkzwG3wIevEWf+AGwERljcQOW7CxC9PPPc0iyDguNCxFlaPBF
X-Google-Smtp-Source: AK7set9HauRR5TlxrOTM0AgmMTOtVL1v0rsnKPNWWnUAg6oiqnwi4i+B7DglSuoviQwe7HffdaCjqpItDlChVgo8nzY=
X-Received: by 2002:a63:af58:0:b0:4ef:2f60:f762 with SMTP id s24-20020a63af58000000b004ef2f60f762mr1937344pgo.22.1675909875118; Wed, 08 Feb 2023 18:31:15 -0800 (PST)
MIME-Version: 1.0
References: <CAHVo=ZmD7KvKxh2tTeaM2B+0q9=qZPgBydmfaHor5MaPODZf6w@mail.gmail.com> <CAAZdMae+WVxYZbKPdWHqApPVX3F5wQ2KHUS03VdFekaCvQiyiA@mail.gmail.com> <MW5PR15MB5145F86C3D9C90438A733218D4D89@MW5PR15MB5145.namprd15.prod.outlook.com> <CAA4MczugjobBo9Xa-E1EeZd+9z3jgWz2K-ADrTj8phSRChqepw@mail.gmail.com> <MW5PR15MB51450A8BD28C0B6B08B82F6CD4D99@MW5PR15MB5145.namprd15.prod.outlook.com>
In-Reply-To: <MW5PR15MB51450A8BD28C0B6B08B82F6CD4D99@MW5PR15MB5145.namprd15.prod.outlook.com>
From: "Ali C. Begen" <ali.begen@networked.media>
Date: Wed, 08 Feb 2023 18:31:04 -0800
Message-ID: <CAA4McztWnbCxn8Xr5Ep+pbB2N17Ea9ZRVt77APu7isFMGK36=g@mail.gmail.com>
To: Roberto Peon <fenix=40meta.com@dmarc.ietf.org>
Cc: Luke Curley <kixelated@gmail.com>, MOQ Mailing List <moq@ietf.org>, Victor Vasiliev <vasilvv@google.com>
Content-Type: multipart/alternative; boundary="000000000000c84f6305f43b2c44"
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/izaBkRwtShRoTnb4Bcawui3MoeQ>
Subject: Re: [Moq] Exploring HTTP/3
X-BeenThere: moq@ietf.org
X-Mailman-Version: 2.1.39
Precedence: list
List-Id: Media over QUIC <moq.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/moq>, <mailto:moq-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/moq/>
List-Post: <mailto:moq@ietf.org>
List-Help: <mailto:moq-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/moq>, <mailto:moq-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 09 Feb 2023 02:31:20 -0000
On Wed, Feb 8, 2023 at 18:26 Roberto Peon <fenix=40meta.com@dmarc.ietf.org> wrote:

> Pedantry is not always unappreciated—I got a smile from it. :)
>
> True—caching impacts latency as a 2nd-order thing (fewer packets going
> through places which could potentially be bottlenecks, and reduces path
> RTT, which allows lower jitter for the same channel utilization (or higher
> utilization for the same jitter)).
>
> The 1st-order latency impacts come from being able to forward things
> cheaply and without requiring in-order reassembly everywhere.

That's where chunked transfer encoding/delivery works well. It could be made
better, though, if CMAF chunking and HTTP chunking worked better together.
Maybe that's a topic we could tackle. They are independent of each other, and
from what I can see, every implementation is different.

> -=R
>
> *From: *Ali C. Begen <ali.begen=40networked.media@dmarc.ietf.org>
> *Date: *Wednesday, February 8, 2023 at 5:25 PM
> *To: *Roberto Peon <fenix@meta.com>
> *Cc: *Luke Curley <kixelated@gmail.com>, MOQ Mailing List <moq@ietf.org>,
> Victor Vasiliev <vasilvv@google.com>
> *Subject: *Re: [Moq] Exploring HTTP/3
>
> On Wed, Feb 8, 2023 at 15:47 Roberto Peon <fenix=40meta.com@dmarc.ietf.org>
> wrote:
>
> Hello Victor!
>
> Here is how HTTP/3 could work as an underlying protocol shape for moq, I
> think.
>
> Client makes request P for a playback session for video X, by stating what
> it wants onto the "control stream" (i.e. a generic resource name for the
> video/broadcast/whatever, which does not need to know anything a priori
> about what is available).
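To make the chunk-alignment point above concrete, here is a minimal sketch of framing each CMAF chunk as exactly one HTTP/1.1 chunked transfer-coding chunk, so the two chunking layers line up instead of being independent. The helper names are hypothetical, and the CMAF chunks are treated as opaque byte strings (each would be a moof+mdat pair in practice):

```python
def http_chunk(payload: bytes) -> bytes:
    """Frame one CMAF chunk as one HTTP/1.1 chunked-transfer chunk:
    hex length, CRLF, payload, CRLF (per RFC 9112)."""
    return b"%x\r\n" % len(payload) + payload + b"\r\n"

def chunked_body(cmaf_chunks) -> bytes:
    """Encode a sequence of CMAF chunks as a chunked transfer-coded
    body, terminated by the zero-length last chunk."""
    return b"".join(http_chunk(c) for c in cmaf_chunks) + b"0\r\n\r\n"
```

With this alignment, an intermediary that forwards transfer-coding chunks as they arrive naturally forwards whole CMAF chunks, which is the property the low-latency delivery path wants.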
> Server responds to this request that it is replying with the following
> streams:
>
> - video stream V
> - audio stream A
> - manifest stream M
>
> ... and keeps the stream open, so it can continue to communicate to the
> client.
>
> Resources V, A, M are cacheable.
> Resource P (the playback request) is not cacheable, since this is an
> ephemeral "control" (a.k.a. metadata) stream.
>
> Client, being informed it needs V, A, M, requests these if it isn't
> already reading from them.
> Assuming push exists, and the server did it, this data will already be at
> the client, and so we've saved the latency we needed to save.
>
> Not to be pedantic, but you reduced the startup delay, aka time to first
> frame, not the latency. If latency reduction is the goal, pushing is not
> mandatory. The client can request the upcoming media object to get the
> lowest latency.
>
> Startup delay vs. latency are orthogonal goals. And IMO the client should
> pick whatever it likes to have. RFC 6285, for example, supports either.
>
> If the client wishes to change things, it can send another request to the
> server for this, referencing the same playback session ID.
> The server can then stop sending old or start sending new things, or start
> sending from a different offset.
>
> This works even when the server is dumb and doesn't push, and we still can
> get much of the benefit in that case.
>
> What I think this gets us:
>
> - streaming with single-packet "delay" at relays
> - O(1) time/space overheads at relays (about as good as we'll ever get).
>   This works by having a 'subscription' table so when we receive a packet
>   on a stream with subscriptions, we can immediately write a new packet
>   and send it out on the sub'd connection. We'd rewrite the CID and
>   stream ID, and of course re-encrypt.
> - caching when caching is useful/available
> - easy fallbacks
> - optional compression of metadata.
>
> What is this missing, requirements-wise?
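The O(1) "subscription table" forwarding described above could be sketched as follows. All names here (Relay, subscribe, on_packet) are hypothetical, and the sketch deliberately skips the CID/stream-ID rewriting and re-encryption a real relay would do:

```python
from collections import defaultdict

class Relay:
    """Sketch of a relay that fans out packets by table lookup:
    no reassembly, no buffering of whole objects."""

    def __init__(self):
        # stream name -> list of subscriber send-callbacks
        self.subs = defaultdict(list)

    def subscribe(self, stream, send):
        """Register a subscriber's send function for a stream."""
        self.subs[stream].append(send)

    def on_packet(self, stream, payload):
        """On packet arrival: O(1) table lookup, then immediately
        re-send to each subscribed connection. (A real relay would
        also rewrite the CID/stream ID and re-encrypt here.)"""
        for send in self.subs[stream]:
            send(payload)
```

The per-packet work is a dictionary lookup plus one send per subscriber, which is where the "single-packet delay" property comes from.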
> -=R
>
> *From: *Moq <moq-bounces@ietf.org> on behalf of Victor Vasiliev
> <vasilvv=40google.com@dmarc.ietf.org>
> *Date: *Wednesday, February 8, 2023 at 3:23 PM
> *To: *Luke Curley <kixelated@gmail.com>
> *Cc: *MOQ Mailing List <moq@ietf.org>
> *Subject: *Re: [Moq] Exploring HTTP/3
>
> Hi Luke,
>
> As you mention, most of the transport-level techniques that WARP uses
> (priorities, etc.) are doable with HLS or DASH over HTTP/3 (in fact, I am
> aware of some of them actually being implemented in practice).
>
> That said, I believe that HTTP is fundamentally not the right solution
> here. HTTP is really good when you have a resource with an address that's
> known in advance and you are trying to fetch that resource (directly or
> indirectly) from an entity that is an authoritative source for it. This
> works well for regular web pages, and this works well for VoD, but as
> soon as you stray from the well-lit path, you suddenly find yourself
> doing an increasingly convoluted series of awkward things just to make it
> work. For instance, you mention long-polling, but long-polling is not
> necessarily consistently supported by the ecosystem, and you have no
> guarantee that an intermediary won't wait until it has fetched the entire
> resource, or won't deliver chunks unevenly over time. Similarly, HTTP
> server push has been largely unsuccessful. Same thing with trying to make
> the Priority header work with the WARP priority scheme.
> I think MoQ provides us with an opportunity to build a protocol that
> actually makes sense for transporting live media, as opposed to trying to
> fit HTTP into a model it's not good at. Especially since with every
> "gotcha" you encounter with HTTP, you lose some of the advantage that made
> HTTP seem like an appealing proposition in the first place.
>
> -- Victor.
>
> On Wed, Feb 8, 2023 at 1:33 PM Luke Curley <kixelated@gmail.com> wrote:
>
> Hey MoQ,
>
> As I mentioned in a recent email:
>
> > The best part about HLS/DASH is the HTTP ecosystem. That includes CDN
> > support, optimized software, and general interoperability. We lose a
> > lot of this by creating a custom pub/sub mechanism.
>
> > The worst part about HLS/DASH is the latency. This is caused by
> > head-of-line blocking (buffering) and the client being in charge
> > (requesting playlists and segments). The Warp draft tackles these
> > problems with QUIC prioritization (delivery order) and WebTransport
> > push, respectively.
>
> I think it's possible to address the problems with HLS/DASH without
> forgoing HTTP; WebTransport push is not the only option. At one point I
> drafted Warp over HTTP/3, but abandoned it because it's more complicated
> and Twitch doesn't need 3rd-party CDN support.
>
> Warp's OBJECT frames are strikingly similar to HTTP/3's HEADERS+DATA
> frames, and unironically we're considering using QPACK to compress the
> OBJECT headers. I propose each Warp object would be an HTTP resource
> instead. This parallels discussion at the interim suggesting we need a
> way to GET old media for DVR.
>
> Head-of-line blocking can be avoided using QUIC (or HTTP/2)
> prioritization with the Priority
> <https://datatracker.ietf.org/doc/html/rfc9218> header. The client would
> request each resource with a priority based on its contents. For example,
> request the newest audio segment at high urgency (e.g. u=1) and the older
> video at lower urgency (e.g. u=5); in RFC 9218, lower urgency values are
> more important. There are some warts, especially involving relays, but
> it's not unsolvable.
>
> The latency caused by requests can be avoided by long-polling. The
> purpose of WebTransport push is to avoid the round trip between when the
> client is informed about new media and when it requests it. Twitch uses
> long-polling with HLS today to accomplish the same thing, preflighting
> the next request based on a deterministic URL. Prioritization lets you
> preflight multiple concurrent requests without the risk of them fighting
> for bandwidth.
>
> Alternatively, some variation of HTTP push could avoid request latency.
> I've said this before, but QUICR looks a lot like a hypothetical HTTP
> subscription since it's based solely on the URL. Nobody likes HTTP push,
> let alone extending it, but technically we're building something similar.
> I would not recommend this direction.
>
> I'm mostly worried about how browsers/servers will handle a request per
> frame. I still strongly recommend breaking media into layers anyway; I'm
> still not convinced that networks need the ability to drop individual
> frames. For example, non-reference frames could be bundled together into
> the same HTTP resource and prioritized lower than reference frames.
>
> But in theory that's all we need. I can write up a draft if the WG thinks
> it would be fruitful to explore this direction. It could be a DASH
> extension, although it's still important to address the contribution side
> of the coin (e.g. push using HTTP PUT).
>
> --
> Moq mailing list
> Moq@ietf.org
> https://www.ietf.org/mailman/listinfo/moq
>
> --
> -acbegen
> Using iThumbs

--
-acbegen
Using iThumbs
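A footnote on the Priority-header scheme discussed upthread: the sketch below shows what preflighted, prioritized requests might look like. The segment URLs and urgency values are purely illustrative (not from any draft), and it assumes RFC 9218 semantics, where urgency ranges 0-7 with lower values more important and the `i` flag marks a response as usable incrementally:

```python
def priority_header(urgency: int, incremental: bool = True) -> str:
    """Build an RFC 9218 Priority field value. Urgency is 0..7
    (lower = more important); 'i' means incremental delivery."""
    assert 0 <= urgency <= 7
    return f"u={urgency}, i" if incremental else f"u={urgency}"

# Hypothetical preflighted requests for a live stream: the newest
# audio segment is marked most urgent, the older video segment can
# yield bandwidth to it without extra round trips.
requests = [
    ("GET", "/live/audio/segment-1001.m4s", priority_header(1)),
    ("GET", "/live/video/segment-0999.m4s", priority_header(5)),
]
```

Because both requests are outstanding at once with explicit urgencies, the server (or relay) decides the interleaving, which is what lets multiple preflighted long-polls coexist without fighting for bandwidth.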