Re: [Moq] Exploring HTTP/3

Spencer Dawkins at IETF <spencerdawkins.ietf@gmail.com> Thu, 09 February 2023 18:24 UTC

References: <CAHVo=ZmD7KvKxh2tTeaM2B+0q9=qZPgBydmfaHor5MaPODZf6w@mail.gmail.com> <CAAZdMae+WVxYZbKPdWHqApPVX3F5wQ2KHUS03VdFekaCvQiyiA@mail.gmail.com> <MW5PR15MB5145F86C3D9C90438A733218D4D89@MW5PR15MB5145.namprd15.prod.outlook.com> <CAA4MczugjobBo9Xa-E1EeZd+9z3jgWz2K-ADrTj8phSRChqepw@mail.gmail.com> <MW5PR15MB51450A8BD28C0B6B08B82F6CD4D99@MW5PR15MB5145.namprd15.prod.outlook.com> <CAA4McztWnbCxn8Xr5Ep+pbB2N17Ea9ZRVt77APu7isFMGK36=g@mail.gmail.com> <MW5PR15MB514583AD51F7CB970480C0B9D4D99@MW5PR15MB5145.namprd15.prod.outlook.com> <CAOW+2dsEAuyjc65HbJNRJ5W9y90eTfEEBcz+RiHs1GT4yw9prA@mail.gmail.com>
In-Reply-To: <CAOW+2dsEAuyjc65HbJNRJ5W9y90eTfEEBcz+RiHs1GT4yw9prA@mail.gmail.com>
From: Spencer Dawkins at IETF <spencerdawkins.ietf@gmail.com>
Date: Thu, 09 Feb 2023 12:24:18 -0600
Message-ID: <CAKKJt-d1D6FveC+87gevrFbds3EgBv+29qauZUoJzJkO2+1fbw@mail.gmail.com>
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: Roberto Peon <fenix=40meta.com@dmarc.ietf.org>, "Ali C. Begen" <ali.begen=40networked.media@dmarc.ietf.org>, Luke Curley <kixelated@gmail.com>, MOQ Mailing List <moq@ietf.org>, Victor Vasiliev <vasilvv@google.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/VzUP-Zu70hKBeLZ5DsHeyAGLzbk>
Subject: Re: [Moq] Exploring HTTP/3
List-Id: Media over QUIC <moq.ietf.org>
List-Archive: <https://mailarchive.ietf.org/arch/browse/moq/>

Just chiming in,

On Thu, Feb 9, 2023 at 12:12 PM Bernard Aboba <bernard.aboba@gmail.com>
wrote:

> This forwarding scheme makes assumptions that should be made explicit. For
> example: can a connection carry multiple streams with different
> subscribers?
>
> On Thu, Feb 9, 2023 at 10:01 Roberto Peon <fenix=40meta.com@dmarc.ietf.org>
> wrote:
>
>> I believe relaying could/should be done down at the QUIC processing
>> layer, instead of requiring it to be done at L7.
>> I’d hope this to be true of moq, regardless of using HTTP or not, to be
>> clear.
>>
>
I can't comment on what the working group thinks about MoQ relay
functionality (see previous emails sent this week, and last), but I'm still
trying to understand how many relays will be strictly relaying one
broadcast, and how many relays will also be performing the additional
functionality we've identified (caching, rate adaptation, transmuxing,
etc.).

It will be easier to optimize for the common case if we know what the
common case is.

Unless we're figuring out a QUIC-in/QUIC-out caching architecture, for
instance, I'd think relay caches would be popping up to HTTP anyway,
wouldn't they?

Best,

Spencer


> Assuming we use the higher-level protocol I suggested below, relays
>> would/could then look like this Python pseudocode, with O(1) memory and
>> time, assuming the couple of lookups are O(1), and O(log n) if not:
>>
>> while True:
>>   quic_packet, addr = sock.recvfrom(port)
>>   cid = quic.getCID(quic_packet)
>>   decrypted_quic_packet = quic.decrypt(cid, quic_packet)
>>   did_fast_path = False
>>
>>   try:
>>     streams_in_packet = quic.getStreams(decrypted_quic_packet)
>>     if len(streams_in_packet) == 1:  # generally one when doing media
>>       try:
>>         subs = subscribers[cid]
>>         for sub_cid, sub_sid in subs:
>>           if (quic.allowedToSend(sub_cid, sub_sid) and
>>               h3.noHeadersFrame(sub_cid, sub_sid, decrypted_quic_packet)):
>>             # copy per subscriber, since the rewrites mutate the packet
>>             decrypted_quic_packet_copy = copy(decrypted_quic_packet)
>>             quic.rewriteCid(decrypted_quic_packet_copy, sub_cid)
>>             quic.rewriteSid(decrypted_quic_packet_copy, sub_sid)
>>             quic.encryptAndSend(sub_cid, sub_sid, decrypted_quic_packet_copy)
>>             did_fast_path = True
>>       except KeyError:
>>         pass
>>   except ErrorDecodingPacket:
>>     pass
>>
>>   # do HTTP/3 L7 processing, normal QUIC stuff
>>   quic.slowPathProcessing(decrypted_quic_packet, did_fast_path)
>>
>>
>> HTTP/3 things could use this fast path so long as the mapping of
>> entity->stream stayed mostly static during a connection.
>>
>> -=R
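[The pseudocode above reads from a `subscribers` table without showing how it is built. A minimal sketch of the bookkeeping side, with all names invented for illustration, might be:]

```python
# Invented sketch: maintaining the subscribers table the fast path consults.
# Maps a publisher's connection ID to the (connection ID, stream ID) pairs
# that should receive copies of packets arriving on that connection.
subscribers = {}

def on_subscribe(pub_cid, sub_cid, sub_sid):
    # Record that sub_cid wants copies of packets arriving on pub_cid's stream.
    subscribers.setdefault(pub_cid, []).append((sub_cid, sub_sid))

def on_unsubscribe(pub_cid, sub_cid, sub_sid):
    subs = subscribers.get(pub_cid, [])
    if (sub_cid, sub_sid) in subs:
        subs.remove((sub_cid, sub_sid))
    if not subs:
        # drop empty entries so lookups stay O(1) and the table stays small
        subscribers.pop(pub_cid, None)
```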
>>
>> *From: *Ali C. Begen <ali.begen=40networked.media@dmarc.ietf.org>
>> *Date: *Wednesday, February 8, 2023 at 6:31 PM
>> *To: *Roberto Peon <fenix@meta.com>
>> *Cc: *Luke Curley <kixelated@gmail.com>, MOQ Mailing List <moq@ietf.org>,
>> Victor Vasiliev <vasilvv@google.com>
>> *Subject: *Re: [Moq] Exploring HTTP/3
>>
>>
>> On Wed, Feb 8, 2023 at 18:26 Roberto Peon <fenix=
>> 40meta.com@dmarc.ietf.org> wrote:
>>
>> Pedantry is not always unappreciated—I got a smile from it.
>>
>>
>>
>> :)
>>
>>
>> True—caching impacts latency as a 2nd-order thing (fewer packets going
>> through places which could potentially be bottlenecks, and a reduced path
>> RTT, which allows lower jitter for the same channel utilization, or higher
>> utilization for the same jitter).
>>
>> The 1st order latency impacts come from being able to forward things
>> cheaply and without requiring in-order reassembly everywhere.
>>
>> That's where chunked transfer encoding / delivery works well. It could be
>> made better, though, if CMAF chunking and HTTP chunking worked better
>> together. Maybe that's a topic we could tackle. They are independent from
>> each other, and from what I can see, every implementation is different.
>>
>>
>>
>>
>>
>>
>> -=R
>>
>> *From: *Ali C. Begen <ali.begen=40networked.media@dmarc.ietf.org>
>> *Date: *Wednesday, February 8, 2023 at 5:25 PM
>> *To: *Roberto Peon <fenix@meta.com>
>> *Cc: *Luke Curley <kixelated@gmail.com>, MOQ Mailing List <moq@ietf.org>,
>> Victor Vasiliev <vasilvv@google.com>
>> *Subject: *Re: [Moq] Exploring HTTP/3
>>
>> On Wed, Feb 8, 2023 at 15:47 Roberto Peon <fenix=
>> 40meta.com@dmarc.ietf.org> wrote:
>>
>> Hello Victor!
>>
>> Here is how HTTP/3 could work as an underlying protocol shape for moq, I
>> think.
>>
>> Client makes request P for a playback session for video X, by stating
>> what it wants onto the “control stream” (i.e. a generic resource name for
>> the video/broadcast/whatever, which does not need to know anything a priori
>> about what is available).
>>
>> Server responds to this request that it is replying with the following
>> streams:
>> - video stream V
>> - audio stream A
>> - manifest stream M
>> .. and keeps the stream open, so it can continue to communicate to the
>> client.
>>
>> Resources V, A, M are cacheable.
>> Resource P (the playback request) is not cacheable, since this is an
>> ephemeral “control” (a.k.a. metadata) stream.
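[As one possible concretization of the exchange sketched above — the message shapes here are invented, not from any draft:]

```python
# Invented message shapes for the control-stream exchange: the client asks
# for a playback session; the server names the streams it will send and
# whether each is cacheable.
import json

request = json.dumps({"type": "play", "resource": "video/X", "session": "P"})

response = json.dumps({
    "type": "streams",
    "session": "P",   # the ephemeral, non-cacheable control/session ID
    "streams": [
        {"id": "V", "kind": "video",    "cacheable": True},
        {"id": "A", "kind": "audio",    "cacheable": True},
        {"id": "M", "kind": "manifest", "cacheable": True},
    ],
})
```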
>>
>> Client, being informed that it needs V, A, M, requests these if it isn’t
>> already reading from them.
>> Assuming push exists and the server did it, this data will already be at
>> the client, and so we’ve saved the latency we needed to save.
>>
>> Not to be pedantic, but you reduced the startup delay, a.k.a. time to
>> first frame, not the latency. If latency reduction is the goal, pushing is
>> not mandatory. The client can request the upcoming media object to get the
>> lowest latency.
>>
>>
>>
>> Startup delay vs. latency are orthogonal goals. And IMO the client should
>> pick whichever it likes to have. RFC 6285, for example, supports either.
>>
>> If client wishes to change things, it can send another request to the
>> server for this, referencing the same playback session ID.
>> The server can then stop sending old or start sending new things, or
>> start sending from a different offset.
>>
>> This works even when the server is dumb and doesn’t push, and we still
>> can get much of the benefit in that case.
>>
>> What I think this gets us:
>> - streaming with single-packet “delay” at relays
>> - O(1) time/space overheads at relays (about as good as we’ll ever get).
>> This works by having a ‘subscription’ table so when we receive a packet on
>> a stream with subscriptions, we can immediately write a new packet and send
>> it out on the sub’d connection. We’d rewrite the CID and stream ID, and of
>> course reencrypt.
>> - caching when caching is useful/available
>> - easy fallbacks
>> - optional compression of metadata.
>>
>> What is this missing, requirements-wise?
>>
>> -=R
>>
>> *From: *Moq <moq-bounces@ietf.org> on behalf of Victor Vasiliev <vasilvv=
>> 40google.com@dmarc.ietf.org>
>> *Date: *Wednesday, February 8, 2023 at 3:23 PM
>> *To: *Luke Curley <kixelated@gmail.com>
>> *Cc: *MOQ Mailing List <moq@ietf.org>
>> *Subject: *Re: [Moq] Exploring HTTP/3
>>
>> Hi Luke, As you mention, most of the transport-level techniques that WARP
>> uses (priorities, etc) are doable with HLS or DASH over HTTP/3 (in fact, I
>> am aware of some of those being actually implemented in practice). That
>> said, I believe that
>>
>> ZjQcmQRYFpfptBannerStart
>>
>> *This Message Is From an External Sender *
>>
>> ZjQcmQRYFpfptBannerEnd
>>
>> Hi Luke,
>>
>>
>>
>> As you mention, most of the transport-level techniques that WARP uses
>> (priorities, etc) are doable with HLS or DASH over HTTP/3 (in fact, I am
>> aware of some of those being actually implemented in practice).
>>
>>
>>
>> That said, I believe that HTTP is fundamentally not the right solution
>> here.  HTTP is really good when you have a resource with an address that's
>> known in advance and you are trying to fetch that resource (directly or
>> indirectly) from an entity that is an authoritative source for it.  This
>> works well for the regular web pages, and this works well for VoD, but as
>> soon as you stray away from the well-lit path, you suddenly find yourself
>> doing an increasingly convoluted series of awkward things just to make it
>> work.  For instance, you mention long-polling, but long-polling is not
>> necessarily consistently supported by the ecosystem, and you have no
>> guarantee that an intermediary won't wait until fetching the entire
>> resource, or deliver chunks unevenly over time.  Similarly, HTTP server
>> push has been largely unsuccessful.  Same thing with trying to make
>> Priority header work with the WARP priority scheme.
>>
>>
>>
>> I think MoQ provides us with an opportunity to build a protocol that
>> actually makes sense for transporting live media, as opposed to trying to
>> fit HTTP into the model it's not good at.  Especially when with every
>> "gotcha" you encounter with HTTP, you lose some of the advantage that made
>> HTTP seem like an appealing proposition in the first place.
>>
>>
>>
>>   -- Victor.
>>
>>
>>
>> On Wed, Feb 8, 2023 at 1:33 PM Luke Curley <kixelated@gmail.com> wrote:
>>
>> Hey MoQ,
>>
>>
>>
>> As I mentioned in a recent email:
>>
>>
>>
>> > The best part about HLS/DASH is the HTTP ecosystem. That includes CDN
>> support, optimized software, and general interoperability. We lose a lot of
>> this by creating a custom pub/sub mechanism.
>>
>>
>>
>> > The worst part about HLS/DASH is the latency. This is caused by
>> head-of-line blocking (buffering) and the client being in charge
>> (requesting playlists and segments). The Warp draft tackles these problems
>> with QUIC prioritization (delivery order) and WebTransport push
>> respectively.
>>
>>
>>
>>
>>
>> I think it's possible to address the problems with HLS/DASH without
>> forgoing HTTP; WebTransport push is not the only option. At one point I
>> drafted Warp over HTTP/3, but abandoned it because it's more complicated
>> and Twitch doesn't need 3rd-party CDN support.
>>
>>
>>
>> Warp's OBJECT frames are strikingly similar to HTTP/3's HEADERS+DATA
>> frames, and unironically we're considering using QPACK to compress the
>> OBJECT headers. I propose each Warp object would be an HTTP resource
>> instead. This parallels discussion at the interim suggesting we need a way
>> to GET old media for DVR.
>>
>>
>>
>> Head-of-line blocking can be avoided using QUIC (or HTTP/2)
>> prioritization with the Priority
>> <https://datatracker.ietf.org/doc/html/rfc9218> header. The client would
>> request each resource with a priority based on its contents. For example,
>> request the newest audio segment with urgency=1 and the older video with
>> urgency=5 (in RFC 9218, lower urgency values mean higher priority). There
>> are some warts, especially involving relays, but it's not unsolvable.
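[To make the header concrete: a sketch, with hypothetical URLs, of attaching RFC 9218 `Priority` values to segment fetches.]

```python
# Sketch of RFC 9218 Priority headers on segment fetches (URLs hypothetical).
# Urgency ranges 0..7; LOWER values mean HIGHER priority, and "i" marks the
# response as incrementally usable (deliverable as it arrives).
import urllib.request

def segment_request(url, urgency, incremental=True):
    value = f"u={urgency}" + (", i" if incremental else "")
    return urllib.request.Request(url, headers={"Priority": value})

# Newest audio must win; older video segments can lag behind it.
audio = segment_request("https://cdn.example/live/audio/123.m4s", urgency=1)
video = segment_request("https://cdn.example/live/video/122.m4s", urgency=5)
```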
>>
>>
>>
>> The latency caused by requests can be avoided by long polling. The
>> purpose of WebTransport push is to avoid the round trip between when the
>> client is informed about new media and when it requests it. Twitch uses
>> long-polling with HLS today to accomplish the same thing, preflighting the
>> next request based on a deterministic URL. Prioritization lets you
>> preflight multiple concurrent requests without the risk of them fighting
>> for bandwidth.
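[The deterministic-URL preflight described above can be sketched as follows; the URL layout is invented for illustration:]

```python
# Sketch of long-poll preflighting over deterministic URLs (layout invented):
# while segment N downloads, the request for N+1 is already outstanding; the
# server simply holds that request open until segment N+1 exists.
def segment_url(base, track, seq):
    return f"{base}/{track}/{seq}.m4s"

def preflight_next(base, track, current_seq):
    # The next URL is known in advance, so no manifest round trip is needed.
    return segment_url(base, track, current_seq + 1)
```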
>>
>>
>>
>> Alternatively, some variation of HTTP push could avoid request latency.
>> I've said this before, but QUICR looks a lot like a hypothetical HTTP
>> subscription since it's based solely on the URL. Nobody likes HTTP push,
>> let alone extending it, but technically we're building something similar.
>> I would not recommend this direction.
>>
>>
>>
>> I'm mostly worried about how browsers/servers will handle a request per
>> frame. I still strongly recommend breaking media into layers anyway; I'm
>> still not convinced that networks need the ability to drop individual
>> frames. For example, non-reference frames could be bundled together into
>> the same HTTP resource and prioritized lower than reference frames.
>>
>>
>>
>>
>>
>> But in theory that's all we need. I can write up a draft if the WG thinks
>> it would be fruitful to explore this direction. It could be a DASH
>> extension, although it's still important to address the contribution side
>> of the coin (e.g., push using HTTP PUT).
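[On the contribution side, the HTTP PUT idea might look something like this; the ingest URL and layout are invented:]

```python
# Hypothetical sketch of contribution ("push") via HTTP PUT: the encoder PUTs
# each media object to a deterministic URL as soon as it is produced.
import urllib.request

def put_object(base, track, seq, payload):
    url = f"{base}/{track}/{seq}.m4s"
    req = urllib.request.Request(url, data=payload, method="PUT")
    req.add_header("Content-Type", "video/mp4")
    return req  # a real encoder would then urllib.request.urlopen(req)

req = put_object("https://ingest.example/live/streamX", "video", 42, b"\x00\x01")
```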
>>
>> --
>> Moq mailing list
>> Moq@ietf.org
>> https://www.ietf.org/mailman/listinfo/moq
>>
>>
>> --
>>
>> -acbegen
>> Using iThumbs
>>
>>