Re: [Moq] Flow to join 15 seconds before live

Ted Hardie <ted.ietf@gmail.com> Mon, 25 March 2024 11:20 UTC

From: Ted Hardie <ted.ietf@gmail.com>
Date: Mon, 25 Mar 2024 11:20:20 +0000
Message-ID: <CA+9kkMBXWaJuHrrqer6cQcL0MUQ-LURO-FCTpZ=NpTASTfY97w@mail.gmail.com>
To: "Law, Will" <wilaw=40akamai.com@dmarc.ietf.org>
Cc: MOQ Mailing List <moq@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/6JKZGQlLUfpAX4CZDY_Pe_55X3I>

Just so I understand the use case, is the choice of 15 seconds because of a
control presumed to be on the player (a "go back N seconds"/"go forward N
seconds" control)?

I ask this because the ease of delivering this functionality depends a lot
on whether the catalog producer has provided a mapping.  For a catalog with
no mapping between groups and timings, this pretty much has to be done via
a heuristic; for one with a well-described mapping, it should be trivial to
extract.  If that is the case, I think it might make sense to say that the
MoQT protocol provides the ability to request group ids/object ids earlier
than live edge, but that mapping those to timing is not a function of the
transport.   Any publisher that wants to provide that functionality has to
do so in the catalog or with a timing track (and in a lot of cases, it will
know whether or not the app has that type of control).
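For concreteness, the lookup against a well-described mapping could be as simple as a binary search over (group id, start time) pairs. A Python sketch; the timeline structure, group ids, and timestamps here are hypothetical, not anything the catalog or a timing track currently defines:

```python
import bisect

# Hypothetical timing data from a catalog or timeline track:
# (group_id, start_time_seconds) pairs, sorted by start time.
timeline = [(1231, 6155.0), (1232, 6160.0), (1233, 6165.0), (1234, 6170.0)]

def group_for_time(timeline, target_time):
    """Return the group id whose span covers target_time."""
    starts = [t for _, t in timeline]
    # Rightmost group starting at or before the target time.
    i = bisect.bisect_right(starts, target_time) - 1
    return timeline[max(i, 0)][0]

live_edge_time = 6172.0
print(group_for_time(timeline, live_edge_time - 15))  # -> 1231
```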

If that were the case, then the only thing MoQT needs to do is to tell you
the group number/object number of the live edge (as seen by the relay
you're on).  The mechanics from that point are client driven.
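With fixed-duration groups, those client-driven mechanics reduce to one line of arithmetic. A sketch (the 5-second groups and the group number are illustrative only, matching the example later in this thread):

```python
import math

def start_group(live_edge_group, seconds_back, group_duration):
    """Group to start from: step back enough whole groups to cover
    the desired buffer behind the live edge."""
    return live_edge_group - math.ceil(seconds_back / group_duration)

# 5-second groups, 15 seconds of buffer -> 3 groups back.
print(start_group(1234, 15, 5))  # -> 1231
```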

From my perspective, the next question is then whether you expect the
client to have a playout buffer that allows it to handle re-ordering up to
the 15 second mark.  If that is the case, then the subscribe with a group
number/group id target makes sense to me, because the cache can start
delivering whatever it has (including the live edge) and fill in the buffer
as it gets older data, knowing that it is up to the client to hold that and
start playback when it has the data it wants to send to the decoder.
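Such a playout buffer could look roughly like the following sketch: it accepts objects in whatever group order the cache delivers them and releases them to the decoder only once the requested range is contiguous. This is illustrative only, and a real client would also need the end-of-group signal (discussed later in this thread) to know when a group is actually complete:

```python
class PlayoutBuffer:
    """Client-side buffer holding objects that may arrive in any
    group order, released to the decoder only in group order."""

    def __init__(self, start_group, end_group):
        self.start = start_group
        self.end = end_group
        self.groups = {}  # group_id -> list of objects

    def add(self, group_id, obj):
        self.groups.setdefault(group_id, []).append(obj)

    def ready(self):
        """True once every group in [start, end] has at least one
        object. A real check would need end-of-group signaling to
        know each group is complete."""
        return all(g in self.groups for g in range(self.start, self.end + 1))

    def drain(self):
        """Yield buffered objects in group order for the decoder."""
        for g in range(self.start, self.end + 1):
            yield from self.groups.pop(g, [])
```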

If you don't expect the playout buffer to be that large, then I think you
do need to get the group/objects in order.  That requires either the relay
to buffer until it can do that or something like what Cullen put forward,
since that allows the client to drive the process in a way that gets the
data in a specific order without going all the way to using fetch for the
parts after the live edge.

Just my perspective,

Ted



On Mon, Mar 25, 2024 at 10:03 AM Law, Will <wilaw=
40akamai.com@dmarc.ietf.org> wrote:

> Thanks Cullen for initiating this thread and Cullen and Luke for proposing
> solutions.
>
>
>
> I find the proposed workflow to start N seconds behind to be unnecessarily
> convoluted. As proposed, it requires the player to make a SUBSCRIBE request
> which is “frozen”, followed by a FETCH request, followed by a SUBSCRIBE
> “unfreeze”. In comparison, our workflow for joining a real-time track is
> clean – a single SUBSCRIBE and all received objects can be piped to a
> decode buffer and rendered. We should aspire to an equally clean workflow
> for non-real-time and VOD playback. These are our two majority join
> use-cases.
>
>
>
> I am not a fan of frozen=true. This is overloading SUBSCRIBE to return
> state information about the track. This would be better off being moved to
> an explicit control message, perhaps (as Luke just proposed) TRACK_INFO.
>
>
>
> I also support the suggestion to remove relative start and end groups from
> SUBSCRIBE. This is causing more problems than it is solving. Real-time
> players don’t care about relative offsets because they always want the
> latest group. Non-real time players have the luxury of time. The key
> difference with starting N seconds behind live is that we can afford to
> spend one RTT discovering things (assuming RTT << N). So a clean workflow
> would be:
>
>
>
>    1. Client makes the MOQT connection.
>    2. Subscribe to the catalog. The catalog tells it the track names and
>    also details of their numbering schemes and group duration.
>    3. For the track it wants to start with, request TRACKINFO(track) (or
>    a timeline track, if this is offered by the streaming format).
>    4. If the relay has an active subscription, it can immediately return
>    the latest group number. Otherwise it makes the same request upstream and
>    then relays the answer to the client. The client now knows the latest group
>    number.
>    5. The player uses its knowledge of the latest group, the numbering
>    scheme, the group duration and the desired starting buffer to calculate the
>    starting group G.
>    6. It then makes a single FETCH(startgroup=G endgroup=undefined
>    priority=X). The response from this can be piped directly into its decode
>    buffer. (note, this is a changed API from what we currently have)
>
>
>
> We have two options for step 6.
>
>    1. We can extend SUBSCRIBE to allow a reliable ASCending delivery
>    mode, or
>    2. We can extend FETCH to allow an undefined end group, i.e. keep
>    delivering into the future until end-of-track is signaled (as I showed
>    above).
>
>
>
> I don’t have religion on either of these options, although separating the
> reliable and ordered nature of FETCHed delivery into its own API seems
> reasonable and preferable.
>
>
>
> Cheers
>
> Will
>
>
>
>
>
> *From: *Moq <moq-bounces@ietf.org> on behalf of Luke Curley <
> kixelated@gmail.com>
> *Date: *Monday, March 25, 2024 at 6:58 AM
> *To: *Cullen Jennings <fluffy@iii.ca>
> *Cc: *MOQ Mailing List <moq@ietf.org>
> *Subject: *Re: [Moq] Flow to join 15 seconds before live
>
>
>
> Hey Cullen, thank you very much for writing down the proposed flow.
>
>
>
> The problem with this approach is that while there is head-of-line
> blocking on startup (via FETCH), there is no head-of-line blocking during
> the steady state (via SUBSCRIBE). Any congestion will cause the player
> buffer to shrink, and unfortunately it will be filled in reverse group
> order. This makes sense for low latency playback as it enables skipping
> content, but it will cause excessive buffering for high latency playback
> that does not skip content.
>
>
>
> The other big issue in #421
> <https://urldefense.com/v3/__https:/github.com/moq-wg/moq-transport/pull/421__;!!GjvTz_vk!ShhE-HViGjsrzsZesMScVrIVVAcs8AfyrZ5jX9EAl3puqYDj97MAshlITS2bwutnaV46CztKMoJk_V0$>
> is that we still need the ability to SUBSCRIBE at absolute group IDs,
> otherwise ABR won't work because of numerous race conditions. Even a
> back-to-back UNSUBSCRIBE 720p + SUBSCRIBE 240p are evaluated at different
> times based on different cache states (including potentially empty). I
> can't think of any algorithm that works using only relative SUBSCRIBEs,
> *especially* if groups are not aligned between tracks.
>
>
>
>
>
> As I see it, the main problem with SUBSCRIBE today is the RelativeNext
> and RelativePrev start/end ID. These are both hard to reason about and
> implement; what if we removed them altogether? I propose that a SUBSCRIBE
> could only start/end at an absolute ID or the latest group.
>
>
>
> And then we add SUBSCRIBE order=ASC|DESC (#411
> <https://urldefense.com/v3/__https:/github.com/moq-wg/moq-transport/issues/411__;!!GjvTz_vk!ShhE-HViGjsrzsZesMScVrIVVAcs8AfyrZ5jX9EAl3puqYDj97MAshlITS2bwutnaV46CztKgyJTgA0$>)
> to indicate if head-of-line blocking is desired, supporting both VOD (old
> content) and reliable live (new content). I do think we also need SUBSCRIBE
> priority=N (also #411
> <https://urldefense.com/v3/__https:/github.com/moq-wg/moq-transport/issues/411__;!!GjvTz_vk!ShhE-HViGjsrzsZesMScVrIVVAcs8AfyrZ5jX9EAl3puqYDj97MAshlITS2bwutnaV46CztKgyJTgA0$>),
> as otherwise it's undefined how multiple subscriptions/fetches interact
> once we can conditionally ignore send_order, but that's a bigger
> discussion.
>
>
>
>
>
> *Reliable Live (Fixed Groups)*
>
> -> SUBSCRIBE    range=live..   order=ASC priority=1
>
> <- SUBSCRIBE_OK range=823..
>
> -> SUBSCRIBE    range=818..823 order=ASC priority=0
>
> <- SUBSCRIBE_OK range=818..823
>
>
> *Optional*: The first subscribe is lower priority than the follow-up,
> allowing it to both learn the latest group (823) and use the first RTT to
> warm the cache/connection with eventually useful data. Your proposed
> frozen=true is okay but conceptually it's behaving as a HEAD request,
> which I think we should add anyway (TRACK_INFO?).
>
>
>
> *Reliable Live (Dynamic Groups)*
>
> -> SUBSCRIBE    track=timeline range=live.. order=DESC
>
> <- SUBSCRIBE_OK track=timeline range=823..
>
> -> SUBSCRIBE    track=video    range=818..  order=ASC
>
> <- SUBSCRIBE_OK track=video    range=818..
>
>
>
> *Optional*: The timeline track has aligned groups in this example but you
> should parse the received OBJECTs instead. It's still going to be 1RTT and
> the timeline track needs to be fetched anyway for DVR support.
>
>
>
> On Sun, Mar 24, 2024, 3:15 PM Cullen Jennings <fluffy@iii.ca> wrote:
>
>
> At the IETF meeting I said I would write something up about how a MoQ
> client could join and get the video for 15 seconds behind the live edge.
>
>
> The goal here as I understand it is to let the client get, say, 15 seconds
> of video before the current live edge and then start playing roughly that
> far behind. The reason it would be 15 seconds behind would be just to
> create a playout buffer and be able to do things like shift to a lower
> bitrate subscription (client-side ABR style) if the network was getting
> crappy, without having a stall in the playout. This is not a use case I do,
> so I might be missing something in this, but I do want to make sure this use
> case works if it is important for other people.
>
>
> Here is how I am thinking a client could do this:
>
> Step 1:
>
> Discover a relay and set up the TLS / QUIC / MoQ connection to it.
>
> Step 2:
>
> Subscribe to the catalog and get the catalog information. From this, learn
> the track name, but also learn that each group is 5 seconds long (so 3
> groups for 15 seconds). For things that use variable group sizes,
> subscribe to the track that gives the mapping from time to group numbers
> and get the latest data from that.
>
> Step 3:
>
> Subscribe to Track with start=next, and frozen=true.
>
> This will cause the relay to get the information from the upstream if it
> does not already have it and return information about the live head. Note
> that if the relay already has a subscription for this track, it does not
> need to do anything, just return OK with this relay's view of the current
> live edge.
>
> Relay will return a Subscribe OK with the live edge object - for example,
> let's say it is group=1234, object=5.
>
> Step 4:
>
> At this point the client knows it needs to go 3 groups back from 1234 so
> it needs groups 1231 to 1234.
>
> Client sends a Fetch of groups 1231 to 1234 with the direction set to
> normal, not reverse. If the relay is missing some of this in its cache, it
> will request it upstream. The relay will send an OK and start sending all
> the objects in those groups. Note that if the relay got several clients
> joining at the same time, and the first requested 1231-1234 and the second
> client requested 1230-1233, the relay MAY do the bookkeeping optimization
> of requesting 1231-1234 upstream for the first, then for the second just
> requesting 1230-1230 upstream, as it has already requested the others.
>
> The client will receive all the objects for the three groups in the order
> of the group id / object id.
>
> Step 5:
>
> When the last object in group 1234 arrives, the client sends a Subscribe
> Update that changes frozen to false on the original subscribe, which
> causes objects from the subscription to start going to the client.
>
> (And yes, we need a way to know what the last object in a group is, but
> that is a separate issue. We agree we will have some way of doing this even
> though we are not sure exactly what that way is yet.)
>
> At this point the client will start receiving objects from group 1235 and
> future groups.
>
>
> From a processing point of view, the client does pretty much the same
> thing when it gets an object regardless of if it came from Fetch or
> Subscribe.
>
> Few questions on this:
>
> A. Before we get into whether this is the optimal solution for this, am I
> understanding the problem and use case correctly?
>
> B. Does this explanation make sense, and does this solution work?
>
> C. What is ugly or unfortunate about this solution, and how big a deal is
> that?
>
>
>
>
>
>
>
>