Re: [Moq] Flow to join 15 seconds before live

Martin Duke <martin.h.duke@gmail.com> Wed, 27 March 2024 21:54 UTC

References: <B1E13534-1440-42B7-A820-3EFC405AD558@iii.ca> <CAHVo=ZmUsVvnJ43-_HMjk51OcRJaYJ1iiO94Hfx9askqaMcZTA@mail.gmail.com> <1B369D1B-7B1F-46FE-B2F1-48B453F8DBFA@akamai.com> <CA+9kkMBXWaJuHrrqer6cQcL0MUQ-LURO-FCTpZ=NpTASTfY97w@mail.gmail.com> <CAMRcRGS+k9Y5FgG=jEo9KV4K9N3jM6Uzmdvcmi+w3yXxPUEt0A@mail.gmail.com> <1345E124-ECB0-409E-83F2-4DF485067C2D@fb.com> <BCDC64CD-FECB-46F9-9517-1739F7B1E41D@akamai.com> <CANG3SPxrcyjWL6OgiYPNgLhB5SOTAk8wdppkbNBBMjoo+1W66A@mail.gmail.com> <CAM4esxSV5v31kxU0bxeyTfHbncowjF5hskZ28L-EBtOkWXWVvg@mail.gmail.com>
In-Reply-To: <CAM4esxSV5v31kxU0bxeyTfHbncowjF5hskZ28L-EBtOkWXWVvg@mail.gmail.com>
From: Martin Duke <martin.h.duke@gmail.com>
Date: Wed, 27 Mar 2024 14:54:05 -0700
Message-ID: <CAM4esxRTbMCsGhotpsYW5Loa58YoyYRGKSssOfu5eUik6UcMoA@mail.gmail.com>
To: nathan.burr@paramount.com
Cc: "Law, Will" <wilaw=40akamai.com@dmarc.ietf.org>, Alan Frindell <afrind=40meta.com@dmarc.ietf.org>, Suhas Nandakumar <suhasietf@gmail.com>, Ted Hardie <ted.ietf@gmail.com>, MOQ Mailing List <moq@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/1-c4r38XRZj8zrVeZlpUTxhbZzo>
Subject: Re: [Moq] Flow to join 15 seconds before live

Put another way, if the client wants 15 seconds of buffering, that
buffering has to be *on the client* to be effective. So the live edge
should be delivered at roughly the same time as the left edge is, not
"frozen".

On Wed, Mar 27, 2024 at 2:24 PM Martin Duke <martin.h.duke@gmail.com> wrote:

> As an individual, what I don't like about the freeze flag is that it
> muddies the distinction between SUBSCRIBE (just forward what comes from
> the publisher) and FETCH (if it's not cached, go get it from the
> publisher). A frozen SUBSCRIBE is a FETCH for future groups, which to me
> defeats the whole point of separating FETCH and SUBSCRIBE.
>
> On Wed, Mar 27, 2024 at 8:37 AM Nathan Burr <nathan.burr@paramount.com>
> wrote:
>
>> I would like to see the ability to get the current initialization frame
>> for a track. This doesn't make sense at 15s of latency, but it does for
>> smaller latency windows. It also makes sense for fast channel switching
>> and return-from-ad scenarios.
>>
>> So basically you have two tracks. The first track is all i-frames, or
>> i-frames every n milliseconds, depending on your encoding budget.
>> The second track is the regularly compressed track. On start of
>> streaming you would download one frame from the init track and then all
>> subsequent data for the remaining segment from the second track.
>>
>> You wouldn't necessarily have an init track for every rendition, just a
>> few that would make sense to start fast and upshift to the quality you were
>> previously at.
>>
>> To enable this workflow, I would like the ability to fetch the next
>> available i-frame, fetch the current segment after the PTS of the
>> already-fetched i-frame, and then subscribe to the track we would
>> like to continue receiving data from.
>>
>> This scenario probably doesn't apply to a lot of people, but I just want to
>> make sure the ability to do this is still there while we are defining
>> client behaviors.
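Taken as pseudocode, the join sequence Nathan describes might look like the following toy sketch. `FakeSession` and its method names are purely illustrative stand-ins, not real MoQT messages:

```python
# Toy sketch of the fast-start flow described above. FakeSession and its
# methods are illustrative stand-ins, not actual MoQT protocol calls.
from dataclasses import dataclass

@dataclass
class Frame:
    pts: int        # presentation timestamp, milliseconds
    is_idr: bool    # True if independently decodable (i-frame)

class FakeSession:
    def __init__(self, init_frames, main_frames):
        self.init_frames = init_frames   # all-i-frame "init" track
        self.main_frames = main_frames   # regularly compressed track
        self.subscribed = None

    def fetch_next_iframe(self, now_pts):
        # Next available i-frame at or after the current position.
        return next(f for f in self.init_frames if f.pts >= now_pts)

    def fetch_after_pts(self, pts):
        # Remainder of the current segment, strictly after the i-frame.
        return [f for f in self.main_frames if f.pts > pts]

    def subscribe(self, start):
        self.subscribed = start

def fast_start(session, now_pts):
    """Fetch one i-frame, fetch the segment tail, then subscribe."""
    idr = session.fetch_next_iframe(now_pts)
    tail = session.fetch_after_pts(idr.pts)
    session.subscribe(start="next")
    return [idr, *tail]
```

Decoding can begin immediately at the fetched i-frame, which is what makes the channel-switch case fast.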
>>
>> Thanks,
>>
>> Nate Burr
>>
>>
>> On Wed, Mar 27, 2024 at 8:59 AM Law, Will <wilaw=
>> 40akamai.com@dmarc.ietf.org> wrote:
>>
>>> <External Email>
>>>
>>>
>>> I have also taken a stab at defining the requirements:
>>>
>>>
>>>
>>> R1 – as a receiving client, I want to be able to join a track, starting
>>> at the latest available group, without prior knowledge of the latest group
>>> number.
>>>
>>> R2 – as a receiving client, I want to be able to join a track at an
>>> explicit start group and then receive all future groups from that point on.
>>>
>>> R3 – as a receiving client, I want to be able to fetch a discrete range
>>> of groups, from a start group to an end group that I specify.
>>>
>>> R4 - as a receiving client, I want to be able to discover what is the
>>> latest available group in a track without having to subscribe to that track.
>>>
>>> R5 - as a receiving client, I want to be able to define a relative
>>> priority between my active subscriptions and fetches.
>>>
>>> R6 - as a receiving client, I want to be able to define whether data in
>>> a subscription or fetch is delivered in ascending or descending group
>>> number order.
>>>
>>> R7 – as a receiving client, I want to be able to instruct the sender to
>>> drop lower-numbered groups in favor of higher-numbered groups.
>>>
>>>
>>>
>>> I have doubts that we can enable R6 with the current permissive
>>> definition of ascending-but-not-consecutive group numbers, and I include
>>> this requirement to respect Suhas' prior requirements. My concern is this:
>>> if a relay has received groups 16 and 18, and now it receives group 22, can
>>> it forward it? It has no idea that group 20 is delayed. The reality is that
>>> with a live stream, the relay NEVER knows if there are out-of-order groups.
>>> The same is true for non-cached VOD. A relay knows what it has in cache,
>>> and can order those; however, if it doesn't have the requested groups in
>>> cache and has to request them from upstream, then again it does not know if
>>> there are gaps and therefore cannot deliver a reliable order. I feel R6 is
>>> a red herring. Instead, relays should always forward groups in the order
>>> they receive them, and this will usually be ascending by nature of our
>>> definition of group number. This is decoupled from the congestion-response
>>> behavior of dropping lower groups in favor of higher groups. This
>>> requirement is expressed as R7, and it can always be done by the relay if
>>> requested by the client.
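For R7, the drop rule can be stated precisely with a small sketch. This is purely illustrative (not a normative relay algorithm), reusing the 16/18/22 example from the paragraph above: under congestion the relay keeps only the highest-numbered queued groups, while still forwarding them in arrival order.

```python
def favor_newer_groups(queued, budget):
    """R7 sketch (illustrative only): under congestion, keep only the
    `budget` highest-numbered groups, preserving the order in which the
    relay received them (relays forward in arrival order)."""
    keep = set(sorted(set(queued))[-budget:])
    return [g for g in queued if g in keep]
```

Note that group 20 arriving after group 22 is still forwarded after it; the drop decision is independent of any attempt to reorder.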
>>>
>>>
>>>
>>> Cheers
>>>
>>> Will
>>>
>>>
>>>
>>>
>>>
>>> *From: *Moq <moq-bounces@ietf.org> on behalf of Alan Frindell <afrind=
>>> 40meta.com@dmarc.ietf.org>
>>> *Date: *Monday, March 25, 2024 at 10:25 PM
>>> *To: *Suhas Nandakumar <suhasietf@gmail.com>, Ted Hardie <
>>> ted.ietf@gmail.com>
>>> *Cc: *"Law, Will" <wilaw=40akamai.com@dmarc.ietf.org>, MOQ Mailing List
>>> <moq@ietf.org>
>>> *Subject: *Re: [Moq] Flow to join 15 seconds before live
>>>
>>>
>>>
>>> Hi Suhas, thanks for up-leveling the discussion to requirements.  My
>>> recommendation is that we focus on getting agreement on those first, before
>>> moving on to identify which solutions address which ones, and to what degree.
>>>
>>>
>>>
>>> --
>>>
>>> *Req-01*: As a receiving client, I want to be able to find out if there
>>> are any gaps (one or more of them) in the objects being published for the
>>> thing I asked for.
>>>
>>>
>>>
>>> *Req-02:* As a receiving client, I want to have data delivered to be in
>>> the *oldest to newest order of the groups *to support my playout buffer.
>>>
>>>
>>>
>>> *Req-03*: As a receiving client, I want to have fed the data as it is
>>> produced and I don't know absolute start or end locations.
>>>
>>>
>>>
>>> *Req-04*: As a sending client, I want to mark what data is important
>>> for my application.
>>>
>>> --
>>>
>>>
>>>
>>> You used slightly different language for Req-03; I interpret it to mean
>>> “As a receiving client, I want to join the track at the live head without
>>> knowing the absolute start or end locations” – is that correct, or is there
>>> more you are trying to express?
>>>
>>> What do other folks think about the requirement list – any additions or
>>> amendments?
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>>> -Alan
>>>
>>>
>>>
>>>
>>>
>>> *From: *Moq <moq-bounces@ietf.org> on behalf of Suhas Nandakumar <
>>> suhasietf@gmail.com>
>>> *Date: *Monday, March 25, 2024 at 1:38 PM
>>> *To: *Ted Hardie <ted.ietf@gmail.com>
>>> *Cc: *"Law, Will" <wilaw=40akamai.com@dmarc.ietf.org>, MOQ Mailing List
>>> <moq@ietf.org>
>>> *Subject: *Re: [Moq] Flow to join 15 seconds before live
>>>
>>>
>>>
>>> Thanks Cullen for starting this thread. Overall I like the simplicity of
>>> the proposal.  After reading the commentary, I do think we need to take a
>>> step back and establish some set of requirements . I will lay out my
>>> understanding of the* requirements at the MOQ Transport layer to
>>> support various applications* ( vod, live streaming, interactive, ad
>>> insertion, gaming, side channel catchup, chat and so on).
>>>
>>>
>>>
>>>
>>>
>>> *Req-01*: As a receiving client, I want to be able to find out if there
>>> are any gaps (one or more of them) in the objects being published for the
>>> thing I asked for.
>>>
>>>
>>>
>>> *Req-02:* As a receiving client, I want to have data delivered to be in
>>> the *oldest to newest order of the groups *to support my playout buffer.
>>>
>>>
>>>
>>> *Req-03*: As a receiving client, I want to have fed the data as it is
>>> produced and I don't know absolute start or end locations.
>>>
>>>
>>>
>>> *Req-04*: As a sending client, I want to mark what data is important
>>> for my application.
>>>
>>>
>>>
>>> Let me take a stab at evaluating a few API semantics being proposed
>>> against these requirements.
>>>
>>>
>>>
>>> *(1) Subscribe with no locations but with a clear list of options*
>>>
>>>
>>>
>>> *Req-03 *can be done.
>>>
>>> *Req-02* conflicts with Req-03. It also fails when it comes to
>>> unambiguously identifying relay behavior, and limits de-duping subscriptions
>>> under conflicting requests. (Please note we have the same problems when
>>> subscribing with relative locations as defined in draft-03.)
>>>
>>> *Req-01* can probably be done; however, it conflicts with Req-03, since
>>> finding out gaps might need searching through the relay hierarchy and/or
>>> eventually going back to the original publisher.
>>>
>>>
>>>
>>> *(2) Fetch with only absolute locations*
>>>
>>>
>>>
>>> *Req-01* can be done
>>>
>>> *Req-02* can be done
>>>
>>> *Req-03* conflicts with Req-02.
>>>
>>>
>>>
>>> *(3) Subscribe with Delivery Order for Groups (oldest to newest / newest
>>> to oldest), not a Fetch-like operation*
>>>
>>>
>>>
>>> *Req-01 and Req-03* conflict
>>>
>>> *Req-02* can be done; however, defining any RECOMMENDATION for the relay
>>> behavior will be inappropriate under conflicting requests for the delivery
>>> order. This causes receiving clients for the same application to have
>>> different experiences based on the delivery order picked by a given relay
>>> (or set of relays).
>>>
>>>
>>>
>>> Next, let's look at the requirements on group delivery order preference
>>> (oldest group first or newest group first) and who controls it (the
>>> original sender or every receiver).
>>>
>>>
>>>
>>> (1) Every receiver (end receiver, relays) of a track can request a given
>>> delivery order preference for a group. This is the most flexible approach
>>> and seems powerful at first, but when we start building relays and have to
>>> manage conflicting requests, things get challenging (as described above).
>>>
>>>
>>>
>>> (2) The sender sets the group delivery preference. Since the sender
>>> controls the application data, it sets the needed group delivery order
>>> preference for a given track. Relays use the group delivery order
>>> preference and object priority to decide the ordering of groups, and of
>>> objects within a group, when making delivery decisions. Note, however,
>>> that this doesn't remove the need for API semantics that solve Req-01.
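As a toy illustration of option (2), a relay could combine the sender's track-level group order preference with sender-set object priorities when draining its send queue. The names and the lower-value-is-more-important priority convention here are my own assumptions, not from any draft:

```python
# Illustrative sketch, not normative MoQT relay behavior: order queued
# objects using the sender-declared group delivery preference for the
# track plus sender-set per-object priorities.
from dataclasses import dataclass

@dataclass(frozen=True)
class QueuedObject:
    group_id: int
    object_id: int
    priority: int   # sender-set; lower = more important (assumed convention)

def delivery_order(objects, newest_group_first):
    """Priority first, then group id in the sender-preferred direction,
    then object id within the group."""
    return sorted(objects, key=lambda o: (
        o.priority,
        -o.group_id if newest_group_first else o.group_id,
        o.object_id,
    ))
```

Because the preference is per-track and sender-set, every relay applies the same rule and conflicting per-receiver requests never arise.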
>>>
>>>
>>>
>>> Fetch in itself can be *"cleanly"* applied to several use-cases that
>>> cannot be easily done with Subscribe without causing semantic ambiguities
>>> (partial data at the relay, finding gaps, whom to ask for the data).
>>>
>>>
>>>
>>>
>>>
>>> *Lastly, on Freeze for Subscriptions*
>>>
>>> The option to freeze a subscription request is still needed outside
>>> the use-cases of fetch vs. subscribe. It helps pre-warm the relay cache,
>>> and it lets ABR quality switches be realized within a single hop
>>> (avoiding the need to go back deeper into the relay network), for
>>> example.
>>>
>>>
>>>
>>> I feel the following should meet most of the use-cases MOQ can
>>> eventually help address:
>>>
>>> - Subscribe with modes (prev, now and next)
>>>
>>> - Fetch with absolute ranges
>>>
>>> - Sender setting group delivery order preference
>>>
>>> - Sender setting object priorities
>>>
>>>
>>>
>>> Cheers
>>>
>>> Suhas
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Mar 25, 2024 at 4:21 AM Ted Hardie <ted.ietf@gmail.com> wrote:
>>>
>>> Just so I understand the use case, is the choice of 15 seconds because
>>> of a control presumed to be on the player (a "go back N seconds"/"go
>>> forward N seconds" control)?
>>>
>>>
>>>
>>> I ask this because the ease of delivering this functionality depends a
>>> lot on whether the catalog producer has provided a mapping.  For a catalog
>>> with no mapping between groups and timings, this pretty much has to be done
>>> via a heuristic; for one with a well-described mapping, it should be
>>> trivial to extract.  If that is the case, I think it might make sense to
>>> say that the MoQT protocol provides the ability to request group ids/object
>>> ids earlier than live edge, but that mapping those to timing is not a
>>> function of the transport.   Any publisher that wants to provide that
>>> functionality has to do so in the catalog or with a timing track (and in a
>>> lot of cases, it will know whether or not the app has that type of control).
>>>
>>>
>>>
>>> If that were the case, then the only thing MoQT needs to do is to tell
>>> you the group number/object number of the live edge (as seen by the relay
>>> you're on).  The mechanics from that point are client driven.
>>>
>>>
>>>
>>> From my perspective, the next question is then whether you expect the
>>> client to have a playout buffer that allows it to handle re-ordering up to
>>> the 15 second mark.  If that is the case, then the subscribe with a group
>>> number/group id target makes sense to me, because the cache can start
>>> delivering whatever it has (including the live edge) and fill in the buffer
>>> as it gets older data, knowing that it is up to the client to hold that and
>>> start playback when it has the data it wants to send to the decoder.
>>>
>>>
>>>
>>> If you don't expect the playout buffer to be that large, then I think
>>> you do need to get the group/objects in order.  That requires either the
>>> relay to buffer until it can do that or something like what Cullen put
>>> forward, since that allows the client to drive the process in a way that
>>> gets the data in a specific order without going all the way to using fetch
>>> for the parts after the live edge.
>>>
>>>
>>>
>>> Just my perspective,
>>>
>>>
>>>
>>> Ted
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Mar 25, 2024 at 10:03 AM Law, Will <wilaw=
>>> 40akamai.com@dmarc.ietf.org> wrote:
>>>
>>> Thanks Cullen for initiating this thread and Cullen and Luke for
>>> proposing solutions.
>>>
>>>
>>>
>>> I find the proposed workflow to start N seconds behind to be
>>> unnecessarily convoluted. As proposed, it requires the player to make a
>>> SUBSCRIBE request which is “frozen”, followed by a FETCH request, followed
>>> by a SUBSCRIBE “unfreeze”. In comparison, our workflow for joining a
>>> real-time track is clean – a single SUBSCRIBE, and all received objects can
>>> be piped to a decode buffer and rendered. We should aspire to an equally
>>> clean workflow for non-real-time and VOD playback. These are our two
>>> majority join use-cases.
>>>
>>>
>>>
>>> I am not a fan of frozen=true. This is overloading SUBSCRIBE to return
>>> state information about the track. This would be better off moved to
>>> an explicit control message, perhaps (as Luke just proposed) TRACK_INFO.
>>>
>>>
>>>
>>> I also support the suggestion to remove relative start and end groups
>>> from SUBSCRIBE. This is causing more problems than it is solving. Real-time
>>> players don’t care about relative offsets because they always want the
>>> latest group. Non-real-time players have the luxury of time. The key
>>> difference with starting N seconds behind live is that we can afford to
>>> spend one RTT discovering things (assuming RTT << N). So a clean workflow
>>> would be:
>>>
>>>
>>>
>>>    1. Client makes the MOQT connection.
>>>    2. Subscribe to the catalog. The catalog tells it the track names
>>>    and also details of their numbering schemes and group duration.
>>>    3. For the track it wants to start with, request TRACKINFO(track)
>>>    (or a timeline track, if this is offered by the streaming format).
>>>    4. If the relay has an active subscription, it can immediately
>>>    return the latest group number. Otherwise it makes the same request
>>>    upstream and then relays the answer to the client. The client now knows the
>>>    latest group number.
>>>    5. The player uses its knowledge of the latest group, the numbering
>>>    scheme, the group duration and the desired starting buffer to calculate the
>>>    starting group G.
>>>    6. It then makes a single FETCH(startgroup=G endgroup=undefined
>>>    priority=X). The response from this can be piped directly into its decode
>>>    buffer. (note, this is a changed API from what we currently have)
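The arithmetic in step 5 is worth pinning down. A minimal sketch, assuming whole groups and a fixed group duration learned from the catalog:

```python
import math

def starting_group(latest_group, group_duration_s, buffer_s):
    """Step 5 sketch: walk back enough whole groups to cover the desired
    starting buffer, clamping at group 0. Assumes fixed-duration groups."""
    groups_back = math.ceil(buffer_s / group_duration_s)
    return max(0, latest_group - groups_back)
```

With 5-second groups and a desired 15-second buffer, a latest group of 823 yields a starting group G of 820. The point is that this mapping is done entirely by the client; the relay only has to report the latest group.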
>>>
>>>
>>>
>>> We have two options for step 6.
>>>
>>>    1. We can extend SUBSCRIBE to allow a reliable ascending (ASC)
>>>    delivery mode, or
>>>    2. We can extend FETCH to allow an undefined end group, i.e., keep
>>>    delivering into the future until end-of-track is signaled (as I showed
>>>    above).
>>>
>>>
>>>
>>> I don’t have religion on either of these options, although separating
>>> the reliable and ordered nature of FETCHed delivery into its own API seems
>>> reasonable and preferable.
>>>
>>>
>>>
>>> Cheers
>>>
>>> Will
>>>
>>>
>>>
>>>
>>>
>>> *From: *Moq <moq-bounces@ietf.org> on behalf of Luke Curley <
>>> kixelated@gmail.com>
>>> *Date: *Monday, March 25, 2024 at 6:58 AM
>>> *To: *Cullen Jennings <fluffy@iii.ca>
>>> *Cc: *MOQ Mailing List <moq@ietf.org>
>>> *Subject: *Re: [Moq] Flow to join 15 seconds before live
>>>
>>>
>>>
>>> Hey Cullen, thank you very much for writing down the proposed flow.
>>>
>>>
>>>
>>> The problem with this approach is that while there is head-of-line
>>> blocking on startup (via FETCH), there is no head-of-line blocking during
>>> the steady state (via SUBSCRIBE). Any congestion will cause the player
>>> buffer to shrink, and unfortunately it will be filled in reverse group
>>> order. This makes sense for low latency playback as it enables skipping
>>> content, but it will cause excessive buffering for high latency playback
>>> that does not skip content.
>>>
>>>
>>>
>>> The other big issue in #421
>>> <https://github.com/moq-wg/moq-transport/pull/421>
>>> is that we still need the ability to SUBSCRIBE at absolute group IDs,
>>> otherwise ABR won't work because of numerous race conditions. Even a
>>> back-to-back UNSUBSCRIBE 720p + SUBSCRIBE 240p are evaluated at
>>> different times based on different cache states (including potentially
>>> empty). I can't think of any algorithm that works using only relative
>>> SUBSCRIBEs, *especially* if groups are not aligned between tracks.
>>>
>>>
>>>
>>>
>>>
>>> As I see it, the main problem with SUBSCRIBE today is the RelativeNext
>>> and RelativePrev start/end ID. These are both hard to reason about and
>>> implement; what if we removed them altogether? I propose that a SUBSCRIBE
>>> could only start/end at an absolute ID or the latest group.
>>>
>>>
>>>
>>> And then we add SUBSCRIBE order=ASC|DESC (#411
>>> <https://github.com/moq-wg/moq-transport/issues/411>)
>>> to indicate if head-of-line blocking is desired, supporting both VOD (old
>>> content) and reliable live (new content). I do think we also need SUBSCRIBE
>>> priority=N (also #411
>>> <https://github.com/moq-wg/moq-transport/issues/411>),
>>> as otherwise it's undefined how multiple subscriptions/fetches interact
>>> once we can conditionally ignore send_order, but that's a bigger
>>> discussion.
>>>
>>>
>>>
>>>
>>>
>>> *Reliable Live (Fixed Groups)*
>>>
>>> -> SUBSCRIBE    range=live..   order=ASC priority=1
>>>
>>> <- SUBSCRIBE_OK range=823..
>>>
>>> -> SUBSCRIBE    range=818..823 order=ASC priority=0
>>>
>>> <- SUBSCRIBE_OK range=818..823
>>>
>>>
>>> *Optional*: The first subscribe is lower priority than the follow-up,
>>> allowing it to both learn the latest group (823) and use the first RTT to
>>> warm the cache/connection with eventually useful data. Your proposed
>>> frozen=true is okay but conceptually it's behaving as a HEAD request,
>>> which I think we should add anyway (TRACK_INFO?).
>>>
>>>
>>>
>>> *Reliable Live (Dynamic Groups)*
>>>
>>> -> SUBSCRIBE    track=timeline range=live.. order=DESC
>>>
>>> <- SUBSCRIBE_OK track=timeline range=823..
>>>
>>> -> SUBSCRIBE    track=video    range=818..  order=ASC
>>>
>>> <- SUBSCRIBE_OK track=video    range=818..
>>>
>>>
>>>
>>> *Optional*: The timeline track has aligned groups in this example but
>>> you should parse the received OBJECTs instead. It's still going to be 1RTT
>>> and the timeline track needs to be fetched anyway for DVR support.
>>>
>>>
>>>
>>> On Sun, Mar 24, 2024, 3:15 PM Cullen Jennings <fluffy@iii.ca> wrote:
>>>
>>>
>>> At the IETF meeting I said I would write something up about how a MoQ
>>> client could join and get the video for 15 seconds behind the live edge.
>>>
>>>
>>> The goal here, as I understand it, is to let the client get say 15
>>> seconds of video before the current live edge and then start playing
>>> roughly that far behind. The reason it would be 15 seconds behind would be
>>> just to create a playout buffer and be able to do things like shift to a
>>> lower bitrate subscription (client-side ABR style) if the network was
>>> getting crappy, without having a stall in the playout. This is not a use
>>> case I do, so I might be missing something, but I do want to make sure
>>> this use case works if it is important for other people.
>>>
>>>
>>> Here is how I am thinking a client could do this:
>>>
>>> Step 1:
>>>
>>> Discover a relay and set up the TLS / QUIC / MoQ connection to it.
>>>
>>> Step 2:
>>>
>>> Subscribe to the catalog and get the catalog information. From this,
>>> learn the track name, but also learn that each group is 5 seconds long (so
>>> 3 groups for 15 seconds). For things that use variable group sizes,
>>> subscribe to the track that gives the mapping from time to group numbers
>>> and get the latest data from that.
>>>
>>> Step 3:
>>>
>>> Subscribe to Track with start=next, and frozen=true.
>>>
>>> This will cause the relay to get the information from upstream if it
>>> does not already have it and return information about the live head. Note
>>> that if the relay already has a subscription for this track, it does not
>>> need to do anything, just return OK with this relay's view of the current
>>> live edge.
>>>
>>> The relay will return a Subscribe OK with the live edge object – for
>>> example, let's say it is group=1234, object=5.
>>>
>>> Step 4:
>>>
>>> At this point the client knows it needs to go 3 groups back from 1234, so
>>> it needs groups 1231 to 1234.
>>>
>>> The client sends a Fetch of groups 1231 to 1234 with the direction set to
>>> normal, not reverse. If the relay is missing some of this in its cache,
>>> it will request it upstream. The relay will send an OK and start
>>> sending all the objects in those groups. Note that if the relay got
>>> several clients joining at the same time, and the first requested 1231-1234
>>> and the second client requested 1230-1233, the relay MAY do the bookkeeping
>>> optimization of sending the 1231-1234 request upstream, then for the second
>>> just sending 1230-1230 upstream, as it has already requested the others.
>>>
>>> The client will receive all the objects for the fetched groups in
>>> order of group id / object id.
>>>
>>> Step 5:
>>>
>>> When the last object in group 1234 arrives, the client sends a
>>> Subscribe Update that changes the freeze to false in the original
>>> subscribe, which causes objects from the subscription to start going to
>>> the client.
>>>
>>> (And yes, we need a way to know what the last object in a group is, but
>>> that is a separate issue. We agree we will have some way of doing this even
>>> though we are not sure exactly what that way is yet.)
>>>
>>> At this point the client will start receiving objects from group 1235
>>> and future groups.
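Steps 3 through 5 can be sketched end to end. `FakeRelay` is a stand-in for illustration, and the message and field names follow the email's description rather than the actual MoQT wire format:

```python
# Toy sketch of steps 3-5 above; message/field names follow the email's
# description (frozen=true, Fetch, Subscribe Update), not real MoQT.
class FakeRelay:
    """Stand-in relay for illustration; not a real MoQT endpoint."""
    def __init__(self, live_group):
        self.live_group = live_group
        self.frozen = None
        self.fetched = None

    def subscribe(self, start, frozen):
        # Step 3: the Subscribe OK carries this relay's view of the live edge.
        self.frozen = frozen
        return self.live_group

    def fetch(self, start_group, end_group):
        # Step 4: deliver groups in normal (ascending) order.
        self.fetched = (start_group, end_group)
        return list(range(start_group, end_group + 1))

    def subscribe_update(self, frozen):
        # Step 5: unfreeze the original subscription.
        self.frozen = frozen

def join_behind_live(relay, groups_back=3):
    live = relay.subscribe(start="next", frozen=True)   # learn live edge
    buffered = relay.fetch(live - groups_back, live)    # fill the buffer
    relay.subscribe_update(frozen=False)                # resume live flow
    return buffered
```

With live_group=1234 and groups_back=3, the fetch covers groups 1231 through 1234, after which the unfrozen subscription delivers 1235 onward.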
>>>
>>>
>>> From a processing point of view, the client does pretty much the same
>>> thing when it gets an object regardless of whether it came from Fetch or
>>> Subscribe.
>>>
>>> A few questions on this:
>>>
>>> A. Before we get into whether this is the optimal solution, am I
>>> understanding the problem and use case correctly?
>>>
>>> B. Does this explanation make sense, and does this solution work?
>>>
>>> C. What is ugly or unfortunate about this solution, and how big a deal
>>> is that?
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Moq mailing list
>>> Moq@ietf.org
>>> https://www.ietf.org/mailman/listinfo/moq
>