Re: [Moq] Warp

"Ali C. Begen" <ali.begen@networked.media> Tue, 15 February 2022 01:14 UTC

From: "Ali C. Begen" <ali.begen@networked.media>
Date: Tue, 15 Feb 2022 04:13:44 +0300
Message-ID: <CAA4Mczt6eJ1hL+xMd2q3b80-UQoSOj1GsNGEVKo8hhqU3v74rw@mail.gmail.com>
To: Luke Curley <kixelated@gmail.com>
Cc: "shihang (C)" <shihang9@huawei.com>, MOQ Mailing List <moq@ietf.org>, "Law, Will" <wilaw@akamai.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/akDTq65MptaIru6VVBnmV092KlY>
Subject: Re: [Moq] Warp

Hi Luke,

On Mon, Feb 14, 2022 at 11:46 PM Luke Curley <kixelated@gmail.com> wrote:

> Hey Hang,
>
> ABR is the primary mechanism for HLS/DASH to deal with congestion. Warp
> adds the ability to skip video at the end of media segments until the ABR
> algorithm kicks in. So all things remaining the same, Warp would be a
> better user experience.
>

Warp seems to require frames to be transmitted in encode/decode order, and
to eliminate any additional delay due to frame reordering, I suppose you
forbid frames from referencing future frames during encoding? I am not much
of an encoding person, but this will certainly reduce your visual quality
at a given target bitrate.
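To illustrate what that restriction means in practice (my own hypothetical sketch, not anything from the draft): in a stream encoded without future references, decode order equals presentation order, so a simple PTS check on frames as they arrive reveals whether reordering frames such as B-frames are present.

```python
def has_future_references(pts_in_decode_order):
    """Return True if some frame references a frame displayed later.

    Without future references (no B-frames), decode order equals
    presentation order, so the PTS never decreases between
    consecutive frames in arrival (decode) order.
    """
    return any(later < earlier
               for earlier, later in zip(pts_in_decode_order,
                                         pts_in_decode_order[1:]))

# Decode order I P B B carries PTS 0, 3, 1, 2: the P frame is sent
# before the two B frames that are displayed ahead of it.
print(has_future_references([0, 3, 1, 2]))  # reordering present
print(has_future_references([0, 1, 2, 3]))  # strictly in-order
```

The check is one-directional: a stream can avoid future references and still use plenty of (past-only) prediction, which is why forbidding them costs quality rather than breaking compression outright.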

While this allows dropping the tail of a segment when the sender realizes
it won't make it on time, the resulting jittery playback might be
discomforting, depending on how many frames the sender drops/skips. The
jump in the audio will be even worse.

You claim this is a better user experience. Do you have any numbers or
studies to back this up? Nobody likes stalls, but I don't think anybody
likes skipped video/audio frames, either.


> That being said, we still want to reduce latency. The way to minimize
> latency is to transfer each frame to the player as it is encoded, provided
> any dependencies are transferred first (GoP structure).
>

So, you do allow forward prediction (because it will improve the quality)?


> This poses a problem for client-side ABR.
> <https://blog.twitch.tv/en/2020/01/15/twitch-invites-you-to-take-on-our-acm-mmsys-2020-grand-challenge/> Measuring
> the arrival time of frames on the client side is not enough signal to
> determine the connection bandwidth, making switching up renditions quite
> difficult. There are three solutions to this problem that I've seen: 1. hold
> back enough media to burst the connection (LL-HLS)
>

Measuring bandwidth with just a few packets (packet pair/train) is not
very reliable, but that does not mean one needs to hold back several frames
to get a reliable measurement. That is why we opted to count the bytes
received and the actual time it took to receive them (discarding the idle
times) to measure the bandwidth.

The code:
http://reference.dashif.org/dash.js/nightly/samples/low-latency/lolp_index.html
The paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9429986
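A minimal sketch of that idea (my own illustration, not the dash.js code; the 50 ms idle threshold is an assumed value): accumulate bytes received, but only count inter-arrival gaps short enough to be active transfer time, so idle periods don't dilute the estimate.

```python
class ActiveThroughputEstimator:
    """Estimate throughput from chunk arrivals, discarding idle gaps.

    Gaps longer than `idle_threshold` seconds are treated as the
    sender having nothing to send, and are excluded from the
    transfer time used in the estimate.
    """

    def __init__(self, idle_threshold=0.05):  # assumed 50 ms threshold
        self.idle_threshold = idle_threshold
        self.total_bytes = 0
        self.active_time = 0.0
        self.last_arrival = None

    def on_chunk(self, nbytes, now):
        if self.last_arrival is not None:
            gap = now - self.last_arrival
            if gap <= self.idle_threshold:  # only count busy periods
                self.active_time += gap
        self.total_bytes += nbytes
        self.last_arrival = now

    def bandwidth_bps(self):
        if self.active_time == 0:
            return None  # not enough samples yet
        return self.total_bytes * 8 / self.active_time
```

Feeding it three back-to-back 1000-byte chunks 10 ms apart and then one more after a one-second pause yields an estimate over 20 ms of active time, not 1.02 s of wall-clock time.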


> or 2. run a speed test on the connection every so often (our LHLS
> solution)
>

If you are already fighting congestion, this will only make things worse.
Setting the right priority on the measurement stream is also problematic. I
would not go with this approach for Warp.


> or 3. hope that machine learning can save the day.
>

It can only do so if it is trained with the right data. There is some hope
here, but network conditions vary vastly, so this is not a quick win, either.


> With Warp we used a fourth option: 4. have the sender perform ABR. The
> sender knows the send rate, knows the queue size, and is the entity
> actively limiting the amount of data that can be sent. This is not ideal
> for a CDN because they're traditionally designed to be stateless with a
> standardized API, but I certainly think it's a solvable problem.
>

Right, the sender knows exactly what it is able to send. But it does not
know whether the receiver wants all that data. In principle, it is the
client who needs to decide on the selected representation, and I suppose
Warp will eventually include some messages sent by the client to that effect.

At the moment, does the Warp sender pick the segment at the highest bitrate
that it assumes it can send on time w/o any tail dropping? Or could it pick
an even higher bitrate segment and risk dropping some frames toward the end?
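For concreteness, the two policies in that question could be sketched like this (entirely hypothetical; Warp's actual selection logic is not in the draft). A `headroom` factor below 1.0 is the conservative pick; above 1.0, the sender knowingly oversubscribes the link and accepts some tail dropping.

```python
def pick_rendition(rendition_bitrates, estimated_bps, headroom=0.8):
    """Pick the highest-bitrate rendition that fits the sending budget.

    `headroom` scales the congestion controller's estimate: below 1.0
    it leaves a safety margin (no tail dropping expected), above 1.0
    it risks dropping frames toward the end of the segment.
    """
    budget = estimated_bps * headroom
    fitting = [r for r in rendition_bitrates if r <= budget]
    return max(fitting) if fitting else min(rendition_bitrates)

ladder = [1_000_000, 3_000_000, 6_000_000]  # hypothetical bitrate ladder
print(pick_rendition(ladder, 5_000_000))                # conservative
print(pick_rendition(ladder, 5_000_000, headroom=1.3))  # risks tail drops
```

With a 5 Mbps estimate, the conservative policy picks 3 Mbps (budget 4 Mbps), while the aggressive one picks 6 Mbps and relies on stream cancellation to drop the tail if the estimate was right.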

-acbegen


>
> On Mon, Feb 14, 2022 at 3:02 AM shihang (C) <shihang9@huawei.com> wrote:
>
>> @Luke, I wonder whether the timely bandwidth estimate is needed for ABR,
>> given that the client has a 2-5s buffer anyway (when facing congestion,
>> which is the primary scenario for Warp). Client-side ABR is more scalable
>> than sender-side ABR, right? Is the computation overhead of sender-side
>> ABR one of the obstacles when deploying to the CDN?
>>
>>
>>
>> Best Regards,
>>
>> Hang
>>
>>
>>
>>
>>
>> *From:* Moq <moq-bounces@ietf.org> *On Behalf Of *Luke Curley
>> *Sent:* February 12, 2022 13:20
>> *To:* Law, Will <wilaw@akamai.com>
>> *Cc:* MOQ Mailing List <moq@ietf.org>
>> *Subject:* Re: [Moq] Warp
>>
>>
>>
>> ...and to clarify what I mean by "CDN support", I mean using HTTP/3
>> requests instead of QUIC streams. A client could request each HLS/DASH
>> segment in parallel providing the Warp priority as a header
>> <https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-priority>. You
>> effectively get the same segment data and prioritization but encapsulated
>> in a HTTP response.
>>
>>
>>
>> However it does get quite a bit more complicated than that. The biggest
>> issue is that prioritization is not guaranteed, especially when multiple
>> connections are involved (ex. different hostnames). It's also very
>> difficult for the server to provide a timely bandwidth estimate for ABR. We
>> opted to take the simpler route and push via WebTransport instead of
>> pulling via HTTP/3.
>>
>> On Fri, Feb 11, 2022, 5:32 PM Luke Curley <kixelated@gmail.com> wrote:
>>
>> Hey Will,
>>
>>
>>
>> Unlike HLS, the media sender is responsible for ABR. Our server pulls the
>> estimated bitrate directly from the QUIC congestion controller (BBR, Cubic,
>> etc) and switches renditions at segment boundaries. This is a dramatic
>> improvement over client-side ABR because it's the actual rate at which
>> media can be sent. It's also the primary challenge with using Warp over
>> HTTP/3 with CDN support.
>>
>>
>>
>> Also I want to clarify that this draft is not complete. I wanted to focus
>> on what I felt were the core concepts that would shape a WG. That may have
> been a mistake because it's come up a few times... and in fact, the client
>> can create streams. These are used to send messages like
>> load/play/pause/track but somehow I completely neglected to document it.
>>
>>
>>
>>
>>
>> On Fri, Feb 11, 2022 at 4:27 PM Law, Will <wilaw@akamai.com> wrote:
>>
>> @Luke – how does WARP handle throughput variation across the connection
>> (the equivalent of ABR with HAS)? The draft indicates that older frames are
>> dropped in the face of congestion. This implies that resolution and encoded
>> bitrate remain constant and that it’s the rendered frame rate that drops on
>> the client to compensate for any throughput degradation. If that is
>> correct, then at what point can the client decide I’m tired of receiving
>> the 4K feed at 8fps, I’d rather get 1080p at 30fps? Conceivably it could
>> request the server to begin sending a lower resolution/bitrate stream of
>> data, however the established streams are unidirectional and no control
>> back-channel is defined. It could also tune-in to a new QUIC stream at the
>> appropriate bitrate, if there was some standard metadata to define what was
>> available and how to access it. Do you consider discovery and service
>> description to be out of scope of this core protocol definition? If so, has
>> any thought been given to extending WARP so that it includes service
>> discovery and description and perhaps a control back-channel?
>>
>>
>>
>> Cheers
>>
>> Will
>>
>>
>>
>>
>>
>> *From: *Luke Curley <kixelated@gmail.com>
>> *Date: *Friday, February 11, 2022 at 1:11 PM
>> *To: *Sergio Garcia Murillo <sergio.garcia.murillo@gmail.com>
>> *Cc: *MOQ Mailing List <moq@ietf.org>
>> *Subject: *Re: [Moq] Warp
>>
>>
>>
>> Hey Sergio,
>>
>>
>>
>> Warp has flexible latency depending on the broadcaster and viewer(s).
>>
>>
>>
>> The broadcaster chooses their encoding settings, for example using
>> b-frames (higher latency/quality) or using a larger look-ahead buffer
>> (better compression and rate control). The viewer dynamically chooses their
>> buffer size, dictating how long to wait before skipping the end of a
>> segment.
>>
>>
>>
>> With a perfect network, Warp would transfer each video frame from the
>> encoder to decoder as they are generated. However, congestion makes that
>> impossible, which is why it's necessary to have a dynamic player buffer for
>> smooth playback. For example, a viewer with a reliable connection may have
>> a 500ms buffer, while a viewer with a cellular connection may have a 2s
>> buffer, while a viewer in a developing country may have a 5s buffer, while
>> a service that archives the stream may have a 30s buffer for maximum
>> reliability.
>>
>>
>>
>> The broadcaster and any intermediate proxies do not know or care about
>> each viewer's desired latency. They just create QUIC streams, transmit
>> packets based on stream priority, and eventually close any streams if they
>> reach some maximum upper bound. This makes it ideal for video distribution
>> especially when multiple caches and proxies are involved.
>>
>>
>>
>> On Fri, Feb 11, 2022 at 11:59 AM Sergio Garcia Murillo <
>> sergio.garcia.murillo@gmail.com> wrote:
>>
>> Hi Luke,
>>
>>
>>
>> QUICK question, what is the target glass-to-glass latency for WARP?
>>
>>
>>
>> Best regards
>>
>> Sergio
>>
>>
>>
>>
>>
>> On Fri, 11 Feb 2022, 20:22 Luke Curley <kixelated@gmail.com> wrote:
>>
>> Hey MOQ, I just published a draft for Warp
>> <https://urldefense.com/v3/__https:/datatracker.ietf.org/doc/draft-lcurley-warp/__;!!GjvTz_vk!CN-FzuL40h3RSvNUobOIUtEEMChMR2oAcW4N7QAzHt3yJISvAijnqM0MaK7E$>.
>> Here's a quick FAQ:
>>
>>
>>
>> *What is Warp?*
>>
>> Twitch has developed a new video distribution protocol to replace our
>> custom low-latency HLS stack. Warp uses QUIC streams to deliver media
>> segments, prioritizing streams based on content and age. This allows
>> viewers to skip old video content during congestion instead of buffering,
>> improving the user experience and reducing latency.
>>
>>
>>
>> *What about contribution?*
>>
>> Warp is very similar to Facebook's RUSH
>> <https://urldefense.com/v3/__https:/www.ietf.org/archive/id/draft-kpugin-rush-00.html__;!!GjvTz_vk!CN-FzuL40h3RSvNUobOIUtEEMChMR2oAcW4N7QAzHt3yJISvAijnqK2Y57G-$> and
>> can be used as a contribution protocol. There are a few fundamental
>> differences, like the prioritization scheme and transferring media as
>> segments. This first version of the draft focuses on these core differences
>> and omits anything else that could be a distraction.
>>
>>
>>
>> *Why not WebRTC?*
>>
>> We initially used WebRTC (both media and data channels) for
>> last-mile delivery, but the user experience was significantly worse than our
>> existing stack. There were so many minor issues, primarily caused by
>> WebRTC's focus on real-time latency and the inability to control the client
>> (browser) behavior. I personally had to scrap years of work on a custom
>> SFU. 😔
>>
>>
>>
>> *Why not use datagrams?*
>>
>> Warp uses QUIC streams because it dramatically simplifies the protocol.
>> We get the full benefit of QUIC's fragmentation, congestion control, flow
>> control, recovery, cancellation, multiplexing, etc. Using datagrams gives
>> you extra flexibility but it also means you have to reimplement everything
>> on every platform.
>>
>>
>>
>> *Why not use HTTP?*
>>
>> Good question! The key to warp is the prioritization mechanism, which
>> could work with HTTP/3 and possibly HTTP/2. Twitch has the benefit of
>> running our own network so it was just simpler to make a push-based
>> protocol using QUIC and WebTransport. I've got some ideas for a more
>> complicated HTTP solution that would enable CDN support.
>>
>>
>>
>> *How is media delivered?*
>>
>> Warp sends each segment (group of pictures) over a QUIC stream. Audio and
>> newer video segments are prioritized, causing older video segments to
>> starve during congestion. Either side can cancel the stream to effectively
>> drop the tail of a segment. Media is quite linear by nature and most frames
>> need to be processed in decode order.
>>
>>
>>
>> *Why not drop individual frames?*
>>
>> We decided that it wasn't worth dropping non-reference frames, given
>> their infrequency and relatively small size for high quality media. Our
>> hardware encodes (QuickSync) have only reference frames and we've seen
>> software encodes with only 3% non-reference frames by file size. And of
>> course, dropping reference frames will cause artifacting or freezing so
>> that wasn't an option.
>>
>>
>> * How could this be improved?*
>>
>> We want to experiment with layered coding (ex. SVC) at some point in the
>> future. This would involve transferring non-reference frames/slices on a
>> different QUIC stream so they can be deprioritized. Simulcast would work
>> the same way: transfer each rendition on a different QUIC stream
>> prioritized based on the resolution.
>>
>>
>>
>> *Why use fMP4?*
>>
>> HLS and DASH support CMAF: a standard for fragmenting MP4 files. Warp
>> uses this file format so we can deliver the same segment data regardless of
>> the delivery protocol. The Warp MP4 atom uses JSON because I was too lazy
>> to do things "properly" for this first draft. The wire format doesn't
>> matter!
>>
>> --
>> Moq mailing list
>> Moq@ietf.org
>> https://www.ietf.org/mailman/listinfo/moq
>>
>>