Re: [Moq] Exploring HTTP/3

Luke Curley <kixelated@gmail.com> Thu, 09 February 2023 19:56 UTC

From: Luke Curley <kixelated@gmail.com>
Date: Thu, 09 Feb 2023 11:56:03 -0800
Message-ID: <CAHVo=ZmX45czku9upiP2iR15Re4TKM+43bd6gqb5bKmC0heARw@mail.gmail.com>
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: Lucas Pardue <lucaspardue.24.7@gmail.com>, MOQ Mailing List <moq@ietf.org>, Spencer Dawkins at IETF <spencerdawkins.ietf@gmail.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/LWXRopzqS7v0zbsz3gcYtZOz7oQ>

If we assume the sender will round-robin, as is the case with HLS/DASH,
then the client has an unbelievably difficult task.


Let's suppose the client application is responsible for canceling requests
during congestion. The client has to decide whether to cancel a pending
request and/or make the next request.

1. The client doesn't know when data is buffered at the TCP/QUIC layer,
pending retransmission of a gap before the contents are flushed to the
application.
2. The client doesn't know what data should be forthcoming. It needs to
make assumptions about how much jitter exists in the video system (even
B-frames) so it can predict when a timestamp should have arrived.
3. The client can't determine where queuing occurs in the pipeline. If the
broadcaster is encountering congestion, that will appear to the client like
last-mile congestion, so it will incorrectly cancel the previous segment
and degrade the experience.
4. The client doesn't know if a segment will be transferred at network
speed (e.g. an advertisement on disk) or encoder speed (e.g. a live
stream). If the next segment is transferred at network speed, then it will
fight the current segment for bandwidth.
5. The client doesn't know the congestion window. It doesn't know how many
simultaneous requests can be serviced at once, just as it doesn't know
whether it can switch to a higher rendition without simply trying it.
6. When the client decides to cancel a request, it takes at least
a round-trip to apply.
7. When the client decides to make a new request, it takes at least a
round-trip to start receiving the content.
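To make item 2 concrete, here is a rough sketch (all names and constants
are my own assumptions, not from any real player) of the guesswork a client
is forced into: it has to pick a jitter bound up front just to decide when
a segment counts as "late", and even a correct cancellation still costs a
round trip (item 6).

```python
# Hypothetical client-side cancellation heuristic. The jitter bound and RTT
# are values the client must hardcode or estimate; neither is knowable
# precisely (items 1-3 above), which is exactly the problem.
def should_cancel(expected_arrival_s: float, now_s: float,
                  assumed_jitter_s: float = 0.5, rtt_s: float = 0.1) -> bool:
    """True once a segment is later than the assumed jitter can explain.

    Even then, the cancellation (e.g. a QUIC STOP_SENDING) takes about one
    round trip to reach the server (item 6), so the real response to
    congestion is slower still.
    """
    return (now_s - expected_arrival_s) > (assumed_jitter_s + rtt_s)
```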

All of these combined, an HLS/DASH client has a remarkably delayed response
to congestion. The slower the response to congestion, the larger the jitter
buffer required, and the higher the latency. It's extremely difficult to
build an optimal HLS/DASH client, so most of them (Twitch included) just
download segments sequentially, cancelling them only in extreme situations.
This is a fundamental latency wall on networks with congestion, and why
Twitch is replacing LHLS (basically LL-DASH) with Warp.


The idea behind delivery order (prioritization) is that the sender has a
much better view of the network, can make faster decisions, and thus can
achieve lower latency. The sender iterates over streams in delivery order,
creating STREAM frames when there's data in the send buffer, until the
congestion window is hit. It's effectively a priority queue, and the end
result is that any number of segments can be transmitted in parallel
without the need to cancel during congestion.
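The sender loop described above amounts to a priority queue drained against
the congestion window. A minimal sketch, assuming a simplified model where
each stream carries one segment and all names (Stream, cwnd, etc.) are
illustrative rather than from any real QUIC stack:

```python
# Sketch of sender-side prioritization: streams are drained in delivery
# order until the congestion window is exhausted. Any number of segments
# can be "in flight" without the receiver ever needing to cancel one.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Stream:
    delivery_order: int                              # lower = send first
    stream_id: int = field(compare=False)
    send_buffer: bytes = field(compare=False, default=b"")

def fill_congestion_window(streams, cwnd, frame_size=1200):
    """Emit (stream_id, chunk) STREAM frames in delivery order until cwnd is hit."""
    heap = [s for s in streams if s.send_buffer]
    heapq.heapify(heap)
    frames = []
    while heap and cwnd >= frame_size:
        s = heapq.heappop(heap)
        chunk = s.send_buffer[:frame_size]
        s.send_buffer = s.send_buffer[frame_size:]
        frames.append((s.stream_id, chunk))
        cwnd -= len(chunk)
        if s.send_buffer:               # more data queued: re-enqueue
            heapq.heappush(heap, s)
    return frames
```

With segment 6 marked more urgent than segment 5, every byte of 6 drains
before 5 gets any bandwidth; the moment 6's buffer empties, 5 resumes
automatically, with no round trip and no cancellation.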

Something does need to annotate each segment with a delivery order. It
could be the viewer in the case of HTTP/3 Warp, or it could be the
broadcaster in the case of WebTransport Warp. The nice thing about doing it
at the broadcaster is that it's consistent throughout multiple layers of
relays, so the response to congestion is the same regardless of the hop.
One of the warts with HTTP/3 Warp I alluded to earlier is that two viewers
could request the same segment with different delivery orders, and it's not
clear to the CDN which one it should use on a cache miss.
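As a toy illustration of broadcaster-side annotation (the function and its
policy are assumptions, not Warp's actual scheme): if the broadcaster maps
newer segments to a more urgent delivery order at publish time, every relay
hop applies the same newest-first policy under congestion without needing
input from any particular viewer.

```python
# Hypothetical broadcaster-assigned delivery order. Lower values are
# transmitted first under congestion, so newer segments win by default.
def delivery_order(segment_number: int, newest_first: bool = True) -> int:
    """Map a segment number to a delivery order (lower = sent first)."""
    return -segment_number if newest_first else segment_number
```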


On Thu, Feb 9, 2023 at 10:18 AM Bernard Aboba <bernard.aboba@gmail.com>
wrote:

> On Thu, Feb 9, 2023 at 09:56 Luke Curley <kixelated@gmail.com> wrote:
>
>>
> For example, suppose a client issues a request for segment 5 and segment
>> 6, asking that the newer segment is delivered first during congestion.
>>
>> If the two requests share a HTTP/3 or HTTP/2 connection, then the HTTP
>> server can prioritize. Any available bandwidth under the congestion window
>> is spent on STREAM frames for segment 6 first.
>>
>
> [BA] Couldn’t this be accomplished without priority, by having the
> receiver send a STOP_SENDING frame for segment 5, once it became clear it
> was taking too long?
>
>>