Re: [Moq] Exploring HTTP/3
Luke Curley <kixelated@gmail.com> Thu, 09 February 2023 21:02 UTC
To: Bernard Aboba <bernard.aboba@gmail.com>
Cc: Lucas Pardue <lucaspardue.24.7@gmail.com>, MOQ Mailing List <moq@ietf.org>, Spencer Dawkins at IETF <spencerdawkins.ietf@gmail.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/ugP82C5nv2-BVU_24-y4qcb0cpw>
Sorry, that was probably too in-depth. I meant to reply to Bernard saying
that STOP_SENDING during congestion is not good enough. The receiver has a
limited view of the network and thus responds slowly or incorrectly to
congestion. The sender, on the other hand, has direct access to both the
queued data and the congestion window.

On Thu, Feb 9, 2023 at 11:56 AM Luke Curley <kixelated@gmail.com> wrote:

> If we assume the sender will round-robin, as is the case with HLS/DASH,
> then the client has an unbelievably difficult task.
>
> Let's suppose the client application is responsible for canceling requests
> during congestion. The client has to decide whether it should cancel a
> pending request and/or make the next request.
>
> 1. The client doesn't know when data is buffered at the TCP/QUIC layer,
> pending retransmission of a gap before the contents are flushed to the
> application.
> 2. The client doesn't know what data should be forthcoming. It needs to
> make assumptions about how much jitter exists in the video system (even
> b-frames) so it can predict when a timestamp should have arrived.
> 3. The client can't determine where queuing occurs in the pipeline. If the
> broadcaster is encountering congestion, that will look like last-mile
> congestion to the client, so it will incorrectly cancel the previous
> segment and degrade the experience.
> 4. The client doesn't know if a segment will be transferred at network
> speed (ex. advertisement on disk) or encoder speed (ex. live stream). If
> the next segment is transferred at network speed, then it will fight for
> bandwidth.
> 5. The client doesn't know the congestion window. It doesn't know how many
> simultaneous requests could be processed at once, similar to how it doesn't
> know if it can switch to a higher rendition without just trying it.
> 6. When the client decides to cancel a request, it takes at least a
> round-trip to apply.
> 7. When the client decides to make a new request, it takes at least a
> round-trip to start receiving the content.
>
> Putting all of these together, an HLS/DASH client has a remarkably delayed
> response to congestion. The slower the response to congestion, the larger
> the jitter buffer required, and the higher the latency. It's extremely
> difficult to build an optimal HLS/DASH client, so most of them (Twitch
> included) just download segments sequentially, cancelling them only in
> extreme situations. This is a fundamental latency wall on networks with
> congestion, and why Twitch is replacing LHLS (basically LL-DASH) with Warp.
>
> The idea behind delivery order (prioritization) is that the sender has a
> much better view of the network, can make faster decisions, and thus can
> achieve lower latency. The sender iterates over streams in delivery order,
> creating STREAM frames when there's data in the send buffer, until the
> congestion window is hit. It's effectively a priority queue, and the end
> result is that any number of segments can be transmitted in parallel
> without the need to cancel during congestion.
>
> Something does need to annotate each segment with a delivery order. It
> could be the viewer in the case of HTTP/3 Warp, or it could be the
> broadcaster in the case of WebTransport Warp. The nice thing about doing it
> at the broadcaster is that it's consistent throughout multiple layers of
> relays, so the response to congestion is the same regardless of the hop.
> One of the warts with HTTP/3 Warp I alluded to earlier is that two viewers
> could request the same segment with different delivery orders, and it's not
> clear to the CDN which one it should use on a cache miss.
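The sender loop described above (iterate streams in delivery order, emit STREAM frames until the congestion window is exhausted) can be sketched roughly as follows. This is an illustrative Python model, not code from any real QUIC stack; the `Stream` class, `send_round` function, and the 1200-byte frame size are all hypothetical choices for the sketch.

```python
import heapq

class Stream:
    """Hypothetical model of a QUIC stream carrying one media segment."""
    def __init__(self, stream_id, delivery_order):
        self.stream_id = stream_id
        self.delivery_order = delivery_order  # lower value = delivered first
        self.buffer = bytearray()             # application data awaiting send

def send_round(streams, cwnd_bytes, frame_size=1200):
    """One pass of the sender: pop the stream with the lowest delivery
    order that has buffered data, emit a STREAM frame for it, and repeat
    until the congestion window budget is spent or nothing is queued.
    Returns a list of (stream_id, frame_length) pairs."""
    frames = []
    heap = [(s.delivery_order, s.stream_id, s) for s in streams if s.buffer]
    heapq.heapify(heap)
    budget = cwnd_bytes
    while heap and budget > 0:
        order, sid, s = heapq.heappop(heap)
        n = min(frame_size, len(s.buffer), budget)
        frames.append((sid, n))
        del s.buffer[:n]
        budget -= n
        if s.buffer:  # stream still has data: re-queue at the same priority
            heapq.heappush(heap, (order, sid, s))
    return frames

# Segment 6 is newer, so it gets the lower delivery order and drains first;
# segment 5 only receives whatever window remains. No cancellation needed.
seg6, seg5 = Stream(6, 0), Stream(5, 1)
seg6.buffer += b"x" * 2000
seg5.buffer += b"y" * 2000
print(send_round([seg5, seg6], cwnd_bytes=3000))
# -> [(6, 1200), (6, 800), (5, 1000)]
```

Note how both segments make progress in the same pass: the priority queue starves the older segment only while the newer one has data queued, which is exactly the "transmit in parallel without canceling" behavior described above.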
>
> On Thu, Feb 9, 2023 at 10:18 AM Bernard Aboba <bernard.aboba@gmail.com>
> wrote:
>
>> On Thu, Feb 9, 2023 at 09:56 Luke Curley <kixelated@gmail.com> wrote:
>>
>>> For example, suppose a client issues a request for segment 5 and segment
>>> 6, asking that the newer segment is delivered first during congestion.
>>>
>>> If the two requests share an HTTP/3 or HTTP/2 connection, then the HTTP
>>> server can prioritize. Any available bandwidth under the congestion
>>> window is spent on STREAM frames for segment 6 first.
>>>
>> [BA] Couldn't this be accomplished without priority, by having the
>> receiver send a STOP_SENDING frame for segment 5, once it became clear it
>> was taking too long?
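The round-trip cost of the receiver-driven STOP_SENDING approach (points 6 and 7 in the reply above) can be made concrete with back-of-the-envelope arithmetic. This is a deliberately simplified model I've added for illustration; real stacks add queueing, retransmission, and decision delay on top of it.

```python
def receiver_switch_penalty(rtt_s):
    """Minimum time the stale segment keeps competing for bandwidth
    after the receiver decides to switch (simplified model):
    one RTT for STOP_SENDING to reach the sender and take effect,
    plus one RTT before the first bytes of the newly requested
    segment arrive. The player's jitter buffer must absorb this gap."""
    cancel_delay = rtt_s   # STOP_SENDING in flight to the sender
    request_delay = rtt_s  # new request out + first response bytes back
    return cancel_delay + request_delay

print(receiver_switch_penalty(0.100))  # 100 ms RTT -> 0.2 s floor
```

A sender that reorders by delivery order pays neither round-trip, since the decision is made where the queue and the congestion window live.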