Re: [Moq] Latency @ Twitch

Piers O'Hanlon <piers.ohanlon@bbc.co.uk> Wed, 10 November 2021 15:45 UTC

From: Piers O'Hanlon <piers.ohanlon@bbc.co.uk>
To: Joerg Ott <ott@in.tum.de>
CC: Kirill Pugin <ikir=40fb.com@dmarc.ietf.org>, Luke Curley <kixelated@gmail.com>, "Mo Zanaty (mzanaty)" <mzanaty=40cisco.com@dmarc.ietf.org>, Ian Swett <ianswett@google.com>, "Ali C. Begen" <ali.begen@networked.media>, Justin Uberti <juberti@alphaexplorationco.com>, MOQ Mailing List <moq@ietf.org>, Bernard Aboba <bernard.aboba@gmail.com>
Date: Wed, 10 Nov 2021 15:44:41 +0000
Message-ID: <B3AE2C3B-744A-4DE4-8943-DFBAD6B3BF27@bbc.co.uk>
In-Reply-To: <6f058a8c-88bf-1f3d-d711-3ed2e530903e@in.tum.de>
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/paO6mEiRamsiWjFKTnCCt8YFyCU>
Subject: Re: [Moq] Latency @ Twitch

I guess it depends on whether a QUIC API would allow for obtaining per-flow stats or just info for the entire connection? Using information from ACKs just for your flow may provide a clearer picture…
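
For illustration, a rough sketch of the kind of per-flow accounting I mean, assuming a hypothetical onAck hook from the QUIC stack that attributes acked bytes and an RTT sample to a stream (no real stack exposes exactly this today):

interface StreamStats {
  bytesAcked: number;
  lastRttMs: number;
  firstAckMs: number;
  lastAckMs: number;
}

const perStream = new Map<number, StreamStats>();

// Hypothetical callback: the stack tells us which stream an ACK covered.
function onAck(streamId: number, bytesAcked: number, rttMs: number, nowMs: number): void {
  const s = perStream.get(streamId) ??
    { bytesAcked: 0, lastRttMs: rttMs, firstAckMs: nowMs, lastAckMs: nowMs };
  s.bytesAcked += bytesAcked;
  s.lastRttMs = rttMs;
  s.lastAckMs = nowMs;
  perStream.set(streamId, s);
}

// Per-flow goodput estimate over the observed window (bits per second).
function flowRateBps(streamId: number): number {
  const s = perStream.get(streamId);
  if (!s || s.lastAckMs === s.firstAckMs) return 0;
  return (s.bytesAcked * 8) / ((s.lastAckMs - s.firstAckMs) / 1000);
}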

Best,

Piers

On 10 Nov 2021, at 15:28, Joerg Ott <ott@in.tum.de> wrote:

If you can get to QUIC's readily available stats it seems you can
roughly approximate what you need to know for building some of the
RTP CC on top without extra RTCP feedback packets.

Per-packet timestamps mentioned by Justin would still be a nice-to-have.
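
As a rough sketch of that mapping (the stats argument below just stands in for whatever a given stack exposes; it is not a real API), this is roughly the per-interval feedback an RMCAT-style controller wants, minus the per-packet receive timestamps:

// What a goog-cc / RMCAT-style sender needs per feedback interval, filled
// from QUIC sender-side stats instead of RTCP reports.
interface CcFeedback {
  bytesAcked: number;     // from ACK frames
  packetsLost: number;    // from loss detection
  smoothedRttMs: number;  // from the stack's RTT estimator
  minRttMs: number;
  // Still missing: per-packet receive timestamps, hence the nice-to-have above.
}

function feedbackFromQuic(stats: { bytesAcked: number; packetsLost: number;
                                   srttMs: number; minRttMs: number }): CcFeedback {
  return {
    bytesAcked: stats.bytesAcked,
    packetsLost: stats.packetsLost,
    smoothedRttMs: stats.srttMs,
    minRttMs: stats.minRttMs,
  };
}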

Best,
Jörg

On 10.11.21 00:01, Kirill Pugin wrote:
Fwiw, for RUSH/QUIC we tested RMCAT as CC 😃
We did have to “re-implement” RTCP feedback, but that was relatively straightforward and we could re-use the RMCAT implementation that was used for video calling applications.
Best,
Kirill
From: Luke Curley <kixelated@gmail.com>
Date: Tuesday, November 9, 2021 at 2:58 PM
To: "Mo Zanaty (mzanaty)" <mzanaty=40cisco.com@dmarc.ietf.org>
Cc: Bernard Aboba <bernard.aboba@gmail.com>, Justin Uberti <juberti@alphaexplorationco.com>, Ian Swett <ianswett@google.com>, "Ali C. Begen" <ali.begen@networked.media>, MOQ Mailing List <moq@ietf.org>
Subject: Re: [Moq] Latency @ Twitch
Maybe a dumb thought, but is the PROBE_RTT phase required when sufficiently application limited, as is primarily the case for live video? If I understand correctly, it's meant to drain the queue to remeasure the minimum RTT, but that doesn't seem necessary when the queue is constantly being drained due to a lack of data to send.
Either way, the issue is that existing TCP algorithms don't care about the live video use-case, and those are the ones that have been ported to QUIC thus far. But like Justin mentioned, this doesn't actually matter for the sake of standardizing a video over QUIC protocol provided the building blocks are in place.
The real question is: do QUIC ACKs contain enough signal to implement an adequate live video congestion control algorithm? If not, how can we increase that signal, potentially taking cues from RMCAT (ex. RTT on a per-packet basis)?
On Tue, Nov 9, 2021, 10:27 AM Mo Zanaty (mzanaty) <mzanaty=40cisco.com@dmarc.ietf.org> wrote:
   All current QUIC CCs (BBRv1/2, CUBIC, NewReno, etc.) are not well
   suited for real-time media, even for a rough “envelope” or
   “circuit-breaker”. RMCAT CCs are explicitly designed for real-time
   media, but, of course, rely on RTCP feedback, so must be adapted to
   QUIC feedback.
   Mo
    On 11/9/21, 1:13 PM, "Bernard Aboba" <bernard.aboba@gmail.com> wrote:
   Justin said:
   "As others have noted, BBR does not work great out of the box for
   realtime scenarios."
   [BA] At the ICCRG meeting on Monday, there was an update on BBR2:
   https://datatracker.ietf.org/meeting/112/materials/slides-112-iccrg-bbrv2-update-00.pdf
   While there are some improvements, issues such as "PROBE_RTT" and
   rapid rampup after loss remain, and overall, it doesn't seem like
   BBR2 is going to help much with realtime scenarios. Is that fair?
   On Tue, Nov 9, 2021 at 12:46 PM Justin Uberti
    <juberti@alphaexplorationco.com> wrote:
       Ultimately we found that it wasn't necessary to standardize the
       CC as long as the behavior needed from the remote side (e.g.,
       feedback messaging) could be standardized.
       As others have noted, BBR does not work great out of the box for
       realtime scenarios. The last time this was discussed, the
       prevailing idea was to allow the QUIC CC to be used as a sort of
       circuit-breaker, but within that envelope the application could
        use whatever realtime algorithm it preferred (e.g., goog-cc).
       On Thu, Nov 4, 2021 at 3:58 AM Piers O'Hanlon
        <piers.ohanlon@bbc.co.uk> wrote:
               On 3 Nov 2021, at 21:46, Luke Curley
                <kixelated@gmail.com> wrote:
               Yeah, there's definitely some funky behavior in BBR when
               application limited but it's nowhere near as bad as
               Cubic/Reno. With those algorithms you need to burst
               enough packets to fully utilize the congestion window
               before it can be grown. With BBR I believe you need to
               burst just enough to fully utilize the pacer, and even
               then this condition
               <https://source.chromium.org/chromium/chromium/src/+/master:net/third_party/quiche/src/quic/core/congestion_control/bbr_sender.cc;l=393> lets
               you use application-limited samples if they would
               increase the send rate.
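
                As a rough paraphrase of what that check amounts to (a sketch, not the actual Chromium code):

                // App-limited bandwidth samples are normally discarded, unless they
                // would raise the current max-bandwidth estimate.
                function maybeUpdateMaxBandwidth(maxBwBps: number, sampleBps: number,
                                                 isAppLimited: boolean): number {
                  if (!isAppLimited || sampleBps > maxBwBps) {
                    return Math.max(maxBwBps, sampleBps);
                  }
                  return maxBwBps;
                }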
           And there’s also the idle cwnd collapse/reset behaviour to
           consider if you’re sending a number of frames together and
           their inter-data gap exceeds the RTO - I’m not quite sure
           how the various QUIC stacks have translated RFC2861/7661
           advice on this…?
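
            For comparison, roughly the two behaviours those RFCs describe (a sketch from my reading, not taken from any QUIC stack):

            // RFC 2861 spirit: decay cwnd after an idle period longer than the RTO.
            // RFC 7661 spirit: preserve cwnd, but treat it as non-validated until used again.
            function cwndAfterIdle(cwnd: number, idleMs: number, rtoMs: number,
                                   restartCwnd: number, decay: boolean): number {
              if (idleMs <= rtoMs) return cwnd; // not idle long enough to matter
              if (!decay) return cwnd;          // keep cwnd, revalidate on next use
              const halvings = Math.floor(idleMs / rtoMs);
              return Math.max(restartCwnd, cwnd / Math.pow(2, halvings)); // decay while idle
            }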
               I started with BBR first because it's simpler, but I'm
               going to try out BBR2 at some point because of the
               aforementioned PROBE_RTT issue. I don't follow the
               congestion control space closely enough; are there any
               notable algorithms that would better fit the live video
               use-case?
           I guess Google’s Goog_CC appears to be well used in the
           WebRTC space (e.g. WEBRTC
           <https://webrtc.googlesource.com/src/+/refs/heads/main/modules/congestion_controller/goog_cc> and
           aiortc
           <https://github.com/aiortc/aiortc/blob/1a192386b721861f27b0476dae23686f8f9bb2bc/src/aiortc/rate.py#L271>)
           despite the draft
           <https://datatracker.ietf.org/doc/html/draft-ietf-rmcat-gcc> never
            making it to RFC status… There's also SCReAM
            <https://datatracker.ietf.org/doc/rfc8298/> which has an
            open source implementation
            <https://github.com/EricssonResearch/scream> but I'm not sure
           how widely deployed it is.
               On Wed, Nov 3, 2021 at 2:12 PM Ian Swett
                <ianswett@google.com> wrote:
                    From personal experience, BBR has some issues with
                   application limited behavior, but it is still able
                   to grow the congestion window, at least slightly, so
                   it's likely an improvement over Cubic or Reno.
                   On Wed, Nov 3, 2021 at 4:40 PM Luke Curley
                    <kixelated@gmail.com>
                   wrote:
                       I think resync points are an interesting idea
                       although we haven't evaluated them. Twitch did
                       push for S-frames in AV1 which will be another
                       option in the future instead of encoding a full
                       IDR frame at these resync boundaries.
                       An issue is you have to make the hard decision
                       to abort the current download and frantically
                       try to pick up the pieces before the buffer
                       depletes. It's a one-way door (maybe your
                       algorithm overreacted) and you're going to
                       be throwing out some media just to redownload it
                       at a lower bitrate.
                       Ideally, you could download segments in parallel
                       without causing contention. The idea is to spend
                       any available bandwidth on the new segment to
                       fix the problem, and any excess bandwidth on the
                       old segment in the event it arrives before the
                       player buffer actually depletes. That's more or
                       less the core concept for what we've built using
                       QUIC, and it's compatible with resync points if
                       we later go down that route.
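
                        A very rough sketch of that idea (not actual player
                        code; the priority argument is an assumption about
                        what the transport would need to honour):

                        async function emergencySwitch(
                          download: (url: string, priority: number) => Promise<Uint8Array>,
                          oldSegmentUrl: string, lowRenditionUrl: string,
                          bufferRemainingMs: number
                        ): Promise<Uint8Array | null> {
                          const oldSeg = download(oldSegmentUrl, 1);    // only gets excess bandwidth
                          const newSeg = download(lowRenditionUrl, 10); // gets the link first
                          const deadline = new Promise<null>(
                            resolve => setTimeout(() => resolve(null), bufferRemainingMs));
                          // Keep whichever segment completes before the buffer
                          // drains; null means a rebuffer despite both attempts.
                          return Promise.race([oldSeg, newSeg, deadline]);
                        }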
                       And you're exactly right Piers. The fundamental
                       issue is that a web player lacks the low level
                       timing information required to infer
                       the delivery rate. You would want something like
                       BBR's rate estimation
                       <https://datatracker.ietf.org/doc/html/draft-cheng-iccrg-delivery-rate-estimation> which
                       inspects the time delta between packets to
                        determine the send rate. That gets really
                        difficult when the OS and browser delay flushing
                        data to the application, be it for performance
                        reasons or due to packet loss (in-order delivery
                        means head-of-line blocking).
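
                        For reference, the core of that rate estimator is
                        roughly the following (a heavily simplified sketch
                        of the draft, not any stack's real code):

                        // Each packet remembers how much had been delivered when it was
                        // sent, so every ACK yields a sample: delta delivered / delta time.
                        interface SentPacket {
                          bytes: number;
                          deliveredAtSend: number;       // total delivered when sent
                          deliveredTimeAtSendMs: number; // delivery clock when sent
                        }

                        let delivered = 0;       // total bytes delivered so far
                        let deliveredTimeMs = 0; // time of the most recent delivery

                        function onPacketSent(bytes: number, nowMs: number): SentPacket {
                          return { bytes, deliveredAtSend: delivered,
                                   deliveredTimeAtSendMs: deliveredTimeMs || nowMs };
                        }

                        function onPacketAcked(p: SentPacket, nowMs: number): number {
                          delivered += p.bytes;
                          deliveredTimeMs = nowMs;
                          const intervalMs = nowMs - p.deliveredTimeAtSendMs;
                          return intervalMs > 0
                            ? ((delivered - p.deliveredAtSend) * 8) / (intervalMs / 1000) // bits/s
                            : 0;
                        }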
                       I did run into CUBIC/Reno not being able to grow
                       the congestion window when frames are sent one
                       at a time (application limited). I don't believe
                       BBR suffers from the same problem though due to
                       the aforementioned rate estimator.
                       On Wed, Nov 3, 2021 at 10:05 AM Ali C. Begen
                        <ali.begen@networked.media> wrote:
                            > On Nov 3, 2021, at 6:50 PM, Piers
                            O'Hanlon <piers.ohanlon@bbc.co.uk> wrote:
                            >
                            >
                            >
                            >> On 2 Nov 2021, at 20:39, Ali C. Begen
                            <ali.begen=40networked.media@dmarc.ietf.org>
                           wrote:
                            >>
                            >>
                            >>
                            >>> On Nov 2, 2021, at 3:39 AM, Luke Curley
                            <kixelated@gmail.com> wrote:
                            >>>
                            >>> Hey folks, I wanted to quickly
                           summarize the problems we've run into at
                           Twitch that have led us to QUIC.
                            >>>
                            >>>
                            >>> Twitch is a live one-to-many product.
                           We primarily focus on video quality due to
                           the graphical fidelity of video games.
                           Viewers can participate in a chat room,
                           which the broadcaster reads and can respond
                           to via video. This means that latency is
                           also somewhat important to facilitate this
                           social interaction.
                            >>>
                            >>> A looong time ago we were using RTMP
                           for both ingest and distribution (Flash
                           player). We switched to HLS for distribution
                           to gain the benefit of 3rd party CDNs, at
                           the cost of dramatically increasing latency.
                           A later project lowered the latency of HLS
                           using chunked-transfer delivery, very
                           similar to LL-DASH (and not LL-HLS). We're
                           still using RTMP for contribution.
                            >>>
                            > I guess Apple do also have their
                           BYTERANGE/CTE mode for LL-HLS which is
                           pretty similar to LL-DASH.
                            Yes, Apple can list the parts (chunks in
                            LL-DASH) as byteranges in the playlist, but
                            the frequent playlist refresh and part
                            retrieval process is inevitable in LL-HLS,
                            which is one of the main differences from
                            LL-DASH (which needs no manifest refresh and
                            issues one request per segment, not per chunk).
                            >
                            >>>
                            >>> To summarize the issues with our
                           current distribution system:
                            >>>
                            >>> 1. HLS suffers from head-of-line blocking.
                            >>> During congestion, the current segment
                           stalls and is delivered slower than the
                            encoded bitrate. The player has no recourse
                            but to wait for the segment to finish
                           downloading, risking depleting the buffer.
                           It can switch down to a lower rendition at
                           segment boundaries, but these boundaries
                           occur too infrequently (every 2s) to handle
                           sudden congestion. Trying to switch earlier,
                           either by canceling the current segment or
                           downloading the lower rendition in parallel,
                           only exacerbates the issue.
                            >>
                            > Isn't the HoL limitation more down to the
                           use of HTTP/1.1?
                            >
                            >> DASH has the concept of Resync points
                           that were designed exactly for this purpose
                           (allowing you to emergency downshift in the
                           middle of a segment).
                            >>
                            > I was curious if there are any studies or
                           experience of how resync points perform in
                           practice?
                           Resync points are pretty fresh out of the
                           oven. dash.js has it in the roadmap but not
                           yet implemented (and we also need to
                           generate test streams). So, there is no data
                           available yet with the real clients. But, I
                           suppose you can imagine how in-segment
                           switching can help in sudden bw drops
                           especially for long segments.
                            >
                            >>> 2. HLS has poor "auto" quality (ABR).
                            >>> The player is responsible for choosing
                           the rendition to download. This is a problem
                           when media is delivered frame-by-frame (ie.
                           HTTP chunked-transfer), as we're effectively
                           application-limited by the encoder bitrate.
                           The player can only measure the arrival
                           timestamp of data and does not know when the
                           network can sustain a higher bitrate without
                           just trying it. We hosted an ACM challenge
                           for this issue in particular.
                            >>
                            > The limitation here may also be down to
                           the lack of access to sufficiently accurate
                           timing information about data arrivals in
                           the browser - unfortunately the Streams API,
                           which provides data from the fetch API,
                           doesn’t directly timestamp the data arrivals
                           so the JS app has to timestamp it which can
                           suffer from noise such as scheduling etc -
                           especially a problem for small/fast data
                           arrivals.
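
                            For reference, this is roughly what the JS app
                            is reduced to today (a sketch using the real
                            fetch/Streams APIs; the timestamp is taken
                            app-side, after any OS/browser buffering):

                            // Timestamp each chunk as the reader hands it to JS.
                            async function measureArrivals(url: string) {
                              const response = await fetch(url);
                              const reader = response.body!.getReader();
                              const samples: { tMs: number; bytes: number }[] = [];
                              for (;;) {
                                const result = await reader.read();
                                if (result.done) break;
                                // performance.now() reflects scheduling noise,
                                // not the wire arrival time.
                                samples.push({ tMs: performance.now(),
                                               bytes: result.value.byteLength });
                              }
                              return samples;
                            }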
                           Yes, you need to get rid of that noise (see
                           LoL+).
                            > I guess another issue could be that if
                           the system is only sending single frames
                           then the network transport may be operating
                           in application limited mode so the cwnd
                           doesn’t grow sufficiently to take advantage
                           of the available capacity.
                           Unless the video bitrate is too low, this
                           should not be an issue most of the time.
                            >
                            >> That exact challenge had three competing
                           solutions, two of which are now part of the
                           official dash.js code. And yes, the player
                            can figure out what the network can sustain
                           *without* trying higher bitrate renditions.
                            >>
                           https://github.com/Dash-Industry-Forum/dash.js/wiki/Low-Latency-streaming
                            >> Or read the paper that even had “twitch”
                           in its title here:
                           https://ieeexplore.ieee.org/document/9429986
                            >>
                            > There was a recent study that seems to
                           show that none of the current algorithms are
                           that great for low latency, and the two new
                           dash.js ones appear to lead to much higher
                           levels of rebuffering:
                            >
                           https://dl.acm.org/doi/pdf/10.1145/3458305.3478442
                           Brightcove’s paper uses the LoL and L2A
                           algorithms from the challenge where low
                           latency was the primary goal. For Twitch’s
                           own evaluation, I suggest you watch:
                           https://www.youtube.com/watch?v=rcXFVDotpy4
                            We later addressed the rebuffering issue and
                            developed LoL+, which is the version now
                            included in dash.js and explained at the
                            ieeexplore link I gave above.
                           Copying the authors in case they want to add
                           anything for the paper you cited.
                           -acbegen
                            >
                            > Piers
                            >
                            >>> I believe this is why LL-HLS opts to
                           burst small chunks of data (sub-segments) at
                           the cost of higher latency.
                            >>>
                            >>>
                            >>> Both of these necessitate a larger
                           player buffer, which increases latency. The
                            contribution system has its own problems, but
                           let me sync up with that team first before I
                           try to enumerate them.

--
Moq mailing list
Moq@ietf.org
https://www.ietf.org/mailman/listinfo/moq