Re: [Moq] Latency @ Twitch

"Mo Zanaty (mzanaty)" <mzanaty@cisco.com> Tue, 09 November 2021 18:26 UTC

From: "Mo Zanaty (mzanaty)" <mzanaty@cisco.com>
To: Bernard Aboba <bernard.aboba@gmail.com>, Justin Uberti <juberti@alphaexplorationco.com>
CC: "Ali C. Begen" <ali.begen@networked.media>, Ian Swett <ianswett@google.com>, MOQ Mailing List <moq@ietf.org>
Thread-Topic: [Moq] Latency @ Twitch
Date: Tue, 09 Nov 2021 18:26:33 +0000
Message-ID: <9D095CBB-7BA8-4773-8981-8131C956F1C4@cisco.com>
References: <CAHVo=ZnXNnT2uod6oxHXTRoyA58cpn35BrV6eOXXnGUOFbcvSQ@mail.gmail.com> <0ADDD7B3-B49E-40E1-99E9-278EF0EA9B85@networked.media> <AF32886D-0524-45D4-9577-FCEFD601A0A1@bbc.co.uk> <73C6FFEB-CE81-4DE7-B110-55892D746927@networked.media> <CAHVo=Znu7F18fj4Anxz3j1byM+9aQmJ6N4DdFjUZk9fGjG8iXg@mail.gmail.com> <CAKcm_gM=bcALtqoLd8mYLdCiTK=ZfEF0RkXBkw17bPR6MjoMhA@mail.gmail.com> <CAHVo=ZngW+Z4-wGqAb4fRYQiSz6O4tOq1+nuto3PJaYLj1iWFg@mail.gmail.com> <6904CE31-940F-4D10-B312-4AEB67E9F9CB@bbc.co.uk> <CAOLzse37YZdnOLkt70F8yvmSXnaQ+KktX00keje3Vh2xkuFzjg@mail.gmail.com> <CAOW+2dtXVTzYK-ZkY_jSD4y8wa4_LxOO1fEeumwbmTzc1RAzDQ@mail.gmail.com>
In-Reply-To: <CAOW+2dtXVTzYK-ZkY_jSD4y8wa4_LxOO1fEeumwbmTzc1RAzDQ@mail.gmail.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/moq/qrl-jCZ8xmZqWm4l15AXZD9s2Vo>
Subject: Re: [Moq] Latency @ Twitch

None of the current QUIC CCs (BBRv1/2, CUBIC, NewReno, etc.) is well suited to real-time media, even as a rough “envelope” or “circuit breaker”. The RMCAT CCs are explicitly designed for real-time media but, of course, rely on RTCP feedback, so they must be adapted to QUIC feedback.
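To make "adapted to QUIC feedback" concrete, a rough sketch (TypeScript; all names, interfaces, and thresholds here are illustrative, not from any RMCAT spec or QUIC API) of feeding per-packet ACK events from a QUIC stack into a delay-based controller of the kind RMCAT produced:

```typescript
// Hypothetical per-packet feedback a QUIC sender can derive from ACKs.
interface AckEvent {
  packetNumber: number;
  sentMs: number;   // when the packet was sent
  ackedMs: number;  // when the ACK arrived at the sender
  bytes: number;
}

// A delay-based controller in the RMCAT spirit: back off when the
// queuing-delay signal (RTT growth over the minimum) exceeds a threshold,
// otherwise probe gently upward. Thresholds are illustrative.
class DelayBasedController {
  private baseRttMs = Infinity;
  private rateBps: number;

  constructor(initialRateBps: number) {
    this.rateBps = initialRateBps;
  }

  onAck(ev: AckEvent): number {
    const rttMs = ev.ackedMs - ev.sentMs;
    this.baseRttMs = Math.min(this.baseRttMs, rttMs);
    const queuingDelayMs = rttMs - this.baseRttMs;
    if (queuingDelayMs > 25) {
      this.rateBps *= 0.85;   // multiplicative decrease on queue buildup
    } else {
      this.rateBps += 8_000;  // small additive increase otherwise
    }
    return this.rateBps;
  }
}
```

The point is only that the inputs an RMCAT-style controller needs (per-packet send/ack times) are all available at a QUIC sender without RTCP.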

Mo


On 11/9/21, 1:13 PM, "Bernard Aboba" <bernard.aboba@gmail.com<mailto:bernard.aboba@gmail.com>> wrote:

Justin said:

"As others have noted, BBR does not work great out of the box for realtime scenarios."

[BA] At the ICCRG meeting on Monday, there was an update on BBR2:
https://datatracker.ietf.org/meeting/112/materials/slides-112-iccrg-bbrv2-update-00.pdf

While there are some improvements, issues such as "PROBE_RTT" and the rapid ramp-up after loss remain, and overall it doesn't seem like BBR2 is going to help much with realtime scenarios. Is that fair?

On Tue, Nov 9, 2021 at 12:46 PM Justin Uberti <juberti@alphaexplorationco.com<mailto:juberti@alphaexplorationco.com>> wrote:
Ultimately we found that it wasn't necessary to standardize the CC as long as the behavior needed from the remote side (e.g., feedback messaging) could be standardized.

As others have noted, BBR does not work great out of the box for realtime scenarios. The last time this was discussed, the prevailing idea was to allow the QUIC CC to be used as a sort of circuit breaker, but within that envelope the application could use whatever realtime algorithm it preferred (e.g., goog-cc).
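A sketch of that envelope idea (hypothetical rate sources, not a real QUIC API): the transport CC only caps the rate chosen by the application's realtime CC, and the capped rate then drives rendition selection:

```typescript
// The application CC (e.g. a goog-cc-like estimator) drives the encoder;
// the QUIC CC rate acts only as a hard ceiling ("circuit breaker").
function targetSendRateBps(quicCcRateBps: number, appCcRateBps: number): number {
  return Math.min(appCcRateBps, quicCcRateBps);
}

// Pick the highest rendition that fits under the target rate;
// fall back to the lowest rendition if none fits.
function pickRendition(targetBps: number, ladderBps: number[]): number {
  const fits = ladderBps.filter((b) => b <= targetBps);
  return fits.length ? Math.max(...fits) : Math.min(...ladderBps);
}
```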

On Thu, Nov 4, 2021 at 3:58 AM Piers O'Hanlon <piers.ohanlon@bbc.co.uk<mailto:piers.ohanlon@bbc.co.uk>> wrote:

On 3 Nov 2021, at 21:46, Luke Curley <kixelated@gmail.com<mailto:kixelated@gmail.com>> wrote:

Yeah, there's definitely some funky behavior in BBR when application limited but it's nowhere near as bad as Cubic/Reno. With those algorithms you need to burst enough packets to fully utilize the congestion window before it can be grown. With BBR I believe you need to burst just enough to fully utilize the pacer, and even then this condition<https://source.chromium.org/chromium/chromium/src/+/master:net/third_party/quiche/src/quic/core/congestion_control/bbr_sender.cc;l=393> lets you use application-limited samples if they would increase the send rate.
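That condition boils down to roughly the following (a simplified sketch, not the actual bbr_sender.cc code): application-limited bandwidth samples are discarded unless they would raise the estimate:

```typescript
// Simplified BBR-style max-bandwidth filter. Samples taken while
// application-limited understate the path capacity, so they are ignored --
// unless the sample would *increase* the estimate, in which case it is
// evidence of more capacity and is kept.
class MaxBandwidthFilter {
  private maxBps = 0;

  update(sampleBps: number, isAppLimited: boolean): number {
    if (!isAppLimited || sampleBps > this.maxBps) {
      this.maxBps = Math.max(this.maxBps, sampleBps);
    }
    return this.maxBps;
  }
}
```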

And there’s also the idle cwnd collapse/reset behaviour to consider if you’re sending a number of frames together and their inter-data gap exceeds the RTO - I’m not quite sure how the various QUIC stacks have translated RFC2861/7661 advice on this…?


I started with BBR first because it's simpler, but I'm going to try out BBR2 at some point because of the aforementioned PROBE_RTT issue. I don't follow the congestion control space closely enough; are there any notable algorithms that would better fit the live video use-case?

I guess Google’s Goog_CC appears to be well used in the WebRTC space (e.g. WEBRTC<https://webrtc.googlesource.com/src/+/refs/heads/main/modules/congestion_controller/goog_cc> and aiortc<https://github.com/aiortc/aiortc/blob/1a192386b721861f27b0476dae23686f8f9bb2bc/src/aiortc/rate.py#L271>) despite the draft<https://datatracker.ietf.org/doc/html/draft-ietf-rmcat-gcc> never making it to RFC status… There's also SCREAM<https://datatracker.ietf.org/doc/rfc8298/>, which has an open source implementation<https://github.com/EricssonResearch/scream>, but I'm not sure how widely deployed it is.



On Wed, Nov 3, 2021 at 2:12 PM Ian Swett <ianswett@google.com<mailto:ianswett@google.com>> wrote:
From personal experience, BBR has some issues with application limited behavior, but it is still able to grow the congestion window, at least slightly, so it's likely an improvement over Cubic or Reno.

On Wed, Nov 3, 2021 at 4:40 PM Luke Curley <kixelated@gmail.com<mailto:kixelated@gmail.com>> wrote:
I think resync points are an interesting idea, although we haven't evaluated them. Twitch did push for S-frames in AV1, which will be another option in the future instead of encoding a full IDR frame at these resync boundaries.

An issue is you have to make the hard decision to abort the current download and frantically try to pick up the pieces before the buffer depletes. It's a one-way door (maybe your algorithm overreacted) and you're going to be throwing out some media just to redownload it at a lower bitrate.

Ideally, you could download segments in parallel without causing contention. The idea is to spend any available bandwidth on the new segment to fix the problem, and any excess bandwidth on the old segment in the event it arrives before the player buffer actually depletes. That's more or less the core concept for what we've built using QUIC, and it's compatible with resync points if we later go down that route.
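A sketch of that bandwidth split (a hypothetical scheduler, not what we actually shipped): strict priority to the recovery segment, with only leftover budget going to the old one:

```typescript
interface Segment {
  id: string;
  remainingBytes: number;
  priority: number; // higher = served first
}

// Allocate one round's byte budget strictly by priority: the new
// (lower-bitrate, recovery) segment drains the budget first, and the old
// segment only consumes whatever is left over.
function allocate(budgetBytes: number, segments: Segment[]): Map<string, number> {
  const out = new Map<string, number>();
  let left = budgetBytes;
  for (const s of [...segments].sort((a, b) => b.priority - a.priority)) {
    const take = Math.min(left, s.remainingBytes);
    out.set(s.id, take);
    left -= take;
  }
  return out;
}
```

With QUIC, this maps naturally onto per-stream prioritization inside one connection, so the two downloads don't contend the way two TCP connections would.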


And you're exactly right, Piers. The fundamental issue is that a web player lacks the low-level timing information required to infer the delivery rate. You would want something like BBR's rate estimation<https://datatracker.ietf.org/doc/html/draft-cheng-iccrg-delivery-rate-estimation>, which inspects the time delta between packets to determine the send rate. That gets really difficult when the OS and browser delay flushing data to the application, be it for performance reasons or due to packet loss (where in-order delivery means head-of-line blocking).

I did run into CUBIC/Reno not being able to grow the congestion window when frames are sent one at a time (application limited). I don't believe BBR suffers from the same problem though due to the aforementioned rate estimator.
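For reference, the rate sample in that delivery-rate draft reduces to roughly this (simplified; field names are illustrative): divide the delivered bytes by the *longer* of the send interval and the ACK interval, so application-limited sending doesn't inflate the estimate:

```typescript
interface RateSampleInput {
  deliveredBytes: number; // bytes newly marked delivered over the interval
  sendElapsedMs: number;  // first-sent to last-sent time of that data
  ackElapsedMs: number;   // first-ack to last-ack time of that data
}

// rate = delivered / max(send_elapsed, ack_elapsed).
// Using the max of the two intervals is what keeps gaps in sending
// (application-limited periods) from producing an inflated sample.
function deliveryRateBps(s: RateSampleInput): number {
  const intervalMs = Math.max(s.sendElapsedMs, s.ackElapsedMs);
  if (intervalMs <= 0) return 0;
  return (s.deliveredBytes * 8 * 1000) / intervalMs;
}
```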

On Wed, Nov 3, 2021 at 10:05 AM Ali C. Begen <ali.begen@networked.media<mailto:ali.begen@networked.media>> wrote:


> On Nov 3, 2021, at 6:50 PM, Piers O'Hanlon <piers.ohanlon@bbc.co.uk<mailto:piers.ohanlon@bbc.co.uk>> wrote:
>
>
>
>> On 2 Nov 2021, at 20:39, Ali C. Begen <ali.begen=40networked.media@dmarc.ietf.org<mailto:40networked.media@dmarc.ietf.org>> wrote:
>>
>>
>>
>>> On Nov 2, 2021, at 3:39 AM, Luke Curley <kixelated@gmail.com<mailto:kixelated@gmail.com>> wrote:
>>>
>>> Hey folks, I wanted to quickly summarize the problems we've run into at Twitch that have led us to QUIC.
>>>
>>>
>>> Twitch is a live one-to-many product. We primarily focus on video quality due to the graphical fidelity of video games. Viewers can participate in a chat room, which the broadcaster reads and can respond to via video. This means that latency is also somewhat important to facilitate this social interaction.
>>>
>>> A looong time ago we were using RTMP for both ingest and distribution (Flash player). We switched to HLS for distribution to gain the benefit of 3rd party CDNs, at the cost of dramatically increasing latency. A later project lowered the latency of HLS using chunked-transfer delivery, very similar to LL-DASH (and not LL-HLS). We're still using RTMP for contribution.
>>>
> I guess Apple do also have their BYTERANGE/CTE mode for LL-HLS which is pretty similar to LL-DASH.

Yes, Apple can list the parts (chunks in LL-DASH) as byte ranges in the playlist, but the frequent playlist refresh and per-part retrieval process is unavoidable in LL-HLS, which is one of the main differences from LL-DASH (no manifest refresh needed, and one request per segment rather than per chunk).

>
>>>
>>> To summarize the issues with our current distribution system:
>>>
>>> 1. HLS suffers from head-of-line blocking.
>>> During congestion, the current segment stalls and is delivered slower than the encoded bitrate. The player has no recourse but to wait for the segment to finish downloading, risking depleting the buffer. It can switch down to a lower rendition at segment boundaries, but these boundaries occur too infrequently (every 2s) to handle sudden congestion. Trying to switch earlier, either by canceling the current segment or downloading the lower rendition in parallel, only exacerbates the issue.
>>
> Isn't the HoL limitation more down to the use of HTTP/1.1?
>
>> DASH has the concept of Resync points that were designed exactly for this purpose (allowing you to emergency downshift in the middle of a segment).
>>
> I was curious if there are any studies or experience of how resync points perform in practice?

Resync points are pretty fresh out of the oven. dash.js has them on the roadmap but not yet implemented (and we also need to generate test streams), so there is no data from real clients yet. But I suppose you can imagine how in-segment switching can help with sudden bandwidth drops, especially for long segments.

>
>>> 2. HLS has poor "auto" quality (ABR).
>>> The player is responsible for choosing the rendition to download. This is a problem when media is delivered frame-by-frame (i.e., HTTP chunked transfer), as we're effectively application-limited by the encoder bitrate. The player can only measure the arrival timestamp of data and does not know when the network can sustain a higher bitrate without just trying it. We hosted an ACM challenge for this issue in particular.
>>
> The limitation here may also be down to the lack of sufficiently accurate timing information about data arrivals in the browser - unfortunately the Streams API, which provides data from the fetch API, doesn’t timestamp data arrivals directly, so the JS app has to timestamp them itself, which suffers from noise such as scheduling delays - especially a problem for small/fast data arrivals.

Yes, you need to get rid of that noise (see LoL+).
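To make the measurement problem concrete, a sketch of what a web player is limited to (illustrative names; assumes the chunk source is exposed as an async iterable, with an injectable clock standing in for performance.now()):

```typescript
interface ChunkSample {
  bytes: number;
  atMs: number; // when *JS* observed the chunk, not when bytes arrived
}

// Stamp each chunk as the app sees it. The stamp includes browser/OS
// buffering and event-loop scheduling delay -- the noise discussed above.
async function sampleArrivals(
  chunks: AsyncIterable<Uint8Array>,
  now: () => number,
): Promise<ChunkSample[]> {
  const samples: ChunkSample[] = [];
  for await (const c of chunks) {
    samples.push({ bytes: c.byteLength, atMs: now() });
  }
  return samples;
}

// Crude throughput estimate between the first and last observed chunk;
// bytes of the first chunk are excluded since no interval precedes it.
function throughputBps(samples: ChunkSample[]): number {
  if (samples.length < 2) return 0;
  const bytes = samples.slice(1).reduce((n, s) => n + s.bytes, 0);
  const ms = samples[samples.length - 1].atMs - samples[0].atMs;
  return ms > 0 ? (bytes * 8 * 1000) / ms : 0;
}
```

When chunks are small and frequent, a few milliseconds of scheduling jitter in atMs swamps the real inter-arrival gaps, which is exactly why these JS-side estimates are so noisy.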

> I guess another issue could be that if the system is only sending single frames then the network transport may be operating in application limited mode so the cwnd doesn’t grow sufficiently to take advantage of the available capacity.

Unless the video bitrate is too low, this should not be an issue most of the time.

>
>> That exact challenge had three competing solutions, two of which are now part of the official dash.js code. And yes, the player can figure out what the network can sustain *without* trying higher bitrate renditions.
>> https://github.com/Dash-Industry-Forum/dash.js/wiki/Low-Latency-streaming
>> Or read the paper that even had “twitch” in its title here: https://ieeexplore.ieee.org/document/9429986
>>
> There was a recent study that seems to show that none of the current algorithms are that great for low latency, and the two new dash.js ones appear to lead to much higher levels of rebuffering:
> https://dl.acm.org/doi/pdf/10.1145/3458305.3478442

Brightcove’s paper uses the LoL and L2A algorithms from the challenge, where low latency was the primary goal. For Twitch’s own evaluation, I suggest you watch:
https://www.youtube.com/watch?v=rcXFVDotpy4
We later addressed the rebuffering issue and developed LoL+, which is the version now included in dash.js and explained at the ieeexplore link I gave above.

Copying the authors in case they want to add anything for the paper you cited.

-acbegen


>
> Piers
>
>>> I believe this is why LL-HLS opts to burst small chunks of data (sub-segments) at the cost of higher latency.
>>>
>>>
>>> Both of these necessitate a larger player buffer, which increases latency. The contribution system has its own problems, but let me sync up with that team first before I try to enumerate them.
>>> --
>>> Moq mailing list
>>> Moq@ietf.org<mailto:Moq@ietf.org>
>>> https://www.ietf.org/mailman/listinfo/moq
