Re: RTP over QUIC experiments

Joerg Ott <> Mon, 15 November 2021 20:19 UTC

Date: Mon, 15 Nov 2021 21:19:28 +0100
Subject: Re: RTP over QUIC experiments
To: Ingemar Johansson S <>, "" <>, IETF QUIC WG <>
Cc: Ingemar Johansson S <>, "" <>
From: Joerg Ott <>

Hi Ingemar,

On 12.11.21 17:28, Ingemar Johansson S wrote:
> Hi Jörg, Mathis + others
> It was nice to learn about your activity to try and use SCReAM as an 
> example algorithm to integrate with QUIC. Pages 14-25 in
> <>
> Did you use the new gstreamer plugin from 
> <>  ?

Well, we use your C++ version (forked a while ago), not the plugin you
refer to.  This experiment has been ongoing for some time already.

> Observations/Comments:
> + SCReAM + Reno : Strange that the throughput dropped like that, but 
> perhaps it is an unlucky outcome of two cascaded congestion controllers.

Nested control loops may not play out that well, and this seems to be
just one artifact of that.

> + Split of network congestion control and media rate control : QUIC 
> already has congestion control at the connection level today; it is 
> then up to the individual streams to deliver media, subject to the 
> individual stream priorities. SCReAM is quite similar in that respect; 
> one difference is perhaps the implementation of the media rate control.

It is, but it attends to the specific needs of real-time media, which
cannot really be said for New Reno and many others.

> I think that with QUIC one should do a full split and do the network 
> congestion control at the QUIC connection level. The congestion control 
> would then be some low-latency version, perhaps BBRv2? or something 
> similar. I am not sure that the network congestion control in SCReAM is 
> the ideal choice here as it is quite tailored to RTP media.

We would have tried BBR(v2) if it were available in quic-go; it is on
our list, but there is only so much you can do at a time :-)

> The media rate control is done on the stream level and is then subject 
> to stream priority. This should give a cleaner split of functionality.

Right, we are currently exploring the basic combinations with lots of
tracing, trying to understand who impacts whom and how, and trying to
disentangle implementation specifics from protocol aspects.  So take
this as a first step of a more comprehensive evaluation of where we are.
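To make the intended split concrete, a rough sketch (in Go, since we
work with quic-go; all names here are illustrative, not taken from any
real QUIC stack): the connection-level congestion controller produces a
single aggregate rate estimate, and each media stream derives its target
bitrate from its priority share of that rate.

```go
package main

import "fmt"

// Stream is a hypothetical media stream with a relative priority weight.
type Stream struct {
	Name     string
	Priority float64 // relative weight, higher = larger share
}

// allocate splits the connection-level rate estimate across streams in
// proportion to their priorities. The stream-level media rate control
// would then target its share, while the connection-level controller
// remains the single authority on the total sending rate.
func allocate(connRateBps float64, streams []Stream) map[string]float64 {
	var total float64
	for _, s := range streams {
		total += s.Priority
	}
	out := make(map[string]float64)
	for _, s := range streams {
		out[s.Name] = connRateBps * s.Priority / total
	}
	return out
}

func main() {
	streams := []Stream{{"video", 3}, {"audio", 1}}
	fmt.Println(allocate(4_000_000, streams))
}
```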

A next step is understanding how you make the two work together so that
you can preserve the fact that QUIC congestion-controls everything it
sends; this will then go more into integration and API questions.

> My SCReAM experience is that one needs to leak some of the congestion 
> signals from the connection-level congestion control up to the stream 
> rate control to make the whole thing responsive enough. In the SCReAM 
> code one can see that the exotic variable queueDelayTrend as well as ECN 
> marks and loss events are used for this purpose. I believe that 
> something like that is needed for RTP (or whatever low-latency) media 
> over QUIC. I believe that it is necessary to leak congestion information 
> from the connection level up to the stream level, especially to be able 
> to exploit L4S fully, even though it is a bit of a protocol layer violation.

We are all in for leaking (well: sharing) useful information, and one of
the main questions we tried to address is how much RTCP signaling for CC
you would need; it seems we can already do pretty well with what QUIC
has built in.
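For what it's worth, the kind of signal sharing you describe could look
roughly like this in Go. The type and the thresholds are hypothetical,
loosely modeled on the queueDelayTrend/ECN/loss inputs you mention, not
taken from SCReAM or any real implementation:

```go
package main

import "fmt"

// CongestionSignals is a hypothetical summary the connection-level
// controller could expose to each stream's media rate control.
type CongestionSignals struct {
	QueueDelayTrend float64 // 0..1, how strongly the queue delay is rising
	ECNMarked       bool    // CE mark observed since the last report
	LossEvent       bool    // loss detected since the last report
}

// adjustRate nudges a stream's target bitrate based on the shared
// signals: back off hard on loss, more gently on ECN or a rising queue,
// and probe upward slowly when the path looks stable. The multipliers
// are illustrative placeholders.
func adjustRate(targetBps float64, s CongestionSignals) float64 {
	switch {
	case s.LossEvent:
		return targetBps * 0.7
	case s.ECNMarked:
		return targetBps * 0.9
	case s.QueueDelayTrend > 0.2:
		return targetBps * 0.95
	default:
		return targetBps * 1.05
	}
}

func main() {
	fmt.Println(adjustRate(1_000_000, CongestionSignals{LossEvent: true}))
}
```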

This helps when using RTP with one of its congestion control algorithms
on top, but we hope it could also help us understand what you would need
to build an RTP++ (or: MOQ) on top of QUIC without all the legacy
baggage (if that turns out to be useful).

> + Stream prioritization : … is a problematic area, especially if one 
> stream is low-latency video and another stream is a large chunk of data 
> for e.g. a large web page. With a simple round-robin scheduler, the 
> stream with the large chunk of data will easily win because it is quite 
> likely to always have data to transmit, so some WRR (weighted round 
> robin) is needed. I have even had problems with the algorithm in SCReAM 
> that prioritizes between two cameras/video coders, because the two 
> cameras see different views and thus have differing information 
> content/compression needs.

This is an interesting data point I'd love to learn more about.  I'm not
surprised that scheduling would turn out to be one of the harder nuts to
crack.
> + Page 18 : Inferring the receive timestamp. What I suspect is that you 
> will essentially halve the estimated queue delay (I assume here that the 
> reverse path is uncongested). One alternative could be to compute
> receive-ts = send-ts + latest_rtt + min_rtt
> where min_rtt is the min RTT over a given time interval

That could work.  This is still an area of experimentation (min_rtt may
have its own issues, but that remains to be seen).  QUIC receive
timestamps may help us further.  So far, we have mostly investigated
different ways of interpolating and tried various alternatives.