Re: [rtcweb] An input for discussing congestion control (Fwd: New Version Notification for draft-alvestrand-rtcweb-congestion-00.txt)

Randell Jesup <> Mon, 03 October 2011 12:32 UTC


On 10/3/2011 6:55 AM, Ingemar Johansson S wrote:
> I guess one thing that should be considered is that a mobile user may 
> make a handover between different radio access types with large 
> differences in, e.g., throughput. You may have scenarios such as a 
> handover from LTE to GPRS, or perhaps a handover from WiFi to HSPA as 
> a user walks away from a café. I would expect these scenarios to 
> become very common in the future. It is also worth mentioning that 
> handover between WiFi and 3GPP is standardized in 3GPP, which in 
> practice means that this will ultimately happen without end-user 
> interaction. What the endpoints will notice is a potentially large 
> and sudden change in throughput.
> In cases like these, throughput may drop rapidly. Of course you can 
> sense this with the outlined algorithms that signal feedback only when 
> needed, but that assumes you receive packets from which you can infer 
> enough statistics. The problem is that the receiver side doesn't 
> really know that it is about to receive a packet.

I suspect in practice you'll see (outgoing) a gap in received packets 
(a delay spike while the radio in the handset changes access points), 
followed by packets coming in at ever-increasing delays (because the 
packets already queued to send exceed what will fit over the new 
interface).  Incoming packets will likewise have queued "somewhere" in 
the network - depending on the type of handover, you could have a burst 
of lost packets or a delay spike, again followed (if over-bandwidth in 
that direction) by a series of ever-increasing delays.
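A minimal sketch of that detection idea (my own illustration, not code
from Google's draft; the function name and the run-length threshold are
assumptions): the receiver flags an over-bandwidth condition when
packets consistently arrive spaced further apart than they were sent,
i.e. a queue is growing along the path.

```python
# Hypothetical sketch: detect an "over-bandwidth" condition from a run
# of ever-increasing one-way delays, such as after a radio handover.

def over_bandwidth(arrival_deltas, send_deltas, run_length=4):
    """Return True if inter-arrival spacing has exceeded inter-send
    spacing for `run_length` consecutive packets, i.e. the path can no
    longer drain packets as fast as they are being sent."""
    growing = 0
    for arr, snd in zip(arrival_deltas, send_deltas):
        if arr > snd:          # packet spaced out further than it was sent
            growing += 1
            if growing >= run_length:
                return True
        else:
            growing = 0
    return False
```

With deltas in milliseconds, four packets each arriving later than their
send spacing would trip the detector; a single jitter blip resets the run.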

In either case the signal (once transmission is re-established) should 
be very clear, and the types of algorithms used in Google's draft should 
quickly identify a strong "over-bandwidth" signal in the data.  
Bufferbloat may ironically help in quickly identifying it, since you're 
less likely to have a lot of losses followed by down-ticks in delay.  
(Though some of the plans I have to modify the algorithm to extend 
across all the media streams would probably deal with that as well.)

So, all-in-all I think the receivers at either end will likely identify 
the over-bandwidth condition within a small (4? 6? may depend on jitter) 
number of packets after an interface change.  Again, sharing the 
congestion control will help here (perhaps a lot) by putting all the 
datapoints into one bucket, so the signal will stabilize much faster.
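As a rough illustration of what "one bucket" could mean (the class and
method names are my own, not from the draft): delay samples from every
media stream in the call feed one shared estimator, so a trend estimate
stabilizes after fewer packets than any per-stream estimate would.

```python
# Hypothetical sketch: pool delay samples from all media streams into a
# single series and estimate the shared-path delay trend from it.

class SharedDelayEstimator:
    def __init__(self):
        self.samples = []              # (recv_time_ms, delay_ms), any stream

    def add(self, recv_time_ms, delay_ms):
        self.samples.append((recv_time_ms, delay_ms))
        self.samples.sort()            # keep in arrival order

    def trend(self, n=8):
        """Delay change per ms over the n most recent samples, pooled
        across streams; a clearly positive value suggests the shared
        path is over-bandwidth."""
        recent = self.samples[-n:]
        if len(recent) < 2:
            return 0.0
        dt = (recent[-1][0] - recent[0][0]) or 1.0
        return (recent[-1][1] - recent[0][1]) / dt
```

Because audio and video traverse the same path after a handover, their
interleaved samples all carry the same signal, which is why pooling them
tightens the estimate.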

Ok, so once the sides have recognized the sudden over-bandwidth 
situation, how do we get out of it?  In AVPF we're likely able to send 
an RTCP message anytime we want, so the receiver should be able to send 
an RTCP as soon as possible.  We've already said we plan to use RFC 5506 
to reduce the size of feedback messages, so that will keep them 
small.  The feedback will still be stuck behind any already-queued 
packets (hello, bufferbloat!), but there's little we can do about that.

This RTCP feedback could of course be lost, so the CC algorithm should 
take that into account when it's trying to communicate a significant 
change in bandwidth (at least a significant reduction), by sending 
updated feedback messages frequently until stability has been achieved.
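One way to make that repeat-until-stable behavior concrete (a sketch
under my own assumptions; the 100 ms interval and 10% tolerance are
illustrative, not from any spec): keep re-sending the rate-reduction
request until the observed incoming rate drops near the requested one.

```python
# Hypothetical sketch: schedule repeated feedback messages for a
# requested rate reduction, stopping once the sender has complied,
# so that the loss of any single RTCP message is harmless.

def feedback_schedule(requested_bps, observed_bps_samples,
                      interval_ms=100, tolerance=0.1):
    """Return the send times (ms) of repeated feedback messages; stop
    once the observed rate is within `tolerance` of the request."""
    t = 0
    times = []
    for observed in observed_bps_samples:
        times.append(t)
        if observed <= requested_bps * (1 + tolerance):
            break              # sender has reduced its rate; stop nagging
        t += interval_ms
    return times
```

For example, asking a 2 Mbps sender to drop to 500 kbps would produce
feedback at 0, 100, and 200 ms if the sender converges on the third
measurement.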

In some cases, if it's possible, it may be useful to know about a 
network switchover in advance, and perhaps reduce outgoing bandwidth 
and ask for a reduction in incoming bandwidth, to reduce the risk of a 
spike in delay over the switchover (at the cost of a temporary 
reduction in quality).

> With ACK-clocked algorithms like TCP and TFWC, the sender simply stops 
> sending packets when ACKs are no longer received. Receiver-based 
> algorithms are a bit more complicated, as the risk is higher that the 
> sender will continue to send packets for some time even though the 
> channel throughput has dropped considerably, resulting in excessive 
> congestion somewhere along the path.
> Is this a problem? I don't know; I guess time will tell.
> /Ingemar
>     ------------------------------------------------------------------------
>     *From:* Henrik Lundin []
>     *Sent:* den 3 oktober 2011 10:13
>     *To:* Jim Gettys
>     *Cc:* Randell Jesup;
>     *Subject:* Re: [rtcweb] An input for discussing congestion control
>     (Fwd: New Version Notification for
>     draft-alvestrand-rtcweb-congestion-00.txt)
>     Sorry for my late response. (I've been away for some time.) I'd
>     just like to add my two cents to the discussion on feedback latency.
>     Frequent and rapid feedback is important to ensure stability of
>     the CC; I think we all agree. However, with an algorithm similar
>     to the suggested one, having a receive-side processing and
>     analysis function, the key feature is that feedback _can_ be sent
>     quickly when needed. When everything is ok, feedback can be quite
>     sparse, as long as the rate increments are somewhat adjusted for
>     the system response time (including the feedback latency). When
>     congestion is detected by the receiver, it is important that a
>     feedback message can be sent more or less immediately (say, within
>     100 ms). However, I do not see the need for constant feedback
>     every RTT or so.
>     /Henrik
>     On Wed, Sep 21, 2011 at 3:43 PM, Jim Gettys <
>     <>> wrote:
>         On 09/21/2011 09:28 AM, Randell Jesup wrote:
>>         On 9/21/2011 4:23 AM, Harald Alvestrand wrote:
>>>         I think receiver->sender reporting every RTT (or every
>>>         packet, which is frequently less frequent) is overkill, but
>>>         that's a statement with a lot of gut feeling and very few
>>>         numbers behind it.
>>>         One advantage we have in RTCWEB is that we can assume that
>>>         if audio and video work OK across the network, we're in a
>>>         good place. We don't have to worry about getting gigabyte
>>>         file transfers to utilize 90% of the link - even though we
>>>         have to worry about audio and video functioning while those
>>>         gigabyte transfers are taking place.
>>         Agreed.  Also, in practice the TCP flows we're competing with
>>         are rarely long-lived
>>         high-bandwidth flows like GB file transfers.  Normally
>>         they're flurries of short-lived TCP
>>         (which is important to consider since these short-lived flows
>>         can suddenly cause buffering
>>         without warning).
>         You get to deal with some of each. Both cause havoc in the
>         face of bufferbloat.  The long lived flows keep the buffers in
>         your OS/Home router/broadband gear near full, inserting lots
>         of delay.  This includes doing backups (local or remote),
>         uploading videos, downloading videos to disk, bittorrent, etc.
>         The netalyzr data is quite grim, particularly when you realise
>         it's a lower bound on the problem (the netalyzr test is
>         sensitive to cross traffic and more importantly, tops out by
>         the time it gets to 20Mbps).
>         As far as the transient bufferbloat problem caused by web
>         traffic, and why IW10 is a problem in my view at this time, see:
>>         As for 1 feedback/RTT, I agree.  And if you wanted to use one
>>         feedback/RTT, I'd put the feedback in
>>         a TCP header extension or design an RTP equivalent that can
>>         carry it in the reverse-direction
>>         media flow (when available).  But that's a different argument.
>         I like timestamps, if only to make it easy to tell the user:
>         you are losing, and it's because of your broken network.
>         For TCP, the TCP timestamp option is on by default in Linux,
>         and I think may be on by default on other modern systems
>         (anyone have any data?).
>                                 - Jim
>         Other protocols may not be so nice.
>         _______________________________________________
>         rtcweb mailing list
> <>
>     -- 
>     Henrik Lundin | WebRTC Software Eng |
>     <> |+46 70 646 13 41

Randell Jesup