Re: [rtcweb] An input for discussing congestion control (Fwd: New Version Notification for draft-alvestrand-rtcweb-congestion-00.txt)

Soo-Hyun Choi <> Tue, 04 October 2011 00:17 UTC

Return-Path: <>
Received: from localhost (localhost []) by (Postfix) with ESMTP id 8FADD21F8CF7 for <>; Mon, 3 Oct 2011 17:17:37 -0700 (PDT)
X-Virus-Scanned: amavisd-new at
X-Spam-Flag: NO
X-Spam-Score: -1.017
X-Spam-Status: No, score=-1.017 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, FM_FORGED_GMAIL=0.622, RCVD_IN_BL_SPAMCOP_NET=1.96, RCVD_IN_DNSWL_LOW=-1]
Received: from ([]) by localhost ( []) (amavisd-new, port 10024) with ESMTP id lNEWipVGj7BH for <>; Mon, 3 Oct 2011 17:17:36 -0700 (PDT)
Received: from ( []) by (Postfix) with ESMTP id 754E321F8C3C for <>; Mon, 3 Oct 2011 17:17:05 -0700 (PDT)
Received: by bkaq10 with SMTP id q10so6573093bka.31 for <>; Mon, 03 Oct 2011 17:20:08 -0700 (PDT)
Received: by with SMTP id b19mr284637bku.257.1317687608224; Mon, 03 Oct 2011 17:20:08 -0700 (PDT)
MIME-Version: 1.0
Received: by with HTTP; Mon, 3 Oct 2011 17:19:48 -0700 (PDT)
In-Reply-To: <>
References: <> <> <> <> <> <> <> <> <> <> <> <> <> <> <> <> <> <>
From: Soo-Hyun Choi <>
Date: Tue, 04 Oct 2011 09:19:48 +0900
Message-ID: <>
To: Randell Jesup <>, Henrik Lundin <>
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: quoted-printable
Subject: Re: [rtcweb] An input for discussing congestion control (Fwd: New Version Notification for draft-alvestrand-rtcweb-congestion-00.txt)
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: Real-Time Communication in WEB-browsers working group list <>
List-Unsubscribe: <>, <>
List-Archive: <>
List-Post: <>
List-Help: <>
List-Subscribe: <>, <>
X-List-Received-Date: Tue, 04 Oct 2011 00:17:37 -0000

On Mon, Oct 3, 2011 at 21:31, Randell Jesup <> wrote:
> I suspect in practice you'll see (outgoing) a gap in received packets (delay
> spike, while the radio in the handset changes access points), followed by
> packets coming in at ever-increasing delays (because the packets already
> queued to send are larger than will fit over the interface).  Incoming
> packets will likewise have queued "somewhere" in the network - depending on
> the type of handover, you could have a burst of lost packets or a delay
> spike, again (if over-bandwidth in that direction) followed by a series of
> ever-increasing delays.
> In either case the signal (once transmission is re-established) should be
> very clear, and the types of algorithms used in Google's draft should
> quickly identify a strong "over-bandwidth" signal in the data.  Bufferbloat
> ironically may help in quickly identifying it, since you're less likely to
> have a lot of losses followed by down-ticks in delay.  (Though some of the
> plans I have to modify the algorithm to extend across all the media streams
> would probably deal with that as well.)
> So, all-in-all I think the receivers at either end will likely identify the
> over-bandwidth condition within a small (4? 6? may depend on jitter) number
> of packets after an interface change.  Again, sharing the congestion control
> will help here (perhaps a lot) by putting all the datapoints into one
> bucket, so the signal will stabilize much faster.

I'd much appreciate it if the Google folks could present data points for
cases similar to the ones you mentioned, if they have any.
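The "over-bandwidth within a few packets" idea above can be sketched very
roughly as follows. This is a toy detector, not the Kalman-filter arrival
estimator in Google's draft; the threshold and run length are illustrative
stand-ins for the "4? 6?" figure in the quoted text.

```python
def detect_overuse(one_way_delays, threshold_ms=5.0, run_length=4):
    """Flag a strong over-bandwidth signal when `run_length` consecutive
    inter-packet delay increases each exceed `threshold_ms`.
    (Both parameter values are hypothetical, not from the draft.)"""
    increases = 0
    for prev, cur in zip(one_way_delays, one_way_delays[1:]):
        if cur - prev > threshold_ms:
            increases += 1
            if increases >= run_length:
                return True   # ever-increasing delays: queue is building
        else:
            increases = 0     # delay stopped growing; reset the run
    return False
```

A post-handover trace like [10, 12, 30, 55, 85, 120] (ms) trips the detector,
while jittery-but-stable delays do not; sharing data points across all media
streams, as suggested above, would just mean feeding one combined series in.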

> Ok, so once the sides have recognized the sudden over-bandwidth situation,
> how do we get out of it?  In AVPF we're likely able to send an RTCP message
> anytime we want, so the receiver should be able to send an RTCP as soon as
> possible.  We've already said we plan to use RFC 5506 to reduce sizes of
> feed back messages, so that will keep the size smaller.   It will be stuck
> behind any already-queued packets (hello, bufferbloat!)  but there's little
> we can do about that.
> This RTCP feedback could of course be lost, and the CC algorithm should take
> that into account when it's trying to transmit a significant change in
> bandwidth (at least a significant reduction) by sending updated feedback
> messages frequently until stability has been achieved.

Sure - any ACK-based CC mechanism is significantly influenced by the
reliability of the ACK flow.
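The "send updated feedback frequently until stability has been achieved"
strategy from the quoted text could look roughly like this. It is a hedged
sketch: `send_rtcp`, the interval, and the convergence tolerance are all
illustrative, and the real message would be a reduced-size RTCP report
(RFC 5506), not a Python callback.

```python
import time

def send_rate_feedback(send_rtcp, observed_sender_rate, target_bps,
                       interval_s=0.05, tolerance=0.1, max_tries=10):
    """Re-send a bandwidth-reduction notice until the sender's observed
    rate is within `tolerance` of `target_bps`, since any single feedback
    message may be lost. All parameter values are hypothetical."""
    for _ in range(max_tries):
        send_rtcp(target_bps)          # e.g. a TMMBR/REMB-style message
        time.sleep(interval_s)         # give the sender time to react
        if abs(observed_sender_rate() - target_bps) / target_bps <= tolerance:
            return True                # sender has converged on the target
    return False                       # feedback path may itself be broken
```

Repeating the message this way makes a significant rate reduction robust to
the loss of individual feedback packets, at the cost of a little extra RTCP
traffic stuck behind the same bufferbloated queue.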

> In some cases, if it's possible, it may be useful to know about a network
> switchover and perhaps reduce outgoing and ask for a reduction in incoming
> bandwidth to reduce the risk of a spike in delay over the switchover (at the
> cost of reduction in quality temporarily).
>> With ACK-clocked algorithms like TCP and TFWC the sender simply stops
>> sending packets when ACK's are not received anymore. Receiver based algos
>> are a bit more complicated as the risk is higher that the sender will
>> continue to send packets for some time even though the channel throughput
>> has dropped considerably, resulting in excessive congestion somewhere along
>> the path.
>> Is this a problem?  I don't know, I guess time will tell.

ACK-clock based CC algos are simpler to implement in real apps, as they do
not require complicated non-linear computations.
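The self-limiting behavior described above ("the sender simply stops sending
packets when ACKs are not received anymore") can be illustrated with a
minimal sketch. This is not TCP or TFWC itself, just the ACK-clock idea:
each new packet beyond the window is released only by an incoming ACK, so a
stalled ACK flow (e.g. during a handover) stalls the sender within one
window. The window size here is an arbitrary illustrative value.

```python
def ack_clocked_send(packets, ack_stream, cwnd=4):
    """Send up to `cwnd` unacked packets; beyond that, release the next
    packet only when an ACK arrives.  `ack_stream` yields one value per
    received ACK; exhaustion means ACKs have stopped."""
    sent, in_flight = [], 0
    acks = iter(ack_stream)
    for p in packets:
        if in_flight >= cwnd:
            if next(acks, None) is None:  # ACK flow dried up -> sender stalls
                break
            in_flight -= 1                # one ACK frees one window slot
        sent.append(p)
        in_flight += 1
    return sent
```

With ten packets to send but only two ACKs ever arriving, the sender emits
its initial window plus two clocked-out packets and then stops, which is
exactly why a receiver-based algorithm, lacking this built-in clock, runs a
higher risk of over-sending into a collapsed channel.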