Re: [rtcweb] An input for discussing congestion control (Fwd: New Version Notification for draft-alvestrand-rtcweb-congestion-00.txt)

Stefan Holmer <> Mon, 03 October 2011 12:54 UTC

Date: Mon, 3 Oct 2011 14:57:38 +0200
From: Stefan Holmer <>
To: Randell Jesup <>

On Mon, Oct 3, 2011 at 2:31 PM, Randell Jesup <> wrote:

> On 10/3/2011 6:55 AM, Ingemar Johansson S wrote:
>> I guess one thing that should be considered is that a mobile user may make
>> handover between different radio access types with large differences in
>> terms of e.g. throughput. You may have scenarios such as handover from LTE to
>> GPRS, or perhaps handover from WiFi to HSPA if a user walks away
>> from a café. I would expect that these scenarios will become very common in
>> the future. It is also worth mentioning that handover between WiFi and 3GPP is
>> standardized in 3GPP, which in practice means that this will ultimately
>> happen without end user interaction. What the endpoints will notice is
>> potentially a large and sudden change in throughput.
>> In cases like these throughput may drop rapidly. Of course you can sense
>> this with the outlined algorithms that signal feedback only when needed, but
>> that assumes that you receive packets from which you can infer enough
>> statistics. The problem is that the receiver side doesn't really know that
>> it is about to receive a packet.
> I suspect in practice you'll see (outgoing) a gap in received packets
> (delay spike, while the radio in the handset changes access points),
> followed by packets coming in at ever-increasing delays (because the packets
> already queued to send are larger than will fit over the interface).
>  Incoming packets will likewise have queued "somewhere" in the network -
> depending on the type of handover, you could have a burst of lost packets or
> a delay spike, again (if over-bandwidth in that direction) followed by a
> series of ever-increasing delays.

We could get some of the benefits of ACK windows by adding the requirement
that RTCP must be sent periodically (every X ms), since periodic RTCP
packets can be used as a form of accumulated ACKs. The sender is only
allowed to transmit at a rate less than or equal to the rate specified by
TMMBR, and must decrease the rate if any of the following is true:

1. No RTCP packet arrives within Y ms (where Y > X).
2. The RTCP packet indicates that the receiver hasn't received any data since
the previous RTCP.
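A minimal sender-side sketch of those two rules might look like the following. (Illustrative Python only; the class name, the Y_MS/MIN_RATE constants, and the halving policy are invented for the example, not taken from any draft.)

```python
class RtcpWatchdog:
    """Sketch of the two rules above: back off the send rate if no RTCP
    report arrives within Y ms, or if a report shows that the receiver
    made no progress since the previous RTCP."""

    Y_MS = 400         # feedback timeout; must exceed the RTCP interval X
    MIN_RATE = 32_000  # rate floor in bits/s, arbitrary for the sketch

    def __init__(self, initial_rate, tmmbr_limit, now_ms=0.0):
        # Never exceed the rate the receiver advertised via TMMBR.
        self.rate = min(initial_rate, tmmbr_limit)
        self.tmmbr_limit = tmmbr_limit
        self.last_rtcp_ms = now_ms
        self.last_acked_seq = -1

    def on_rtcp(self, highest_seq_received, now_ms):
        # Rule 2: the report shows no new data arrived since the last RTCP.
        if highest_seq_received <= self.last_acked_seq:
            self.rate = max(self.rate // 2, self.MIN_RATE)
        self.last_acked_seq = highest_seq_received
        self.last_rtcp_ms = now_ms

    def allowed_rate(self, now_ms):
        # Rule 1: no RTCP within Y ms -> back off.
        if now_ms - self.last_rtcp_ms > self.Y_MS:
            self.rate = max(self.rate // 2, self.MIN_RATE)
            self.last_rtcp_ms = now_ms  # avoid halving again on every call
        return min(self.rate, self.tmmbr_limit)
```

The point of the sketch is that the periodic reports double as a keep-alive: losing them is itself a congestion signal, even before any delay statistics can be computed.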

> In either case the signal (once transmission is re-established) should be
> very clear, and the types of algorithms used in Google's draft should
> quickly identify a strong "over-bandwidth" signal in the data.  Bufferbloat
> ironically may help in quickly identifying it, since you're less likely to
> have a lot of losses followed by down-ticks in delay.  (Though some of the
> plans I have to modify the algorithm to extend across all the media streams
> would probably deal with that as well.)
> So, all-in-all I think the receivers at either end will likely identify the
> over-bandwidth condition within a small (4? 6? may depend on jitter) number
> of packets after an interface change.  Again, sharing the congestion control
> will help here (perhaps a lot) by putting all the datapoints into one
> bucket, so the signal will stabilize much faster.
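As a concrete illustration of that receive-side signal: even a toy rising-delay check fires within a handful of packets on a post-handover trace. (This is a deliberately simplified sketch; the draft's actual detector is a Kalman-filtered delay gradient, and the window and threshold here are made up.)

```python
from collections import deque

def over_bandwidth(delays, window=5, slope_ms=2.0):
    """Flag over-bandwidth when the last `window` one-way-delay samples
    (in ms) rise steadily -- i.e. every step is positive and the average
    growth is large enough to stand out from ordinary jitter."""
    if len(delays) < window:
        return False
    recent = list(delays)[-window:]
    rises = [b - a for a, b in zip(recent, recent[1:])]
    return all(r > 0 for r in rises) and sum(rises) / len(rises) >= slope_ms

# A post-handover trace: queuing delay ramps up packet by packet
# because the already-queued packets exceed the new interface's rate.
samples = deque([20, 21, 19, 25, 32, 41, 52, 65], maxlen=16)
```

With samples like these, the "ever-increasing delays" described above dominate the jitter almost immediately, which is why a small number of packets should suffice to detect the condition.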
> Ok, so once the sides have recognized the sudden over-bandwidth situation,
> how do we get out of it?  In AVPF we're likely able to send an RTCP message
> anytime we want, so the receiver should be able to send an RTCP as soon as
> possible.  We've already said we plan to use RFC 5506 to reduce sizes of
> feedback messages, so that will keep the size smaller. It will be stuck
> behind any already-queued packets (hello, bufferbloat!)  but there's little
> we can do about that.
> This RTCP feedback could of course be lost, and the CC algorithm should
> take that into account when it's trying to transmit a significant change in
> bandwidth (at least a significant reduction) by sending updated feedback
> messages frequently until stability has been achieved.
> In some cases, if it's possible, it may be useful to know about a network
> switchover in advance, and to preemptively reduce outgoing bandwidth and ask
> for a reduction in incoming bandwidth, to reduce the risk of a spike in delay
> over the switchover (at the cost of a temporary reduction in quality).
>  With ACK-clocked algorithms like TCP and TFWC, the sender simply stops
>> sending packets when ACKs are no longer received. Receiver-based algorithms
>> are a bit more complicated, as the risk is higher that the sender will
>> continue to send packets for some time even though the channel throughput
>> has dropped considerably, resulting in excessive congestion somewhere along
>> the path.
>> Is this a problem? I don't know; I guess time will tell.
>> /Ingemar
>>    ------------------------------------------------------------------------
>>    *From:* Henrik Lundin []
>>    *Sent:* den 3 oktober 2011 10:13
>>    *To:* Jim Gettys
>>    *Cc:* Randell Jesup;
>>    *Subject:* Re: [rtcweb] An input for discussing congestion control
>>    (Fwd: New Version Notification for
>>    draft-alvestrand-rtcweb-congestion-00.txt)
>>    Sorry for my late response. (I've been away for some time.) I'd
>>    just like to add my two cents to the discussion on feedback latency.
>>    Frequent and rapid feedback is important to ensure stability of
>>    the CC; I think we all agree. However, with an algorithm similar
>>    to the suggested one, having a receive-side processing and
>>    analysis function, the key feature is that feedback _can_ be sent
>>    quickly when needed. When everything is ok, feedback can be quite
>>    sparse, as long as the rate increments are somewhat adjusted for
>>    the system response time (including the feedback latency). When
>>    congestion is detected by the receiver, it is important that a
>>    feedback message can be sent more or less immediately (say, within
>>    100 ms). However, I do not see the need for constant feedback
>>    every RTT or so.
>>    /Henrik
>>    On Wed, Sep 21, 2011 at 3:43 PM, Jim Gettys <
>>    <>> wrote:
>>        On 09/21/2011 09:28 AM, Randell Jesup wrote:
>>>        On 9/21/2011 4:23 AM, Harald Alvestrand wrote:
>>>>        I think receiver->sender reporting every RTT (or every
>>>>        packet, which is frequently less frequent) is overkill, but
>>>>        that's a statement with a lot of gut feeling and very few
>>>>        numbers behind it.
>>>>        One advantage we have in RTCWEB is that we can assume that
>>>>        if audio and video work OK across the network, we're in a
>>>>        good place. We don't have to worry about getting gigabyte
>>>>        file transfers to utilize 90% of the link - even though we
>>>>        have to worry about audio and video functioning while those
>>>>        gigabyte transfers are taking place.
>>>        Agreed.  Also, in practice the TCP flows we're competing with
>>>        are rarely long-lived
>>>        high-bandwidth flows like GB file transfers.  Normally
>>>        they're flurries of short-lived TCP
>>>        (which is important to consider since these short-lived flows
>>>        can suddenly cause buffering
>>>        without warning).
>>        You get to deal with some of each. Both cause havoc in the
>>        face of bufferbloat.  The long lived flows keep the buffers in
>>        your OS/Home router/broadband gear near full, inserting lots
>>        of delay.  This includes doing backups (local or remote),
>>        uploading videos, downloading videos to disk, bittorrent, etc.
>>        The netalyzr data is quite grim, particularly when you realise
>>        it's a lower bound on the problem (the netalyzr test is
>>        sensitive to cross traffic and more importantly, tops out by
>>        the time it gets to 20Mbps).
>> glasse-must-not-throw-stones-at-another/<>
>>        As far as the transient bufferbloat problem caused by web
>>        traffic, and why IW10 is a problem in my view at this time, see:
>> harmful-00.txt<>
>>>        As for 1 feedback/RTT, I agree.  And if you wanted to use one
>>>        feedback/RTT, I'd put the feedback in
>>>        a TCP header extension or design an RTP equivalent that can
>>>        carry it in the reverse-direction
>>>        media flow (when available).  But that's a different argument.
>>>         I like timestamps, if only to make it easy to tell the user:
>>        you are losing, and it's because of your broken network.
>>        For TCP, the TCP timestamp option is on by default in Linux,
>>        and I think may be on by default on other modern systems
>>        (anyone have any data?).
>>                                - Jim
>>        Other protocols may not be so nice.
>>        _______________________________________________
>>        rtcweb mailing list
>> <>
>>    -- Henrik Lundin | WebRTC Software Eng | <> | +46 70 646 13 41
> --
> Randell Jesup