Re: [rtcweb] rtcweb Digest, Vol 7, Issue 16

Stefan Holmer <> Thu, 08 September 2011 08:12 UTC


> ---------- Forwarded message ----------
> From: Randell Jesup <>
> To:
> Date: Tue, 06 Sep 2011 18:21:42 -0400
> Subject: Re: [rtcweb] An input for discussing congestion control (Fwd: New
> Version Notification for draft-alvestrand-rtcweb-congestion-00.txt)
> On 9/5/2011 6:09 AM, Harald Alvestrand wrote:
>> There is a congestion control algorithm inside the Google WebRTC codebase
>> that hasn't been documented publicly before, and might be interesting as
>> input when we get around to discussing what congestion control should be
>> mandatory-to-implement in this group.
> Where is this in the webrtc drop?  I looked but didn't see anything hooked
> up to RTPReceiverVideo::EstimateBandwidth().  There's something in this
> genre for iSAC, though I didn't have time to look closely; it appears to
> be somewhat specialized for iSAC.

Links to the code are available in this discuss-webrtc post.

>> It's not forwarded as a candidate for what the result should be; I think
>> there can be better solutions (see the "further work" section in this
>> draft).
> Harald et al: this definitely answers my suggestion/request that we make
> congestion control mandatory, and that we target something similar to
> Radvision's NetSense, to which this appears to have similar characteristics.
> A few comments:
> In 3. (Receiver-side):
>   When an over-use is detected the system transitions to the decrease
>   state, where the available bandwidth estimate is decreased to a
>   factor times the currently incoming bit rate.
>     A_hat(i) = alpha*R_hat(i)
>   alpha is typically chosen to be in the interval [0.8, 0.95].
> May I suggest that there's more information available here than a single
> bit of "over-bandwidth".  We have an estimate here how *much* we're
> over-bandwidth, though cross-traffic and other fixed streams (audio) are
> part of that too, so it needs to be used with a grain of salt - but that
> slope is useful information.

You are right about that. I think it all boils down to how long you can
tolerate waiting for the queues to drain.
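The decrease-state update quoted above amounts to a one-line rule; a minimal sketch, assuming alpha = 0.9 (the constant and names are illustrative picks from the draft's suggested interval, not the WebRTC code):

```python
# Receiver-side decrease state: on over-use, the available-bandwidth
# estimate A_hat(i) is set to alpha times the currently incoming bit
# rate R_hat(i). alpha = 0.9 is an assumed value from [0.8, 0.95].

ALPHA = 0.9

def on_overuse_detected(incoming_rate_bps: float) -> float:
    """Return the new available-bandwidth estimate A_hat(i) = alpha * R_hat(i)."""
    return ALPHA * incoming_rate_bps
```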

>   In either case we want to
>   measure the highest incoming rate during the under-use interval:
>     R_max = max{R_hat(i)} for i in 1..K
>   where K is the number of frames of under-use before returning to the
>   normal state.  R_max is a measure of the actual bandwidth available
>   and is a good guess of what bit rate we should be able to transmit
>   at.  Therefore the available bandwidth will be set to Rmax when we
>   transition from the hold state to the increase state.
> This is good, but it might make sense to explain why this is the case (as
> the draft does for other parts); I assume the argument is that if the delay
> is reducing after a bottleneck, then from the router that was buffering to
> us (usually just on the far end of the bottleneck) packets are flowing
> through from there as fast as they can.
> In fact, the rate they're getting through *while* you're over-bandwidth is
> actually the best estimate of total bandwidth you're likely to get.  So in
> fact you can estimate it well in all cases except when you're in "increase"
> mode (buffers drained and stable), pretty much.

I agree. Another reason for estimating the total bandwidth while the queues
are draining is that it gives a more recent estimate than the one we got
while over-using.
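Measuring R_max over the under-use interval can be sketched as a small tracker (names are illustrative, not the WebRTC implementation's):

```python
class UnderUseRateTracker:
    """Tracks R_max = max{R_hat(i)} for i in 1..K over the K frames of
    an under-use interval. R_max is a fresh estimate of the available
    bandwidth, measured while the queues are draining."""

    def __init__(self):
        self.r_max = 0.0

    def on_under_use_frame(self, r_hat_bps: float) -> None:
        # Record the highest per-frame incoming rate seen so far.
        self.r_max = max(self.r_max, r_hat_bps)

    def take(self) -> float:
        # Read R_max when leaving the hold state, then reset for the
        # next under-use interval.
        r, self.r_max = self.r_max, 0.0
        return r
```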

> I would also suggest that the rate to use is R_max * gamma (where gamma <
> 1.0)

Yes, I definitely agree here too, and that is actually what our
implementation does. We should update the draft with that.
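With that change, the hold-to-increase transition sets the estimate to gamma * R_max rather than R_max itself; a sketch with an assumed gamma of 0.95:

```python
GAMMA = 0.95  # assumed value; any gamma < 1.0 leaves headroom below R_max

def on_hold_to_increase(r_max_bps: float) -> float:
    """Available-bandwidth estimate when leaving the hold state:
    A_hat = gamma * R_max, staying slightly below the measured peak."""
    return GAMMA * r_max_bps
```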

> BTW, the description of directions here is confusing; the receiver is
> determining the apparent bandwidth the sender should be using; "we should be
> able to transmit at" is both wrong and confusing (probably should be "the
> sender should be able to transmit at").
I agree.

> Sender-side control:  10% seems rather high.  My experience was more
> aggressive in the face of loss, both in the limit at which it reacts and the
> amount it reduces - over maybe 5% loss I would cut sender bandwidth by 10 or
> 15% on top of whatever the slope told me to do.
Yes, these values can probably be better tuned. The purpose of the send-side
estimator is to be a last way out if the receive-side estimator fails. The
receiver has a better picture of whether the packet losses are a result of
congestion or not, and as an improvement I think it makes sense to
incorporate packet losses into the receive-side estimator (as you mentioned).
> Interop -  the receiver could incorporate packet loss into the estimate
> used for TMMBR, which might improve talking to senders who don't follow this
> algorithm.  We should consider how to handle cases that involve interop and
> if and how to detect them; the algorithm may want to be different for those
> cases.
I think interop with endpoints which use other algorithms will work okay in
one direction, as long as they handle incoming TMMBR correctly. However, if
the endpoint we're trying to interop with doesn't produce good estimates for
TMMBR (close to the available bandwidth), we will rely only on the send-side
estimator.
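One simple way to fold loss into the advertised value, per Randell's interop suggestion, is for the receiver to advertise the more conservative of a delay-based and a loss-based estimate in TMMBR (a sketch under that assumption, not the WebRTC implementation):

```python
def advertised_tmmbr_bps(delay_based_bps: float, loss_based_bps: float) -> float:
    """Advertise the smaller of the two receiver-side estimates via
    TMMBR (RFC 5104), so that senders which don't run the delay-based
    algorithm still get a congestion-aware cap when loss indicates
    congestion."""
    return min(delay_based_bps, loss_based_bps)
```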

> Future:
> We very much want to merge all the info we can from all the streams, so we
> can control where the bandwidth restrictions are applied instead of running
> it on N streams independently.  (For example, if we're sending two video
> streams we may want to apportion the bandwidth according to their
> size/framerate instead of equally, and we'll very much want to consider for
> data channels (and perhaps media) a 'priority' factor a la RTMFP.)
> I'll have more comments, but I need to turn into a pumpkin now and wanted
> to get what I have on the wire.  I haven't spent a lot of time fleshing them
> out or editing, so take them as a starting point for discussion.
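The merged-streams idea above could start as simply as weight-proportional apportioning of one overall estimate across streams; the weights and names here are hypothetical:

```python
def apportion(total_bps: float, weights: dict) -> dict:
    """Split a single overall bandwidth estimate across streams in
    proportion to a per-stream priority weight (e.g. derived from
    size/framerate), instead of running N independent controllers."""
    total_w = sum(weights.values())
    return {sid: total_bps * w / total_w for sid, w in weights.items()}
```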
>> The code is available through, and the IPR statements
>> covering the code are found here:
>> webrtc/license-rights
>>                    Harald
>> -------- Original Message --------
>> Subject:        New Version Notification for
>> draft-alvestrand-rtcweb-congestion-00.txt
>> Date:   Mon, 05 Sep 2011 03:03:38 -0700
>> From:
>> To:
>> CC:,
>> A new version of I-D, draft-alvestrand-rtcweb-congestion-00.txt has
>> been successfully submitted by Harald Alvestrand and posted to the IETF
>> repository.
>> Filename:        draft-alvestrand-rtcweb-congestion
>> Revision:        00
>> Title:           A Google Congestion Control for Real-Time Communication
>> on the World Wide Web
>> Creation date:   2011-09-05
>> WG ID:           Individual Submission
>> Number of pages: 14
>> Abstract:
>>    This document describes two methods of congestion control when using
>>    real-time communications on the World Wide Web (RTCWEB); one sender-
>>    based and one receiver-based.
>>    It is published to aid the discussion on mandatory-to-implement flow
>>    control for RTCWEB applications; initial discussion is expected in
>>    the RTCWEB WG's mailing list.
> --
> Randell Jesup