Re: [rtcweb] Open data channel issues

Randell Jesup <> Sun, 02 March 2014 09:38 UTC

On 2/26/2014 2:50 AM, Magnus Westerlund wrote:
> Thanks Michael,
> WG, please consider these open issues and try to form your own position
> on them. They are intended to be discussed at the meeting. If you have
> proposals on how to resolve them or other input you are very welcome to
> provide that on the list.

One more big issue.  I realize this is very late for pre-meeting 
discussion; I'd hoped to hash this out earlier but for various reasons 
(including power outages and my own workload) this didn't happen.

We discussed a way to deal with the issues surrounding maximum message 
sizes at the last IETF.  Right now we have a proposal in the MMUSIC 
draft for limiting the maximum message size via the SDP.
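
For concreteness, the SDP in question would declare the limit on the data 
channel m-line along these lines (the attribute name and m-line syntax here 
are from my reading of the draft and may differ between revisions; the port 
and size values are made up):

```
m=application 54111 UDP/DTLS/SCTP webrtc-datachannel
a=sctp-port:5000
a=max-message-size:65536
```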

There is a problem with this: it's at odds with the definition of 
DataChannels in the W3 and with the "duck-typing" of DataChannels to 
work largely as a superset of WebSockets (outside of channel creation), 
and the WebAPI folk at Mozilla I talked to don't like the direction 
we're taking.

I've been having talks with the WebAPI people at Mozilla, in particular 
Jonas Sicking, our WebAPI lead, and they strongly dislike encouraging 
applications to try to implement their own large-data/blob transfer 
protocols; browsers have considerably more tools available to them to 
avoid memory hits and to make use of off-main-thread resources than the 
JS apps do.  "Having Send(blob) fail for any size of blob is crazy and 
nonsensical" was one comment made when I described the impacts of the 
current plan.

Manual chunking in the application means poorly-implemented congestion 
control in the app if it wants to keep the channel running efficiently.  
The only feedback directly available is:

- having the far end ack at the user level,
- trying to estimate sleep times via setTimeout() and bufferedAmount() 
  (which is simply not a great solution), or
- dumping a large number of smaller messages into Send() and forcing the 
  browser to buffer them in memory.

Also, GC or other pauses in JS execution may cause hiccups in the transfer 
and mis-estimation of available bandwidth.  And of course this is all being 
run over a congestion-controlled channel in the first place.
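
To make that concrete, here is a rough sketch of the chunking-plus-backpressure 
loop every app would end up writing.  The 16K chunk size, the high-water mark, 
and the 50ms sleep are all guesses an app author would have to make, and `dc` 
stands in for an RTCDataChannel:

```javascript
// Sketch only: illustrative chunk size and thresholds, not from any spec.
const CHUNK_SIZE = 16 * 1024;    // guessed "safe" message size
const HIGH_WATER = 1024 * 1024;  // guessed bufferedAmount ceiling

// Split an ArrayBuffer into CHUNK_SIZE-or-smaller slices.
function chunkMessage(buf, chunkSize = CHUNK_SIZE) {
  const chunks = [];
  for (let off = 0; off < buf.byteLength; off += chunkSize) {
    chunks.push(buf.slice(off, off + chunkSize));
  }
  return chunks;
}

// The weak feedback loop described above: poll bufferedAmount and guess
// sleep times with setTimeout().
function sendChunked(dc, buf) {
  const chunks = chunkMessage(buf);
  (function pump() {
    while (chunks.length > 0 && dc.bufferedAmount < HIGH_WATER) {
      dc.send(chunks.shift());
    }
    if (chunks.length > 0) setTimeout(pump, 50);  // guessed sleep time
  })();
}
```

Note there's no real congestion feedback here at all: the loop can only see 
the local buffer, not the path, and a GC pause in the middle stalls the pump.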

Unless and until the W3 side makes DataChannels (and by extension, 
PeerConnection) APIs available from JS workers (and this is 
implemented), there will be compromises with packet-level protocols in 
JS.  One of those will be "it's hard to implement your own congestion 
control well".  Even with worker support, considerable extension of the 
APIs would be needed to make it work really well there.  I'll also note 
that DataChannels-from-worker support is nowhere near implementation.

Another BIG problem as it's currently defined is that there's no lower 
bound for this limit, so all DataChannel users will need to implement 
manual chunking even if they use small fixed messages to guarantee spec 
compliance.  Of course they won't do so...  and even if they did, they 
wouldn't test it (another big problem).  You might say "ok, fine, lets 
set some small lower bound on this value, say 2 or 4 or 16K".  That 
doesn't really help much either.  Many will send variable-sized messages 
(because it's easy), and again won't test what happens when the messages 
trip over the spec limit (or the actual browser implementation limit!)  
Those with fixed-size messages larger than the spec lower-bound won't 
test the against that; they'll test against what Firefox and Chrome 
implement at the moment.  So the net result is they'll ship applications 
that can break randomly in the field for no obvious reason (say if IE 
implements and uses 16K when Chrome used 32K and Firefox used 100MB).

Why hand the application a footgun?

Jonas Sicking suggested that if the IETF insists on not supporting 
arbitrary Send(blob), we'll need to push in the W3 for a W3 protocol 
running on top of IETF DataChannels that handles fragmentation and 
reassembly, in order to preserve the JS API for Send().  We can do this, 
but part of the whole partnership between the IETF and W3 on WebRTC was 
to avoid having the W3 define network protocols, and to keep them in the 
IETF where they belong.

Note: abandoning Send(blob) in W3 doesn't help much, as the comments I 
made above about arbitrary limits and almost-certain lack of testing of 
messages violating the negotiated size would still apply.  Send(blob) 
just makes it easier to trip over the problem (and in fact more likely 
that the application will test very large sizes).

Our options are:

A) Just accept this complexity and hope that people write good code or 
use good libraries. (See above about testing...)
Note: we'd need to set *some* lower bound for the value.

B) Make the W3 API implementation add a level of protocol on top of the 
underlying IETF network protocol. This protocol could then deal with 
fragmenting messages on the sending side and reassembling them on the 
receiving side.
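
If we go this way, the browser-internal layer could be quite small.  Here's 
an illustrative sketch; the 8-byte header (message id plus a final-fragment 
flag) is invented for this example, not a proposed wire format:

```javascript
// Illustrative only: invented header layout, not a proposed wire format.
const FRAG_SIZE = 16 * 1024;
const FLAG_FINAL = 1;

// Fragment one message (a Uint8Array) into wire fragments, each with an
// 8-byte header: [msgId:u32][flags:u32].
function fragment(msgId, payload) {
  const frags = [];
  // (The "off === 0" test makes empty messages still emit one fragment.)
  for (let off = 0; off === 0 || off < payload.length; off += FRAG_SIZE) {
    const body = payload.subarray(off, off + FRAG_SIZE);
    const last = off + FRAG_SIZE >= payload.length;
    const frag = new Uint8Array(8 + body.length);
    const hdr = new DataView(frag.buffer);
    hdr.setUint32(0, msgId);
    hdr.setUint32(4, last ? FLAG_FINAL : 0);
    frag.set(body, 8);
    frags.push(frag);
  }
  return frags;
}

// Receiving side: returns the whole message when the final fragment for
// its id arrives, else null.
class Reassembler {
  constructor() { this.pending = new Map(); }
  push(frag) {
    const view = new DataView(frag.buffer, frag.byteOffset);
    const id = view.getUint32(0);
    const final = (view.getUint32(4) & FLAG_FINAL) !== 0;
    const parts = this.pending.get(id) || [];
    parts.push(frag.subarray(8));
    if (!final) { this.pending.set(id, parts); return null; }
    this.pending.delete(id);
    const out = new Uint8Array(parts.reduce((n, p) => n + p.length, 0));
    let off = 0;
    for (const p of parts) { out.set(p, off); off += p.length; }
    return out;
  }
}
```

The interesting part isn't the framing; it's that the browser can run this 
off the main thread and apply real backpressure, which JS can't.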

C) Convince the IETF WG to support arbitrarily sized messages at the 
protocol level, at least in the WebRTC context, similar to WebSockets.

Randell Jesup -- rjesup a t mozilla d o t com