Re: [rtcweb] Open data channel issues

Randell Jesup <> Mon, 03 March 2014 11:46 UTC

List-Id: Real-Time Communication in WEB-browsers working group list <>

On 3/3/2014 6:18 AM, Justin Uberti wrote:
> On Sun, Mar 2, 2014 at 6:42 PM, Randell Jesup wrote:
>     On 3/2/2014 4:27 PM, Justin Uberti wrote:
>>     I'm not sure I understand the actual problem here. We *are* going
>>     to handle arbitrary-sized messages once ndata gets done. The
>>     max-msg-size stuff is just a solution until ndata is deployed.
>>     (To cite your options, this is A followed by C).
>     So, here's where I think there may be a disconnect (and if I'm
>     wrong, great):
>     ndata solves the monopolization issue between streams, allowing
>     packets for different streams to be interleaved on the wire.  It
>     does not (so far as I know) relax the usrsctp library's returning
>     EMSGSIZE if the sendto() is larger than available buffer space,
>     and the alternative (EOR mode) doesn't allow for interleaving of
>     messages when sending at all (and I'm not sure it allows
>     interleaving on reception either, but I haven't checked).
>     Now, this is an implementation/API issue which could be fixed with
>     some work, but so far as I know no work has been done on it.  So
>     ndata will allow us to expand the max-sending size to
>     min(SCTP_BUFFER_SIZE - N, other-sides-max-receive-size).  It does
>     not allow us to expand the size to any useful Blob size.
> Based on the discussion of ndata/eor in followup messages, I think 
> this problem will be addressed in the near future.

Hopefully so, though I hate forcing apps to include a bunch of code for 
backwards compat for a year or two (and many won't include it, or won't 
test it, etc.).  That hurts the feature.  And today no one has ndata.
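For concreteness, the sort of backwards-compat chunking code every app
would have to carry looks roughly like this sketch (the 64 KiB cap and
the helper name are my assumptions, standing in for whatever max-msg-size
gets negotiated):

```javascript
// Sketch of the sender-side chunking an app must carry until large
// messages work everywhere. MAX_MSG_SIZE is an assumed placeholder for
// the peer's negotiated max-msg-size, not a spec'd constant.
const MAX_MSG_SIZE = 64 * 1024;

// Compute the byte length of each chunk needed to cover totalBytes.
function chunkSizes(totalBytes, maxMsgSize) {
  const sizes = [];
  for (let off = 0; off < totalBytes; off += maxMsgSize) {
    sizes.push(Math.min(maxMsgSize, totalBytes - off));
  }
  return sizes;
}

// Against a real RTCDataChannel, usage would be roughly:
//   let off = 0;
//   for (const n of chunkSizes(blob.size, MAX_MSG_SIZE)) {
//     dc.send(blob.slice(off, off + n));
//     off += n;
//   }
```

And of course the receiver needs a matching reassembly path, which is
where the problems below come in.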

>>     That said, Send(blob) seems like a bit of a footgun to me anyway;
>>     I think apps are going to typically avoid it, a) to avoid memory
>>     bloat, and b) to get progress indications. The only way out of
>>     that situation is to provide stream semantics, which seems like a
>>     lot to take on given that WebSockets hasn't gone that route. I
>>     also agree that we shouldn't try to go the (B) route that you
>>     mention.
>     Well, Streams would be a good thing once they're done. Progress
>     indication could be added at the W3 level without much problem. 
>     The memory issue should be resolvable in a number of ways, per my
>     discussion with Sicking.  Note that application chunking has a
>     serious memory issue today in that the File Writer API hasn't been
>     agreed to; I think there's progress towards eventually adopting a
>     version of the Filesystem API (with changes), but that will be a
>     while.  Again, Stream should help - and note that the semantics
>     for Blob reception allow it to be written to disk as it's received
>     and not held entirely in memory when it's handed to the
>     application, which is NOT possible today for application chunking.
> Agree Streams will be useful when done, but that's probably post-1.0 
> (as Martin says, we should first let it be applied to WebSockets). 
> However I don't think app chunking will have any memory problems - the 
> chunks will be small enough that they can be easily spooled to disk as 
> they come in.

Agree on Streams (post 1.0).  Chunking does have memory problems unless 
you have a File Writer API (we don't) or a Filesystem API (we don't, 
though my understanding is that we believe that with some changes this 
can be adoptable in the future).  You have two choices today for app 
chunking (and I assume using blob slices for sending chunks):

1) create a blob from each received chunk, and then create a new blob 
from the concatenation of all the sub-blobs.  This requires a large 
memory hit, though I suppose in theory it could be done without one 
(painfully).  Even if the memory hit is avoided, there will be a large 
disk IO hit to merge all those files and write a new file from them.

2) hold all the parts in memory, then write the entire thing as a blob 
after the last part is received.  Large memory and disk IO hit on 
reception of final part.
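A minimal sketch of option 2 (the function names are mine, not any
agreed API) shows exactly where the memory spike comes from: every part
stays alive until the final blob is built.

```javascript
// Receiver-side reassembly per option 2: buffer every chunk in memory,
// then build one Blob once the last part arrives. With no File Writer
// or Filesystem API there is nowhere to spool the parts earlier, so
// they all stay resident until this point -- the memory hit above.
const parts = [];

function onChunk(chunk) {        // e.g. wired to dc.onmessage
  parts.push(chunk);
}

function finishTransfer() {
  // new Blob([...]) concatenates the parts into the final payload.
  return new Blob(parts);
}
```

Contrast this with Send(blob)/Blob reception, where the browser is free
to spill to disk as data arrives.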

>>     So I still think manual chunking seems like a reasonable thing to
>>     support. I don't quite get the congestion control concerns; just
>>     because there is a max chunk size doesn't mean the impl can't
>>     buffer multiple chunks in bufferedAmount; the app could let that
>>     fill up to a degree to avoid having to poll constantly to prevent
>>     underrun.
>     On a slow link this will work if the browser isn't janky.  On a
>     fast link GC pauses and other things may cause the buffer to drain
>     out and go idle for significant periods.
> Since the amount buffered is up to the app, the app can just increase 
> the buffer size to provide enough data to cover the next second or so 
> of transfer.

How does the app increase the size of the internal SCTP buffers? (And 
that buffer is shared by all channels, of course).
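To be concrete about what the app *can* control: the suggestion amounts
to app-level buffering like the sketch below, where HIGH_WATER_MARK is
an assumed tuning knob (roughly a second of transfer at the expected
rate) and the channel object just mirrors RTCDataChannel's
send()/bufferedAmount surface.

```javascript
// App-level flow control over something shaped like an RTCDataChannel
// (a send() method plus a bufferedAmount byte counter). This sizes the
// app's queue *above* the stack; it cannot grow the shared SCTP send
// buffer underneath, which is the limit in question.
const HIGH_WATER_MARK = 1024 * 1024; // assumed knob, ~1s of transfer

function pump(channel, nextChunk) {
  // Keep ~HIGH_WATER_MARK bytes queued; re-invoke this from a
  // bufferedamountlow handler (or a timer) to refill after drains.
  // nextChunk() returns the next chunk, or a falsy value when done.
  let chunk;
  while (channel.bufferedAmount < HIGH_WATER_MARK && (chunk = nextChunk())) {
    channel.send(chunk);
  }
}
```

Even with this in place, a GC pause longer than the queued data covers
still drains the wire, which was my point above.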

> Note that app chunking will clearly be needed in some of the more 
> interesting cases we want to support, such as where the transfer is 
> not 1:1. For swarm-style downloading, we need to make sure app 
> chunking works well (and I think it does).

Agreed, we need good solutions for application chunking (and thus my 
interest in resolving the Filesystem API issues in the W3).  My problem 
is that we don't have those today for the reception side - but this (for 
apps that *need* chunking) is largely a W3 issue.

Randell Jesup -- rjesup a t mozilla d o t com