Re: [rtcweb] Open data channel issues

Justin Uberti <> Mon, 03 March 2014 11:19 UTC

From: Justin Uberti <>
Date: Mon, 03 Mar 2014 03:18:42 -0800
To: Randell Jesup <>
Cc: Michael Tuexen <>, "" <>
Subject: Re: [rtcweb] Open data channel issues
List-Id: Real-Time Communication in WEB-browsers working group list <>

On Sun, Mar 2, 2014 at 6:42 PM, Randell Jesup <> wrote:

>  On 3/2/2014 4:27 PM, Justin Uberti wrote:
> I'm not sure I understand the actual problem here. We *are* going to
> handle arbitrary-sized messages once ndata gets done. The max-msg-size
> stuff is just an interim solution until ndata is deployed. (To cite your
> options, this is A followed by C.)
> So, here's where I think there may be a disconnect (and if I'm wrong,
> great):
> ndata solves the monopolization issue between streams, allowing packets
> for different streams to be interleaved on the wire.  It does not (so far
> as I know) stop the usrsctp library from returning EMSGSIZE if the sendto()
> is larger than available buffer space, and the alternative (EOR mode)
> doesn't allow for interleaving of messages when sending at all (and I'm not
> sure it allows interleaving on reception either, but I didn't check right
> now).
> Now, this is an implementation/API issue which could be fixed with some
> work, but so far as I know no work has been done on it.  So ndata will
> allow us to expand the max-sending size to min(SCTP_BUFFER_SIZE - N,
> other-sides-max-receive-size).  It does not allow us to expand the size to
> any useful Blob size.

Based on the discussion of ndata/eor in followup messages, I think this
problem will be addressed in the near future.
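To make that interim limit concrete, here is a one-line sketch of the expanded maximum send size Randell describes; the buffer size, the headroom N, and the peer's advertised maximum are placeholder values, not numbers from any draft.

```javascript
// Sketch of the post-ndata send limit discussed above:
//   min(SCTP_BUFFER_SIZE - N, other-side's-max-receive-size)
// All three inputs are illustrative placeholders.
function maxSendSize(sctpBufferSize, headroomN, peerMaxReceiveSize) {
  return Math.min(sctpBufferSize - headroomN, peerMaxReceiveSize);
}

// e.g. a 1 MB SCTP send buffer, 4 KB headroom, 64 KB peer limit
const limit = maxSendSize(1 << 20, 4096, 65536);
```

The point of the min() is that the limit stays bounded by local buffer space, which is why this does not extend to arbitrary Blob sizes.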

>  That said, Send(blob) seems like a bit of a footgun to me anyway; I
> think apps are going to typically avoid it, a) to avoid memory bloat, and
> b) to get progress indications. The only way out of that situation is to
> provide stream semantics, which seems like a lot to take on given that
> WebSockets hasn't gone that route. I also agree that we shouldn't try to go
> the (B) route that you mention.
> Well, Streams would be a good thing once they're done. Progress indication
> could be added at the W3 level without much problem.  The memory issue
> should be resolvable in a number of ways, per my discussion with Sicking.
> Note that application chunking has a serious memory issue today in that the
> File Writer API hasn't been agreed to; I think there's progress towards
> eventually adopting a version of the Filesystem API (with changes), but
> that will be a while.  Again, Streams should help - and note that the
> semantics for Blob reception allow it to be written to disk as it's
> received and not held entirely in memory when it's handed to the
> application, which is NOT possible today for application chunking.

Agree Streams will be useful when done, but that's probably post-1.0 (as
Martin says, we should first let it be applied to WebSockets). However, I
don't think app chunking will have any memory problems - the chunks will be
small enough that they can be easily spooled to disk as they come in.

>  So I still think manual chunking seems like a reasonable thing to
> support. I don't quite get the congestion control concerns; just because
> there is a max chunk size doesn't mean the impl can't buffer multiple
> chunks in bufferedAmount; the app could let that fill up to a degree to
> avoid having to poll constantly to prevent underrun.
> On a slow link this will work if the browser isn't janky.  On a fast link
> GC pauses and other things may cause the buffer to drain out and go idle
> for significant periods.

Since the amount buffered is up to the app, the app can simply increase the
buffer size to keep enough data queued to cover the next second or so of
transmission.
Note that app chunking will clearly be needed in some of the more
interesting cases we want to support, such as where the transfer is not
1:1. For swarm-style downloading, we need to make sure app chunking works
well (and I think it does).
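A minimal sketch of the app-chunking pattern under discussion: slice the message into fixed-size chunks and pace sends off bufferedAmount, refilling when the buffer drains. The channel object below is a stand-in for the small RTCDataChannel surface used (send, bufferedAmount, onbufferedamountlow); the chunk size and high-water mark are illustrative.

```javascript
const CHUNK_SIZE = 16 * 1024;   // assumed safe per-message size
const HIGH_WATER = 1024 * 1024; // pause queueing above ~1 MB buffered

function sendChunked(channel, data, onDone) {
  let offset = 0;
  function pump() {
    // Queue chunks until the implementation buffer is comfortably full;
    // letting it fill is what covers GC pauses between JS timeslices.
    while (offset < data.length && channel.bufferedAmount < HIGH_WATER) {
      channel.send(data.slice(offset, offset + CHUNK_SIZE));
      offset += CHUNK_SIZE;
    }
    if (offset >= data.length) onDone();
    else channel.onbufferedamountlow = pump; // resume once drained
  }
  pump();
}

// Minimal stand-in so the sketch runs outside a browser.
const sent = [];
const channel = {
  bufferedAmount: 0,
  onbufferedamountlow: null,
  send(chunk) { sent.push(chunk); this.bufferedAmount += chunk.length; },
};
sendChunked(channel, "x".repeat(40000), () => {});
```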

>   Randell
> On Sun, Mar 2, 2014 at 1:37 AM, Randell Jesup <> wrote:
>> On 2/26/2014 2:50 AM, Magnus Westerlund wrote:
>>> Thanks Michael,
>>> WG, please consider these open issues and try to form your own position
>>> on them. They are intended to be discussed at the meeting. If you have
>>> proposals on how to resolve them or other input you are very welcome to
>>> provide that on the list.
>>  One more big issue.  I realize this is very late for pre-meeting
>> discussion; I'd hoped to hash this out earlier but for various reasons
>> (including power outages and my own workload) this didn't happen.
>> We discussed a way to deal with the issues surrounding maximum message
>> sizes at the last IETF.  Right now we have a proposal in the MMUSIC draft
>> for limiting the maximum message size via the SDP.
>> There is a problem with this: it's at odds with the definition of
>> DataChannels in the W3 and with the "duck-typing" of DataChannels to work
>> largely as a superset of WebSockets (outside of channel creation), and the
>> WebAPI folk at Mozilla I talked to don't like the direction we're taking.
>> I've been having talks with the WebAPI people at Mozilla, in particular
>> Jonas Sicking, our WebAPI lead, and they strongly dislike encouraging
>> applications to try to implement their own large-data/blob transfer protocols;
>> browsers have considerably more tools available to them to avoid memory
>> hits and to make use of off-main-thread resources than the JS apps do.
>>  "Having Send(blob) fail for any size of blob is crazy and non-sensical"
>> was one comment made when I described the impacts of the current plan.
>> Manual chunking in the application means poorly implemented congestion
>> control in the app to keep the channel running efficiently: the only
>> direct feedback available is having the far end ack at the user level,
>> estimating sleep times via setTimeout() and bufferedAmount() (which is
>> simply not a great solution), or dumping a large number of smaller
>> sends into Send() and forcing the browser to buffer them in memory.
>> Also GC or other pauses in JS execution
>> may cause hiccups in the transfer and mis-estimation of available
>> bandwidth.  And of course this is being run over a congestion-controlled
>> channel in the first place.
>> Unless and until the W3 side makes DataChannels (and by extension,
>> PeerConnection) APIs available from JS workers (and this is implemented),
>> there will be compromises with packet-level protocols in JS.  One of those
>> will be "it's hard to implement your own congestion control well".  Even
>> with worker support, considerable extension of the APIs would be needed to
>> make it work really well there.  I'll also note that
>> DataChannels-from-worker support is nowhere near implementation in browsers.
>> Another BIG problem as it's currently defined is that there's no lower
>> bound for this limit, so all DataChannel users will need to implement
>> manual chunking even if they use small fixed messages to guarantee spec
>> compliance.  Of course they won't do so...  and even if they did, they
>> wouldn't test it (another big problem).  You might say "ok, fine, lets set
>> some small lower bound on this value, say 2 or 4 or 16K".  That doesn't
>> really help much either.  Many will send variable-sized messages (because
>> it's easy), and again won't test what happens when the messages trip over
>> the spec limit (or the actual browser implementation limit!)  Those with
>> fixed-size messages larger than the spec lower bound won't test against
>> that; they'll test against what Firefox and Chrome implement at the moment.
>>  So the net result is they'll ship applications that can break randomly in
>> the field for no obvious reason (say if IE implements and uses 16K when
>> Chrome used 32K and Firefox used 100MB).
>> Why hand the application a footgun?
>> Jonas Sicking suggested that if the IETF insists on not supporting arbitrary
>> Send(blob), we'll need to push in the W3 for a W3 protocol run on top of
>> IETF DataChannels that handles fragmentation and reassembly in order to
>> preserve the JS API for Send().  We can do this, but part of the whole
>> partnership between the IETF and W3 on WebRTC was to try to avoid having
>> the W3 define network protocols and keep them in the IETF where they belong.
>> Note: abandoning Send(blob) in W3 doesn't help much, as the comments I
>> made above about arbitrary limits and almost-certain lack of testing of
>> messages violating the negotiated size would still apply.  Send(blob) just
>> makes it easier to trip over the problem (and in fact more likely that the
>> application will test very large sizes).
>> Our options are:
>> A) Accept this complexity and hope that people write good code
>> or use good libraries. (See above about testing...)
>> Note: we'd need to set *some* lower bound for the value.
>> B) Make the W3 API implementation add a level of protocol on top of the
>> underlying IETF network protocol. This protocol could then deal with
>> fragmenting messages on the sending side and reassembling them on the
>> receiving side.
>> C) Convince IETF WG to support arbitrarily sized messages at a protocol
>> level, at least in the WebRTC context, similar to WebSockets.
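For a sense of what option B would entail, here is a hedged sketch of a shim that fragments on send and reassembles on receive. The one-byte framing (a MORE/FINAL flag prefix) is invented purely for illustration and assumes in-order, reliable delivery on the channel; it is not from any draft.

```javascript
const MORE = 0, FINAL = 1;  // invented 1-byte fragment flag
const MAX_FRAG = 16 * 1024; // illustrative per-message limit

function fragment(message) {
  const frags = [];
  for (let off = 0; off < message.length; off += MAX_FRAG) {
    const last = off + MAX_FRAG >= message.length;
    frags.push(String.fromCharCode(last ? FINAL : MORE) +
               message.slice(off, off + MAX_FRAG));
  }
  // An empty message still needs one FINAL fragment.
  return frags.length ? frags : [String.fromCharCode(FINAL)];
}

function makeReassembler(deliver) {
  let parts = [];
  return function onFragment(frag) {
    parts.push(frag.slice(1));          // strip the flag byte
    if (frag.charCodeAt(0) === FINAL) { // message complete
      deliver(parts.join(""));
      parts = [];
    }
  };
}
```

On an ordered, reliable channel the receiver sees fragments in order, so concatenation suffices; unordered channels would need sequence numbers, which is exactly the protocol design work the paragraph argues belongs in the IETF rather than the W3.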
>> --
>> Randell Jesup -- rjesup a t mozilla d o t com
>>   _______________________________________________
>> rtcweb mailing list