Re: [rtcweb] Open data channel issues

Justin Uberti <> Sun, 02 March 2014 21:28 UTC

From: Justin Uberti <>
Date: Sun, 02 Mar 2014 13:27:40 -0800
To: Randell Jesup <>

I'm not sure I understand the actual problem here. We *are* going to handle
arbitrary-sized messages once ndata is done. The max-msg-size stuff is just
a stopgap until ndata is deployed. (To use your option labels, this is A
followed by C.)
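For context, the interim approach signals the limit in the SDP. A sketch of
what such an offer fragment might look like (the exact attribute syntax
varied across MMUSIC draft revisions; the port and size values here are
purely illustrative):

```
m=application 54111 UDP/DTLS/SCTP webrtc-datachannel
a=sctp-port:5000
a=max-message-size:65536
```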

That said, Send(blob) seems like a bit of a footgun to me anyway; I think
apps will typically avoid it, a) to avoid memory bloat, and b) to get
progress indications. The only way out of that situation is to provide
stream semantics, which seems like a lot to take on given that WebSockets
hasn't gone that route. I also agree that we shouldn't go the (B) route
you mention.

So I still think manual chunking is a reasonable thing to support. I don't
quite get the congestion-control concerns: just because there is a max
chunk size doesn't mean the implementation can't buffer multiple chunks in
bufferedAmount; the app could let that fill up to a degree to avoid having
to poll constantly to prevent underrun.
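A minimal sketch of that manual-chunking pattern, assuming a duck-typed
channel object exposing `send()` and `bufferedAmount` (the chunk size,
high-water mark, and retry interval are illustrative choices, not from any
spec):

```javascript
// Split an ArrayBuffer into fixed-size chunks (the last may be shorter).
function chunkify(buffer, chunkSize) {
  const view = new Uint8Array(buffer);
  const chunks = [];
  for (let offset = 0; offset < view.length; offset += chunkSize) {
    chunks.push(view.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

// Feed chunks to the channel, letting bufferedAmount fill up to a
// high-water mark instead of polling after every single send.
function sendChunked(channel, buffer, chunkSize, highWater) {
  const chunks = chunkify(buffer, chunkSize);
  let i = 0;
  (function pump() {
    // Send until we run out of chunks or the buffer is "full enough".
    while (i < chunks.length && channel.bufferedAmount < highWater) {
      channel.send(chunks[i++]);
    }
    if (i < chunks.length) {
      setTimeout(pump, 10); // crude backoff; a real app would tune this
    }
  })();
}
```

With a generous high-water mark the browser drains the buffered chunks on
its own schedule, so the app only wakes up occasionally to top it off
rather than pacing every message itself.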

On Sun, Mar 2, 2014 at 1:37 AM, Randell Jesup <> wrote:

> On 2/26/2014 2:50 AM, Magnus Westerlund wrote:
>> Thanks Michael,
>> WG, please consider these open issues and try to form your own position
>> on them. They are intended to be discussed at the meeting. If you have
>> proposals on how to resolve them, or other input, you are very welcome
>> to provide that on the list.
> One more big issue.  I realize this is very late for pre-meeting
> discussion; I'd hoped to hash this out earlier but for various reasons
> (including power outages and my own workload) this didn't happen.
> We discussed a way to deal with the issues surrounding maximum message
> sizes at the last IETF.  Right now we have a proposal in the MMUSIC draft
> for limiting the maximum message size via the SDP.
> There is a problem with this: it's at odds with the definition of
> DataChannels in the W3 and with the "duck-typing" of DataChannels to work
> largely as a superset of WebSockets (outside of channel creation), and the
> WebAPI folk at Mozilla I talked to don't like the direction we're taking.
> I've been having talks with the WebAPI people at Mozilla, in particular
> Jonas Sicking, our WebAPI lead, and they strongly dislike encouraging
> applications to try to implement their own large-data/blob transfer protocols;
> browsers have considerably more tools available to them to avoid memory
> hits and to make use of off-main-thread resources than the JS apps do.
>  "Having Send(blob) fail for any size of blob is crazy and nonsensical"
> was one comment made when I described the impacts of the current plan.
> Manual chunking in the application means poorly implemented congestion
> control in the app to keep the channel running efficiently.  The only
> feedback directly available is having the far end ack at the user level,
> estimating sleep times via setTimeout() and bufferedAmount (which is
> simply not a great solution), or simply dumping a large number of smaller
> transfers into Send() and forcing the browser to buffer them in memory.
> Also, GC or other pauses in JS execution may cause hiccups in the transfer
> and mis-estimation of available bandwidth.  And of course this is all
> being run over a congestion-controlled channel in the first place.
> Unless and until the W3 side makes DataChannels (and by extension,
> PeerConnection) APIs available from JS workers (and this is implemented),
> there will be compromises with packet-level protocols in JS.  One of those
> will be "it's hard to implement your own congestion control well".  Even
> with worker support, considerable extension of the APIs would be needed to
> make it work really well there.  I'll also note that
> DataChannels-from-worker support is nowhere near implementation in browsers.
> Another BIG problem as it's currently defined is that there's no lower
> bound for this limit, so all DataChannel users will need to implement
> manual chunking even if they use small fixed messages to guarantee spec
> compliance.  Of course they won't do so...  and even if they did, they
> wouldn't test it (another big problem).  You might say "ok, fine, let's set
> some small lower bound on this value, say 2 or 4 or 16K".  That doesn't
> really help much either.  Many will send variable-sized messages (because
> it's easy), and again won't test what happens when messages trip over the
> spec limit (or the actual browser implementation limit!).  Those with
> fixed-size messages larger than the spec lower bound won't test against
> that; they'll test against what Firefox and Chrome implement at the moment.
>  So the net result is they'll ship applications that can break randomly in
> the field for no obvious reason (say if IE implements and uses 16K when
> Chrome used 32K and Firefox used 100MB).
> Why hand the application a footgun?
> Jonas Sicking suggested if the IETF insists on not supporting arbitrary
> Send(blob), we'll need to push in the W3 for a W3 protocol run on top of
> IETF DataChannels that handles fragmentation and reassembly in order to
> preserve the JS API for Send().  We can do this, but part of the whole
> partnership between the IETF and W3 on WebRTC was to try to avoid having
> the W3 define network protocols and keep them in the IETF where they belong.
> Note: abandoning Send(blob) in W3 doesn't help much, as the comments I
> made above about arbitrary limits and almost-certain lack of testing of
> messages violating the negotiated size would still apply.  Send(blob) just
> makes it easier to trip over the problem (and in fact more likely that the
> application will test very large sizes).
> Our options are:
> A) Just accept this complexity and just hope that people write good code
> or use good libraries. (See above about testing...)
> Note: we'd need to set *some* lower bound for the value.
> B) Make the W3 API implementation add a level of protocol on top of the
> underlying IETF network protocol. This protocol could then deal with
> fragmenting messages on the sending side and reassembling them on the
> receiving side.
> C) Convince IETF WG to support arbitrarily sized messages at a protocol
> level, at least in the WebRTC context, similar to WebSockets.
> --
> Randell Jesup -- rjesup a t mozilla d o t com
> _______________________________________________
> rtcweb mailing list