Re: [rtcweb] Open data channel issues

Justin Uberti <juberti@google.com> Mon, 03 March 2014 13:29 UTC

On Mon, Mar 3, 2014 at 3:44 AM, Randell Jesup <randell-ietf@jesup.org> wrote:

>  On 3/3/2014 6:18 AM, Justin Uberti wrote:
>
>
>
> On Sun, Mar 2, 2014 at 6:42 PM, Randell Jesup <randell-ietf@jesup.org> wrote:
>
>>  On 3/2/2014 4:27 PM, Justin Uberti wrote:
>>
>> I'm not sure I understand the actual problem here. We *are* going to
>> handle arbitrary-sized messages once ndata gets done. The max-msg-size
>> stuff is just a solution until ndata is deployed. (To cite your options,
>> this is A followed by C).
>>
>>
>>  So, here's where I think there may be a disconnect (and if I'm wrong,
>> great):
>>
>> ndata solves the monopolization issue between streams, allowing packets
>> for different streams to be interleaved on the wire.  It does not (so far
>> as I know) relax the usrsctp library's returning EMSGSIZE if the sendto()
>> is larger than available buffer space, and the alternative (EOR mode)
>> doesn't allow for interleaving of messages when sending at all (and I'm not
>> sure it allows interleaving on reception either, but I haven't checked just
>> now).
>>
>> Now, this is an implementation/API issue which could be fixed with some
>> work, but so far as I know no work has been done on it.  So ndata will
>> allow us to expand the max-sending size to min(SCTP_BUFFER_SIZE - N,
>> other-sides-max-receive-size).  It does not allow us to expand the size to
>> any useful Blob size.
>>
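To make that limit concrete, the guard an app ends up writing today looks
roughly like this (a sketch only; localSendLimit and remoteMaxReceiveSize are
stand-ins for whatever the local stack will buffer and whatever the peer
signals via max-msg-size, not an agreed-on API):

    // Sketch only: the limits below are placeholders, not part of any spec.
    const localSendLimit = 256 * 1024;       // what the local SCTP stack will buffer
    const remoteMaxReceiveSize = 64 * 1024;  // what the peer announced via max-msg-size
    const effectiveMaxMessage = Math.min(localSendLimit, remoteMaxReceiveSize);

    function trySend(channel: RTCDataChannel, payload: ArrayBuffer): void {
      if (payload.byteLength > effectiveMaxMessage) {
        // Anything larger has to be chunked by the application today;
        // ndata by itself does not lift this limit.
        throw new Error("message exceeds the negotiated maximum; chunk it first");
      }
      channel.send(payload);
    }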
>
>  Based on the discussion of ndata/eor in followup messages, I think this
> problem will be addressed in the near future.
>
>
> Hopefully so, though I hate forcing apps to include a bunch of code for
> backwards compat for a year or two (and many won't, or won't test it,
> etc.).  That hurts the feature.  And today no one has ndata.
>
>
>
>
>>
>>   That said, Send(blob) seems like a bit of a footgun to me anyway; I
>> think apps are going to typically avoid it, a) to avoid memory bloat, and
>> b) to get progress indications. The only way out of that situation is to
>> provide stream semantics, which seems like a lot to take on given that
>> WebSockets hasn't gone that route. I also agree that we shouldn't try to go
>> the (B) route that you mention.
>>
>>
>>  Well, Streams would be a good thing once they're done. Progress
>> indication could be added at the W3 level without much problem.  The memory
>> issue should be resolvable in a number of ways, per my discussion with
>> Sicking.  Note that application chunking has a serious memory issue today
>> in that the File Writer API hasn't been agreed to; I think there's progress
>> towards eventually adopting a version of the Filesystem API (with changes),
>> but that will be a while.  Again, Stream should help - and note that the
>> semantics for Blob reception allow it to be written to disk as it's
>> received and not held entirely in memory when it's handed to the
>> application, which is NOT possible today for application chunking.
>>
>
>  Agree Streams will be useful when done, but that's probably post-1.0 (as
> Martin says, we should first let it be applied to WebSockets). However I
> don't think app chunking will have any memory problems - the chunks will be
> small enough that they can be easily spooled to disk as they come in.
>
>
> Agree on Streams (post 1.0).  Chunking does have memory problems unless
> you have a File Writer API (we don't) or a Filesystem API (we don't, though
> my understanding is that, with some changes, it could be adopted in the
> future).  You have two choices today for app chunking (and
> I assume using blob slices for sending chunks):
>
> 1) create a blob from each received chunk, then create a new blob from
> the concatenation of all the sub-blobs.  This requires a large memory hit,
> though I suppose in theory maybe it could be done without a hit
> (painfully).  Even if the memory hit is avoided there will be a large disk
> IO hit to merge all those files and write a new file from them.
>
> 2) hold all the parts in memory, then write the entire thing as a blob
> after the last part is received.  Large memory and disk IO hit on reception
> of final part.
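In code, option 1 comes out roughly like this (a sketch; isLastChunk and
deliverFile are placeholders for whatever the application protocol provides,
not any agreed-on API):

    // Sketch of option 1: keep each received chunk as a sub-blob, then build
    // one Blob at the end. The final concatenation is where the memory and
    // disk IO hit lands.
    function receiveAsBlob(channel: RTCDataChannel,
                           isLastChunk: (m: MessageEvent) => boolean,
                           deliverFile: (file: Blob) => void): void {
      const receivedChunks: Blob[] = [];
      channel.binaryType = "blob";                    // receive each chunk as a Blob
      channel.onmessage = (event: MessageEvent) => {
        receivedChunks.push(new Blob([event.data]));  // one sub-blob per chunk
        if (isLastChunk(event)) {
          deliverFile(new Blob(receivedChunks));      // concatenate all sub-blobs
        }
      };
    }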
>
>
>
>>
>>  So I still think manual chunking seems like a reasonable thing to
>> support. I don't quite get the congestion control concerns; just because
>> there is a max chunk size doesn't mean the impl can't buffer multiple
>> chunks in bufferedAmount; the app could let that fill up to a degree to
>> avoid having to poll constantly to prevent underrun.
>>
>>
>>  On a slow link this will work if the browser isn't janky.  On a fast
>> link GC pauses and other things may cause the buffer to drain out and go
>> idle for significant periods.
>>
>
>  Since the amount buffered is up to the app, the app can just increase
> the buffer size to provide enough data to cover the next second or so of
> transfer.
>
>
> How does the app increase the size of the internal SCTP buffers?  (And
> that buffer is shared by all channels, of course).
>

What I meant is that the app can make multiple calls to send(), each with a
chunk; those chunks are buffered in the API layer and increase
bufferedAmount.
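Roughly (a sketch; the chunk size and high-water mark are arbitrary, and it
assumes an onbufferedamountlow-style callback is available, otherwise a short
setTimeout poll would do the same job):

    // Sketch: keep enough chunks queued in bufferedAmount that the channel
    // doesn't run dry across timer jitter or GC pauses.
    const CHUNK_SIZE = 16 * 1024;        // per-message chunk
    const HIGH_WATER = 1024 * 1024;      // ~1 MB queued covers the next second or so

    function sendInChunks(channel: RTCDataChannel, data: Blob): void {
      let offset = 0;

      const fill = () => {
        // Queue chunks until the high-water mark is reached or the data runs out.
        while (offset < data.size && channel.bufferedAmount < HIGH_WATER) {
          channel.send(data.slice(offset, offset + CHUNK_SIZE));
          offset += CHUNK_SIZE;
        }
        if (offset < data.size) {
          channel.onbufferedamountlow = fill;   // refill once the queue drains
        }
      };

      channel.bufferedAmountLowThreshold = HIGH_WATER / 2;
      fill();
    }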

>
>
>   Note that app chunking will clearly be needed in some of the more
> interesting cases we want to support, such as where the transfer is not
> 1:1. For swarm-style downloading, we need to make sure app chunking works
> well (and I think it does).
>
>
> Agreed, we need good solutions for application chunking (and thus my
> interest in resolving the Filesystem API issues in the W3).  My problem is
> that we don't have those today for the reception side - but this (for apps
> that *need* chunking) is largely a W3 issue.
>
>
>
> --
> Randell Jesup -- rjesup a t mozilla d o t com
>
>