Re: [rtcweb] #13: Transport of DATA_CHANNEL_OPEN

Michael Tuexen <Michael.Tuexen@lurchi.franken.de> Wed, 24 April 2013 06:24 UTC

From: Michael Tuexen <Michael.Tuexen@lurchi.franken.de>
Date: Wed, 24 Apr 2013 08:24:10 +0200
To: Bernard Aboba <bernard_aboba@hotmail.com>
Cc: Randell Jesup <randell-ietf@jesup.org>, "rtcweb@ietf.org" <rtcweb@ietf.org>
Subject: Re: [rtcweb] #13: Transport of DATA_CHANNEL_OPEN

On Apr 24, 2013, at 5:25 AM, Bernard Aboba wrote:

> There is still the need to define "a while".  IMHO, for "reliable" transport of the OPEN that could survive a routing transient, we are talking about roughly 30 seconds or 5 retransmissions.  If we assume that only the OPEN is lost (no losses in the data), required buffer = Timeout * min(RWIN, CWIN) / RTT.  This may not be a small number.
> 
> Example: Timeout = 30 seconds, min(CWIN, RWIN) = 16 KB, RTT = 50 ms, buffer = 30 seconds * 16 KB / 50 ms = 9.6 MB
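Just to spell that arithmetic out, using the same illustrative numbers from your example (nothing here is normative):

    // Back-of-the-envelope: data that can keep arriving while the OPEN is
    // being retransmitted.  buffer = Timeout * min(RWIN, CWIN) / RTT
    var timeoutSec  = 30;         // assumed retransmission window for the OPEN
    var windowBytes = 16 * 1000;  // min(RWIN, CWIN) = 16 KB
    var rttSec      = 0.050;      // 50 ms round-trip time
    var bufferBytes = timeoutSec * windowBytes / rttSec;
    console.log((bufferBytes / 1e6) + " MB");  // prints "9.6 MB"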
I guess you are considering the case where the OPEN message was dropped by the network multiple times...

The sender and receiver normally bound the memory used for processing data (socket buffers). So
your calculation above only applies if the send buffer is larger than 9.6 MB. Often, the send and
receive buffers are the same size and reflect the RWIN, so in your example you are assuming a
16 KB buffer on the receiver side and a roughly 10 MB buffer on the sender side.

The sender cannot delete messages sent after the OPEN message, since the receiver can still
renege on them at the SCTP layer.

Best regards
Michael
> 
> 
> From: pthatcher@google.com
> Date: Tue, 23 Apr 2013 11:17:23 -0700
> To: martin.thomson@gmail.com
> CC: randell-ietf@jesup.org; rtcweb@ietf.org
> Subject: Re: [rtcweb] #13: Transport of DATA_CHANNEL_OPEN
> 
> Based on feedback, it sounds like we're basically narrowing this down to "buffer for a while and then fire an error locally without giving the data to JS, and close the stream so the remote side knows".  That sounds pretty good to me.
> 
> 
> On Fri, Apr 19, 2013 at 12:12 PM, Peter Thatcher <pthatcher@google.com> wrote:
> I like all the discussion, but I feel like we need to get back to the question and what options we have.  The question: "what does the browser do with unexpected data (before an open of an unregistered sid)?"
> 
> 1. Buffer forever without limits:  Randell thinks it's OK to buffer forever without limits.  Martin disagrees.  I disagree (I agree with Martin).
> 2. Buffer with limits, and then:
>   a.  Hand an error to JS saying "got some data for a data channel, but an OPEN never came" WITHOUT providing the data to JS: Harald likes this.  I'm OK with this.
>   b.  Hand an error to JS saying "got some data for a data channel, but an OPEN never came" WITH providing the data: I like this better, since I don't see a reason not to give JS the data.  
>   c.  Fire .ondatachannel: This is what I was originally thinking, but I understand the downsides mentioned, and would be happy with (b) instead.
>  
> 
> Right now, it seems like 2a or 2b are our best options, mixed with resetting the stream.  In other words, if I'm a browser, I'd do something like:
> 
> function handle_data(sid, data) {
>   if (is_open_message(data)) {
>     var info = get_stream_info(data)
>     var buffered_data = clear_buffer(sid)
>     fire_ondatachannel(sid, info)
>     fire_ondata(sid, buffered_data)
>   } else if (has_known_stream(sid)) {
>     fire_ondata(sid, data)
>   } else if (buffer_has_space(sid)) {
>     add_to_buffer(sid, data)
>   } else {
>     var buffered_data = clear_buffer(sid)
>     fire_unknownchannelerror_in_js(sid, buffered_data)
>     reset_stream(sid)
>   }
> }
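A slightly more concrete sketch of the same idea, with an explicit cap on pre-OPEN buffering. All the names and the 16 KB limit below are made up for illustration; this is not proposed API, just one way a browser could wire it up:

    // Illustrative only: per-stream state, with a cap on data buffered before the OPEN.
    var BUFFER_LIMIT = 16 * 1024;   // assumed cap; the actual value is up for discussion
    var streams = {};               // sid -> { open, buffered[], bufferedBytes }

    // Stubs standing in for browser internals (names invented for this sketch).
    function fireOnDataChannel(sid, info) { /* hand the new channel to JS */ }
    function fireOnData(sid, data) { /* deliver a message to JS */ }
    function fireUnknownChannelError(sid, bufferedData) { /* error to JS */ }
    function resetStream(sid) { /* SCTP stream reset, so the remote side finds out */ }
    function parseOpen(data) { return { label: "", protocol: "" }; }

    function handleData(sid, data, isOpenMessage) {
      var s = streams[sid] ||
              (streams[sid] = { open: false, buffered: [], bufferedBytes: 0 });
      if (isOpenMessage) {
        s.open = true;
        fireOnDataChannel(sid, parseOpen(data));     // announce the channel first
        s.buffered.forEach(function (d) { fireOnData(sid, d); });  // then flush pre-OPEN data
        s.buffered = [];
        s.bufferedBytes = 0;
      } else if (s.open) {
        fireOnData(sid, data);                       // normal case: OPEN already seen
      } else if (s.bufferedBytes + data.length <= BUFFER_LIMIT) {
        s.buffered.push(data);                       // still waiting for the OPEN
        s.bufferedBytes += data.length;
      } else {
        var buffered = s.buffered;                   // limit exceeded: give up on this sid
        delete streams[sid];
        fireUnknownChannelError(sid, buffered);      // 2b: hand the buffered data along
        resetStream(sid);                            // reset so the remote side knows
      }
    }

Dropping the buffered data from the error callback turns this into option 2a.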
> 
> 
> Are there other options I'm missing or didn't understand?
> 
> 
> 
> On Fri, Apr 19, 2013 at 11:11 AM, Martin Thomson <martin.thomson@gmail.com> wrote:
> On 19 April 2013 09:39, Randell Jesup <randell-ietf@jesup.org> wrote:
> > the Open *will* eventually get through unless you have
> > 100% (or virtually so) packet loss
> 
> I'm going to pretend you didn't say that.  If you want to talk odds,
> that's fine, but I think that you'll find that this sort of error is
> far more likely than you realize.  We're talking the probability of
> incoming data exceeding a given threshold prior to an open being
> delivered.  After all, unless you have 0% loss, the *possible* maximum
> amount of data is infinite.  Though large numbers might be of
> relatively low probability on an individual basis, operating at scale
> you are going to encounter surprising spikes.
> 
> > I honestly feel it's ok to just buffer all incoming packets while waiting for the Open.
> 
> That's not a warm fuzzy that I share.
> 
> > No one is going to get a gigabyte of data in without an Open...  A
> > non-browser could fake up a session and start sending data without ever
> > sending an Open... but flushing the data doesn't actually help you against
> > that sort of active DOS (they can just start again, they can spread it
> > across thousands of channels, etc, etc), and there are FAR better DOS
> > methods - all this would do is burn some CPU and some memory.
> 
> I think that would be a mistake.  This isn't about denial of service,
> it's about genuine usage cases that encounter errors.  The receiver
> can't use the receive window to apply back pressure if they are
> reading from the stream to look for the open message, so you end up
> with an unbounded amount of data.  The amount of data will scale with
> bandwidth delay product.  A long, fat pipe might burn more CPU and
> memory than you are willing to tolerate.
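To put rough numbers on the bandwidth-delay point (the figures below are assumptions for illustration, not from any draft):

    // Data in flight on a long, fat pipe; it keeps arriving every RTT until
    // the OPEN finally shows up.
    var bandwidthBytesPerSec = 100e6 / 8;  // assume a 100 Mbit/s path
    var rttSec = 0.2;                      // assume 200 ms round-trip time
    var bdpBytes = bandwidthBytesPerSec * rttSec;
    console.log((bdpBytes / 1e6) + " MB per RTT");  // prints "2.5 MB per RTT"

If the OPEN needs several seconds' worth of retransmissions, the receiver has to hold many multiples of that.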
> 
> Then it comes down to what experience you want to provide to the
> unfortunates who encounter this problem.
> _______________________________________________
> rtcweb mailing list
> rtcweb@ietf.org
> https://www.ietf.org/mailman/listinfo/rtcweb
> 
> 
> 
> _______________________________________________
> rtcweb mailing list
> rtcweb@ietf.org
> https://www.ietf.org/mailman/listinfo/rtcweb