Re: [rmcat] 5 tuples and rmcat-cc-requirements-01

Harald Alvestrand <harald@alvestrand.no> Fri, 03 January 2014 00:19 UTC

Return-Path: <harald@alvestrand.no>
X-Original-To: rmcat@ietfa.amsl.com
Delivered-To: rmcat@ietfa.amsl.com
Received: from localhost (ietfa.amsl.com [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id DFB071ACC84 for <rmcat@ietfa.amsl.com>; Thu, 2 Jan 2014 16:19:10 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.263
X-Spam-Level:
X-Spam-Status: No, score=0.263 tagged_above=-999 required=5 tests=[BAYES_50=0.8, HTML_MESSAGE=0.001, RP_MATCHES_RCVD=-0.538] autolearn=ham
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 16ISmw131fQG for <rmcat@ietfa.amsl.com>; Thu, 2 Jan 2014 16:19:05 -0800 (PST)
Received: from eikenes.alvestrand.no (eikenes.alvestrand.no [IPv6:2001:700:1:2:213:72ff:fe0b:80d8]) by ietfa.amsl.com (Postfix) with ESMTP id AE85D1AC43F for <rmcat@ietf.org>; Thu, 2 Jan 2014 16:19:04 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by eikenes.alvestrand.no (Postfix) with ESMTP id 6543539E0D2 for <rmcat@ietf.org>; Fri, 3 Jan 2014 01:19:02 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at eikenes.alvestrand.no
Received: from eikenes.alvestrand.no ([127.0.0.1]) by localhost (eikenes.alvestrand.no [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id g2TEXijIMOne for <rmcat@ietf.org>; Fri, 3 Jan 2014 01:18:59 +0100 (CET)
Received: from [IPv6:2001:470:de0a:27:60c4:879c:21ee:c898] (unknown [IPv6:2001:470:de0a:27:60c4:879c:21ee:c898]) by eikenes.alvestrand.no (Postfix) with ESMTPSA id 7529339E09F for <rmcat@ietf.org>; Fri, 3 Jan 2014 01:18:59 +0100 (CET)
Message-ID: <52C60213.9000401@alvestrand.no>
Date: Fri, 03 Jan 2014 01:19:31 +0100
From: Harald Alvestrand <harald@alvestrand.no>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.1.1
MIME-Version: 1.0
To: rmcat@ietf.org
References: <CAA93jw7FDddn2n23fm=rdUsDyGakNfyLkJDgNjoC43fSc4GnSA@mail.gmail.com> <52C17E9F.8060703@alvestrand.no> <CAA93jw4g7bBrxJFfhqVkq0EpezBTY+hahhAJyt5wh=1zTs1EiA@mail.gmail.com>
In-Reply-To: <CAA93jw4g7bBrxJFfhqVkq0EpezBTY+hahhAJyt5wh=1zTs1EiA@mail.gmail.com>
Content-Type: multipart/alternative; boundary="------------000509080804000105080300"
Subject: Re: [rmcat] 5 tuples and rmcat-cc-requirements-01
X-BeenThere: rmcat@ietf.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: "RTP Media Congestion Avoidance Techniques \(RMCAT\) Working Group discussion list." <rmcat.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/rmcat>, <mailto:rmcat-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/rmcat/>
List-Post: <mailto:rmcat@ietf.org>
List-Help: <mailto:rmcat-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/rmcat>, <mailto:rmcat-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 03 Jan 2014 00:19:11 -0000

On 12/31/2013 07:37 AM, Dave Taht wrote:
> On Mon, Dec 30, 2013 at 6:09 AM, Harald Alvestrand <harald@alvestrand.no> wrote:
>> Dave, this is interesting .... I think characterizing the connection
>> between type-of-traffic and applicable congestion control should be an rmcat
>> issue, but may feed back into RTCWEB and even MMUSIC - so I'm sending this
>> to rmcat only.
>>
>>
>>
>> On 12/30/2013 01:02 AM, Dave Taht wrote:
>>> I finally had a chance to read over
>>>
>>> http://tools.ietf.org/html/draft-ietf-rmcat-cc-requirements-01
>>>
>>> and also work at reproducing some of the difficulties present in the
>>> current implementation of chrome in light of the two papers presented
>>> at the last ietf on google congestion control. I went through tons of
>>> tests and packet captures that pretty much reproduced the legion of
>>> problems documented in
>>>
>>>
>>> https://speakerdeck.com/vr000m/evaluating-googles-congestion-control-for-webrtc
>>> and the other paper presented at ietf...
>>>
>>> and also fed webrtc traffic through the fq_codel and pie queue
>>> management systems (with pretty good results). I have tons of captures
>>> from each experiment if anyone wants them.
>>>
>>> So, I have some comments on this requirements document. (I'm new to
>>> paying attention on rmcat, so a few of my questions are probably
>>> answered by some document I haven't read yet. Steers appreciated. For
>>> all I know I'm on the wrong mailing lists too. Sorry)
>>>
>>> In particular, the flows I was looking at multiplexed voice and video
>>> on the same 5 tuple. I am curious as to the justification for this, as
>>> voice (without silence suppression) is isochronous and video is not.
>>> (is silence suppression used in webrtc?)
>>
>> This is a curious statement - what characteristic are you thinking about
>> when you say that voice is "isochronous" and video is not?
>> Both of them are produced one time-frame at a time, and both of them cause
>> irritation at the receiver when the frames are displayed in anything but the
>> same cadence.
> On a per-packet level, voice is typically one packet of 110 to 300
> bytes every 10 or 20 ms (the Opus codec goes down to 2.5 ms frames).
> Video, in comparison, consists of a variable, often large, number of
> packets depending on encoding, subframes, etc.
>
> So from this viewpoint voice can be isochronous (lacking silence
> suppression), and video is not.

OK, so synchronous production of single packets is isochronous, while 
synchronous production of groups of packets is not. I don't mind, as long 
as we agree on what we're saying :-)
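To put rough numbers on the packetization above, here is a minimal sketch using the packet sizes and intervals quoted (the helper name is mine, and the figures are illustrative, not a codec specification):

```python
# Rough bitrate for an isochronous voice flow: one packet per frame interval.
# Packet sizes and intervals are the illustrative figures from the thread.

def voice_bitrate_kbps(packet_bytes, interval_ms):
    """Bitrate of a constant-cadence flow sending one packet per interval."""
    packets_per_sec = 1000.0 / interval_ms
    return packet_bytes * 8 * packets_per_sec / 1000.0

low = voice_bitrate_kbps(110, 20)   # 110 bytes every 20 ms -> 44 kbps
high = voice_bitrate_kbps(300, 10)  # 300 bytes every 10 ms -> 240 kbps
```

So even at the high end, a voice flow stays a small, steady trickle compared with the bursty multi-packet frames of video.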

>
>> I think the irritation from jitter on voice is quite a bit higher than on
>> video, but stuttering video isn't a good thing either.
> jitter and loss on voice is often quite annoying. On music it's intolerable.
>
> On video, I would not use the word "stuttering", which implies a
> Max Headroom sort of effect. Partial corruption of the output, blocky
> output, etc. are closer to the words I would use.

Yup, those are more irritating artifacts in video than distortions in time.
In fact, the most irritating time-related aspect of video is usually 
loss of synchronization with the audio.


>
>>> To me, multiplexing these two very different flows on the same ports
>>> is not a good idea except under extreme port pressure, and not useful
>>> in ipv6 or in non-natted environments at all. Even in a natted
>>> environment, saving a single port is a false savings as (for example)
>>> dns lookups punch dozens of holes through nat on a regular basis.
>>>
>>> Requiring that in most or all cases these two flow types be on unique
>>> 5 tuples would give smarter queue management techniques on routers
>>> (like sfq, or sqf (deployed at FT), or fq_codel (deployed at
>>> free.fr)), a fighting chance. In particular it gives voice much higher
>>> probability for drop-and-delay free behavior, and analyzing the delta
>>> in drops and delay between a separate voice tuple and video tuple
>>> could provide insight as to the congestion in the system.
>> How does this effect happen?
> Could you be more specific?
>
>> In particular, does the separation of flows itself create lower loss
>> probability for voice, or are you thinking of systems where the network
>> elements "know" what the flows contain?
> In nearly any packet-scheduled system (drr, sfq, sqf, fq_codel, etc.),
> the separation of flows on a 5 tuple will result in (much) less delay,
> jitter and loss for the lower rate flows. With sqf and fq_codel, if
> the rate is low enough there will be near zero queuing delay, jitter
> and loss on many workloads.

I think you answered my previous question here - what you're saying is 
that when sfq etc deals with several flows, it will prioritize delivery 
of packets in the smaller flows ahead of delivery of packets in the 
larger flows.

Which, if the audio flow is smaller than the video flow (true almost all 
the time), will have the effect of causing video corruption to happen 
before audio corruption.
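That effect is easy to see in a toy model. Below is a sketch of round-robin dequeuing over per-5-tuple queues, in the spirit of SFQ/DRR/fq_codel but ignoring byte-level deficits and CoDel dropping (the function and flow names are mine):

```python
from collections import deque

def drain_round_robin(flows):
    """flows: dict of name -> deque of packet ids. Returns dequeue order.

    One packet per non-empty queue per round, so a sparse flow waits
    behind at most one packet of each other flow, not a whole backlog.
    """
    order = []
    while any(flows.values()):
        for name, q in flows.items():
            if q:
                order.append((name, q.popleft()))
    return order

flows = {
    "video": deque(range(10)),  # a 10-packet video burst already queued
    "voice": deque(["v0"]),     # one voice packet arriving behind it
}
order = drain_round_robin(flows)
# The voice packet goes out second; in a shared drop-tail FIFO it would
# have gone out eleventh, behind the entire video burst.
```

The same mechanism is why separating voice and video onto distinct 5-tuples lets these schedulers protect the voice flow without any knowledge of what the flows contain.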

>
> I try to encourage people to just try it to get a feel for what
> happens. openwrt, cerowrt, any modern linux for all of the above,
> cisco routers do drr, many dsl devices do sfq or sqf in their
> firmware....
>
> Separation of flows does nothing terribly useful on a strict drop-tail
> system, but does no harm either.
>
>> What about multiplexing between different flows of the same media type?
> Good question. There are many things I don't understand about webrtc's
> intended audience.
>
> What excites me most about it is the ability to do in-the-building
> videoconferences without having to go over the internet for them.  As
> for going over the internet... I think the cable industry is too
> broken (currently) for any e2e congestion control algorithm to work at
> all with serious cross traffic, with up and downlinks having 2 or more
> seconds of buffering each. (unless fixed by one of the algorithms
> above in a 3rd party router) FIOS and DSL users have a fighting
> chance, 3g and LTE less so, and wifi needs some serious work.
>
> So at the moment I'd like to be addressing the fixable e2e problems in
> gcc (it can't even compete with itself!) and wanted to be able to
> break things out on tuples to look at the data...
>
> before drawing any conclusions on how to multiplex video flows.
>
> My gut tells me you want one tuple per voice and per video flow, AND
> to manage their congestion indicators together, but depending on the
> level of statistical multiplexing of the video flows they could go on
> a single tuple.

That looks approximately sensible to me too.

I spent quite a bit of time arguing in AVT* that setting this kind of 
rule in the structure of the signalling was a grave architectural 
mistake (which was one of the impetuses for the BUNDLE work in the first 
place); that we needed to be free to experiment with this to figure out 
what worked, and change that in deployment, not in specs.

From what you write above, it seems that one would wish to harness the 
network's (new) tendency to discriminate against high-volume flows in 
order to get audio through with near zero loss - but I don't want to 
embed that assumption into the application; I'd much prefer that the 
application could make an explicit decision on what's important to it, 
and that ways be found to let the network respect that wish.

Which may lead us back very close to the guidelines that were enshrined 
in RTP and SDP specifications 20 years ago .... but then again, it might 
not.

[This looks so good as the end statement of the message, so it seems a 
pity that I need to add some specifics below. Other people commenting .. 
feel free to delete below this line.]


>
>>> In current implementations, is it possible to force flows onto
>>> different tuples?
>>
>> Absolutely - just don't negotiate bundling.
> Pointer to a working example?

People have been doing this by editing out the line in the SDP that 
starts with
"a=group:BUNDLE"

More elegant solutions should be deployed.
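For concreteness, here is a minimal sketch of that SDP edit (the `a=group:BUNDLE` attribute is real SDP grouping syntax; the helper function and the sample SDP fragment are mine, and real SDP munging needs more care than this):

```python
# Illustrative sketch: disable bundling by dropping the "a=group:BUNDLE"
# line from an SDP blob before handing it back to the peer connection.

def strip_bundle(sdp):
    """Remove any a=group:BUNDLE line from a CRLF-delimited SDP string."""
    return "\r\n".join(
        line for line in sdp.split("\r\n")
        if not line.startswith("a=group:BUNDLE")
    )

sdp = "v=0\r\na=group:BUNDLE audio video\r\nm=audio 9 UDP/TLS/RTP/SAVPF 111\r\n"
munged = strip_bundle(sdp)  # BUNDLE group removed, media lines kept
```

With the group attribute gone, the endpoints fall back to negotiating separate transports (and hence separate 5-tuples) per m= line.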

>
> keep hoping there's working ipv6 code out there too, getting the stun
> and turn stuff out of the loop long enough to look at simpler cases.

I think you're already copied on 
https://code.google.com/p/webrtc/issues/detail?id=1406


>
>>>    What are the other flaws in suggesting or requiring
>>> a different tuple?
>>
>>
>>> (note I'm not suggesting bulk data traffic get unique 5 tuples per flow)
>>
>> I wouldn't either.
>> In current deployments, it's multiplexed with the media traffic on the same
>> 5-tuple too - what would you suggest advising it to be bundled with, and
>> why?
> My understanding of the use case for that is that it is for p2p file
> transfer. Given how prone to failure setup has been for this in other
> protocols it seems to make some sense to bundle it on the video
> tuple...
>
>> (dropping the rest of the message - this should be enough for one thread)
>>
>
>