Re: [rmcat] How we should handle feedback, and where the congestion should run

"Michael Ramalho (mramalho)" <mramalho@cisco.com> Tue, 10 November 2015 15:34 UTC

From: "Michael Ramalho (mramalho)" <mramalho@cisco.com>
To: "Xiaoqing Zhu (xiaoqzhu)" <xiaoqzhu@cisco.com>, Randell Jesup <randell-ietf@jesup.org>
Thread-Topic: [rmcat] How we should handle feedback, and where the congestion should run
Thread-Index: AQHRGGgZJYB2cv99w0q45wB+uMJ8Fp6PjtIAgADP6wCAABOIAIAEU3IA///wJeA=
Date: Tue, 10 Nov 2015 15:33:59 +0000
Message-ID: <b95e7562004b4ff2ac245f297c948dfe@XCH-ALN-017.cisco.com>
References: <563BF7C3.40500@jesup.org> <2CEE6E71-BCDC-4778-88D1-8EDE87BAAE4D@ifi.uio.no> <563CD0BE.1010807@jesup.org> <D262BB29.29148%xiaoqzhu@cisco.com> <563D8F8A.1050406@jesup.org> <D266887D.2946B%xiaoqzhu@cisco.com>
In-Reply-To: <D266887D.2946B%xiaoqzhu@cisco.com>
Archived-At: <http://mailarchive.ietf.org/arch/msg/rmcat/bvxiQYWv57sn0LM_zD9gDCwtjfI>
Cc: "rmcat@ietf.org" <rmcat@ietf.org>
Subject: Re: [rmcat] How we should handle feedback, and where the congestion should run
List-Id: "RTP Media Congestion Avoidance Techniques \(RMCAT\) Working Group discussion list." <rmcat.ietf.org>

+1 to what Xiaoqing said below.

In my discussion time at the microphone I mentioned that all the information needed in the feedback for a per-packet feedback reporting scheme (defined as information on every packet reception, sent in one or more reverse-direction packets but at a rate much lower than the send packet rate) is highly compressible.

If I assume for the purpose of this discussion that we won't use a transport-wide sequence number (because you can account for all sequence numbers IF you know all the SSRCs you want in the aggregate), then you need to compress the equivalent of two pieces of information: 1) the {sequence number + SSRC} mapping (to know the gaps in the aggregate) and 2) the receive timestamps for packets in the immediate past.

If I further assume RTCP XR-like reporting with overlap in the packet reception data (i.e., some portion of the information on older packets was also included in one or more previous reports), I mentioned that one could run an experiment on how efficiently this data compresses with a "trivial/simple" compressor. Perhaps a delta on sequence numbers, a delta on timestamps, and some form of entropy or run-length encoding?
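To make the "trivial/simple" compressor idea concrete, here is a hypothetical sketch (function names and parameters are mine, and it assumes a single aggregate sequence space, i.e., it skips the SSRC-mapping part): a delta on sequence numbers and a delta on timestamps, with the deltas packed as base-128 varints in place of a full entropy coder.

```python
def varint(n: int) -> bytes:
    """Encode a non-negative integer as a base-128 (LEB128-style) varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)   # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def compress_report(packets):
    """packets: list of (seq, rx_time) tuples sorted by seq, rx_time in
    100-microsecond units.  Delta-encodes both fields; losses show up as
    sequence-number deltas greater than 1."""
    out = bytearray()
    prev_seq, prev_ts = packets[0]
    out += varint(prev_seq) + varint(prev_ts)   # first entry in full
    for seq, ts in packets[1:]:
        out += varint(seq - prev_seq)           # usually 1 -> one byte
        out += varint(ts - prev_ts)             # inter-arrival time
        prev_seq, prev_ts = seq, ts
    return bytes(out)

# 49 received packets, 20 ms apart (200 x 100 us), one loss at seq 110
pkts = [(s, s * 200) for s in range(100, 150) if s != 110]
blob = compress_report(pkts)
print(len(blob), "bytes for", len(pkts), "packets")
```

With 20 ms inter-arrivals at 100-microsecond granularity this comes out to roughly 3 bytes (~24 bits) per reported packet; a real entropy coder should do better, since the deltas are highly repetitive.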

If we can come to agreement on the rough syntax of such information, I'd like to see a back-of-the-envelope calculation of how many bits (on average) this compression would produce, with reasonable assumptions on receive-time granularity (~100 microseconds) and a reasonable number of SSRCs (for simulcast situations) in the aggregate.
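As a starting point for that back-of-the-envelope calculation, here is one possible set of assumptions (every number below is mine and purely illustrative, including the 24 bits/packet figure for the compressed {sequence number, timestamp} data):

```python
# Illustrative back-of-envelope: feedback overhead of per-packet reporting.
# All parameter choices are assumptions, not working-group agreements.
media_rate_bps   = 1_000_000   # 1 Mbps video flow
avg_pkt_bytes    = 1200        # typical RTP packet size
report_interval  = 0.1         # one RTCP XR report every 100 ms
bits_per_packet  = 24          # delta-coded seq + 100-us timestamp (estimate)
rtcp_fixed_bytes = 60          # IP/UDP + RTCP/XR block headers (estimate)

pkts_per_sec    = media_rate_bps / (8 * avg_pkt_bytes)   # ~104 pkt/s
pkts_per_report = pkts_per_sec * report_interval         # ~10.4 packets
report_bits     = rtcp_fixed_bytes * 8 + pkts_per_report * bits_per_packet
feedback_bps    = report_bits / report_interval

print(f"{pkts_per_sec:.0f} pkt/s, {feedback_bps / 1000:.1f} kbps feedback "
      f"({100 * feedback_bps / media_rate_bps:.2f}% of media rate)")
```

Under these assumptions the feedback lands well under 1% of the media rate; note the fixed RTCP/XR header cost dominates at this report interval, so overlap (re-reporting old packets) would be relatively cheap.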

Then we could re-run the experiment with a "transport-wide sequence number" in each sent packet (we know exactly how much bandwidth that takes). The report would then not need the SSRC-mapping part described above (because we have sequence numbers on the aggregate), and the resulting information may compress even better (I really don't have a good feel for this - perhaps Stefan does).

If the result of this thought experiment is that the number of bits per RTCP XR feedback packet isn't too great (i.e., the bandwidth penalty relative to the feedback each proposal uses now isn't too great), then we can think about developing this per-packet feedback approach some more.

Thoughts? Anyone want to take a stab at: 1) the format, 2) the RTCP XR reporting interval (e.g., every 100 ms), 3) the overlap in packet reporting (1/3 new packets, 2/3 old packets, similar to the 1-out-of-3 RTCP loss assumption for DTMF)?
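On item 3, the 1/3-new / 2/3-old overlap could be structured like this hypothetical sketch (chunk sizes and names are arbitrary): each report carries the newest chunk of packets plus the two previous chunks, so every packet is reported three times before it ages out.

```python
def reports(seqs, new_per_report=10, copies=3):
    """Split `seqs` into chunks of `new_per_report` packets.  Each report
    carries the newest chunk plus the previous (copies - 1) chunks, so a
    packet appears in `copies` consecutive reports (robust to RTCP loss)."""
    chunks = [seqs[i:i + new_per_report]
              for i in range(0, len(seqs), new_per_report)]
    return [[s for chunk in chunks[max(0, i - copies + 1):i + 1] for s in chunk]
            for i in range(len(chunks))]

rpts = reports(list(range(30)))   # 3 reports, 10 new packets each
# packet 5 is old enough to have appeared in all 3 reports issued so far;
# packet 25 is still new and has only been reported once
assert sum(5 in r for r in rpts) == 3
assert sum(25 in r for r in rpts) == 1
```

Losing any one RTCP report then still leaves two chances to learn about each packet, mirroring the 1-out-of-3 loss assumption.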

Michael Ramalho

-----Original Message-----
From: rmcat [mailto:rmcat-bounces@ietf.org] On Behalf Of Xiaoqing Zhu (xiaoqzhu)
Sent: Monday, November 09, 2015 6:47 PM
To: Randell Jesup
Cc: rmcat@ietf.org
Subject: Re: [rmcat] How we should handle feedback, and where the congestion should run

Thanks, Randell, for your comments regarding the bidirectional test case results and also what performance metrics to show. Agree that it will be good to have a side-by-side comparison between scenarios with congested vs. uncongested feedback path. Will add such graphs in future.

Back to the more general discussions on sender-based vs. receiver-based CC: the discussions here are helpful so we are aware of the advantages and limitations of either approach. To me, one additional advantage of the sender-based approach is that the sender is right at the "place of action", and may have better knowledge of the limitations and current status of the live encoder (e.g., how fast the encoder is reacting to a ramp-up rate request). So, even if the CC rate is recommended by a receiver, the sender will have the final say on what rate to encode at and send.

That said, it's probably still possible to combine the best of both worlds. The original GCC proposal has both a sender-based and a receiver-based component, whereas the current design has moved to sender-based. Maybe there are some good lessons to be learned from that "migration".


Currently, since all three candidate schemes are sender-based, one obvious first step is for us to figure out how often and how much feedback from the receiver is needed to support this mode of operation, and whether the amount of required overhead is within reason.

For the scenario with bidirectional calls (A<=>B), there should also be the opportunity to piggy-back feedback for A->B along with payload for B->A, which typically saves on feedback bandwidth. We may either want to consider that option now, or wait till after we have a good understanding of the feedback needed for the basic case of one-directional calls.

- Xiaoqing


On 11/6/15, 11:43 PM, "rmcat on behalf of Randell Jesup"
<rmcat-bounces@ietf.org on behalf of randell-ietf@jesup.org> wrote:

>On 11/6/2015 11:33 PM, Xiaoqing Zhu (xiaoqzhu) wrote:
>> In the eval-test-case draft, there is currently one test case 
>> dedicated for exploring impact of two-way traffic (Sec. 5.3.  
>> Congested Feedback Link with Bi-directional RMCAT flows). Some 
>> corresponding graphs can be found below (they may not reflect the 
>> most up-to-date algorithm performance).
>>
>> * NADA: http://www.ietf.org/proceedings/92/slides/slides-92-rmcat-4.pdf
>> (page #9 and #10)
>> * SCReAM:
>> http://www.ietf.org/proceedings/interim/2014/11/09/rmcat/slides/slides-interim-2014-rmcat-1-1.pdf (page #9)
>> (Sorry, I was not able to find one with GCC yet.).
>
>Thanks!  Those don't look too bad, though it can be hard to drill down 
>into the details due to the PDF and thickness of the lines (and 
>dense-ness of the graph); I found the closeups at the end of your pdf 
>especially interesting (though not focused on this case).  One question 
>that will be interesting as we develop a feedback mechanism is going to 
>be the requirements from the algorithms on it, and the bandwidth it uses.
>
>I note NADA seems to be slower to respond to delay, though it also 
>seems to do much better against competing TCP flows (and likely the two 
>items are linked, which will make for some interesting tradeoffs and 
>tuning to do).
>
>> Would like to know whether you think that current design of the test 
>>case  has captured gist of the issue encountered by two-way calls, or 
>>any  suggestions on additional test scenarios in that regard?
>
>I think showing a graph similar to Scream's, with congested and 
>non-congested feedback paths is a useful comparison - much easier to 
>see the impact.  Feedback bandwidth would be useful, though that might 
>be fairly fixed right now (multiple of packet rate), so unless it's 
>surprising it may not need graphing (just noting on the graph).
>
>--
>Randell Jesup -- rjesup a t mozilla d o t com
>Please please please don't email randell-ietf@jesup.org!  Way too much spam
>