Re: [rmcat] How we should handle feedback, and where the congestion should run

Randell Jesup <randell-ietf@jesup.org> Fri, 06 November 2015 16:09 UTC

References: <563BF7C3.40500@jesup.org> <2CEE6E71-BCDC-4778-88D1-8EDE87BAAE4D@ifi.uio.no>
To: rmcat@ietf.org
From: Randell Jesup <randell-ietf@jesup.org>
Message-ID: <563CD0BE.1010807@jesup.org>
Date: Fri, 06 Nov 2015 11:09:34 -0500
In-Reply-To: <2CEE6E71-BCDC-4778-88D1-8EDE87BAAE4D@ifi.uio.no>
Archived-At: <http://mailarchive.ietf.org/arch/msg/rmcat/C3WVpSt0CXhiF-JpVXjE7nZXzx4>
Subject: Re: [rmcat] How we should handle feedback, and where the congestion should run

On 11/6/2015 2:52 AM, Michael Welzl wrote:
> The big problem I see with receiver-side schemes is that you don't only need to standardize 1) the feedback message, you also need to standardize the sender behavior in the absence of feedback.
> 2) If feedback is missing for a while, maybe that's a good thing, as a result of feedback suppression?  So we just gradually and blindly increase the rate?  See early versions of the GCC scheme.
> 3) If feedback is missing for a long time, at some point you HAVE to have a timeout and react accordingly.
>
> Can we agree on all these things? At least 1) AND 3) are necessary.

I agree that we'd need to standardize that, when using a receiver-side 
algorithm with a generic message (i.e. not a matched sender with a 
custom/private message), the sender side must be prepared to deal with 
1) and 3) and likely 2), and we would need to specify the sender-side 
behavior in response to the feedback fairly completely (though that 
behavior can be simple).  I.e. the sender will attempt to send at the 
rate requested in a feedback message; will inform codecs of the new 
rate as fast as possible; and will optionally pace data entering the 
network so that it exceeds neither the congestion rate nor a 
separately supplied pacing rate.  The time periods for calculating 
both rates should also be provided, though the sender may not be able 
to use an arbitrary period for calculating rate (i.e. encoders may not 
be flexible on that - it's easier to do for pacing).
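
To make that concrete, here's a rough sketch (Python, with made-up 
names - this isn't from any draft) of the sender-side reaction to such 
a generic feedback message:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RateFeedback:
        target_rate_bps: int            # rate requested by the receiver-side CC
        pacing_rate_bps: Optional[int]  # optional separate pacing rate
        rate_window_ms: int             # period over which the rates are computed

    class Sender:
        def __init__(self, encoder, pacer):
            self.encoder = encoder      # hypothetical codec wrapper
            self.pacer = pacer          # hypothetical packet pacer

        def on_feedback(self, fb: RateFeedback):
            # Inform codecs of the new rate as fast as possible.
            self.encoder.set_target_bitrate(fb.target_rate_bps)
            # Pace data entering the network so it exceeds neither the
            # congestion rate nor the separately supplied pacing rate.
            pacing = fb.pacing_rate_bps or fb.target_rate_bps
            self.pacer.set_rate(min(pacing, fb.target_rate_bps), fb.rate_window_ms)

The encoder/pacer methods here are placeholders; the point is only 
that the required sender behavior is small enough to specify.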

In terms of 1/2/3 above: if we do this, the receive-side algorithm 
could configure the sender via these messages and/or negotiation.  The 
feedback messages can (optionally) carry control information that 
tells the sender what to do when feedback goes missing, and when.  For 
example, they could have a feedback-timeout field, or timeout and 
what-to-do-at-timeout fields.  We could also define an "are you still 
there" RTCP message that can be sent when we see feedback timeouts or 
are getting close to them, asking the receiver to resend (and exactly 
how to respond to those can be part of the receive-side CC algorithm).  
It could include updated SR/RR stats for any RTP flows, letting the 
receiver side update its idea of the return-path congestion.
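
As a strawman (again Python-ish pseudocode with invented field names, 
purely to illustrate the kind of control information I mean):

    from dataclasses import dataclass
    from enum import Enum

    class TimeoutAction(Enum):
        HOLD = 1        # keep sending at the current rate
        RAMP_DOWN = 2   # gradually decrease the rate
        MIN_RATE = 3    # drop to a configured minimum rate

    @dataclass
    class FeedbackControl:
        feedback_timeout_ms: int    # feedback counts as "missing" after this long
        on_timeout: TimeoutAction   # what the sender does at that point

    def check_feedback(now_ms, last_feedback_ms, ctrl, sender):
        if now_ms - last_feedback_ms > ctrl.feedback_timeout_ms:
            # Probe with an "are you still there" RTCP message, then
            # apply whatever action the receiver-side CC configured.
            sender.send_are_you_still_there()
            sender.apply_timeout_action(ctrl.on_timeout)

(send_are_you_still_there / apply_timeout_action are of course 
hypothetical; the receive-side algorithm would define the details.)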

This brings up a more general point for coupling congestion control (I 
forget if this has been discussed): not only can it make sense to 
couple multiple CCs running in the same direction, but also to inform 
CCs of return-path congestion (and expected network delay!) that 
feedback (RTCP) will contend with.  This applies no matter which end 
the algorithms run at.
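
In interface terms this could be as small as one extra call on the CC 
(hypothetical, just to show the shape):

    class CongestionController:
        def on_return_path_report(self, reverse_rtt_ms: float,
                                  reverse_loss_fraction: float) -> None:
            # Knowing reverse-path RTT/loss lets the CC discount late or
            # missing feedback instead of treating it as forward-path
            # congestion, whichever end the algorithm runs at.
            ...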

I'm sure more specification of a standard receive-side feedback 
message would be needed if we decide to go that way; the above would 
be a start.  I do think it's *very* worthwhile defining a standard for 
feedback (whichever end we end up with) such that new single-ended 
algorithms can be trialed or deployed (and it coincidentally provides 
a way for the CC algorithm to probe whether the other side supports 
something compatible, without mandating that it be part of a 
higher-level negotiation).

> To me, this seems like creating difficulties for a problem that MAY also be a non-problem. Note that TCP sends tons of feedback, maybe including SACK blocks, across the same thin uplinks that we're considering. I don't think there is conclusive evidence of that being a problem.

True - but the constraints on TCP are very different: forward delay is 
encouraged in order to maximize throughput, and traffic is often 
primarily unidirectional (so the feedback path is uncontended and 
often close to loss-free, especially where the bottleneck link is the 
local last mile or WiFi).  For us, by contrast, the links are 
typically saturated in both directions while we're trying to keep 
delays low, and every packet of feedback takes away from payload.

> => I tend to think that moving everything to the sender side is an easy way out.

This may still be so, since in many ways (see above) it's easier to 
feed back raw data and keep *all* decisions at the sender side.  
(Though a standard sender-side feedback packet would still need added 
rules on how to operate under reverse-path contention/loss.)  However, 
as several of the candidates mentioned, there are issues with 
reverse-path bandwidth use - and I don't think they've really gotten 
much into two-way tests yet; all the graphs I see are for one-way.  
Two-way will complicate things - and it's a critical part of the 
use cases.

A send-side setup with feedback from the receiver for each packet 
(generic or not!) will almost certainly require careful compression 
decisions (and feedback-rate decisions, perhaps semi-controlled by the 
sender - see above, similar to the receive-side discussion).  For this 
reason alone, I'm not sure RTCP-XR will work, even with modifications 
or additions - but I haven't looked much at RTCP-XR other than to note 
that it has a LOT of stuff and isn't very thin.  Take that with a 
grain of salt.
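
For a rough sense of scale (assumed numbers, not measurements - just 
back-of-envelope):

    # Reverse-path cost of per-packet feedback, to show why batching/
    # compression of reports matters on a saturated reverse path.
    MEDIA_RATE_BPS = 1_000_000      # assume 1 Mbps of media in each direction
    RTP_PAYLOAD_BYTES = 1200        # assume typical packet size
    REPORT_BYTES = 16               # assume one per-packet report entry
    HEADER_BYTES = 40 + 8           # IPv4 + UDP + minimal RTCP header

    pkts_per_s = MEDIA_RATE_BPS / (RTP_PAYLOAD_BYTES * 8)   # ~104 packets/s

    def feedback_bps(reports_per_rtcp: int) -> float:
        """Feedback bitrate if each RTCP packet carries this many reports."""
        per_report = REPORT_BYTES + HEADER_BYTES / reports_per_rtcp
        return pkts_per_s * per_report * 8

    for n in (1, 10):
        bps = feedback_bps(n)
        print(f"{n:2d} report(s)/RTCP: ~{bps/1000:.0f} kbps "
              f"({100 * bps / MEDIA_RATE_BPS:.1f}% of the reverse media rate)")

With those assumptions you get roughly 53 kbps (~5%) unbatched versus 
~17 kbps (~2%) with 10 reports per RTCP packet - noticeable when the 
reverse path is already saturated with media.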

-- 
Randell Jesup -- rjesup a t mozilla d o t com
Please please please don't email randell-ietf@jesup.org!  Way too much spam