Re: [icnrg] [Ndn-interest] Congestion Control related draft

"David R. Oran" <daveoran@orandom.net> Thu, 08 August 2019 21:44 UTC

From: "David R. Oran" <daveoran@orandom.net>
To: "Klaus Schneider" <klaus@cs.arizona.edu>
Cc: ndn-interest <ndn-interest@lists.cs.ucla.edu>, icnrg@irtf.org
Date: Thu, 08 Aug 2019 17:44:03 -0400
Message-ID: <EED54E7B-4464-4FA3-AA20-277A9ECC7D37@orandom.net>
In-Reply-To: <cbcd42f3-11fe-1009-6b25-504f2da235e1@cs.arizona.edu>
References: <AF576FE9-D69B-4772-BF7B-6B2EDB332D70@orandom.net> <cbcd42f3-11fe-1009-6b25-504f2da235e1@cs.arizona.edu>
Archived-At: <https://mailarchive.ietf.org/arch/msg/icnrg/ptU6XoB3muKQyckMF-YVatjrBek>

On 8 Aug 2019, at 15:43, Klaus Schneider wrote:

> Dear Dave,
>
> Thanks for the draft. I want to ask a related question.
>
> As far as I understand, the Hop-by-Hop flow balance congestion control 
> makes the following two assumptions:
>
> 1. The capacity of each link is fixed and you know its value (e.g. 10 
> Gbps)
Current schemes mostly do, but it isn’t an inherent characteristic. 
Schemes like the hop-by-hop Interest shaper can adjust within one or two 
link RTTs. One thing to be careful of is deep transmit buffering when 
you can only shape on enqueue: if you can’t adjust quickly, the link 
will stay saturated for a long time.
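To make the shaper idea concrete, here’s a toy sketch (my own, not code from any NDN/CCNx implementation): a token bucket where tokens are bytes of expected returning Data, so Interests are paced against the reverse link’s capacity. The class and parameter names are made up for illustration.

```python
import time

class InterestShaper:
    """Toy hop-by-hop Interest shaper (illustrative, not from the draft).

    Tokens represent bytes of Data the reverse link can return; an
    Interest is forwarded only when enough credit has accumulated for
    its expected Data size.
    """

    def __init__(self, link_capacity_bps, burst_bytes):
        self.rate = link_capacity_bps / 8.0   # Data bytes/second the link can carry back
        self.burst = burst_bytes              # cap on accumulated credit
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, expected_data_bytes, now=None):
        """Return True if the Interest may be forwarded now."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= expected_data_bytes:
            self.tokens -= expected_data_bytes
            return True
        return False  # caller queues or drops the Interest instead
```

Note that the refill rate is the knob a capacity estimator would turn; with a fixed known link (assumption 1) it’s just a constant.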

> 2. You can access the current queue size/occupancy of each outgoing 
> link (to check if it is congested)
>
Even better is to have a lower layer that can report what’s going on, 
rather than having to infer bandwidth changes from growing queue depth. 
A complicating factor of course is tunnels, where you have little to no 
idea what’s going on inside. That’s what you’re addressing below.
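The two options can be sketched side by side (again a toy of mine, all names hypothetical): one path infers a lower effective rate from a growing transmit queue, the other simply accepts an explicit report from the lower layer when one exists.

```python
class LinkMonitor:
    """Toy contrast of the two approaches above: inference from queue
    growth vs. an explicit lower-layer rate report."""

    def __init__(self, nominal_bps):
        self.rate_bps = nominal_bps
        self.prev_depth = 0

    def on_queue_sample(self, depth_bytes, interval_s):
        # Inference path: a persistently growing queue means we're sending
        # faster than the link can drain; back off by the excess.
        growth = depth_bytes - self.prev_depth
        self.prev_depth = depth_bytes
        if growth > 0:
            self.rate_bps = max(1.0, self.rate_bps - (growth * 8) / interval_s)

    def on_lower_layer_report(self, reported_bps):
        # Explicit path: trust the lower layer outright -- no inference lag.
        self.rate_bps = reported_bps
```

The inference path only ever learns after the queue has already grown, which is exactly the lag the explicit report avoids.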

I’ll also point out the (hopefully obvious) observation that while 
knowing data size isn’t a panacea, it’s hard to see how it could do 
*worse* than interest counting.

> I think these assumptions are fine for most networks, but I want to 
> ask about a special case where they don't hold.
>
>
> Specifically, consider a global ICN testbed deployment where NDN 
> routers are connected via IP tunnels that span an unknown number of IP 
> routers.
>
> Here, you don't know the effective tunnel capacity (since there is an 
> unknown amount of IP traffic). You can access the local router queue 
> occupancy, but it's not very useful (since congestion can happen 
> between two of the IP routers inside the tunnel).
>
Correct. Datagram tunnels suck. Maybe the LOOPS work in IETF might make 
some progress here, but I’m not optimistic. (see 
https://datatracker.ietf.org/wg/loops/about/)

> Assume you have full control over the NDN routers, but no control over 
> the IP routers. You can choose TCP or UDP tunnels.
>
> How would you design the congestion control?
>
>
> There are some existing designs for this case (e.g. 
> http://conferences2.sigcomm.org/acm-icn/2016/proceedings/p21-schneider.pdf 
> https://datatracker.ietf.org/meeting/interim-2017-icnrg-03/materials/slides-interim-2017-icnrg-03-sessa-icn-congestion-control-how-handle-unknown-and-varying-link-capacity-ahlgren/). 
> But none of them have been tested much in practice.
>
I’m familiar with both of these. I’ve only given a bit of thought to 
how to prevent congestion of Data packets coming back, since their loss 
incurs a big performance hit. (It’s one reason why the pure AQM 
approach in your ICN 2016 paper isn’t that appealing to me, but 
that’s a longer discussion.)

I suspect there is only so much you can do without changing the 
protocols. CCNx has hop-by-hop congestion NACKs, but as I recall NDN 
doesn’t, and Van recently gave a talk in which he argued that they are 
a certified “bad idea” or something equivalent. I disagree, but have 
not had much success in the past influencing the NDN design.

Another thing not done even in CCNx implementations is to ALWAYS reserve 
enough bandwidth on the reverse link to carry a congestion NACK back if 
you have to drop a Data packet due to congestion. That way recovery can 
be a lot quicker than waiting for a timeout, and if you support 
in-network retransmission, it can occur at the RTT between the 
bottleneck and the producer/cache rather than end-to-end.
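In miniature, the idea looks like this (my sketch; as noted, no implementation I know of actually does it): the congested node keeps a small side-queue for NACKs that is drained with strict priority, and a Data drop immediately generates a NACK there instead of leaving the downstream node to time out.

```python
from collections import deque

class ReverseLinkQueue:
    """Toy reverse-link queue with reserved room for congestion NACKs.

    NACKs are drained with strict priority, so one can always get back
    even when the Data queue is full and dropping.
    """

    def __init__(self, data_capacity):
        self.data_q = deque()
        self.nack_q = deque()          # reserved; never starved by Data
        self.data_capacity = data_capacity

    def enqueue_data(self, pkt):
        if len(self.data_q) >= self.data_capacity:
            # Congestion drop: turn it into an immediate NACK so recovery
            # (or in-network retransmission) starts now, not after a timeout.
            self.nack_q.append(("NACK-congestion", pkt["name"]))
            return False
        self.data_q.append(pkt)
        return True

    def dequeue(self):
        if self.nack_q:                # strict priority: NACKs first
            return self.nack_q.popleft()
        return self.data_q.popleft() if self.data_q else None
```

A real link would reserve a bandwidth share rather than absolute priority, but the strict-priority version keeps the sketch short.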

A third thing one could do is piggyback queue state on returning Data or 
NACK packets. This helps deal with the general case of varying 
bandwidth, even when the path is asymmetric and there aren’t 
convenient Interests going in the opposite direction to piggyback it on. 
Hysteresis is of course the big enemy here.
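A sketch of the piggybacking, with all names hypothetical: NDNLPv2 has a coarse CongestionMark field, but the richer “QueueState” fraction below is my invention, as is the hysteresis band the downstream shaper uses to avoid reacting to every wiggle.

```python
def piggyback_queue_state(packet, depth_bytes, limit_bytes):
    """Stamp local queue occupancy onto a returning Data/NACK packet.

    'QueueState' is a hypothetical hop-by-hop field (a fraction of the
    queue limit), not a TLV defined by CCNx or NDN.
    """
    packet.setdefault("hbh", {})["QueueState"] = round(depth_bytes / limit_bytes, 3)
    return packet

def adjust_rate(current_rate, queue_state, high=0.8, low=0.2, step=0.9):
    """Downstream reaction with a hysteresis band: only cut the rate when
    occupancy is above `high`, only raise it when below `low`."""
    if queue_state > high:
        return current_rate * step
    if queue_state < low:
        return current_rate / step
    return current_rate
```

The dead band between `low` and `high` is the crude anti-hysteresis measure; tuning it against stale piggybacked state is exactly the hard part.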


> Best regards,
> Klaus
>
Thanks for the feedback!

> On 8/3/19 7:14 AM, David R. Oran wrote:
>> NDN folks,
>>
>> Those of you interested in congestion control for NDN might care to 
>> take a look at the draft I submitted to ICNRG about doing a better 
>> job of resource allocation than current interest-counting schemes. 
>> You can find it at:
>>
>> https://datatracker.ietf.org/doc/draft-oran-icnrg-flowbalance/.
>>
>> This is something I’ve been noodling on for a long time because of 
>> my continuing interest in congestion control for ICN. It derives from 
>> some work I did at Cisco a few years ago which I believe has become 
>> timely. A number of interesting ICN use cases have either highly 
>> variable (e.g. video), very large (e.g. scientific data), or very 
>> small (e.g. IoT sensor) data objects, while all the congestion 
>> control schemes currently published (to my knowledge) only do 
>> interest counting which can’t account for this variability.
>>
>> I’m obviously super-interested in what people think of this 
>> approach. It is deceptively simple (only requires one new TLV with 
>> easy switch from interest counting to fine-grained byte-based 
>> resource control for congestion and possibly QoS extensions). 
>> However, like most things with NDN or CCNx, it has interesting 
>> subtleties with respect to interest aggregation and consumers trying 
>> to game the system, both of which are covered in the spec.
>>
>> I’d appreciate it if you’d post your comments on the ICNRG list 
>> <icnrg@irtf.org>; in order to have one place to discuss it. However, 
>> if you’d prefer to reply to non-interest or to me privately, 
>> that’s ok as long as you don’t mind if I repost responses and 
>> further discussion in ICNRG.
>>
>> As a final note, this has Cisco IPR on it. I’ve talked to the Cisco 
>> folks and they’ll put in the usual Cisco IETF-oriented IPR 
>> declaration when they get back from vacation (i.e. don’t hold your 
>> breath).
>>
>> Thanks much!
>>
>> DaveO
>> _______________________________________________
>> Ndn-interest mailing list
>> Ndn-interest@lists.cs.ucla.edu
>> http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-interest
>
> _______________________________________________
> icnrg mailing list
> icnrg@irtf.org
> https://www.irtf.org/mailman/listinfo/icnrg

DaveO