Re: [icnrg] Some background on draft-oran-icnrg-flowbalance-00.txt

"David R. Oran" <> Sun, 04 August 2019 14:21 UTC

Return-Path: <>
Received: from localhost (localhost []) by (Postfix) with ESMTP id 30566120019 for <>; Sun, 4 Aug 2019 07:21:38 -0700 (PDT)
X-Virus-Scanned: amavisd-new at
X-Spam-Flag: NO
X-Spam-Score: -1.9
X-Spam-Status: No, score=-1.9 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001] autolearn=ham autolearn_force=no
Received: from ([]) by localhost ( []) (amavisd-new, port 10024) with ESMTP id 1m4UcgSfyycl for <>; Sun, 4 Aug 2019 07:21:36 -0700 (PDT)
Received: from ( [IPv6:2607:fca8:1530::c]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by (Postfix) with ESMTPS id 2E187120025 for <>; Sun, 4 Aug 2019 07:21:36 -0700 (PDT)
Received: from [] ([IPv6:2601:184:4081:19c1:3461:3d25:3cea:41b9]) (authenticated bits=0) by (8.14.4/8.14.4/Debian-4+deb7u1) with ESMTP id x74ELVhM009324 (version=TLSv1/SSLv3 cipher=AES256-GCM-SHA384 bits=256 verify=NO); Sun, 4 Aug 2019 07:21:33 -0700
From: "David R. Oran" <>
To: "Naveen Nathan" <>
Date: Sun, 04 Aug 2019 10:21:30 -0400
X-Mailer: MailMate (1.12.5r5643)
Message-ID: <>
In-Reply-To: <>
References: <> <> <> <>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Archived-At: <>
Subject: Re: [icnrg] Some background on draft-oran-icnrg-flowbalance-00.txt
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Information-Centric Networking research group discussion list <>
List-Unsubscribe: <>, <>
List-Archive: <>
List-Post: <>
List-Help: <>
List-Subscribe: <>, <>
X-List-Received-Date: Sun, 04 Aug 2019 14:21:38 -0000

Thanks again for these really useful follow-up comments. I hope to get 
comments from others before posting an updated draft.

On 4 Aug 2019, at 2:37, Naveen Nathan wrote:

> See responses below.
> - Naveen
> On Sun, 4 Aug 2019, at 12:08 AM, David R. Oran wrote:
>>> [...]
>>> Regarding the consumer under-estimating (i.e. too-big objects), I
>>> think it might be good to elaborate on a different dimension, such
>>> as lossy versus lossless protocols. E.g. video streaming can deal
>>> with the loss of a large object, and be informed of what to request
>>> in subsequent chunks. But this can get tricky with VBR-style
>>> encoding. Then again, this is perhaps a bad example, since most
>>> video streams will use a manifest mechanism, so requests would have
>>> accurate estimates.
>> The issue here isn’t the loss, since any dynamic congestion control
>> algorithm will either wind up under-allocating bottleneck links by a
>> large fraction, or occasionally allow buffers to overflow and drop
>> packets. I didn’t put much of a tutorial on basic congestion 
>> control
>> in the document since the reader is assumed familiar with the cited
>> references. What may be worth reiterating is that every congestion
>> control algorithm has an explicit fairness goal and an associated
>> objective function (usually either min-max fairness or proportional
>> fairness). If your fairness is to be based on resource usage, pure
>> interest counting doesn’t do the trick, since a consumer asking for
>> a large thing can saturate a link and shift loss to consumers asking
>> for small things.
>> Does that make sense? Should the document say more about this?
> Yes, and yes.
> So I think the congestion control algorithm (and the knowledge these
> algorithms lack) is sufficiently explained in section 3. I think you
> should also incorporate the point about fairness you made in the
> response above into section 3.
Makes sense to add this to Section 3. I did that and it will be in the 
next version posted.
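
To make the fairness point concrete, here is a small sketch (names and
numbers are illustrative only, not from the draft): two consumers share
a bottleneck, one requesting 8 KB objects and the other 1 KB objects.
Splitting an interest budget evenly skews delivered bytes 8:1 toward
the large-object consumer, while splitting a byte budget keeps the
shares even.

```python
# Illustrative sketch: pure interest counting vs. byte-based accounting
# when two consumers share a bottleneck link.

def share_of_bottleneck(object_sizes, quota, by_bytes):
    """Return per-consumer bytes delivered under a simple fairness policy.

    object_sizes: bytes of the object each consumer requests per interest
    quota: total budget per round (interests if by_bytes is False,
           bytes if by_bytes is True)
    """
    n = len(object_sizes)
    if by_bytes:
        # Byte-based accounting: split the link's byte budget evenly,
        # regardless of how big each consumer's objects are.
        return [quota / n for _ in object_sizes]
    # Pure interest counting: split the interest budget evenly; bytes
    # delivered then scale with each consumer's object size.
    per_consumer_interests = quota / n
    return [per_consumer_interests * s for s in object_sizes]

# Consumer 0 asks for 8 KB objects, consumer 1 for 1 KB objects.
sizes = [8192, 1024]
print(share_of_bottleneck(sizes, quota=100, by_bytes=False))      # skewed 8:1
print(share_of_bottleneck(sizes, quota=100_000, by_bytes=True))   # even split
```

This is why the draft bases fairness on the expected data size carried
in the interest rather than on interest counts alone.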

>>> I also think you should mention that this mechanism can be used to
>>> inform a
>>> congestion control algorithm. I do note that this is explicitly
>>> mentioned in
>>> section 3.
>> Hmmm, maybe I need to be more direct?
> Yes. But I forgot to say earlier that it should also be mentioned at
> the end of section 1 (the introduction).
Also done, with references to min-max and proportional fairness added 
to the introduction.

>>> About section 3.4, where you describe interest aggregation. You
>>> recommend the second policy of giving the large object to
>>> correctly/over-estimating consumers, whereas underestimating
>>> consumers get T_MTU_TOO_LARGE (also, does "MTU" make sense instead
>>> of "REQUEST"?). Shouldn't all consumers get it, if the link isn't
>>> congested? I think T_MTU_TOO_LARGE should only be sent if there's
>>> contention on the link and insufficient bandwidth can be allocated.
>> That’s of course a possibility. I decided to recommend the error for
>> the following reasons (which perhaps ought to be explicitly stated
>> in the spec):
>> 1. The link can become congested quite quickly after the queuing
>> decision is made, especially if the data has a long link-occupancy
>> time, so this is the safer alternative.
>> 2. The cost of returning the error is only one link RTT, since the
>> consumer can immediately re-issue the interest with the correct size
>> and pick up the cached object from the upstream forwarder’s CS.
>> 3. I tried to completely avoid the issues of aggregate resource
>> control and the associated billing swamp, since returning the data
>> raises the messy issue of whether to “charge” the consumer for the
>> actual used bandwidth, or only the requested bandwidth via the
>> expected data size. The rabbit hole goes deeper if you add
>> differential QoS to the equation (another subject intentionally not
>> talked about in the document, for many reasons), as consumers
>> “playing games” by intentionally underestimating so their interests
>> get satisfied when links aren’t congested gets you into all the
>> considerations of malicious actors discussed later.
> I think it should be explicitly stated, since intuitively, if you
> have the resources to send the data to the client, you would. But
> this goes against the grain for the reasons you stated above.
Good idea, I added a slightly edited version of what I wrote above.
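
For concreteness, the recommended behavior discussed above can be
sketched as follows. Only the T_MTU_TOO_LARGE name comes from the
draft; the function and its signature are illustrative, not an actual
forwarder API.

```python
# Illustrative sketch of the recommended forwarder policy: when the
# arriving Data is larger than the expected data size the consumer
# stated in its Interest, return the error unconditionally, even if
# the outgoing link is currently uncongested.

T_MTU_TOO_LARGE = "T_MTU_TOO_LARGE"

def forward_decision(expected_size: int, actual_size: int) -> str:
    """Decide what to send downstream for one pending Interest."""
    if actual_size > expected_size:
        # Underestimating consumers get the error regardless of link
        # state: the link may congest right after the queuing decision,
        # the retry costs only one link RTT (the Data is now cached in
        # the upstream forwarder's CS), and the forwarder avoids the
        # question of charging actual vs. requested bandwidth.
        return T_MTU_TOO_LARGE
    # Correctly estimating or over-estimating consumers get the Data.
    return "DATA"
```

A consumer receiving the error simply re-issues the interest with a
corrected expected data size, as described in point 2 above.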

> _______________________________________________
> icnrg mailing list