Re: [icnrg] I just submitted an update to my ICN QoSArch draft

"David R. Oran" <daveoran@orandom.net> Wed, 27 November 2019 21:00 UTC

From: "David R. Oran" <daveoran@orandom.net>
To: Anil Jangam <anjangam@cisco.com>
Cc: ICNRG <icnrg@irtf.org>
Date: Wed, 27 Nov 2019 16:00:32 -0500
Message-ID: <E24DD044-315B-4711-AAA7-5DD116E974B4@orandom.net>
In-Reply-To: <69D3AAAF-4022-4B79-8CC3-5F7BA1F4E5A7@cisco.com>
References: <157089431975.1372.17365919232442804449@ietfa.amsl.com> <5C68D385-F901-4A3A-88C9-350234E7C162@orandom.net> <69D3AAAF-4022-4B79-8CC3-5F7BA1F4E5A7@cisco.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/icnrg/G6MfDeX86E00FCArgop4L3Kusvo>
Subject: Re: [icnrg] I just submitted an update to my ICN QoSArch draft

On 16 Nov 2019, at 20:52, Anil Jangam (anjangam) wrote:

> Hello Dave,
>
> We have reviewed the QoS arch draft and added our comments in the 
> attached word document.
>
My responses (extracted from the comments in the word doc) are below. I 
will be issuing an -03 soon, after incorporating comments from Thomas 
Schmidt.



——
>> It does not propose detailed protocol machinery to achieve these
>> goals; it leaves these to supplementary specifications, such as
>> [I-D.moiseenko-icnrg-flowclass].

> We can also add a reference to the QoS treatment draft here.
Ok. Done.

>> Some background on the nature and properties of Quality of Service in 
>> network protocols

> The title length could be shortened, e.g. “Background on the nature and 
> properties of QoS in Network Protocols” or just “The nature and 
> properties of QoS in Network Protocols”.

I like the current title, but getting rid of “Some” makes sense.

> decrease the total resources available to carry user traffic.

Fixed.

>> If your resources are lightly loaded, you don't need it, as neither 
>> congestive loss nor substantial queueing delay occurs
> Better performance for time-sensitive applications can be 
> experienced with QoS applied in these conditions.
I disagree. If there’s nothing in the queue most of the time, 
there’s nothing to delay or drop in order to make other traffic go 
faster. No change.


>> If your resources are heavily oversubscribed, it doesn't save you. So 
>> many users will be unhappy that you are probably not delivering a 
>> viable service

> Yes, but we could exercise some control over who will be less unhappy.
My point is that not enough people will be made less unhappy for QoS to 
change things enough to be worth the price. Note also that I have a 
caveat later that QoS can benefit “dedicated or tightly managed networks 
whose economics depend on strict adherence to challenging service level 
agreements (SLAs)”. No change.

>> There is per-Interest/Data state at every hop of the path and 
>> therefore for each outstanding Interest, bandwidth for the data 
>> returning on the inverse path can be allocated.

> For outstanding Interest in the PIT? Just clarifying. I mean, the 
> bandwidth allocation shall not be in proportion to the total number of 
> aggregated Interests in the PIT.
In the absence of machinery like that suggested in my flow balance 
draft, then yes, the allocated bandwidth on a link will be proportional 
to the number of Interests sent on that link, which is effectively 
bounded by the number of PIT entries for Interests forwarded over that 
link. Since the question of how you account for bandwidth is independent 
of whether you have QoS machinery or not, the flow balance draft is only 
relevant in terms of how it can do a better job of fairness in 
congestion control. No change, but I’ll reconsider if others think the 
congestion control discussion warrants this additional material.
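
To make the accounting concrete, here is a minimal Python sketch (purely 
illustrative; PitEntry and MAX_DATA_SIZE are made-up names, and a real 
forwarder would use flow-balance hints rather than a fixed worst case):

from dataclasses import dataclass, field

# Worst-case Data size assumed per outstanding Interest (made-up value).
MAX_DATA_SIZE = 8800  # bytes

@dataclass
class PitEntry:
    name: str
    outgoing_faces: set = field(default_factory=set)

def reserved_data_bandwidth(pit, face_id):
    """Worst-case bytes of returning Data to expect on face_id, bounded
    by the number of PIT entries whose Interests went out on that face."""
    outstanding = sum(1 for e in pit if face_id in e.outgoing_faces)
    return outstanding * MAX_DATA_SIZE

# Two Interests outstanding on face 3 -> 2 * MAX_DATA_SIZE bytes reserved.
pit = [PitEntry("/a/1", {3}), PitEntry("/a/2", {3}), PitEntry("/b/1", {5})]
print(reserved_data_bandwidth(pit, 3))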


>> By accepting one Interest packet from a downstream node, implicitly 
>> this provides a guarantee (either hard or soft) that there is 
>> sufficient bandwidth on the inverse direction of the link to send 
>> back one Data packet.
> This is an assumption IMO. The router, at the time of receiving the 
> Interest, does not know the size of the Data packet that it is 
> going to receive.
If you do what I suggest in the flow balance draft, it does know. 
However, see my earlier comment that this is more material on congestion 
control than needed to make my points.


> Flow classification can help (to an extent) to make a calculated guess 
> about the estimated bandwidth requirement. For example, a flow type 
> (e.g. video, audio, or text) can give a certain level of insight. 
> Perhaps knowing the type of encoding would enhance the accuracy of 
> this estimation.

Yes, but this would be a bad tradeoff in my view, as it requires routers 
to have semantic knowledge by interpreting names.

> How QoS treatment information would enhance this B/W estimation is for 
> further study and shall depend on the exact marking design.

Why would it depend on the marking design? You’ve lost me here. At any 
rate, I didn’t change anything here.


>> The former class of schemes is (arguably)

> s/is/are
“class” is singular. No change.

>> Two critical facts of life come into play when designing a QoS 
>> scheme:

> Can we merge these with above two points? If not, we can say that more 
> elaborate discussion is provided below, to establish a link between 
> the two? Alternatively, these two paragraphs can be made as bullet 
> points.
I like the way it’s currently written, and generally like to avoid 
long indented bullet points that contain top-level prose. This is just a 
stylistic disagreement, so I didn’t change anything.

>> or allow them to be _aggregated_, trading off the accuracy of 
>> policing the traffic.

> There may be a limited benefit of allowing aggregation. If treatment 
> is the same, for simplicity reasons, limiting the number of flow 
> classifiers may be a better solution.
I’m not sure I’m following you here. If you limit the flow 
classifiers, isn’t this just a static/architectural aggregation 
mandate? In any event, the general point here is that aggregation saves 
state at the cost of policing accuracy, which still holds.

>> The ability to encode the treatment requests in the protocol can be 
>> limited (as it is for IP - there are only 6 of the TOS bits available 
>> for Diffserv treatments),

> Are you saying that such a limitation also applies to ICN? As the 
> content Name is expressive in ICN (and so are the flow classifiers), 
> the treatments can be expressive as well.

I’m saying that expressibility depends on protocol encoding 
limitations (or the lack thereof). I’m not sure where I’m being unclear 
here; the implication is that since ICN protocols have flexible encoding 
and are not yet cast in stone like IP, their expressibility is not as 
limited.

>> as or more important is whether there are practical traffic policing, 
>> queuing, and pacing algorithms that can be combined to support a rich 
>> set of QoS treatments

> Conversely, can the design of QoS treatment in ICN be such that it can 
> be used with, or can support, these three kinds of algorithms?
Sure, that’s exactly the point. You have to limit the richness of the 
architecturally-defined QoS treatments to those that have practical 
queueing/policing/shaping algorithms.
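
As a toy illustration of that constraint (not text from the draft; the 
treatment and mechanism names below are invented), a forwarder would only 
admit treatments it can map onto mechanisms it actually implements:

# Treatments this forwarder can realize, and the queueing/policing/
# shaping mechanisms each one maps onto (all names are hypothetical).
SUPPORTED_TREATMENTS = {
    "low-latency": {"queue": "strict-priority", "policer": "token-bucket"},
    "best-effort": {"queue": "fifo", "policer": None},
    "bulk": {"queue": "weighted-fair", "shaper": "leaky-bucket"},
}

def admit_treatment(requested):
    """Accept a requested treatment only if it maps onto implementable
    mechanisms; otherwise fall back to best-effort."""
    return SUPPORTED_TREATMENTS.get(requested,
                                    SUPPORTED_TREATMENTS["best-effort"])

print(admit_treatment("low-latency"))
print(admit_treatment("gold-plated"))  # unknown treatment -> best-effort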


>> The two considerations above in combination can easily be 
>> substantially more expressive than what can be achieved in practice

> Maybe it’s my understanding, but can we specify the scope of the 
> “combination” here? Or perhaps you meant to say that 
> ‘together’ they can easily achieve QoS well, right?
No, I’m saying the opposite, that unless you are careful on both 
counts to define narrow enough expressibility, you will wind up with 
something you can’t build cost-effectively. No change.

>> | subset+prefix match on IP
> Is it “subset” or “subnet”?
Subset, as in “a subset of the individual elements of the 5-tuple”

>> In ICN, QoS is not pre-bound to topology since names are 
>> non-topological,

> Should this be “the network topology”?
Yes. I thought this was obvious, but I changed the text anyway. Thanks.

>> ICN, even worst case, requires state O(#active Interest/Data 
>> exchanges), since state can be instantiated on arrival of an 
>> Interest, and removed lazily once the data has been returned

> Maybe we can add that, with the multi-path forwarding capability of ICN, 
> this state is potentially replicated by a factor of the number of 
> forwarding paths the Interest is forwarded on.
Yes, you’re right in the multipath case, but this muddies the 
comparison with IP, since IP doesn’t support multi-path Intserv 
(although it does support point-multipoint RSVP, which has state 
that’s hard to quantify). I’m not sure whether discussing this helps 
the exposition though, since the multiplier factor is the same for the 
two architectures. If you feel strongly about this I can add something, 
but it won’t change the basic point favoring ICN in state maintenance.

>> and one (or even more than one) new field can be easily defined to 
>> carry QoS treatment information.

> This would still mean an overhead and would have an associated cost 
> (in terms of compute, memory) that would need to be assessed.
I’m not sure this is true as it depends sensitively on the details. 
Whether you have one or multiple fields could make things either simpler 
or more complicated…

>> Such a QoS treatment for ICN could invoke native ICN mechanisms, none 
>> of which are present in IP, such as:

> These mechanisms will be important factors in design of the type of 
> treatments we can implement. For instance, in one scheme of QoS marker 
> design, we can model it as a hierarchical QoS treatment data, where 
> top-level treatment data decide the main class (e.g., the priority), 
> and sub-level treatment decide the sub class (i.e., retransmission, 
> multipath-serial, multipath-parallel).

I’m not sure I’m following you here. I think you may be confounding 
what is said in the abstract in a QoS treatment versus what a particular 
forwarder translates that to in terms of its local algorithms to achieve 
the desired treatment. In particular, I would (unless convinced 
otherwise) be opposed to encoding specific mechanisms like 
retransmission or depth-first versus breadth-first multipath as 
consumer-expressible QoS treatments.

>> protocol machinery is used to decide which forwarding strategy to use 
>> for which Interest that arrive at a forwarder.

> I would think forwarding strategy of a Data packet would be of an 
> equal importance.
The term forwarding strategy as used in the NDN literature describes how 
a forwarder selects an output face for an Interest. Therefore, it 
explicitly excludes Data packets, which always go on the inverse path 
and are not subject to a forwarding strategy.
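
A minimal Python sketch of that distinction (names are hypothetical; 
real strategies, e.g. in NFD, are considerably richer):

def forward_interest(interest_name, fib_nexthops, strategy):
    """A (trivial) strategy chooses one outgoing face for an Interest."""
    if strategy == "lowest-face-id":
        return min(fib_nexthops)
    return fib_nexthops[0]  # default: first FIB next hop

def forward_data(pit_downstream_faces):
    """Data follows the inverse path recorded in the PIT; no strategy
    choice is involved."""
    return pit_downstream_faces

print(forward_interest("/video/seg1", [7, 2, 9], "lowest-face-id"))  # 2
print(forward_data({4}))  # {4}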

>> Associating forwarding strategies with the equivalence classes and 
>> QoS treatments directly can make them more accessible and useful to 
>> implement and deploy.

> Resulting in QoS-aware forwarding of Interest/Data traffic.

Of Interests, yes, of Data, no.


> Once a router admits the Interest into the network,
A router can only “admit” an Interest into its own forwarding state, 
not into “the network”.

> the QoS in the interest shall be binding on the router to handle the 
> forwarding of the Data packet.
The queueing and policing, yes; the forwarding, no. This is a detail we 
keep tripping over, as I use the term “forwarding” the same way IP 
router people do - the portion of a router’s overall handling of a 
packet that determines the output interface(s) for the next hop.

> It’s possible that a router may override the QoS treatment (for 
> reasons such as congestion control) originally received in the Interest 
> message. This remarking should happen only after admitting the original 
> Interest as it was received. Just to clarify: any 
> remarking/modification of the QoS marker at a given router is only 
> applicable to the upstream router.
We’re on a bit of thin ice here. In IP, remarking occurs for a variety 
of reasons. Some of the reasons are specific to a given set of related 
QoS operations, such as marking packets that exceed a signaled or 
inferred allocation. Other cases are due to the encoding limitations of 
Diffserv. The former carry over as important capabilities to have in an 
ICN QoS scheme. The latter do not.

In both cases above, in ICN we have the possibility to retain the 
original QoS treatment when remarking, rather than overriding it and 
losing the consumer’s original intent.

I don’t address these details currently; I’d be interested in 
whether people think discussion of remarking protocol mechanisms belongs 
or not. I view it as a detail not critical to the principles I’m 
articulating, but I could be convinced otherwise.
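
If we did go down that path, one possible encoding (just a sketch, not 
something the draft proposes; field names are invented) would carry the 
consumer’s original treatment alongside the current, possibly remarked 
one:

from dataclasses import dataclass

@dataclass
class QosMarking:
    original: str  # set by the consumer, never rewritten
    current: str   # may be remarked hop by hop

def remark(marking, new_treatment):
    """Remark the current treatment while preserving the consumer's
    original intent."""
    return QosMarking(original=marking.original, current=new_treatment)

m = QosMarking(original="low-latency", current="low-latency")
m = remark(m, "best-effort")  # e.g. the flow exceeded its allocation
print(m)  # the original intent remains visible at later hops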

>> intended to influence routing paths from producer to consumer will 
>> have no effect.

> While this is true, it would be interesting to see if it’s possible 
> for the producer to advertise the minimum QoS required to 
> request/receive the specific data.
If you can make the case that the producer knows enough about all the 
ways he can be reached to make an educated guess on this, I’d be at 
least somewhat receptive. It’s way more likely some forwarder knows, 
and in that case it might in fact be a possible protocol enhancement to 
have an error code on Interest-return to tell a consumer that he 
launched an Interest with a QoS treatment unlikely to ever reach the 
producer and suggest a better one. It seems like a lot of complexity for 
not a lot of gain to me, though.

>> it can be tricky to decide whether or not to aggregate the interest 
>> or forward it
> s/whether or not/whether/
Fixed.

>> can be considered _fate shared_ in a cache whereby objects from the 
>> same equivalence class are purged as a group rather than 
>> individually.

> I think this may not be optimal. I don’t see a link between 
> equivalence class and content validity. For example, a video from news 
> and a movie may require the same treatment, but a movie would remain 
> valid for longer.
This has nothing to do with validity. It has to do with whether other 
Data packets of the same equivalence class are *useful* (as opposed to 
*valid*) if not all are present in the cache. It’s way better in most 
cases to concentrate evictions on related data than to spread them among 
all the Data packets in the cache based only on your basic eviction 
scheme (LRU, LFU, etc.). That said, no cache eviction algorithm other 
than a clairvoyant one will be optimal. For your movie versus news 
example, the RCT (recommended cache time) would likely be the first-order 
discriminator that picks a Data packet, and the fate sharing would only 
drive further evictions in order to more efficiently free cache space 
for new content. Of course, if your cache is not under replacement 
pressure, you would not bother with the extra evictions. (As an aside, a 
few years ago we designed a NAND flash cache to go with our ICN router 
where you could more cheaply invalidate a whole block than individual 
Data-packet-sized chunks. There was consequently a big performance 
benefit to bulk evictions. Hence, if you kept things in the same 
equivalence class together in the cache, you could get substantially 
higher cache performance.)
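
A rough Python sketch of the eviction idea (the cache layout here is 
hypothetical; a real CS would hang this off its index structures):

from collections import defaultdict

def evict_by_equivalence_class(cache, bytes_needed, victim_name):
    """Take the equivalence class of the victim chosen by the base policy
    (LRU, RCT, ...) and purge that class before touching unrelated data."""
    classes = defaultdict(list)
    for name, (eq_class, size) in cache.items():
        classes[eq_class].append((name, size))

    victim_class = cache[victim_name][0]
    freed, survivors = 0, dict(cache)
    for name, size in classes[victim_class]:
        if freed >= bytes_needed:
            break
        del survivors[name]
        freed += size
    return survivors

cache = {"/news/v1": ("news", 4000), "/news/v2": ("news", 4000),
         "/movie/s1": ("movie", 9000)}
print(evict_by_equivalence_class(cache, 6000, "/news/v1").keys())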

>> In addition, since the forwarder remembers the QoS treatment for each 
>> pending Interest in its PIT, the above cache controls can be 
>> augmented by policy to prefer retention of cached content for some 
>> equivalence classes as part of the cache replacement algorithm.

> What is the relationship between the QoS treatment in PIT for the 
> pending Interest and the cache control directives?
None, as far as I can tell. However, as I point out, if a QoS treatment 
says to favor reliability over delay, then the CS might give higher 
precedence to store the returning Data packet independently of what the 
producer said in his cache directives.
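
As a toy example of what I mean (the treatment name and fields are 
invented), the CS admission decision could weigh the PIT-recorded 
treatment alongside the producer’s cache directive:

def cache_priority(qos_treatment, producer_no_cache):
    """How strongly to retain a returning Data packet in the CS; a
    reliability-oriented treatment outweighs the producer's hint here."""
    if qos_treatment == "favor-reliability":
        return 2   # preferentially retain
    if producer_no_cache:
        return 0   # honor the producer's do-not-cache hint
    return 1       # normal retention

print(cache_priority("favor-reliability", producer_no_cache=True))  # 2
print(cache_priority("low-latency", producer_no_cache=True))        # 0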

> Or the relationship between QoS treatment in PIT and the equivalence 
> class?
If you buy my complete separation between QoS treatment and equivalence 
class, the relationship is “none”.

>> A strawman set of principles to guide QoS architecture for ICN
> This is an important section of this draft. I highly recommend 
> dividing it into numbered subsections rather than bulleted headings 
> (enclosed in **). Also, as this is a 5-page-long section, it makes for 
> quite exhausting reading (but an important one). Breaking it into 
> numbered subsections would improve its readability.
The counter-argument is that each of the principles is a pretty short 
paragraph or so. Therefore they seem to read better (at least to me) as 
paragraphs, not sub-sections. I’ll reconsider if other readers agree 
with you rather than me. The one exception is the discussion of Intserv, 
which is only loosely coupled to the other principles. I did split that 
out as a subsection.

>> actionable architectural principals for how
> s/principals/principles/
Oops! Fixed.

>> It makes enforcement of QoS treatments a single-ended rather than a 
>> double-ended problem

> i.e. consumer-ended.
Added. thanks.

>> allocation of of cache
> s/of of/of/
Fixed. Also fixed in the next instance.

>> For caching to be effective, individual Data objects in an 
>> equivalence class need to have similar treatment; otherwise 
>> well-known cache thrashing pathologies due to self-interference 
>> emerge

> What kind of treatments are these?
I don’t think it matters. Can you give an example where it does?

>> Survive transient outages of either a producer or links close to the 
>> consumer

> Shouldn’t this be “the producer”?
No, it’s the producer who should be worried about links near him. 
Conversely the consumer can only reasonably affect links close to him.

>> A QoS treatment requesting better robustness against transient 
>> disconnection can be used by a forwarder close to the consumer (or 
>> downstream of an unreliable link) to preferentially cache the      
>> corresponding data.

> This may not result in better cache utilization if only a few 
> consumers are requesting the given content or only a few consumers are 
> specifying this type of treatment.
Yes, but the objective function is not cache utilization in this case. 
The appropriate way to trade this off against cache utilization is to 
police the Interests that request the preferential treatment in the 
cache.

>> Conversely a QoS treatment together with, or in addition to a request 
>> for short latency, to indicate that new data will be requested soon 
>> enough that caching the current data being requested would be 
>> ineffective and hence to only pay attention to the caching 
>> preferences of the producer.

> This use case is very similar to the use case stated in section 7.4 
> (last para) of [I-D.anilj-icnrg-dnc-qos-icn]. Pasting the specific 
> text for quick reference – “satisfy a pending Interest with lower 
> QoS marking with arrival of a Data packet having higher QoS marker.  
> As a result, a user with lower QoS subscription may experience a 
> better response time from the network.”

Sorry, I’m not following the relevance to what I’m saying here. This 
is not addressing requests from multiple consumers, it’s addressing 
the case where new data is produced frequently such that the prior data 
is unlikely to be useful to the consumer (or another consumer) and hence 
it’s a waste of resources to cache it.

>> A QoS treatment indicating a mobile consumer likely to incur a 
>> mobility event within an RTT (or a few RTTs).  Such a treatment would 
>> allow a mobile network operator to preferentially cache the data at a 
>> forwarder positioned at a _join point_ or _rendezvous point_ of their 
>> topology.

> This QoS marking decision is more of a network operator decision than 
> a consumer one. AFAIK, mobile location tracking and handover are 
> detected and managed by the network and not by the mobile device 
> itself. We need to check the specific mobile network design (i.e., how 
> mobility is handled in 4G and/or 5G). There could be more specific use 
> cases from the consumer and producer mobility perspectives as well.
I tend to disagree but it’s not worth arguing over this here. This is 
not about detecting or managing mobility events, but giving a hint that 
you are *likely* to move soon. The consumer device is in most cases in a 
way better position to figure this out than the network.

>> Amerliorate congestion hotspots
> s/Amerliorate/Ameliorate/
Fixed. Thanks.



>> as their mapping onto queuing algorithms for managing link buffering 
>> is well understood

> s/is/are/
mapping is singular, therefore “is” is correct. I could change 
mapping to mappings, but I think it reads better as is.

>> A "burst" treatment, where an initial Interest gives an aggregate 
>> data size to request allocation of link capacity

> As mentioned on page 6, it’s not clear how the Interest alone would 
> provide the data size before the arrival of the data. Is it the case 
> that the publisher publishes the size of the content? And that the 
> consumer, by some offline method, knows the size of the content before 
> requesting it?

As I point out in the flow balance draft, this can be done easily with 
Manifests. I could get into this here, but it wouldn’t do much given 
this section is intended to be somewhat speculative. Clearly, if you 
actually defined such a “burst” QoS treatment, you’d have to give 
credible evidence that the consumer could feasibly obtain or compute the 
desired burst size.
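
For instance, with a FLIC-style Manifest that lists per-segment sizes 
(the field names here are invented), the consumer can compute the burst 
size it would place in the initial Interest:

manifest = {
    "name": "/videos/clip42",
    "segments": [
        {"name": "/videos/clip42/seg0", "size": 8000},
        {"name": "/videos/clip42/seg1", "size": 8000},
        {"name": "/videos/clip42/seg2", "size": 5200},
    ],
}

def burst_size(manifest):
    """Aggregate Data size the consumer would request allocation for."""
    return sum(seg["size"] for seg in manifest["segments"])

print(burst_size(manifest))  # 21200 bytes requested as a burst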

>> also accomodate Data
> s/accomodate/accommodate/
Fixed. Thanks.

>> The permissible *degree of divergence
> s/permissible/permissable/
Permissible is correct (https://www.google.com/search?q=permissible)

>> the author does not take any position as to whether any of these 
>> INTserv-like capabilities are needed for ICN

> IMO, in the case of ICN this would be quite tricky. Establishing an 
> end-to-end SLA-controlled data path in an ICN network would be 
> challenging, as content may exist at more than one cache point as well 
> as at the producer. This would be a serious scaling issue.
Maybe. In any event, I’m not taking a position on this.

>> to be successful.
> s/successful/succesful/
Successful is correct (https://www.google.com/search?q=successful)

>> packaged with the the
> s/the the/the/
Fixed. Thanks.

>> Interest with with an
> s/with with/with/
Fixed, thanks.

>> are sill available
> s/sill/still/
Fixed, thanks.

>> Since the architcture
> s/architcture/architecture/
Fixed, thanks.


>


> Thank you,
> /anil.