Re: [6tsch] On the fly scheduling

Guillaume Gaillard <guillaume.gaillard.maze@gmail.com> Fri, 04 October 2013 15:07 UTC

Return-Path: <guillaume.gaillard.maze@gmail.com>
X-Original-To: 6tsch@ietfa.amsl.com
Delivered-To: 6tsch@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E467121F9D68 for <6tsch@ietfa.amsl.com>; Fri, 4 Oct 2013 08:07:04 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level:
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HTML_MESSAGE=0.001, NO_RELAYS=-0.001]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id HwWpDStJ61Cs for <6tsch@ietfa.amsl.com>; Fri, 4 Oct 2013 08:06:54 -0700 (PDT)
Received: from mail-we0-x22e.google.com (mail-we0-x22e.google.com [IPv6:2a00:1450:400c:c03::22e]) by ietfa.amsl.com (Postfix) with ESMTP id CD5F121F9D2C for <6tsch@ietf.org>; Fri, 4 Oct 2013 08:03:43 -0700 (PDT)
Received: by mail-we0-f174.google.com with SMTP id u56so796584wes.33 for <6tsch@ietf.org>; Fri, 04 Oct 2013 08:03:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=+ZeoRIEsh0wYfZyBlWiSVCr6rhZn3yGMh+cbhadGIE0=; b=FrIQw4XVVOLqw1M64+/S0V48VR9rH1KyKjw4DY7jEltdsAOmeh3qHpf1khsNNB7KRY A7S4xUuaNcYAs17PvXD8vRDqf2dN1L6bDJuHTauf6Z3SdljBQiPISjLKcsQi1jzmaAWD YYcNrL+c2bY8GsLk21McEH/dWKcrw6TbZHOODHG8eRj7u75ijfQSdLq1QSVLncxlD5J/ ewgEX8tY8psr+z/WXwLEAIFIivySCCAdH9CVvR3ctoitVfGnjB3yeadPVdNnbuKjNZgP Zqbr1VS3bJoYsr5zgv8fc8tEvrYBddzLbFwmukKYDfmyu9V9lhXfdq+xgKxgPXwRtDYJ xmjA==
MIME-Version: 1.0
X-Received: by 10.194.109.35 with SMTP id hp3mr2052770wjb.55.1380899021360; Fri, 04 Oct 2013 08:03:41 -0700 (PDT)
Received: by 10.217.10.199 with HTTP; Fri, 4 Oct 2013 08:03:41 -0700 (PDT)
In-Reply-To: <CAAzoce7Zf1gNnhAoA98xZ_Snzd7JVBvBP2JAOcTsEJW6BvqKrQ@mail.gmail.com>
References: <CAH7SZV86jyR6d3LbOqqFswzUN3brPdNni3GFuD-yeDYPYktNZQ@mail.gmail.com> <CALEMV4atkUjm0yRG1oOo=ayNL2jjd1ygSc_v68JuUeCpoH4+EA@mail.gmail.com> <E045AECD98228444A58C61C200AE1BD8414CEBBD@xmb-rcd-x01.cisco.com> <F085911F642A6847987ADA23E611780D1859FA68@hoshi.uni.lux> <CADJ9OA_itqDTLObLX-bByT1cQ8s=CXvdiNkvmsSUsk-z+7CnUA@mail.gmail.com> <CAAzoce7AYoek5uL93t3bbfXnkPVwQeik4_gnytkyp6TD8YpqAw@mail.gmail.com> <CADJ9OA9kbCQeY6VtoZtSVMw-XC_Kw5b=u+8mc-R0eocR-ij-nQ@mail.gmail.com> <F085911F642A6847987ADA23E611780D185A39B6@hoshi.uni.lux> <E045AECD98228444A58C61C200AE1BD8414D2118@xmb-rcd-x01.cisco.com> <CAAzoce6UxUX74h5xM2Bd+jC_WKT3OOyGEvGW1Q=TLWaLdkTriw@mail.gmail.com> <CAMHDfJ7hg58RzFy64+j5V_2tm+_OtPaOhwftgAcdb9A8Z=9ACA@mail.gmail.com> <CAAzoce7Zf1gNnhAoA98xZ_Snzd7JVBvBP2JAOcTsEJW6BvqKrQ@mail.gmail.com>
Date: Fri, 04 Oct 2013 17:03:41 +0200
Message-ID: <CAMHDfJ4N+9j=RFce3TUhqS=YLPQP=NxjS56acP7EE4ps3OvUNw@mail.gmail.com>
From: Guillaume Gaillard <guillaume.gaillard.maze@gmail.com>
To: Qin Wang <qinwang@berkeley.edu>
Content-Type: multipart/alternative; boundary="089e010d8574a396ad04e7eb9b16"
Cc: 6TSCH <6tsch@ietf.org>
Subject: Re: [6tsch] On the fly scheduling
X-BeenThere: 6tsch@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: "Discuss link layer model for Deterministic IPv6 over the TSCH mode of IEEE 802.15.4e, and impacts on RPL and 6LoWPAN such as resource allocation" <6tsch.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/6tsch>, <mailto:6tsch-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/6tsch>
List-Post: <mailto:6tsch@ietf.org>
List-Help: <mailto:6tsch-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/6tsch>, <mailto:6tsch-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 04 Oct 2013 15:07:05 -0000

Hi Qin,

It makes sense to me,

Guillaume


2013/10/4 Qin Wang <qinwang@berkeley.edu>

> Hi Guillaume,
>
> I would like to keep algorithm (4) open, something like the Objective
> Function in RPL. Make sense?
>
> Qin
>
>
> On Fri, Oct 4, 2013 at 10:54 PM, Guillaume Gaillard <
> guillaume.gaillard.maze@gmail.com> wrote:
>
>> Hi Qin and all,
>>
>> I agree with your separation into two tools (pre-allocation vs
>> post-allocation). When one wants to meet a new QoS requirement, it is
>> possible:
>> - (1) to over-provision cells in the schedule in order to be "sure" to
>> meet the requirement (pre-allocation);
>> - (2) to adapt the cell allocation to the variations of the network
>> parameters (post-allocation);
>> - (3) to do both things.
>>
>> When solution (2) is adopted, one should tolerate a period of degraded
>> QoS before the requirement is met again.
>>
>> I agree with your list of mechanisms a solution requires. Is algorithm (4)
>> (matching the number of cells with the QoS requirements)
>> implementation-specific?
>>
>> Regards,
>>
>> Guillaume Gaillard
>>
>> Orange Labs Meylan/CITI INRIA-INSA Lyon
>> PhD student "SLA pour Réseaux de Capteurs Multi-Services"
>> Advisors D. Barthel, F. Valois, F. Theoleyre
>>
>>
>>
>> 2013/10/4 Qin Wang <qinwang@berkeley.edu>
>>
>>> Hi Pascal and all,
>>>
>>> It is an interesting approach. I would like to put the different
>>> approaches above together.
>>>
>>> Objective: meet some QoS requirement.
>>> Tool 1: pre-allocation, as Pascal suggested: assign a number of cells to a
>>> bundle and turn them on/off on demand.
>>> Tool 2: post-allocation, as discussed in other emails in this thread:
>>> adjust the number of cells in a bundle according to statistics information.
>>>
>>> In reality, I think a mote may use a combination of tool 1 and tool 2:
>>> e.g. at the beginning, the mote allocates some cells for a bundle and uses
>>> them in on/off-on-demand mode; when the statistics show that 2 more cells
>>> are needed, the mote is triggered to allocate 3 more cells for the bundle,
>>> and then also uses them in on/off-on-demand mode.
>>>
>>> If that is the case, a solution should include the following parts:
>>> (1) a mechanism to create/delete softcells
>>> (2) a mechanism to provide statistics information
>>> (3) a mechanism to turn cells on/off on demand
>>> (4) an algorithm to determine how many cells are needed for a QoS
>>> requirement like (dataflow/burst-feature, delay-tolerance, {statistics
>>> info}, ...)
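>>>
>>> As a purely illustrative sketch of how (1)-(4) could fit together (all
>>> names here are hypothetical, nothing below is a defined 6top command):
>>>
>>> def cells_needed(qos, stats):
>>>     # (4) hypothetical sizing rule: carry the offered load within the
>>>     # delay tolerance, with a margin for the observed loss rate
>>>     offered = qos.packets_per_slotframe + stats.queue_backlog
>>>     margin = 1.0 + stats.estimated_loss_rate
>>>     return max(qos.min_cells, int(offered * margin + 0.5))
>>>
>>> def adapt_bundle(bundle, qos, stats):
>>>     # (1)+(2): statistics drive the creation/deletion of soft cells;
>>>     # (3) on/off on demand is then handled cell by cell at runtime
>>>     target = cells_needed(qos, stats)
>>>     if target > len(bundle.cells):
>>>         bundle.create_softcells(target - len(bundle.cells))
>>>     elif target < len(bundle.cells):
>>>         bundle.delete_softcells(len(bundle.cells) - target)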
>>>
>>> What do you think?
>>>
>>> Qin
>>>
>>>
>>>
>>>
>>> On Fri, Oct 4, 2013 at 5:30 PM, Pascal Thubert (pthubert) <
>>> pthubert@cisco.com> wrote:
>>>
>>>> Dear all,
>>>>
>>>> Statistics and hysteresis can be costly in terms of memory and CPU, they
>>>> may not be the best strategy to address bursts of traffic in a stochastic
>>>> IP network, and deriving the allocation from past observations may be hard
>>>> in a particular case and even harder to generalize.
>>>>
>>>> Bandwidth needs may be very dynamic, for instance in the case of
>>>> alarms/alerts, or in the case of an abrupt variation of the monitored data.
>>>> Changes in the network topology that impact the rank of a node within the
>>>> DAG may also change its bandwidth needs abruptly. So we cannot rely only on
>>>> past observations; we also need capabilities driven by current needs that
>>>> are efficient in terms of control and yet dynamic enough to accommodate
>>>> both sudden bursts of traffic and longer-term variations.
>>>>
>>>> Considering that we have ample room in a slotframe, a simple strategy,
>>>> derived from traditional memory management, could be to augment the number
>>>> of cells in a bundle each time it is saturated. As an image, GNU's obstacks
>>>> seem to grow linearly one chunk at a time, but I know of chunk
>>>> implementations that double the size of a dynamic chunk each time it is
>>>> saturated, and a geometric growth may be a better approach for us.
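>>>>
>>>> A tiny sketch of that growth rule (doubling on saturation, capped by the
>>>> room left in the slotframe; the names are invented for illustration):
>>>>
>>>> def grow_bundle_on_saturation(bundle_cells, free_cells_in_slotframe):
>>>>     # geometric growth: double the bundle each time it saturates,
>>>>     # but never request more than the slotframe can still hold
>>>>     growth = min(bundle_cells, free_cells_in_slotframe)
>>>>     return bundle_cells + growth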
>>>>
>>>> In a bundle, a portion of the cells is always on. The rest can stay idle
>>>> (no wake to send or listen) unless a previous cell indicates that there is
>>>> more to come. If we agree on a strategy like this, then we can start
>>>> working out the details, in particular how to handle lazy release or
>>>> one-shot override, how we observe unallocated slots to make them candidates
>>>> for allocation, etc.
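>>>>
>>>> For illustration, a per-slot sketch of that behaviour (hypothetical
>>>> fields; the real "more to come" indication would be a bit carried in the
>>>> frame and its ack):
>>>>
>>>> def should_wake(cell, more_to_come, tx_queue_len):
>>>>     # the always-on portion of the bundle wakes in every scheduled slot
>>>>     if cell.always_on:
>>>>         return True
>>>>     # an idle cell is used only if the previous cell flagged more traffic,
>>>>     # or (on the transmit side) frames are still waiting in the queue
>>>>     return more_to_come or (cell.is_tx and tx_queue_len > 0)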
>>>>
>>>> As you see, even implementing such a simple strategy can already be
>>>> quite some work in a constrained device.
>>>>
>>>> Cheers,
>>>>
>>>> Pascal
>>>>
>>>>
>>>> From: 6tsch-bounces@ietf.org [mailto:6tsch-bounces@ietf.org] On Behalf Of
>>>> Maria Rita PALATTELLA
>>>> Sent: Friday, October 4, 2013 09:19
>>>> To: Thomas Watteyne; 6TSCH
>>>> Subject: Re: [6tsch] On the fly scheduling
>>>>
>>>>
>>>> +1 for me too.
>>>>
>>>> For sure, according to the specific requirements of the applications,
>>>> different algorithms may be used for building the schedule. But the
>>>> functionalities offered by 6top (i.e., collecting statistics info, sending
>>>> commands for scheduling cells) will still be the same (no matter which
>>>> algorithm is used).
>>>>
>>>> The statistics needed could change because different algorithms may use
>>>> different inputs as parameters. So we will have to include a set of
>>>> statistics information, or in any case make sure it is possible to extend
>>>> it.
>>>>
>>>> Maria Rita
>>>>
>>>>
>>>> From: 6tsch-bounces@ietf.org [mailto:6tsch-bounces@ietf.org] On Behalf Of
>>>> Thomas Watteyne
>>>> Sent: Thursday, October 03, 2013 10:11 PM
>>>> To: 6TSCH
>>>> Subject: Re: [6tsch] On the fly scheduling
>>>>
>>>>
>>>> That would indeed be clean.
>>>>
>>>> On Thu, Oct 3, 2013 at 12:33 PM, Qin Wang <qinwang@berkeley.edu> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> +1 for the proposal.
>>>>
>>>> Regarding how to approach it, I agree with Thomas. There is an entity
>>>> running on top of 6top, which reads queue information and other statistics
>>>> from 6top, and sends instructions like create/delete softcells to 6top. I
>>>> think we can use the design methodology of the Objective Function in RPL,
>>>> i.e. define the statistics information and the interface to/from 6top, and
>>>> leave the specific algorithm open.
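>>>>
>>>> To illustrate that OF-like split (interface fixed, algorithm left open),
>>>> here is a rough Python sketch; none of these class or method names are
>>>> defined by 6top, they are assumptions for the example:
>>>>
>>>> from abc import ABC, abstractmethod
>>>>
>>>> class SchedulingFunction(ABC):
>>>>     # the pluggable part, analogous to an RPL Objective Function
>>>>     @abstractmethod
>>>>     def required_cells(self, stats):
>>>>         """Map 6top statistics to a target number of soft cells."""
>>>>
>>>> class SixtopAdapter:
>>>>     # the standardized part: read statistics, create/delete soft cells
>>>>     def __init__(self, sixtop, sf):
>>>>         self.sixtop, self.sf = sixtop, sf
>>>>
>>>>     def on_statistics(self, neighbor):
>>>>         stats = self.sixtop.read_statistics(neighbor)
>>>>         target = self.sf.required_cells(stats)
>>>>         current = self.sixtop.softcell_count(neighbor)
>>>>         if target > current:
>>>>             self.sixtop.create_softcells(neighbor, target - current)
>>>>         elif target < current:
>>>>             self.sixtop.delete_softcells(neighbor, current - target)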
>>>>
>>>> What do you think?
>>>>
>>>> Qin
>>>>
>>>>
>>>> On Fri, Oct 4, 2013 at 12:45 AM, Thomas Watteyne <
>>>> watteyne@eecs.berkeley.edu> wrote:
>>>>
>>>> +1 for the proposal.
>>>>
>>>> I believe it could be a very simple and powerful approach. Diego, would
>>>> you agree that this can be considered a distributed mechanism sitting on
>>>> top of 6top?
>>>>
>>>> That is, 6top provides:
>>>>
>>>>    - commands to modify the number of soft cells in a bundle
>>>>    - commands to retrieve usage statistics of the cells/bundles
>>>>
>>>> The way I see it, your proposal consists of an algorithm which feeds
>>>> from the usage statistics and triggers changes in the number of soft cells
>>>> in a bundle. Correct?
>>>>
>>>> The question to answer for now is whether 6top provides the right
>>>> statistics.
>>>>
>>>> Maria Rita, one big difference with TASA is that OTF scheduling is
>>>> distributed.
>>>>
>>>> Thomas
>>>>
>>>>
>>>> On Thu, Oct 3, 2013 at 9:11 AM, Maria Rita PALATTELLA <
>>>> maria-rita.palattella@uni.lu> wrote:
>>>>
>>>> Diego (all),
>>>>
>>>> What you are suggesting (i.e., reserving cells based on queue size and
>>>> delay) is actually the main idea behind TASA (Traffic Aware Scheduling
>>>> Algorithm).
>>>>
>>>> TASA builds the schedule based on the local queue level (number of packets
>>>> generated by the node) and the global queue level (i.e., local + packets to
>>>> be forwarded, generated by children). It gives priority to nodes with
>>>> longer queues and aims to reduce the latency for delivering the packets. At
>>>> the same time, while building the schedule it minimizes the number of
>>>> scheduled cells in order to reduce the network duty cycle.
>>>> TASA is centralized and thus assumes that the PCE has all the info needed
>>>> for setting up the schedule; in other words, it knows the traffic generated
>>>> by each node and the paths followed by each packet.
>>>> With an "on the fly" solution, we will not need to know all this info a
>>>> priori, but we will use the 6top monitoring functions and the control flow
>>>> messages for scheduling the cells.
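>>>>
>>>> (Just to illustrate the queue-length priority idea, not TASA itself: a toy
>>>> ordering a centralized scheduler could apply when picking transmitters,
>>>> with invented field names.)
>>>>
>>>> def order_by_global_queue(nodes):
>>>>     # serve nodes with the largest global queue first
>>>>     # (locally generated packets + packets to forward from children)
>>>>     return sorted(nodes,
>>>>                   key=lambda n: n.local_queue + n.forwarding_queue,
>>>>                   reverse=True)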
>>>>
>>>> Btw, I agree with all the points raised by Xavi. We will have to
>>>> address his questions.
>>>>
>>>> And I support Pascal's suggestions about how to deal with bundles.
>>>>
>>>> Maria Rita
>>>>  ------------------------------
>>>>
>>>> From: 6tsch-bounces@ietf.org [6tsch-bounces@ietf.org] on behalf of
>>>> Pascal Thubert (pthubert) [pthubert@cisco.com]
>>>> Sent: Thursday, October 03, 2013 4:09 PM
>>>> To: xvilajosana@eecs.berkeley.edu; Prof. Diego Dujovne
>>>> Cc: 6TSCH
>>>> Subject: Re: [6tsch] On the fly scheduling
>>>>
>>>> +1 too.
>>>>
>>>> I think that the queue size matters at enqueue, but the latency is really
>>>> what we care about at dequeue, that is, how long this device kept the
>>>> message in its queue (even if we are far from bufferbloat conditions in
>>>> such a device). If one of the 2 conditions (size at enqueue, latency at
>>>> dequeue) is reached, then the bundle should be increased.
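>>>>
>>>> A small sketch of that dual trigger (thresholds and names invented for
>>>> illustration only):
>>>>
>>>> def bundle_should_grow(queue_len_at_enqueue, sojourn_slots_at_dequeue,
>>>>                        max_queue=5, max_sojourn_slots=20):
>>>>     # grow the bundle if either condition is reached: the queue is long
>>>>     # when we enqueue, or the dequeued packet waited too long
>>>>     return (queue_len_at_enqueue >= max_queue or
>>>>             sojourn_slots_at_dequeue >= max_sojourn_slots)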
>>>>
>>>> I agree with Xavi that we want to avoid changing the bundle size all the
>>>> time. We discussed that with Qin and others earlier on the ML. One way of
>>>> increasing the bundle dynamically at a very low cost (not even a
>>>> hysteresis) is to give it a large number of cells from the start but use
>>>> only about 10% by default (xmit/listen happens only once in 10 time slots).
>>>> A bit in the frame indicates whether the next (normally unused) slot will
>>>> indeed be used. The bit can be present in the data and acked in the ack.
>>>> This can also implicitly be triggered for retries.
>>>>
>>>> Please keep us posted!
>>>>
>>>> Cheers,
>>>>
>>>> PS: Note that Cisco has IPR on chaining time slots and flagging whether the
>>>> next is used or not. We already declared our IPR against the architecture
>>>> draft and provided terms.
>>>>
>>>> Pascal
>>>>
>>>>
>>>> From: 6tsch-bounces@ietf.org [mailto:6tsch-bounces@ietf.org] On Behalf Of
>>>> Xavier Vilajosana Guillen
>>>> Sent: Thursday, October 3, 2013 15:46
>>>> To: Prof. Diego Dujovne
>>>> Cc: 6TSCH
>>>> Subject: Re: [6tsch] On the fly scheduling
>>>>
>>>>
>>>> Diego,
>>>>
>>>> +1
>>>>
>>>> It seems to me a very interesting idea to explore. Maybe we can start
>>>> putting some rules for this mechanism on the table and prepare a
>>>> simulation. I am completely on board with that idea.
>>>>
>>>> Some questions arise:
>>>>
>>>> 1- How fast do you react to changes in the queue size to avoid hysteresis,
>>>> i.e., how do you maintain a certain stability in the schedule (so you don't
>>>> start installing and removing links very often)?
>>>>
>>>> 2- How do you map queue size (a single queue, or more than one) to actual
>>>> link requirements?
>>>>
>>>> 3- How do you recover from link collisions in case multiple nodes schedule
>>>> the same cells?
>>>>
>>>> 4- How do you decide for which neighbor to install more links according to
>>>> queue size?
>>>>
>>>> cheers!
>>>> Xavi
>>>>
>>>>
>>>> On Thu, Oct 3, 2013 at 3:23 AM, Prof. Diego Dujovne <
>>>> diego.dujovne@mail.udp.cl> wrote:
>>>>
>>>> Dear all,
>>>>             I've been looking into the idea of "on the fly scheduling",
>>>> presented on the Sept 27th webex call as "on-the-fly decentralized
>>>> reservation".
>>>> The basic mechanism would be based on analysing the queue size on a node
>>>> and dynamically adapting the number of reserved cells to satisfy
>>>> queue-size, delay and/or power consumption thresholds.
>>>>             This mechanism would work inside 6top, between pairs of nodes.
>>>> As a first approach, it would be based on the minimal draft.
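>>>>
>>>> As a first, purely hypothetical sketch of such an adaptation rule (names
>>>> and threshold values invented for illustration):
>>>>
>>>> def adjust_reserved_cells(reserved, queue_size, avg_delay_slots,
>>>>                           duty_cycle, q_hi=8, q_lo=2, delay_hi=30,
>>>>                           duty_hi=0.05):
>>>>     # grow when the queue or the delay exceeds its threshold; shrink when
>>>>     # the queue is short and the duty cycle (power) is higher than wanted
>>>>     if queue_size > q_hi or avg_delay_slots > delay_hi:
>>>>         return reserved + 1
>>>>     if queue_size < q_lo and duty_cycle > duty_hi:
>>>>         return max(1, reserved - 1)
>>>>     return reserved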
>>>> What do you think of this starting point?
>>>> I gladly welcome comments to add to or modify this proposal.
>>>>
>>>>                                      Diego
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> DIEGO DUJOVNE
>>>> Académico Escuela de Ingeniería en Informática y Telecomunicaciones
>>>> Facultad de Ingeniería UDP
>>>> www.ingenieria.udp.cl
>>>> (56 2) 676 8125
>>>> _______________________________________________
>>>> 6tsch mailing list
>>>> 6tsch@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/6tsch
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> 6tsch mailing list
>>>> 6tsch@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/6tsch
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> 6tsch mailing list
>>>> 6tsch@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/6tsch
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> 6tsch mailing list
>>>> 6tsch@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/6tsch
>>>>
>>>>
>>>
>>> _______________________________________________
>>> 6tsch mailing list
>>> 6tsch@ietf.org
>>> https://www.ietf.org/mailman/listinfo/6tsch
>>>
>>>
>>
>