Re: [6lo-fragmentation-dt] Performance report for fragment forwarding

Rahul Jadhav <rahul.ietf@gmail.com> Thu, 20 September 2018 12:48 UTC

From: Rahul Jadhav <rahul.ietf@gmail.com>
Date: Thu, 20 Sep 2018 18:17:46 +0530
Message-ID: <CAO0Djp1smK0SB+Be9CrcjDLBsd6JD_2tHoGnrhBv_sVG6UPcng@mail.gmail.com>
To: "Pascal Thubert (pthubert)" <pthubert@cisco.com>
Cc: Carsten Bormann <cabo@tzi.org>, rabinarayans@huawei.com, 6lo-fragmentation-dt@ietf.org, yasuyuki.tanaka@inria.fr
Subject: Re: [6lo-fragmentation-dt] Performance report for fragment forwarding

Please find my comments inline.

All in all, some of the concerns raised in this mail thread should be
reflected in the draft as well. A naive implementation, or implementers
without any background in the MAC layer, may seriously hurt performance
if they simply follow the draft as it currently stands.

Regards,
Rahul


>
> 6TiSCH protects against fragments interfering with one another, but it
> does not protect against Hop1 sending frag N+1 to Hop2 while Hop2 is
> sending frag N to Hop3. Even 6TiSCH needs to pace traffic that is to be
> forwarded by the same next hop, and that’s true for all packets, not just
> fragments. Do you have any form of listen-before-talk in your simulation?
>
>
[RJ] Yes, there is carrier sensing before transmission in my simulation.

>
>
> If you do not delay the transmissions of intermediate fragments, your test
> certainly proves that the system cannot work satisfactorily. This much was
> predictable on paper. Say it takes 10 ms for a transmit + ack, and most
> transmissions succeed within 3-4 tries; then it makes sense to do your testing
> injecting 50 to 100 ms between fragment transmissions. Every node along the
> path must validate that fragments are spaced that way. Could you please
> relaunch your test with a rule like that?
>
>
>
[RJ] Yes, we'll try this and get the data again. Carsten also suggested the
same earlier in this mail thread.
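
For reference, a minimal sketch of what pacing at the origin could look like
in our setup (the schedule/send_fragment hooks are placeholders, not the
simulator's actual API; the 50-100 ms range is taken from your suggestion):

import random

INTER_FRAGMENT_GAP_MS = (50, 100)   # assumed range from this discussion

def send_fragmented_datagram(fragments, schedule, send_fragment):
    """Schedule each fragment with a randomized gap after the previous one."""
    t_ms = 0.0
    for frag in fragments:
        schedule(t_ms, lambda f=frag: send_fragment(f))
        # Space the next fragment so the next hop has time to forward this one.
        t_ms += random.uniform(*INTER_FRAGMENT_GAP_MS)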


> For the rest please see below:
>
>
>
>
>
> If you could instrument the intermediate nodes, I’d like to see if for
> reason of retries a fragment may catch up with another one. This can be
> tested by making a hop in the middle very far and unreliable.
>
>
>
> [RJ] Sorry, but I'm not sure what you mean here.
>
>
>
> *[PT>] The idea is to make one link less reliable than the others in order to
> cause retries, i.e., a virtually slower link. You’ll see that packets pile up
> there even if paced at the source. That’s what I mean by other packets
> catching up.*
>

[RJ] Ok, got it. Thanks.
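
For what it's worth, a rough sketch of how such a virtually slower link could
be emulated (the link-quality table and node names are assumptions for
illustration, not our simulator's real configuration):

import random

# One deliberately unreliable mid-path link; every other link stays reliable,
# so MAC retries make this hop virtually slower and fragments pile up there.
LINK_PDR = {("node3", "node4"): 0.5}
DEFAULT_PDR = 0.95

def frame_delivered(src, dst, rng):
    """Return True if a single frame transmission on (src, dst) succeeds."""
    return rng.random() < LINK_PDR.get((src, dst), DEFAULT_PDR)

# e.g. frame_delivered("node3", "node4", random.Random(42))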

>
>
> Also, even if you pace at the source by inserting gaps, fragments may
> still pile up at the ingress of the slowest link on the way, and your
> initial problem will be recreated there. This is only visible if you
> effectively have a slower hop, but it is a very classical effect in
> window-based networks: the full window ends up piling up at the ingress of
> the slowest link. You may recreate that by injecting additional traffic via
> a node on your path.
>
>
>
> [RJ] Agreed that a bottleneck link may still create a problem. But if you
> look at the network in our report, there is no such bottleneck node/link ...
> the paths are sufficiently balanced.
>
>
>
> *[PT>] So the space must be inserted at the bottleneck too. In theory, a
> transmitter needs to ensure that every packet to the same next hop is
> sufficiently spaced from the next packet, unless the next hop is the
> destination, so that the next hop can forward.*
>
> To avoid that, you would need to pace at the source to a rate that is
> slower than what the slowest hop would do if it was the source. Really hard
> to do. So ideally, each node would ensure that there is a minimum gap between
> fragments, which should be a no-op everywhere along a path except at the
> slowest hop, where fragments will pile up.
>
>
>
> Finally: you can limit the amount of queueing at the slowest hop if you
> use the recoverable frags draft, using windows of a few fragments for flow
> control.
>
> [RJ] The amount of queueing can be limited in our case by using L2 feedback
> as well, i.e. if a fragment is lost (and the sender knows this based on L2
> feedback), then we can just stop transmitting all subsequent fragments.
> For 802.15.4, the recovery procedure can be applied based on L2 feedback
> without any explicit ACK.
>
> But right now we aren't using any of these advanced techniques.
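
[RJ] To illustrate what I mean by using L2 feedback (hypothetical hook names,
not actual code from our implementation):

def send_with_l2_feedback(fragments, mac_send):
    """mac_send(frag) returns True if the frame was ACKed at L2, else False."""
    for i, frag in enumerate(fragments):
        if not mac_send(frag):
            # The datagram can no longer be reassembled; drop the rest.
            return i            # number of fragments actually sent and ACKed
    return len(fragments)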
>
>
>
> Our aim for this experiment was to check how much of an impact there was
> when fragment forwarding, as described in the VRB draft, was used in
> single-channel mode of operation.
>
>
>
> *[PT>] Without any pacing, the simulation is not so realistic, and though
> interesting, conclusions there would be premature. I’d be very interested
> in seeing more tests that vary the inter-frame space to see how that impacts
> the delivery.*
>

[RJ] Yes, we'll try this.
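
Concretely, the sweep could look something like this (run_simulation is a
placeholder for our test harness, not an existing function):

def sweep_inter_fragment_gap(run_simulation, gaps_ms=(0, 25, 50, 100, 200)):
    """Run the same scenario with different inter-fragment gaps, report PDR."""
    results = {}
    for gap in gaps_ms:
        delivered, sent = run_simulation(inter_fragment_gap_ms=gap)
        results[gap] = delivered / sent   # packet delivery ratio at this spacing
    return results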

>
>
> Bottom line: the more you test with one frequency only, the more you’ll
> find it’s a bad idea : )
>
> [RJ] Yes, we have realized (the hard way) that using fragment forwarding
> in single-channel mode is a very bad idea.
>
> *[PT>] Note that the same would happen with a series of packets in a row.
> I do not think that the fact that these are fragments has much to do with it.*
>

[RJ] With a series of packets, do you mean per-hop reassembly? Well, per-hop
reassembly does not suffer from this issue, but it does suffer from a
buffering issue. So if the traffic is sufficiently sparse, then using per-hop
reassembly in this scenario makes more sense.
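
To give a rough idea of the buffering cost (the numbers below are assumptions
for illustration, not figures from our report):

# Per-hop reassembly must hold the whole datagram at every forwarder, while
# VRB-style fragment forwarding only keeps a small per-datagram state entry.
DATAGRAM_SIZE = 1280        # bytes, worst-case IPv6 MTU
VRB_ENTRY_SIZE = 16         # bytes, assumed size of one VRB table entry
CONCURRENT_DATAGRAMS = 4    # datagrams crossing a forwarder at the same time

reassembly_buffer = DATAGRAM_SIZE * CONCURRENT_DATAGRAMS    # 5120 bytes
forwarding_state = VRB_ENTRY_SIZE * CONCURRENT_DATAGRAMS    # 64 bytes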


>
>
> Take care,
>
>
>
> Pascal
>
>
>
> All the best;
>
>
>
> Pascal
>
>
>
> From: 6lo-fragmentation-dt <6lo-fragmentation-dt-bounces@ietf.org> On
> Behalf Of Rahul Jadhav
> Sent: Wednesday, 19 September 2018 07:59
> To: Carsten Bormann <cabo@tzi.org>
> Cc: rabinarayans@huawei.com; 6lo-fragmentation-dt@ietf.org;
> yasuyuki.tanaka@inria.fr
> Subject: Re: [6lo-fragmentation-dt] Performance report for fragment
> forwarding
>
>
>
> Pacing only at the origin node should be easy to change/hack and might
> improve the forwarding efficiency. But this may also push up the mean
> latency.
>
> We'll try this anyway (adding a randomized delay between 10 ms and 500 ms) to
> check the impact on PDR. Thanks for this suggestion.
>
>
>
> On Wed, 19 Sep 2018 at 10:25, Carsten Bormann <cabo@tzi.org> wrote:
>
> Hi Rahul,
>
> > Carsten also mentioned a pacing mechanism ... while it might improve
> forwarding efficiency, it will add to the buffer requirement. Also, such a
> scheme might be non-trivial to implement.
>
> Pacing is something the origin of the datagram does; I wasn’t suggesting
> that the forwarders add it (which indeed essentially puts the datagram into
> buffers there).
> Clearly, pacing is counterproductive when forwarders do full reassembly,
> as it increases the time the partially assembled datagram is vulnerable to
> fragments of other datagrams hijacking the buffer — you want to capture the
> channel as much as possible then (to the level that you’d want to send
> following fragments with a higher MAC priority).  Of course, as the
> datagram travels through the network, it will become more and more smeared
> over time.
>
> It would be interesting to see how your numbers change with some pacing at
> the origin; it should be easy to hack that in if you are not trying to do
> this in a general way.
>
> > Regarding keeping a higher mac-retry ... we have chosen a mac-retry of 3
> after some experimentation (considering expected node densities and tx
> frequency). Increasing mac-retry might not necessarily help; in fact, it may
> backfire in terms of both PDR and mean latency. Would you still
> suggest giving it a try, and what mac-retry value do you think makes sense?
>
> Yeah, yet another tunable that in a robust protocol really should be
> self-tuning.
> A MAC-layer retransmission is not useful if the layers above have already
> timed out.
> So the latency introduced by the backoff mechanism is really the
> determining factor here.
>
> Grüße, Carsten
>
>