Re: [6lo-fragmentation-dt] Performance report for fragment forwarding

Rahul Jadhav <rahul.ietf@gmail.com> Mon, 01 October 2018 20:20 UTC

From: Rahul Jadhav <rahul.ietf@gmail.com>
Date: Tue, 2 Oct 2018 01:50:07 +0530
Message-ID: <CAO0Djp2S+EhvC1QBNTaxyG7-mW-eTEXGKaGFXtJYL_GnodTzVw@mail.gmail.com>
To: "Pascal Thubert (pthubert)" <pthubert@cisco.com>
Cc: Carsten Bormann <cabo@tzi.org>, rabinarayans@huawei.com, 6lo-fragmentation-dt@ietf.org, yasuyuki.tanaka@inria.fr, "Georgios Z. Papadopoulos" <georgios.papadopoulos@imt-atlantique.fr>
Archived-At: <https://mailarchive.ietf.org/arch/msg/6lo-fragmentation-dt/ULVmxlkVTUNlMplwjQWARyTwQ3w>
Subject: Re: [6lo-fragmentation-dt] Performance report for fragment forwarding

Hello All,

Basic data with pacing is now available. We added a flat 50 ms delay
before each fragment is transmitted by the original sender.
This has resulted in a __drastic improvement in the PDR__ compared to the
experiment with no pacing. On the negative side, it has also induced
higher end-to-end latency, but this was as expected.
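The pacing change can be sketched as follows. This is only a minimal illustration, not the actual simulator code; the function name `schedule_fragments` is invented here, and the 50 ms constant is the flat delay described above:

```python
INTER_FRAG_DELAY = 0.050  # flat 50 ms gap between fragments, in seconds

def schedule_fragments(fragments, now, delay=INTER_FRAG_DELAY):
    """Return (tx_time, fragment) pairs spaced `delay` apart.

    The first fragment goes out immediately; fragment i is delayed
    by i * delay, trading end-to-end latency for PDR.
    """
    return [(now + i * delay, frag) for i, frag in enumerate(fragments)]
```

With three fragments starting at t=0, this yields transmit times 0.0, 0.05, and 0.10 s, which is exactly the latency cost mentioned above.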

Please check the data.
https://github.com/nyrahul/ietf-data/blob/master/6lo-fragfwd-perf-report.rst

Best,
Rahul


On Thu, 20 Sep 2018 at 18:30, Pascal Thubert (pthubert) <pthubert@cisco.com>
wrote:

> Hello Rahul
>
>
>
> *From:* Rahul Jadhav <rahul.ietf@gmail.com>
> *Sent:* Thursday, 20 September 2018 14:48
> *To:* Pascal Thubert (pthubert) <pthubert@cisco.com>
> *Cc:* Carsten Bormann <cabo@tzi.org>; rabinarayans@huawei.com;
> 6lo-fragmentation-dt@ietf.org; yasuyuki.tanaka@inria.fr
> *Subject:* Re: [6lo-fragmentation-dt] Performance report for fragment
> forwarding
>
>
>
> Please find my comments inline..
>
>
>
> All in all, some of the concerns mentioned in this mail thread should be
> reflected in the draft as well. A naive implementation, or implementers
> without any background on the MAC layer, may seriously degrade performance
> by simply going by the draft as it stands.
>
> *[PT>] Makes sense. Note that a burst of IP packets for the same destination
> has the same issue. The problem has to do with transmitting frames at L2,
> not with whether they are individual packets or multiple fragments at the
> upper layer. If the upper layer makes 5 IP packets to avoid fragmenting a
> larger packet into 5 fragments, and sends the 5 IP packets together, you’ll
> get the exact same result as when using fragments. Unless I am missing
> something, that is; it is always interesting to dig deeper.*
>
>
>
>
>
>
>
> 6TiSCH protects against fragments interfering with one another, but it
> does not protect against Hop1 sending frag N+1 to Hop2 while Hop2 is
> sending frag N to Hop3. Even 6TiSCH needs to pace traffic that is to be
> forwarded by the same next hop, and that’s true for all packets, not just
> fragments. Do you have any form of listen-before-talk in your simulation?
>
>
>
> [RJ] Yes, there is carrier sensing before transmission in my simulation.
>
> *[PT>] Then other strategies are possible, such as a longer CCA for the
> first sender.*
>
>
>
> If you do not delay the transmissions of intermediate fragments, your test
> certainly proves that the system cannot work satisfactorily. This much was
> predictable on paper. Say it takes 10 ms for a transmit + ACK, and most
> transmissions succeed within 3-4 tries; it then makes sense to do your
> testing injecting 50 to 100 ms between fragment transmissions. Every node
> along the path must validate that fragments are spaced this way. Could you
> please relaunch your test with a rule like that?
>
>
>
> [RJ] Yes, we’ll try this and get the data again. Carsten also suggested
> the same earlier in this mail chain.
>
> *[PT>] very cool, looking forward to seeing the change. Great research!*
>
>
>
> For the rest please see below:
>
>
>
>
>
> If you could instrument the intermediate nodes, I’d like to see whether,
> because of retries, a fragment may catch up with another one. This can be
> tested by making a hop in the middle very far away and unreliable.
>
>
>
> [RJ] Sorry, but I’m not sure what you mean here.
>
>
>
> *[PT>] The idea is to make one link less reliable than the others in order
> to cause retries, i.e. a virtually slower link. You’ll see that packets
> pile up there even if they are paced at the source. That’s what I mean by
> other packets catching up.*
>
>
>
> [RJ] OK, got it. Thanks.
>
>
>
> Also, even if you pace at the source by inserting gaps, fragments may
> still pile up at the ingress of the slowest link on the way, and your
> initial problem will be recreated there. This is only visible if you
> effectively have a slower hop, but it is a very classical effect in
> window-based networks: the full window ends up piling up at the ingress of
> the slowest link. You may recreate it by injecting additional traffic via
> a node on your path.
>
>
>
> [RJ] Agreed that a bottleneck link may still create a problem. But if you
> look at the network in our report, there is no such bottleneck node/link;
> the paths are sufficiently balanced.
>
>
>
> *[PT>] So the gap must be inserted at the bottleneck too. In theory, a
> transmitter needs to ensure that every packet to the same next hop is
> sufficiently spaced from the next packet, unless the next hop is the
> destination, so that the next hop can forward.*
>
> To avoid that, you would need to pace at the source to a rate that is
> slower than what the slowest hop would do if it were the source. Really
> hard to do. So ideally, each node would ensure that there is a minimum gap
> between fragments, which should be a no-op everywhere along a path except
> at the slowest hop, where fragments will pile up.
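This per-hop minimum-gap idea could be sketched as follows. The `PacedForwarder` class and its hook are invented here for illustration; they are not part of any draft or simulator:

```python
MIN_GAP = 0.050  # minimum inter-fragment spacing per next hop, in seconds

class PacedForwarder:
    """Per-hop pacing sketch: consecutive fragments to the same next hop
    leave at least `gap` seconds apart. On fast links this is a no-op;
    only at a bottleneck do fragments actually get delayed (and queue)."""

    def __init__(self, gap=MIN_GAP):
        self.gap = gap
        self.last_tx = {}  # next_hop -> last scheduled transmit time

    def next_tx_time(self, next_hop, now):
        """Return the earliest time a fragment may go out to `next_hop`."""
        earliest = self.last_tx.get(next_hop, float("-inf")) + self.gap
        tx = max(now, earliest)
        self.last_tx[next_hop] = tx
        return tx
```

A fragment arriving 10 ms after the previous one to the same next hop would be held until the 50 ms gap has elapsed, while fragments to a different next hop go out immediately.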
>
>
>
> Finally: you can limit the amount of queueing at the slowest hop if you
> use the recoverable-fragments draft, using windows of a few fragments for
> flow control.
>
> [RJ] The amount of queueing can be limited in our case by using L2
> feedback as well, i.e. if a fragment is lost (and the sender knows this
> based on L2 feedback), then we can just as well stop transmitting all
> subsequent fragments. For 802.15.4 the recovery procedure can be applied
> based on L2 feedback without any explicit ACK.
>
> But right now we aren’t using any of these advanced techniques.
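The L2-feedback-based early abort described above might look like this in a simulator. This is a sketch only; `l2_send` is a hypothetical MAC primitive, not a real API:

```python
def send_datagram_fragments(fragments, l2_send):
    """Sketch of L2-feedback-based early abort: as soon as one fragment
    fails at L2 (all MAC retries exhausted), stop sending the remaining
    fragments of the same datagram, since reassembly cannot succeed.

    `l2_send(frag)` is a hypothetical MAC primitive: True if the frame
    was acknowledged, False once retries are exhausted.
    """
    for i, frag in enumerate(fragments):
        if not l2_send(frag):
            return i  # index of the failed fragment; the rest are dropped
    return len(fragments)  # whole datagram handed to the next hop
```

The saved transmissions free both airtime and the forwarder queues that the later fragments would otherwise have occupied.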
>
>
>
> Our aim for this experiment was to check how much of an impact there was
> when fragment forwarding was used, as described in the VRB draft, in
> single-channel mode of operation.
>
>
>
> *[PT>] Without any pacing, the simulation is not very realistic, and
> though interesting, conclusions there would be premature. I’d be very
> interested in seeing more tests varying the inter-frame space to see how
> that impacts the delivery.*
>
>
>
> [RJ] Yes, we’ll try this.
>
>
>
> Bottom line: the more you test with one frequency only the more you’ll
> find it’s a bad idea : )
>
> [RJ] Yes, we have realized (the hard way) that using fragment forwarding
> in single-channel mode is a very bad idea.
>
> *[PT>] Note that the same would happen with a series of packets in a row.
> I do not think the fact that these are fragments has much to do with it.*
>
>
>
> [RJ] With a series of packets, do you mean per-hop reassembly? Well,
> per-hop reassembly does not suffer from this issue. But it does suffer from
> a buffering issue. So if the traffic is sufficiently sparse, then using
> per-hop reassembly in this scenario makes more sense.
>
> *[PT>] No, just different IP packets to the same destination sent in a
> row, no fragmenting. Unless you do ECMP, they will be routed along the same
> path and have the same fate as fragments. This is why my expectation is
> that the problem you point out is not limited to fragments.*
>
>
>
>  *[PT>] *: )
>
>
> *Pascal*
>
>
>
> Take care,
>
>
>
> Pascal
>
>
>
> All the best;
>
>
>
> Pascal
>
>
>
> From: 6lo-fragmentation-dt <6lo-fragmentation-dt-bounces@ietf.org> On
> Behalf Of Rahul Jadhav
> Sent: Wednesday, 19 September 2018 07:59
> To: Carsten Bormann <cabo@tzi.org>
> Cc: rabinarayans@huawei.com; 6lo-fragmentation-dt@ietf.org;
> yasuyuki.tanaka@inria.fr
> Subject: Re: [6lo-fragmentation-dt] Performance report for fragment
> forwarding
>
>
>
> Doing pacing on the origin node only should be easy to change/hack and
> might improve the forwarding efficiency. But this may also drive up the
> mean latency.
>
> We’ll try this anyway (adding a randomized delay between 10 ms and 500 ms)
> to check the impact on PDR. Thanks for this suggestion.
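The randomized-delay variant might be sketched as below. The names are invented; only the 10-500 ms range comes from the text above. Randomizing the gap also helps desynchronize concurrent senders:

```python
import random

MIN_DELAY, MAX_DELAY = 0.010, 0.500  # 10 ms to 500 ms, in seconds

def randomized_schedule(fragments, now, rng=random):
    """Sketch: pace fragments at the origin with a gap drawn uniformly
    from [MIN_DELAY, MAX_DELAY] before each fragment after the first."""
    schedule = []
    t = now
    for frag in fragments:
        schedule.append((t, frag))
        t += rng.uniform(MIN_DELAY, MAX_DELAY)
    return schedule
```

Passing a seeded `random.Random` instance as `rng` makes a simulation run reproducible.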
>
>
>
> On Wed, 19 Sep 2018 at 10:25, Carsten Bormann <cabo@tzi.org> wrote:
>
> Hi Rahul,
>
> > Carsten also mentioned a pacing mechanism .. while it might improve
> > forwarding efficiency, it will add to the buffer requirement. Also, such
> > a scheme might be non-trivial to implement.
>
> Pacing is something the origin of the datagram does; I wasn’t suggesting
> that the forwarders add it (which indeed essentially puts the datagram into
> buffers there).
> Clearly, pacing is counterproductive when forwarders do full reassembly,
> as it increases the time the partially assembled datagram is vulnerable to
> fragments of other datagrams hijacking the buffer — you want to capture the
> channel as much as possible then (to the level that you’d want to send
> following fragments with a higher MAC priority).  Of course, as the
> datagram travels through the network, it will become more and more smeared
> over time.
>
> It would be interesting to see how your numbers change with some pacing at
> the origin; it should be easy to hack that in if you are not trying to do
> this in a general way.
>
> > Regarding keeping a higher mac-retry: we have chosen a mac-retry of 3
> > after some experimentation (considering expected node densities and tx
> > frequency). Increasing mac-retry might not necessarily help; in fact it
> > may backfire in terms of both PDR and mean latency. Would you still
> > suggest giving it a try, and what mac-retry do you think makes sense?
>
> Yeah, yet another tunable that in a robust protocol really should be
> self-tuning.
> A MAC-layer retransmission is not useful if the layers above have already
> timed out.
> So the latency introduced by the backoff mechanism is really the
> determining factor here.
>
> Grüße, Carsten
>
>