Re: [6lo-fragmentation-dt] Performance report for fragment forwarding

Rahul Jadhav <rahul.ietf@gmail.com> Thu, 20 September 2018 11:32 UTC

From: Rahul Jadhav <rahul.ietf@gmail.com>
Date: Thu, 20 Sep 2018 17:02:18 +0530
To: "Pascal Thubert (pthubert)" <pthubert@cisco.com>
Cc: Carsten Bormann <cabo@tzi.org>, rabinarayans@huawei.com, 6lo-fragmentation-dt@ietf.org, yasuyuki.tanaka@inria.fr
Archived-At: <https://mailarchive.ietf.org/arch/msg/6lo-fragmentation-dt/UHoOUTYBvlzy8e2RSi2OQWYj2NQ>

Hello Pascal,

Please find my comments inline.

Regards,
Rahul

On Wed, 19 Sep 2018 at 20:26, Pascal Thubert (pthubert) <pthubert@cisco.com>
wrote:

> Hello Rahul :
>
>
>
> If you could instrument the intermediate nodes, I’d like to see if for
> reason of retries a fragment may catch up with another one. This can be
> tested by making a hop in the middle very far and unreliable.
>

[RJ] Sorry, but I'm not sure what you mean here.


>
>
> Also, even if you pace at the source by inserting gaps, fragments may
> still pile up at the ingress of the slowest link on the way and your
> initial problem will be recreated there. This is only visible if you
> effectively have a slower hop, but is a very classical effect in window
> based networks, the full window ends up piling at ingress of the slowest
> link. You may recreate that by injecting additional traffic via a node on
> your path.
>
>
>
[RJ] Agreed that a bottleneck link may still create a problem. But if you
look at the network in our report, there is no such bottleneck node/link;
the paths are sufficiently balanced.
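To make the pile-up effect concrete, here is a toy Python sketch (not from
our report; all timings are invented for illustration) of fragments that are
paced at the source but still accumulate at the ingress of a slower hop:

```python
# Toy model (invented timings, not measurements): fragments leave the
# source every src_gap_ms, and one hop on the path needs hop_tx_ms to
# forward each fragment (FIFO, work-conserving).

def max_ingress_queue(n_frags, src_gap_ms, hop_tx_ms):
    """Peak number of fragments held at the slow hop, counting the one
    currently being transmitted."""
    next_free = 0.0  # when the hop can start its next transmission
    departures = []  # times at which accepted fragments leave the hop
    peak = 0
    for i in range(n_frags):
        arrival = i * src_gap_ms
        start = max(arrival, next_free)
        next_free = start + hop_tx_ms
        departures.append(next_free)
        # fragments still at the hop the instant this one arrives
        peak = max(peak, sum(1 for d in departures if d > arrival))
    return peak

# If the hop keeps up with the pacing gap, nothing accumulates:
print(max_ingress_queue(10, 50, 50))   # 1
# If the hop is 3x slower than the gap, most of the datagram piles up:
print(max_ingress_queue(10, 50, 150))  # 7
```

This is exactly the classical window effect described above: pacing only at
the origin cannot stop the backlog from reforming in front of the slowest
link.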


> To avoid that you would need to pace at the source to a rate that is
> slower what the slowest hop would do if it was the source. Really hard to
> do. So ideally, each node would ensure that there is a minimum gap between
> fragments, which should be no-op everywhere along a path but at the slowest
> hop where fragments will pile up.
>
>
>
> Finally: you can limit the amount of queueing at the slowest hop if you
> use the recoverable frags draft, using windows of a few fragments for flow
> control.
>
[RJ] The amount of queueing can be limited in our case by using L2 feedback
as well, i.e. if a fragment is lost (and the sender knows this based on L2
feedback), then we can stop transmitting all subsequent fragments.
For 802.15.4 the recovery procedure can be applied based on L2 feedback
without any explicit ACK.
But right now we aren't using any of these advanced techniques.
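A minimal sketch of this L2-feedback idea (function names are hypothetical,
not from our actual implementation):

```python
# Hypothetical sketch (invented names, not our implementation): stop
# transmitting the remaining fragments of a datagram as soon as the MAC
# reports one fragment undeliverable, since reassembly at the destination
# is already doomed without it.

def send_fragments(fragments, l2_send):
    """l2_send(frag) returns True if the MAC got the frame through
    (ACKed within its retry budget), False otherwise.
    Returns (frames_attempted, datagram_delivered)."""
    for i, frag in enumerate(fragments):
        if not l2_send(frag):
            # This fragment is lost for good; every further fragment
            # would only waste airtime and forwarder buffers.
            return i + 1, False
    return len(fragments), True

# Example: the third of eight fragments fails, so five are never sent.
print(send_fragments(list(range(8)), lambda f: f != 2))  # (3, False)
```

The saving is largest for long datagrams: one early loss cancels the whole
tail of the fragment train.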

Our aim for this experiment was to check how much of an impact fragment
forwarding, as described in the VRB draft, has in single-channel mode of
operation.


>
>
> Bottom line: the more you test with one frequency only the more you’ll
> find it’s a bad idea : )
>
[RJ] Yes, we have realized (the hard way) that using fragment forwarding
in single-channel mode is a very bad idea.

>
>
> All the best;
>
>
>
> Pascal
>
>
>
> *From:* 6lo-fragmentation-dt <6lo-fragmentation-dt-bounces@ietf.org> *On
> Behalf Of *Rahul Jadhav
> *Sent:* Wednesday, 19 September 2018 07:59
> *To:* Carsten Bormann <cabo@tzi.org>
> *Cc:* rabinarayans@huawei.com; 6lo-fragmentation-dt@ietf.org;
> yasuyuki.tanaka@inria.fr
> *Subject:* Re: [6lo-fragmentation-dt] Performance report for fragment
> forwarding
>
>
>
> Doing pacing on the origin node only should be easy to change/hack and
> might improve the forwarding efficiency. But this may also shoot up the
> mean latency.
>
> We'll try this anyway (adding a randomized delay between 10 ms and 500 ms)
> to check the impact on PDR. Thanks for this suggestion.
>
>
>
> On Wed, 19 Sep 2018 at 10:25, Carsten Bormann <cabo@tzi.org> wrote:
>
> Hi Rahul,
>
> > Carsten also mentioned a pacing mechanism .. while it might improve
> forwarding efficiency it will add to the buffer requirement. Also such a
> scheme might be non-trivial to implement.
>
> Pacing is something the origin of the datagram does; I wasn't suggesting
> that the forwarders add that (which indeed essentially puts the datagram
> into buffers there).
> Clearly, pacing is counterproductive when forwarders do full reassembly,
> as it increases the time the partially assembled datagram is vulnerable to
> fragments of other datagrams hijacking the buffer — you want to capture the
> channel as much as possible then (to the level that you’d want to send
> following fragments with a higher MAC priority).  Of course, as the
> datagram travels through the network, it will become more and more smeared
> over time.
>
> It would be interesting to see how your numbers change with some pacing at
> the origin; it should be easy to hack that in if you are not trying to do
> this in a general way.
>
> > Regarding keeping higher mac-retry, ... We have chosen a mac-retry of 3
> after some experimentation (considering expected node densities and tx
> frequency). Increasing mac-retry might not necessarily help; in fact it
> may backfire in terms of both PDR and mean latency. Would you still
> suggest giving it a try, and what mac-retry value do you think makes sense?
>
> Yeah, yet another tunable that in a robust protocol really should be
> self-tuning.
> A MAC-layer retransmission is not useful if the layers above have already
> timed out.
> So the latency introduced by the backoff mechanism is really the
> determining factor here.
>
> Regards, Carsten
>
>