Re: [6lo-fragmentation-dt] Performance report for fragment forwarding

Rahul Jadhav <> Wed, 19 September 2018 05:59 UTC

From: Rahul Jadhav <>
Date: Wed, 19 Sep 2018 11:28:52 +0530
To: Carsten Bormann <>
List-Id: 6lo Fragmentation Design Team <>

Doing pacing on the origin node only should be an easy change/hack and might
improve the forwarding efficiency. But it may also drive up the mean latency.
We'll try this anyway (adding a randomized delay between 10 ms and 500 ms
between fragments) to check the impact on PDR. Thanks for this suggestion.
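A minimal sketch of what the origin-side pacing could look like. The function and parameter names are illustrative assumptions, not the actual stack's API; `send` stands in for whatever link-layer transmit call the implementation uses.

```python
import random
import time

def send_fragments_paced(fragments, send, min_delay=0.010, max_delay=0.500):
    """Transmit 6LoWPAN fragments from the origin node, inserting a
    randomized delay between consecutive fragments (pacing).

    `send` is a placeholder for the real link-layer transmit call;
    the 10-500 ms bounds match the range proposed for the experiment.
    """
    for i, frag in enumerate(fragments):
        send(frag)
        if i < len(fragments) - 1:  # no delay needed after the last fragment
            time.sleep(random.uniform(min_delay, max_delay))
```

Since only the origin paces, forwarders keep their per-fragment forwarding path unchanged; the datagram simply arrives at each hop already spread out in time.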

On Wed, 19 Sep 2018 at 10:25, Carsten Bormann <> wrote:

> Hi Rahul,
>
> > Carsten also mentioned a pacing mechanism .. while it might improve
> > fwding efficiency it will add to buffer requirement. Also such a scheme
> > might be non-trivial to be implemented.
>
> Pacing is something the origin of the datagram does; I wasn't suggesting
> that the forwarders add it (which indeed essentially puts the datagram into
> buffers there).
>
> Clearly, pacing is counterproductive when forwarders do full reassembly,
> as it increases the time the partially assembled datagram is vulnerable to
> fragments of other datagrams hijacking the buffer — you want to capture the
> channel as much as possible then (to the level that you’d want to send
> following fragments with a higher MAC priority).  Of course, as the
> datagram travels through the network, it will become more and more smeared
> over time.
>
> It would be interesting to see how your numbers change with some pacing at
> the origin; it should be easy to hack that in if you are not trying to do
> this in a general way.
>
> > Regarding keeping higher mac-retry, ... We have chosen mac-retry of 3
> > after some experimentation (considering expected node densities and tx
> > frequency). Increasing mac-retry might not necessarily help; in fact it may
> > backfire in terms of both PDR and mean latency. Would you still
> > suggest giving it a try, and what mac-retry do you think makes sense?
>
> Yeah, yet another tunable that in a robust protocol really should be
> self-tuning.
>
> A MAC-layer retransmission is not useful if the layers above have already
> timed out. So the latency introduced by the backoff mechanism is really the
> determining factor here.
>
> Grüße, Carsten
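To put a rough number on why backoff latency is the determining factor, here is a back-of-the-envelope calculation for IEEE 802.15.4 unslotted CSMA-CA on the 2.4 GHz O-QPSK PHY, using the standard's default MAC parameters (the stack used in the experiments may configure these differently):

```python
# Worst-case CSMA-CA random backoff per transmission attempt,
# IEEE 802.15.4, 2.4 GHz O-QPSK PHY (62.5 ksymbol/s, i.e. 16 us/symbol).
UNIT_BACKOFF = 20 * 16e-6          # aUnitBackoffPeriod = 20 symbols = 0.32 ms
MIN_BE, MAX_BE = 3, 5              # macMinBE / macMaxBE defaults
MAX_CSMA_BACKOFFS = 4              # macMaxCSMABackoffs default

def worst_case_backoff():
    """Sum the maximum random backoff over all CSMA attempts for one
    transmission: BE grows from MIN_BE, capped at MAX_BE, and each
    attempt may wait up to (2**BE - 1) unit backoff periods."""
    total = 0.0
    be = MIN_BE
    for _ in range(MAX_CSMA_BACKOFFS + 1):   # NB = 0 .. macMaxCSMABackoffs
        total += (2 ** be - 1) * UNIT_BACKOFF
        be = min(be + 1, MAX_BE)
    return total

per_attempt = worst_case_backoff()   # (7+15+31+31+31) * 0.32 ms ~= 36.8 ms
with_3_retries = 4 * per_attempt     # original tx + 3 MAC retries ~= 147 ms
```

So with mac-retry = 3, backoff alone can contribute on the order of 150 ms per fragment in the worst case, before frame transmission and ACK-wait times are added; if the reassembly timer or an upper-layer timeout at the destination is in the same ballpark, extra MAC retries buy nothing.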