Re: [6lo-fragmentation-dt] Performance report for fragment forwarding

"Pascal Thubert (pthubert)" <> Wed, 19 September 2018 14:56 UTC

From: "Pascal Thubert (pthubert)" <>
To: Rahul Jadhav <>, Carsten Bormann <>
CC: "" <>, "" <>, "" <>
Thread-Topic: [6lo-fragmentation-dt] Performance report for fragment forwarding
Date: Wed, 19 Sep 2018 14:56:30 +0000
List-Id: 6lo Fragmentation Design Team <>

Hello Rahul:

If you could instrument the intermediate nodes, I’d like to see whether, because of retries, a fragment can catch up with the one sent before it. This can be tested by making a hop in the middle very distant and unreliable.
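The catch-up effect can be sketched in a few lines of Python (a toy model; all parameters, names, and the topology are illustrative, not from the experiment): one fragment crosses a lossy hop and pays for MAC retries, while the next fragment, sent shortly after over a clean hop, overtakes it.

```python
import random

def hop_delay(base_ms, loss_prob, backoff_ms, rng, max_retries=3):
    """Time (ms) for one fragment to cross a hop, including MAC retries."""
    delay = base_ms
    for _ in range(max_retries):
        if rng.random() >= loss_prob:   # transmission heard by next hop
            return delay
        delay += backoff_ms + base_ms   # back off, then retransmit
    return delay                        # gave up after the last retry

rng = random.Random(42)
# Fragment 1 crosses a lossy middle hop; fragment 2, sent 10 ms later
# but over a momentarily clean hop, can arrive at the next node first.
t1 = hop_delay(base_ms=5, loss_prob=0.8, backoff_ms=30, rng=rng)
t2 = 10 + hop_delay(base_ms=5, loss_prob=0.0, backoff_ms=30, rng=rng)
print("fragment 2 catches up:", t2 < t1)
```

With a lossy enough hop, a later fragment routinely arrives first, which is exactly the reordering worth instrumenting for.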

Also, even if you pace at the source by inserting gaps, fragments may still pile up at the ingress of the slowest link on the path, and your initial problem will be recreated there. This is only visible if you actually have a slower hop, but it is a very classical effect in window-based networks: the full window ends up piling up at the ingress of the slowest link. You can recreate it by injecting additional traffic through a node on your path.
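The pile-up can be reproduced with a small FIFO queueing sketch (illustrative Python with made-up numbers): fragments arrive at the slow hop at the source’s pace, drain at the slow hop’s rate, and the backlog at its ingress grows accordingly.

```python
def peak_queue(n_frags, src_gap_ms, slow_tx_ms):
    """Peak backlog at the ingress of the slowest link when the source
    paces fragments src_gap_ms apart but the slow hop needs slow_tx_ms
    per fragment (FIFO, no losses)."""
    finish = 0.0      # when the slow link next becomes idle
    departures = []   # per-fragment departure times from the slow hop
    peak = 0
    for i in range(n_frags):
        arrival = i * src_gap_ms          # source-paced arrival time
        start = max(arrival, finish)      # wait if the link is busy
        finish = start + slow_tx_ms
        departures.append(finish)
        # fragments still queued (or in flight) when this one arrives
        backlog = sum(1 for d in departures if d > arrival)
        peak = max(peak, backlog)
    return peak

# Source pacing faster than the slow hop: nearly everything queues up.
print(peak_queue(10, src_gap_ms=10, slow_tx_ms=50))   # 9
# Pacing at the slow hop's own rate: no pile-up.
print(peak_queue(10, src_gap_ms=50, slow_tx_ms=50))   # 1
```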

To avoid that, you would need to pace at the source to a rate slower than what the slowest hop could sustain if it were the source. That is really hard to do. So ideally, each node would enforce a minimum gap between fragments; this should be a no-op everywhere along the path except at the slowest hop, where fragments will pile up.
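Hop-by-hop minimum-gap pacing can be sketched as follows (a hypothetical helper, not an implementation of any draft): a forwarder delays a fragment only when it arrives too soon after the previous one.

```python
def forward_times(arrivals_ms, min_gap_ms):
    """Earliest transmit times at a forwarder that enforces a minimum
    gap between consecutive fragments (hop-by-hop pacing sketch)."""
    out, last = [], None
    for t in arrivals_ms:
        if last is not None:
            t = max(t, last + min_gap_ms)   # delay only if too close
        out.append(t)
        last = t
    return out

# Well-spaced fragments pass through unchanged (the no-op case) ...
print(forward_times([0, 100, 200], 10))   # [0, 100, 200]
# ... while a burst arriving at the slow hop gets spread out.
print(forward_times([0, 1, 2, 3], 10))    # [0, 10, 20, 30]
```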

Finally, you can limit the amount of queueing at the slowest hop if you use the recoverable fragments draft, using windows of a few fragments for flow control.
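Why a small window bounds the queue can be shown with a generic sliding-window sketch (illustrative Python; this models window flow control in general, not the draft’s exact protocol): the source releases fragment i only once fragment i − window has left the slow hop, so the backlog there can never exceed the window.

```python
def windowed_peak_queue(n_frags, window, slow_tx_ms):
    """Peak backlog at the slow hop when at most `window` fragments
    may be outstanding (sent but not yet forwarded on)."""
    finish = 0.0      # when the slow link next becomes idle
    departures = []   # per-fragment departure times from the slow hop
    peak = 0
    for i in range(n_frags):
        # fragment i is released only once fragment i - window departed
        ready = departures[i - window] if i >= window else 0.0
        start = max(ready, finish)
        finish = start + slow_tx_ms
        departures.append(finish)
        outstanding = sum(1 for d in departures if d > ready)
        peak = max(peak, outstanding)
    return peak

print(windowed_peak_queue(20, window=3, slow_tx_ms=50))   # 3
print(windowed_peak_queue(20, window=64, slow_tx_ms=50))  # 20: window too large to help
```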

Bottom line: the more you test with only one frequency, the more you’ll find it’s a bad idea : )

All the best,

Pascal


From: 6lo-fragmentation-dt <> On Behalf Of Rahul Jadhav
Sent: Wednesday, 19 September 2018 07:59
To: Carsten Bormann <>
Subject: Re: [6lo-fragmentation-dt] Performance report for fragment forwarding

Doing pacing on the origin node only should be easy to change/hack and might improve the forwarding efficiency, but it may also drive up the mean latency.
We’ll try this anyway (adding a randomized delay between 10 ms and 500 ms) to check the impact on PDR. Thanks for this suggestion.
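The planned experiment could look like this sketch (the function and its name are hypothetical; only the 10–500 ms range comes from the mail):

```python
import random

def paced_send_times(n_frags, min_gap_ms=10, max_gap_ms=500, seed=None):
    """Transmit times for n_frags fragments with a uniform random gap
    inserted between consecutive ones (source-side pacing sketch)."""
    rng = random.Random(seed)
    t, times = 0.0, [0.0]
    for _ in range(n_frags - 1):
        t += rng.uniform(min_gap_ms, max_gap_ms)
        times.append(t)
    return times

times = paced_send_times(5, seed=1)
gaps = [b - a for a, b in zip(times, times[1:])]
# The latency trade-off mentioned above: with a uniform gap the last
# fragment leaves roughly (n_frags - 1) * 255 ms later on average.
```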

On Wed, 19 Sep 2018 at 10:25, Carsten Bormann <<>> wrote:
Hi Rahul,

> Carsten also mentioned a pacing mechanism .. while it might improve fwding efficiency it will add to buffer requirement. Also such a scheme might be non-trivial to be implemented.

Pacing is something the origin of the datagram does; I wasn’t suggesting that the forwarders add it (which would indeed essentially put the datagram into buffers there).
Clearly, pacing is counterproductive when forwarders do full reassembly: it increases the time during which the partially assembled datagram is vulnerable to fragments of other datagrams hijacking the buffer. In that case you want to capture the channel as much as possible (to the point that you’d want to send the following fragments with a higher MAC priority).  Of course, as the datagram travels through the network, it will become more and more smeared out over time.

It would be interesting to see how your numbers change with some pacing at the origin; it should be easy to hack that in if you are not trying to do this in a general way.

> Regarding keeping higher mac-retry, ... We have chosen mac-retry of 3 after some experimentation (considering expected node densities and tx frequency). increasing mac-retry might not necessarily help, in fact it may backfire in terms of both PDR as well as mean latency. Would you still suggest to give it a try and what mac-retry do you think makes sense ?

Yeah, yet another tunable that in a robust protocol really should be self-tuning.
A MAC-layer retransmission is not useful if the layers above have already timed out.
So the latency introduced by the backoff mechanism is really the determining factor here.

Regards, Carsten