Re: [MLS] Optimizing Unmerged Leaves (or: bringing the MLS tree size back to O(n))

Théophile Wallez <> Mon, 21 June 2021 11:14 UTC

To: Raphael Robert <>
Cc: MLS List <>
From: Théophile Wallez <>
Date: Mon, 21 Jun 2021 13:14:09 +0200

Hi, and sorry for the late reply,

I'll also respond inline:

On 22/05/2021 12:14, Raphael Robert wrote:
>> On 21. May 2021, at 16:59, Karthik Bhargavan <> wrote:
>> Solution 1
>> —————
>> One possibility, close to the current MLS spec would be to not store 
>> the unmerged leaf at a node if it is already stored at the parent.
>> This way, every leaf is present at most once in some 
>> `unmerged_leaves` array.
>> In the worst case, this uses 4*n + 4*(n/2) = 6*n bytes (4*n for the 
>> `unmerged_leaves` array size, 4*(n/2) because at most n/2 leaves are 
>> stored in the `unmerged_leaves` arrays).
>> In the best case, this uses 4*n bytes.
>> The tree resolution algorithm will now have to do a little more 
>> calculation to re-compute the unmerged leaves at each node.
> This sounds rather straight forward on a logical level. It clearly 
> reduces the stored/transmitted payload size, but it introduces a new 
> cost: previously only O(log n) nodes needed to be modified during an 
> Update/Remove. Now we might have to “push down” the book keeping 
> information from parents to children, in the worst case this affects 
> n/2 nodes, so the cost of modifying nodes goes up to O(n).
> If we consider a very large group, there would be a measurable 
> difference when clients load/store nodes from/to disk.
> My current intuition is that currently I/O operations can be done in 
> O(log n) with clever parent hash caching. The worst case here would be 
> the following: A client is a member of a large group and has been 
> offline for a while (which is not a far fetched scenario). The client 
> now comes online and has to process a large number of handshake 
> messages. For robustness, the client might be inclined to frequently 
> persist the intermediary state. This is where the I/O cost kicks in.

Indeed, when "pushing down" the unmerged leaves, if a leaf is pushed down 
into a blank node, we might have to modify more than O(log n) nodes. This 
could be avoided by also storing unmerged leaves in blank nodes, but that 
starts to get quite ugly.
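To make Solution 1 concrete, here is a minimal sketch (not the spec's actual data structures; the `Node` class and `leaf_range` helper field are hypothetical) of how the resolution would recompute the effective unmerged leaves at a node: each leaf index is stored at only the highest node covering it, and descendants inherit it implicitly, filtered to their own subtree.

```python
# Hypothetical sketch of Solution 1: a leaf index appears in the
# `unmerged_leaves` array of at most one node (the highest one), so the
# effective set at a node is what it stores plus what it inherits from
# its ancestors, restricted to the leaves under it.

class Node:
    def __init__(self, leaf_range=(0, 0)):
        self.unmerged_leaves = []     # indices stored at this node only
        self.leaf_range = leaf_range  # half-open range of leaves below


def effective_unmerged(node, inherited=()):
    """Recompute the full unmerged-leaves set at `node`: the entries
    stored here plus the ancestor entries that fall in this subtree."""
    lo, hi = node.leaf_range
    inherited_here = [i for i in inherited if lo <= i < hi]
    return sorted(set(node.unmerged_leaves) | set(inherited_here))
```

For example, a leaf stored as unmerged at the root is still reported as unmerged at the child whose subtree contains it, without being stored there a second time; this is also where the extra resolution work mentioned above comes from.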

>> Solution 2+
>> —————
>> However, if we wish to compress further, we can do so as well, using 
>> two facts:
>> - if a node `n` has two children `l` and `r`, then 
>> n.last_update_epoch is equal to either l.last_update_epoch or 
>> r.last_update_epoch,
>> - if `l` is unmerged for some node then l.creation_epoch = 
>> l.last_update_epoch (because in this case, the leaf never generated 
>> an UpdatePath) (the converse is not true, for example in the case 
>> when its sibling generates an UpdatePath).
>> So at each node, we can store the direction from which the node was 
>> last updated (for example with a boolean flag), then given any node 
>> `n` we can "follow the path" down the tree to obtain a leaf `l`, and 
>> we have n.last_update_epoch = l.last_update_epoch.
>> Furthermore, we don't need to store the creation epoch for leaves:
>> - if `l` is unmerged for `n`, then n.last_update_epoch < 
>> l.creation_epoch = l.last_update_epoch,
>> - if `l` is not unmerged for `n`, then n.last_update_epoch >= 
>> l.last_update_epoch (because when `l` is updated, it updates all the 
>> nodes above it).
>> This uses 8*n + 1*n = 9*n bytes for the whole tree.
> I didn’t check all the ramifications here tbh. I think it is similar 
> to solution 1 in terms of increased I/O cost.

Actually, there is no "pushing down" here: when modifying the nodes on a 
path from a leaf to the root, only those nodes are touched (we update the 
direction flag of the parent nodes on the path and the last-update epoch 
of the leaf), so the I/O cost is actually O(log n).
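A minimal sketch of the direction-flag idea from Solution 2 (the class and field names are hypothetical, not from the spec): each parent node stores one bit saying which child the last UpdatePath came through, and following those bits from any node down to a leaf `l` yields n.last_update_epoch = l.last_update_epoch, so internal nodes need not store an epoch at all.

```python
# Hypothetical sketch of Solution 2's direction flags. Only leaves store
# an epoch; each parent stores a 1-bit flag recording which child carried
# the last UpdatePath through it.

class Leaf:
    def __init__(self, last_update_epoch):
        self.last_update_epoch = last_update_epoch


class Parent:
    def __init__(self, left, right, updated_from_left):
        self.left, self.right = left, right
        self.updated_from_left = updated_from_left  # the 1-bit flag


def last_update_epoch(node):
    """Follow the direction flags down to a leaf; that leaf's
    last_update_epoch equals this node's."""
    while isinstance(node, Parent):
        node = node.left if node.updated_from_left else node.right
    return node.last_update_epoch
```

An update from a leaf only flips the flags of the parents on its direct path and bumps that leaf's epoch, which is why the modification cost stays O(log n).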

Thank you for your comments,