Re: [MLS] Unpredictable epochs?

Richard Barnes <> Tue, 07 January 2020 23:34 UTC

From: Richard Barnes <>
Date: Tue, 7 Jan 2020 18:34:14 -0500
To: Brendan McMillion <>
Cc: Raphael Robert <>, Kelvin Ritland <>, Michael Rosenberg <>, Messaging Layer Security WG <>, Benjamin Beurdouche <>, Jon Callas <>

Strictly speaking, yes, but only if everyone sees everything.  If a member
doesn't see one of the two commits, they'll have no idea, and they'll try
to decrypt with (potentially) the wrong keys.  Indicating the fork in the
epoch identifier means that this case fails instead.
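To make the failure mode concrete, here is a minimal sketch (Python, with hypothetical names; the SHA-256 choice and 64-bit truncation are assumptions for illustration, not anything the draft specifies) of how tiebreaker bits in the epoch identifier turn a silent key mismatch into a detectable failure:

```python
import hashlib

def epoch_id(sequence_number: int, commit_bytes: bytes) -> tuple:
    # Hypothetical "extended epoch": the plain sequence number plus
    # 64 tiebreaker bits derived from the serialized Commit.
    tiebreaker = hashlib.sha256(commit_bytes).digest()[:8]
    return (sequence_number, tiebreaker)

# Two concurrent Commits on top of the same epoch yield different IDs...
fork_a = epoch_id(6, b"commit from member A")
fork_b = epoch_id(6, b"commit from member B")
assert fork_a != fork_b

# ...so a member who only saw commit A can reject a message tagged with
# fork B's epoch ID, instead of trying to decrypt with the wrong keys.
def can_decrypt(local_epoch: tuple, message_epoch: tuple) -> bool:
    return local_epoch == message_epoch

assert not can_decrypt(fork_a, fork_b)
```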


On Sat, Jan 4, 2020 at 7:41 PM Brendan McMillion <> wrote:

> Strictly speaking, it’s already possible to detect forks. The content_type
> and epoch are left unencrypted, so if you have two messages with
> content_type=Commit and the same epoch number, that’s a fork. You can layer
> any sort of consensus mechanism you want on top of the protocol to handle
> those situations, whether it’s a central server, quorum voting, or
> something else.
> It seems like the problem you’re trying to solve now is “How do we know
> which fork a message belongs to, if that message is sent after a fork
> happened?” And intuitively, I would think that this should also be dealt
> with by whatever consensus mechanism the DS uses, since that mechanism
> already exists and has to do all the work of keeping track of forks and
> breaking ties.
> On Jan 3, 2020, at 8:55 AM, Richard Barnes <> wrote:
> Yeah, I'm on the same page here. My objective is to remove the roadblock
> without trying to solve the whole problem.  As Kelvin points out, there's a
> lot of interesting thinking to be done about how to manage a DAG of state.
> In chatting with Raphael just now, he proposed a nice framing -- that the
> tiebreaker bits could be used for fork *detection*.   Even in the case
> where the DS is trying to maintain order, there's a chance that things go
> wrong.  The tiebreaker bits would ensure that people don't try to use the
> wrong keys for a message in that case.
> Here's a quick PR implementing the "extended epoch" proposal:
> <>
> On Fri, Jan 3, 2020 at 10:32 AM Raphael Robert <> wrote:
>> I agree with Kelvin’s points in general. Allowing forks might be an
>> interesting goal, but it raises a wealth of new questions.
>> On the other hand I think Richard’s proposal of extending the Epoch ID
>> with a commit hash makes sense to detect potential forks.
>> I’d like to propose that we adopt that as a first step, but only in the
>> context of fork detection for now. This doesn’t preclude us from building
>> on top of that in the future, but we also don’t introduce something
>> dangerous right now.
>> Raphael
>> On 3 Jan 2020, at 00:15, Kelvin Ritland <> wrote:
>>> Not-pseudorandom implies some assumptions, e.g., that each sender would
>>> only send one Commit on top of each epoch
>> This isn't bad imo, it reduces complexity to some extent. I'm somewhat in
>> favour of a non-pseudorandom tiebreaker if there's no other arguments
>> against it.
>>> How many bits of sequence number?
>> Have we considered using varints? Similar to the CTLS proposal -
>> Other forking considerations, I think there's more involved than just
>> changing the epoch identifier:
>> 1. We'll need merges as well, since forks can lead to inconsistent group
>> membership:
>> Suppose in epoch 1 a group has A, B, and C
>> A removes C in epoch 2A
>> B adds D in epoch 2B
>> We now have two group states 2A with AB and 2B with ABCD - and it's
>> unclear which group to use to send with. We could mandate that any client
>> MUST send a "merge" commit on its view of history before sending any
>> message. This commit could just be the list of epochs that are being merged
>> (+a usual DirectPath update?), and processing it would mean replaying all
>> proposals up to the common ancestor for those epochs on the common ancestor.
>> 2. Forward secrecy issues.
>> By allowing forks, devices must now keep the previous epoch around to
>> derive possible future forks - but this means we'll be re-using
>> init_secret. A possible solution could be to use the nth app secret from
>> the ASTree instead of init_secret as the start for the new epoch secret.
>> This ties nicely with using a non-pseudorandom tiebreaker.
>> 3. Ordering implications
>> It's unclear what a sensible ordering strategy is for the DS if we allow
>> forking. (simply ensuring that each commit is based on the "current" epoch
>> isn't well defined since there can be multiple "current" epochs).
>> By allowing forks, the DS need not enforce ordering at all imo. This
>> means that apps need to keep around old epochs, but this is already
>> required per (2). A different question is how many we need to keep around -
>> perhaps a TTL or the past N epochs after a topological sort would make
>> sense.
>> Generally I think getting rid of server ordering is a good idea:
>> - Helps Matrix and general federation use cases
>> - A common consistent mutated state for group operations can cause server
>> side DB contention in my experience
>> Forking is one way to remove ordering, and if we need to support it
>> anyway, it's a nice benefit.
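A toy sketch of the membership divergence in Kelvin's point (1), treating group states as plain sets (a hypothetical simplification; real MLS state is much richer than a member set):

```python
# Epoch 1 has members {A, B, C}; two concurrent Commits fork the group.
epoch_1 = {"A", "B", "C"}

epoch_2a = epoch_1 - {"C"}   # A removes C in epoch 2A
epoch_2b = epoch_1 | {"D"}   # B adds D in epoch 2B

assert epoch_2a == {"A", "B"}
assert epoch_2b == {"A", "B", "C", "D"}

# A naive "merge" commit that replays both forks' proposals
# (remove C, add D) on the common ancestor reconciles the two views:
merged = (epoch_1 - {"C"}) | {"D"}
assert merged == {"A", "B", "D"}
```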
>> On Thu, Jan 2, 2020 at 12:52 PM Richard Barnes <> wrote:
>>> Argh, hit send too soon.  Continuing below...
>>> On Thu, Jan 2, 2020 at 3:38 PM Richard Barnes <> wrote:
>>>> Hey all,
>>>> Resurrecting this thread, as it seems that we never came to resolution
>>>> on this question.
>>>> I'd like to propose we scope down and solve a narrower problem, namely
>>>> the problem of enabling forks in the group's history.  So we're not going
>>>> to try to provide any privacy properties, just remove the impediment to
>>>> forking.  This seems like a worthwhile thing to me just because it seems
>>>> silly to have forking blocked just because of a syntactic constraint, in
>>>> addition to it likely being necessary for some decentralized use cases
>>>> (e.g., Matrix).
>>>> To allow forks, we really just need some "tie breaker" bits in the
>>>> epoch ID so that if a given epoch has multiple successors, they end up with
>>>> different epoch IDs.  And in order to avoid synchronization, each
>>>> participant needs to be able to generate those bits independently.  There
>>>> are a couple of basic questions about how to construct the tie breaker:
>>>> 1. Pseudorandom or not?
>>>>   - Not-pseudorandom example: tiebreaker = commitSenderID
>>>>   - Pseudorandom example: tiebreaker = H(MLSPlaintext(Commit))
>>>>   - Not-pseudorandom implies some assumptions, e.g., that each sender
>>>> would only send one Commit on top of each epoch
>>>>   - Pseudorandom risks random collisions
>>>>   - Possible to do both: tiebreaker = commitSenderID ||
>>>> H(MLSPlaintext(Commit))
>>>> 2. How many bits of tiebreaker?
>>>   - Non-pseudorandom size will be dictated by what we include
>>>   - Pseudorandom size will be dictated by tolerable collision probability
>>>   - Probability of collision ~ (number of forks per seq no) / 2^{number
>>> of bits}
>>> 3. How many bits of sequence number?
>>> - DS might want to see sequence to help enforce ordering
>>> - Probably want this big enough to avoid wrapping.
>>> (Note that this is a value that goes in every message, so there's some
>>> incentive to keep things small.)
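Plugging numbers into the collision estimate from point 2 above (a back-of-the-envelope check in Python; the million-fork figure is an arbitrary worst case, not a claim about real deployments):

```python
# Probability of collision ~ (number of forks per seq no) / 2^(number of bits)
def collision_prob(forks_per_seqno: int, bits: int) -> float:
    return forks_per_seqno / 2 ** bits

# With 64 pseudorandom tiebreaker bits, even a million concurrent forks
# of a single epoch leave collisions vanishingly unlikely.
assert collision_prob(2, 64) == 2 ** -63
assert collision_prob(10 ** 6, 64) < 1e-13
```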
>>> Personally, my proposal would be something like:
>>> struct {
>>>   uint64 sequence_number;
>>>   uint64 commit_hash; // H(MLSPlaintext(Commit))
>>> } EpochID;
>>> That seems to strike a reasonable balance between low collision
>>> probability (~2^-64) and a reasonably small identifier.  If 128 bits is
>>> good enough for IPv6, it can be good enough for us :)
>>> Does that seem workable to folks?
>>> --Richard
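A rough sketch of constructing that 128-bit EpochID (Python; the big-endian serialization and SHA-256 truncation are illustrative assumptions, since the struct above only fixes the field widths):

```python
import hashlib
import struct

def make_epoch_id(sequence_number: int, commit_plaintext: bytes) -> bytes:
    # uint64 sequence_number || first 64 bits of H(MLSPlaintext(Commit)),
    # giving the 128-bit identifier proposed above.
    commit_hash = hashlib.sha256(commit_plaintext).digest()[:8]
    return struct.pack("!Q", sequence_number) + commit_hash

eid = make_epoch_id(7, b"serialized MLSPlaintext(Commit)")
assert len(eid) == 16  # 128 bits total -- the same size as an IPv6 address
```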
>>>> On Fri, Apr 26, 2019 at 4:18 PM Benjamin Beurdouche <> wrote:
>>>>> > On Apr 26, 2019, at 9:26 PM, Jon Callas <> wrote:
>>>>> >
>>>>> >> On Apr 26, 2019, at 2:22 AM, Michael Rosenberg <>
>>>>> wrote:
>>>>> >>
>>>>> >> So why not remove epoch entirely?
>>>>> >
>>>>> > An epoch lets you deal with things happening neither too often nor
>>>>> not often enough. Presume there is a client that is either malicious or
>>>>> just stupid. You want to keep it from forcing a rekey every 100µs. You want
>>>>> to force a rekey every so often. Hence epochs. Yeah, picking the right
>>>>> epoch size is an exercise left to the reader.
>>>>> You can’t use the epoch number for that, as it is just a global counter
>>>>> for group operations. We will have to keep track of the latest group
>>>>> operation “timestamp” for each member within the group state to check
>>>>> the “update frequency” and handle some of the situations you described.
>>>>> Btw in the TreeKEM formal spec I use a 64 bit unsigned integer, and I
>>>>> feel like having more than 2^32 group operations over the lifetime of a
>>>>> group is not unrealistic in certain extreme use cases, especially with
>>>>> large groups forcing PCS for application messages by triggering an update
>>>>> after each app message...
>>>>> We could remove the epoch number if we really wanted to, but it is necessary
>>>>> to give the Delivery Service some ordering information (unpredictable or
>>>>> not is an interesting question) to handle concurrent handshake messages
>>>>> which is, I believe, the main current goal of that information.
>>>>> Benjamin
>>>> _______________________________________________
>>> MLS mailing list