Re: [CFRG] Security limits of ChachPoly (was: XChacha20 counter size)

Loup Vaillant-David <> Sat, 26 December 2020 22:38 UTC

From: Loup Vaillant-David <>
To: Taylor R Campbell <>
Date: Sat, 26 Dec 2020 23:38:41 +0100

The facts you lay out seem to agree with me, yet your wording suggests
that you disagree...

> The probability of each forgery attempt's success _independently_ is
> bounded by 8*floor(L/16)/2^106, where L is the maximum message
> length.

Yes it is.

> If the adversary makes q forgery attempts, then the probability of
> succeeding at _any_ forgery is bounded by q*8*floor(L/16)/2^106.

Of course. With q attempts, the chances of success are multiplied by q.
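Spelling that out in code (a sketch of my own, restating the bound
quoted above; the function names are mine):

```python
# Poly1305 forgery bounds, per the formula quoted above.
def per_attempt_bound(max_len_bytes):
    """Bound on a single forgery attempt's success probability:
    8 * floor(L/16) / 2**106, with L the maximum message length."""
    return 8 * (max_len_bytes // 16) / 2**106

def total_bound(q, max_len_bytes):
    """Union bound over q independent forgery attempts."""
    return q * per_attempt_bound(max_len_bytes)

# One attempt against 16 KiB messages, then a billion attempts:
p_one = per_attempt_bound(16 * 1024)
p_all = total_bound(10**9, 16 * 1024)
```

As expected, p_all is simply a billion times p_one: the bound is linear
in the number of attempts.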

> Note that both the number of forgery attempts _and_ the maximum
> message length figure into this bound.

I noticed that they do. I will note, however, that the number of
messages does not appear in the formula. You were saying in your first
message that the number of sent messages matters (unless it was the
number of *received* messages?):

""" Accept a single 256 GiB message, and the number of messages you can
""" safely handle -- even if all the others are tiny -- dramatically
""" drops compared to an application limited to 2^14 bytes per message
""" like TLS.

This bound, however, suggests that it doesn't. Message length and
number of messages don't compete: only the maximum message length
matters.

> ChaCha/Poly1305 and AES-GCM are no different in this respect: for
> fixed number of messages and total bytes encrypted, the forgery
> probability is proportional to q/L for both of them, with a different
> constant factor arising from differences in Poly1305 and GHASH.

Ah, my speculation was wrong, then. Thanks.

But I have to disagree about "total bytes encrypted". It's "maximum
message length". It doesn't matter whether you send 1 message of 4 KiB
or 1 million of them: for a fixed number of forgery attempts q, the
forgery probability will still be proportional to q·2^12.
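To illustrate the scaling (my own sketch, again using the bound quoted
earlier): the bound q · 8 · floor(L/16) / 2^106 grows linearly in both
the number of forgery attempts q and the maximum message length L, and
the number of legitimate messages sent is not a parameter at all.

```python
# The quoted bound is linear in q and in L; message count never enters.
def forgery_bound(q, max_len_bytes):
    return q * 8 * (max_len_bytes // 16) / 2**106

base = forgery_bound(2**20, 4096)
assert forgery_bound(2**21, 4096) == 2 * base  # doubling the attempts
assert forgery_bound(2**20, 8192) == 2 * base  # doubling the length cap
```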

> Using a message sequence number as the nonce so you can quickly
> reject outdated message numbers mitigates the issue for
> ChaCha/Poly1305, for the reason you describe.  But that's not as
> relevant for XChaCha, which is meant for applications where you may
> not have a reliable message sequence number.

Wait a minute: what do sequence numbers have to do with anything? None
of the sources I have seen so far mention the relevance of rejecting
outdated nonces.

> I attempted to sketch the qualitative
> differences in this table in the Daence paper:

Ah, *there* is the crucial detail you didn't repeat here:

""" We suppose for the sake of presentation on two-dimensional paper
""" that the adversary can attempt about a million times as many
""" forgeries as there are legitimate messages encrypted by the user.

This assumes that the number of attempts an adversary can make is
directly proportional to the number of sent messages. I disagree with
this assumption, at least to some extent. In real systems, the number
of forgery attempts an adversary can make is equal to the number of
messages the *recipient* can (and will) attempt to authenticate.

 | number of messages  | number of messages | Max number of    |
 | Alice sends         | Bob can receive    | forgery attempts |
 |---------------------|--------------------|------------------|
 |        10           |        100         |       100        |
 |        90           |        100         |    100 again     |
 |     1 million       |        100         |    still 100     |

(In the last line, Alice is sending too many messages; Bob will not
receive them all.)

I reckon that in practice, we tend to size systems so they can accept
about as many messages as they get, and not many more. But this is an
indirect relationship. If I were Wikileaks, I might receive many more
forgery attempts than genuine messages from the actual whistle-blower.
Yet it is critical that I process them all, so I can actually receive
the genuine ones. It doesn't matter that the whistle-blower only sends
3 messages per week; what does matter is that three-letter agencies
are sending millions of forgeries per day.

(I do acknowledge that the high number of forgery attempts means I
should limit message length to compensate. If your point was that
message length and number of forgery attempts compete… Yes, I admit
that they do. And no, I didn't know. Thanks for teaching me that.)
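Turning that trade-off around (a back-of-the-envelope of my own: the
2^-50 target, the traffic figures, and the function name below are
illustrative assumptions, not from any spec): given an expected forgery
load q, one can solve the same bound for the largest message length
that keeps the total forgery probability under a chosen budget.

```python
# Solve q * 8 * floor(L/16) / 2**106 <= target for L (illustrative).
def max_message_len(q, target=2.0**-50):
    """Largest message length in bytes (rounded down to a Poly1305
    block) such that q * 8 * (L // 16) / 2**106 does not exceed
    `target`."""
    blocks = int(target * 2**106 / (8 * q))
    return blocks * 16

# A busy inbox facing ~2**20 forgeries a day for ~2**10 days:
q = 2**30
print(max_message_len(q))  # 134217728 bytes, i.e. a 128 MiB cap
```

With a million forgeries a day for a few years, the same 2^-50 budget
still allows a 128 MiB cap on message length, so limiting message
length to compensate for a high forgery load is quite affordable.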

> No objection to making the XChaCha draft clearer about what the hard
> limit is.  I am a big fan of clearly articulated usage limits.

Perfect :-)