Re: [openpgp] AEAD Chunk Size

Bart Butler <> Sun, 31 March 2019 04:11 UTC

To: Jon Callas <>
From: Bart Butler <>
Cc: "Neal H. Walfield" <>, "" <>, Justus Winter <>, Peter Gutmann <>

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, March 30, 2019 4:09 PM, Jon Callas <> wrote:



> > On Mar 29, 2019, at 7:17 PM, Bart Butler wrote:
> > Hi Jon,
> > As others have noted, there is a lot of confusion on this thread, some of which you touched on in your AEAD Conundrum message: when we say AEAD should not release unauthenticated plaintext, do we mean the entire message or the chunk?

> That is precisely the question. But the bigger question is whether you care about that. Sometimes it matters, sometimes it does not.

> For example, let’s suppose that I have a very large blob on mass storage. Media failures happen. If there’s a bad block on a disk, do you want to lose the whole file because of it? Sometimes you do want to throw your hands up in the air. That is most common in an interactive protocol, and in general the answer there is yes: with an SSL connection to a server, if there’s funny business going on, you want to tear down the connection and start over. On the other hand, if you had an archival thing (e.g. a tar file with some historic documents), you want to recover as much as you can.

> OpenPGP is in general the latter case rather than the former. I believe it’s less important to have strict semantics on failures because it’s usually storage.

I agree. I would say my point is that with sufficiently small chunks, the user/decrypter can choose what kind of failure behavior is appropriate. Large chunks rob the decrypter of that choice.
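As a sketch of what I mean (toy code only: an HMAC stands in for a real per-chunk AEAD tag, and nothing here is the OpenPGP wire format), the decrypter of a chunked message can pick either failure policy at decryption time:

```python
import hmac
import hashlib
import os

KEY = os.urandom(32)  # toy key; a real implementation derives per-message keys


def seal_chunks(plaintext: bytes, chunk_size: int):
    """Split plaintext into chunks, each carrying its own tag.
    The chunk's byte offset is bound into the tag so chunks
    cannot be reordered without detection."""
    sealed = []
    for i in range(0, len(plaintext), chunk_size):
        chunk = plaintext[i:i + chunk_size]
        tag = hmac.new(KEY, i.to_bytes(8, "big") + chunk, hashlib.sha256).digest()
        sealed.append((chunk, tag))
    return sealed


def open_chunks(sealed, salvage: bool) -> bytes:
    """The decrypter chooses the failure policy: abort on the first
    bad chunk (strict), or skip damaged chunks and recover the rest
    (archival use, e.g. a bad disk block in a big tar file)."""
    recovered = b""
    offset = 0
    for chunk, tag in sealed:
        expect = hmac.new(KEY, offset.to_bytes(8, "big") + chunk,
                          hashlib.sha256).digest()
        if hmac.compare_digest(tag, expect):
            recovered += chunk
        elif not salvage:
            raise ValueError("chunk failed authentication; releasing nothing")
        offset += len(chunk)
    return recovered


sealed = seal_chunks(b"historic archive data, many chunks", chunk_size=8)
sealed[1] = (b"XXXXXXXX", sealed[1][1])        # simulate one corrupted block
print(len(open_chunks(sealed, salvage=True)))  # the rest of the file survives
```

With large chunks that choice disappears: one bad block inside a huge chunk invalidates the whole chunk, so there is nothing left to salvage.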

> > Another piece of confusion is that Efail isn't a single vulnerability, it was several vulnerabilities related (at best) thematically.

> I understand Efail. Trust me.

I do. I was a bit excessively pedantic in defining everything because I felt this thread has suffered from ambiguity in several terms.


> > So to be very specific, for the purpose of the following discussion, the advantage of smaller AEAD chunks is specifically to prevent Efail-style ciphertext malleability/gadget attacks, and the prohibition on releasing unauthenticated plaintext is applied to individual chunks, which is sufficient to foil this kind of attack in email.

> I disagree. If you want to prevent something like Efail, you want larger chunks. Assuming that you believe that early release matters.

> Let’s rewind here, and not talk about Efail, let’s talk about the real issue. If you want the entire blob to have all-or-nothing semantics, then you want the fewest number of chunks as is reasonable. If you have attacker-controlled inputs, then every joint between the chunks is a vulnerability.

OK, I think this is the part that I don't understand. Why does it matter what chunking scheme is used here? If my app requires all-or-nothing semantics, I would program it to require that every chunk pass authentication, with no truncation, and to release no plaintext otherwise. So why would every joint be a vulnerability?

> > What value does large-chunk AEAD actually provide? What I'm getting from the AEAD Conundrum message is that it's a way for the message encrypter to leverage the "don't release unauthenticated chunks" prohibition to force the decrypter to decrypt the whole message before releasing anything. Why do we want to give the message creator this kind of power? Why should the message creator be given the choice to force her recipient to either decrypt the entire message before release or be less safe than she would have been with smaller chunks?

> Let me summarize the conundrum: if you want strict AEAD no-release semantics, you want fewer chunks.

I guess this is my fundamental question. You can force no-release semantics at the application level for any chunk size scheme, right?
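To make the question concrete, here is roughly what I have in mind (a sketch only; `expected_count` is my stand-in for whatever end-of-message check the format actually provides, such as a final tag, and `tag_ok` stands for the result of per-chunk AEAD verification):

```python
from typing import Iterable, Tuple


def decrypt_all_or_nothing(chunks: Iterable[Tuple[bytes, bool]],
                           expected_count: int) -> bytes:
    """Application-level no-release policy, independent of chunk size.

    `chunks` yields (plaintext, tag_ok) pairs from any chunked AEAD
    decoder. Every chunk is verified, and the total count is checked
    against truncation, before a single byte is released."""
    buffered = []
    for plaintext, tag_ok in chunks:
        if not tag_ok:
            raise ValueError("chunk failed authentication; releasing nothing")
        buffered.append(plaintext)       # held back, not yet released
    if len(buffered) != expected_count:
        raise ValueError("message truncated; releasing nothing")
    return b"".join(buffered)            # released only after everything passed
```

The point being: this loop enforces all-or-nothing regardless of whether the underlying chunks are 16 KiB or 4 GiB, which is why I don't see what the large chunk size buys.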


> > Coming back to Neal's point, it's really hard to see any sort of value in really large AEAD chunks, because the performance overhead is negligible at that point and the only security 'benefit' that I can see is the encrypter trying to use the spec to force the decrypter to not stream, which does not seem like something at all desirable.

> Okay, here’s another thing that’s a pet peeve of mine. We’re arguing security and you brought up performance. I never mentioned performance; the people who want large chunks haven’t brought it up. They want large chunks because they perceive it to be more secure.

> If you respond to a security request with a performance answer, you literally don’t know what you’re talking about. So let’s toss that aside.

I apologize, I was not trying to create a strawman here, but I am completely at a loss for what the benefit of large chunks is.

> Elided out of this, and possibly important, is that “support” includes chunks smaller than that size. I should have said that, but I wanted it to be as stark as possible. So let me repeat it and abstract it with some variables:
>
> (1) MUST support up to <small-chunk-size> chunks.
> (2) SHOULD support up to <larger-chunk-size> chunks, as these are common.
> (3) MAY support larger chunks, up to the present very large size.
> (4) MAY reject or error out on chunks larger than <small-chunk-size>, but, repeating ourselves, SHOULD support <larger-chunk-size>.

> Clauses (3) and (4) set up a sandbox for the people who want very large chunks. They can do whatever they want, and the rest of us can ignore them. Why get rid of that? It doesn’t add any complexity to the code. It lets the people who want the huge ones do them in their own environment and not bother other people.

> My concern is over (1) and (2) and specifically that there’s both <small> and <large> sizes.

> I think that’s an issue. If there are two numbers, we are apt to end up with skew before settling on one, so it’s better to agree on just one. That’s the real wart in my proposal.

I'm OK with eliminating (2) and just using the MAY part to take care of any legacy 256K messages OpenPGP.js users might have. As I said, we don't have any of these messages in production yet, and I'd err on the side of a cleaner spec. I just really want to understand the security benefit of large chunks, and right now I clearly do not.
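For what it's worth, my reading of your clauses (1)-(4) is a local acceptance policy along these lines (the sizes are placeholders of my own, since the thread hasn't settled on numbers):

```python
# Hypothetical sizes for illustration only; not from any draft text.
SMALL_CHUNK = 16 * 1024    # clause (1): MUST support up to this
LARGER_CHUNK = 256 * 1024  # clause (2): SHOULD support up to this
HUGE_CHUNK = 1 << 62       # clause (3): MAY support, at local option


def accept_chunk_size(size: int, max_supported: int) -> bool:
    """Decide whether this implementation processes a chunk of `size` bytes.

    `max_supported` is a local policy knob. Clause (4) lets an
    implementation reject anything above SMALL_CHUNK, though supporting
    up to LARGER_CHUNK remains a SHOULD."""
    if size <= SMALL_CHUNK:
        return True                  # (1) MUST: always accepted
    return size <= max_supported     # (2)/(3): SHOULD/MAY, per local policy
```

If we drop (2), the `LARGER_CHUNK` tier disappears and `max_supported` simply covers whatever MAY-level legacy sizes an implementation chooses to keep accepting.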