Re: [Ohai] Regarding PFS

David Schinazi <dschinazi.ietf@gmail.com> Wed, 31 January 2024 01:20 UTC

References: <CACpbDcfyrc3nArGNLssXAcJBcNQSfcb9yPk=qcJCx-BoaPuscg@mail.gmail.com> <1a2d0740-8f12-477a-b4c0-c878aa7fa15e@betaapp.fastmail.com>
In-Reply-To: <1a2d0740-8f12-477a-b4c0-c878aa7fa15e@betaapp.fastmail.com>
From: David Schinazi <dschinazi.ietf@gmail.com>
Date: Tue, 30 Jan 2024 17:19:58 -0800
Message-ID: <CAPDSy+6WGi31CAt-7mnoO3VAr-6S5J-viizm1q98K=voM8ynNg@mail.gmail.com>
To: Martin Thomson <mt@lowentropy.net>
Cc: ohai@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/ohai/sSrWSTifDfz77TYaEORHq432LsA>
Subject: Re: [Ohai] Regarding PFS

Thanks Martin, I agree with all the facts you laid out, but I differ on the
conclusions.

If we focus on X25519, the request size is the same, the response size is
increased by 16 bytes, and the added CPU cost is not huge. While PFS
applies to both chunked and regular OHTTP, the fact that chunked enables
large responses significantly changes the tradeoffs, because in that
scenario the additional 16 bytes in the response and HPKE CPU costs are not
going to be noticeable compared to the cost of AEAD'ing the entire response.
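
To put rough numbers on that (a back-of-the-envelope sketch, assuming
DHKEM(X25519, HKDF-SHA256) with AES-128-GCM, i.e. a 32-byte enc and a
16-byte response nonce today; illustrative only):

    # Encapsulated response = prefix || AEAD-protected payload.
    N_ENC = 32      # X25519 enc that would replace the response nonce
    MAX_NN_NK = 16  # max(Nn, Nk) for AES-128-GCM: the response nonce today
    AEAD_TAG = 16   # paid in both variants

    def response_size(payload: int, pfs: bool) -> int:
        prefix = N_ENC if pfs else MAX_NN_NK
        return prefix + payload + AEAD_TAG

    for payload in (100, 16_000, 1_000_000):
        delta = response_size(payload, True) - response_size(payload, False)
        pct = 100 * delta / response_size(payload, False)
        print(f"{payload:>9}-byte payload: +{delta} bytes ({pct:.3f}% larger)")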

If tomorrow a quantum computer breaks X25519 and we need to switch to
ML-KEM, the overheads will be more noticeable but I honestly don't think
they're going to be high on our list of things to fix compared to
everything else.
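
For a rough sense of "more noticeable": the encapsulated-key sizes below
come from RFC 9180 (X25519) and FIPS 203 (ML-KEM ciphertexts); mapping
them onto the OHTTP response prefix is just my extrapolation for
comparison, not anything specified:

    # Extra bytes per response if the 16-byte nonce were replaced by an
    # encapsulated key of the given KEM (sizes in bytes).
    ENC_SIZES = {
        "DHKEM(X25519)": 32,
        "ML-KEM-512":    768,
        "ML-KEM-768":    1088,
        "ML-KEM-1024":   1568,
    }

    for kem, enc in ENC_SIZES.items():
        print(f"{kem:<14} enc = {enc:>4}  -> extra response bytes: {enc - 16}")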

I'm also not too worried about negotiation: if we reuse the client's `enc`
value, the client's flight is exactly the same with and without PFS.
Additionally, the client can indicate support via an HTTP header inside the
encrypted request.
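
For illustration, here is a much-simplified sketch of that reuse, with
raw X25519 + HKDF + AES-GCM standing in for a real HPKE implementation;
the labels and helper names are made up for the example, not taken from
any draft:

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    RAW = (serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    def seal(shared_secret: bytes, label: bytes, plaintext: bytes) -> bytes:
        # Every key below is derived from a fresh DH output and used once,
        # so a fixed nonce is acceptable for this sketch.
        key = HKDF(hashes.SHA256(), 16, salt=None, info=label).derive(shared_secret)
        return AESGCM(key).encrypt(b"\x00" * 12, plaintext, None)

    def unseal(shared_secret: bytes, label: bytes, ciphertext: bytes) -> bytes:
        key = HKDF(hashes.SHA256(), 16, salt=None, info=label).derive(shared_secret)
        return AESGCM(key).decrypt(b"\x00" * 12, ciphertext, None)

    # Client: one ephemeral key pair doubles as the request 'enc' and as the
    # recipient key for the response, so the client's flight does not grow.
    server_static = X25519PrivateKey.generate()   # stand-in for the key config
    client_eph = X25519PrivateKey.generate()
    enc = client_eph.public_key().public_bytes(*RAW)   # 32 bytes, same as today
    request_ct = seal(client_eph.exchange(server_static.public_key()),
                      b"request", b"GET /example")

    # Server: decrypts with its static key, then encapsulates the response to
    # the client's ephemeral key using a fresh ephemeral key of its own.
    client_pub = X25519PublicKey.from_public_bytes(enc)
    server_eph = X25519PrivateKey.generate()
    response_enc = server_eph.public_key().public_bytes(*RAW)  # replaces the nonce
    response_ct = seal(server_eph.exchange(client_pub), b"response", b"200 OK")

    # Client: once both ephemeral private keys are deleted, a later compromise
    # of the server's static key no longer exposes this response.
    resp_shared = client_eph.exchange(X25519PublicKey.from_public_bytes(response_enc))
    assert unseal(resp_shared, b"response", response_ct) == b"200 OK"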

So, from my perspective, the costs are low and the benefits are medium. That
said, this rests on my intuition that we'll see more use cases that combine
OHTTP with blinded tokens in the future, and that's not a guarantee.

If the WG prefers to avoid this complexity, we'll instead have to add some
text stating that chunked OHTTP really shouldn't be used in cases where the
response is more privacy-sensitive than the request. That's not my favorite
path forward, but it's a reasonable one.

David

On Mon, Jan 29, 2024 at 7:28 PM Martin Thomson <mt@lowentropy.net> wrote:

> On Tue, Jan 30, 2024, at 13:58, Jana Iyengar wrote:
> > (Adding PFS to OHTTP is a great idea that we should pursue,
> > but it increases this overlap.)
>
> I'm picking on Jana here as a starting point of a conversation, but this
> is really about David's idea.
>
> David's suggestion was to have both request and response use HPKE.  The
> client would send a request toward a static server key; the server would
> send a response toward an ephemeral client key.
>
> The advantage of this is that responses depend on purely ephemeral keys.
> Only the client and server have the ability to decrypt them and only for as
> long as they retain those keys.  This would be an improvement in some
> situations and I'd be supportive of exploring those situations.
>
> There are, however, costs that need to be considered.
>
> 1. Additional request size.  To do this properly, the client would need to
> generate a fresh ephemeral key and send a public key with its request.  For
> something like X25519, that's 32 extra bytes; ML-KEM or hybrid KEMs are
> quite a lot larger.
>
> David originally suggested using the client's `enc` value for this, which
> would save this cost.  That would be possible with the X25519-based KEM
> that is in common usage and with other DH-based KEMs.  However, that would
> remove some generality as other KEMs don't work like that (ML-KEM is one).
>
> 2. Additional response size.  The 16-byte nonce could go, replaced by a
> freshly encapsulated secret.  This would be anywhere from 32 bytes
> (X25519) to well over a kilobyte (ML-KEM).
>
> 3. Additional compute cost.  The cost of generating separate key pairs and
> then using them is modest, but significant relative to the existing costs
> as it essentially doubles the largest CPU cost component of the scheme.
>
> 4. Negotiating use.  This is the one that might be the hardest.  An
> in-place upgrade is not easy.  It comes down to all of the same questions
> that Tommy went through at the last meeting around format negotiation.  You
> can make it optional through in-band negotiation or having it be part of a
> selected key configuration, but that could divide anonymity sets in two.
> You can make it non-optional, but that risks excluding some peers who don't
> support the format.  Had this been part of the original design, it might
> have been easier to justify, but it gets a bit harder when there are
> deployment considerations in play.
>
> Then there are the benefits.  A lot of the uses for OHTTP I have seen
> depend on request privacy more than response privacy.  Often, this is
> because the client is the one with something to say that might have privacy
> implications.  The response might carry information that ties back to the
> request, but the request contains the really dangerous stuff that we're
> trying to protect.  Better protection for responses is not nothing, but it
> doesn't necessarily improve the situation much.
>
> OHTTP clients are often anonymous.  A server will generally send the same
> response to a different client that makes the same request.  A compromise
> of the static server key will let an attacker decrypt requests.  In that
> case, an attacker can decrypt the request and make that same request to get
> the same answer. For those arrangements, response protection has negligible
> added value.
>
> This changes with the inclusion of anonymous tokens, provided that you use
> anti-replay.  So it is not nothing, but it is still worth keeping in mind
> the limited applicability (and this is the limited applicability working
> group, after all).
>
> The other fact to consider is that encrypted messages are only available
> to three entities: client, relay, and server, with only the relay not being
> able to read the content.  You might not trust a relay with the content,
> but you do trust it to protect privacy and to help shield the server from
> abusive clients.  That leaves a gap where you might want to strengthen
> confidentiality protections against the relay, but it is a small one.  I
> don't know how to balance that gain against the drawbacks.
>