Re: [Ohai] Regarding PFS

David Schinazi <dschinazi.ietf@gmail.com> Thu, 01 February 2024 19:27 UTC

References: <CACpbDcfyrc3nArGNLssXAcJBcNQSfcb9yPk=qcJCx-BoaPuscg@mail.gmail.com> <1a2d0740-8f12-477a-b4c0-c878aa7fa15e@betaapp.fastmail.com> <CAPDSy+6WGi31CAt-7mnoO3VAr-6S5J-viizm1q98K=voM8ynNg@mail.gmail.com> <CAN8C-_JzW1m5gU-t0RXayyA8DToZeGtRwyyv+G1ei9VbpVVi9g@mail.gmail.com>
In-Reply-To: <CAN8C-_JzW1m5gU-t0RXayyA8DToZeGtRwyyv+G1ei9VbpVVi9g@mail.gmail.com>
From: David Schinazi <dschinazi.ietf@gmail.com>
Date: Thu, 01 Feb 2024 11:27:24 -0800
Message-ID: <CAPDSy+5gbowAFvgdxqjNAmKO6rO80pMGjnT2+e5NF7ddmnp9pg@mail.gmail.com>
To: Orie Steele <orie@transmute.industries>
Cc: Martin Thomson <mt@lowentropy.net>, ohai@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/ohai/ZDwRvwuHutGhUfLezAKycujrDdA>
Subject: Re: [Ohai] Regarding PFS

That's right, the trick of reusing `enc` works for X25519 but not ML-KEM.
I don't think AuthEncap/AuthDecap change this property, but I'm not
knowledgeable enough about HPKE to be sure.
David
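
(For concreteness, a minimal sketch of why that trick is DH-specific, using raw
X25519 from the Python `cryptography` package; this is illustrative only, not
draft or library code:)

    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Client: DHKEM(X25519) already sends an ephemeral public key as `enc`.
    client_eph = X25519PrivateKey.generate()
    enc = client_eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)  # 32 bytes

    # Gateway: because `enc` *is* a public key, the gateway can encapsulate a
    # response secret toward it; the client adds nothing to its request.
    gateway_eph = X25519PrivateKey.generate()
    resp_enc = gateway_eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    secret_at_gateway = gateway_eph.exchange(X25519PublicKey.from_public_bytes(enc))

    # Client: recovers the same secret from the gateway's 32-byte resp_enc.
    secret_at_client = client_eph.exchange(X25519PublicKey.from_public_bytes(resp_enc))
    assert secret_at_gateway == secret_at_client

    # An ML-KEM `enc`, by contrast, is a ciphertext created toward the recipient's
    # public key, not a public key itself, so nothing can be encapsulated back
    # toward it; a PQ response path would need a separate client public key.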

On Tue, Jan 30, 2024 at 5:48 PM Orie Steele <orie@transmute.industries>
wrote:

> Unless I am missing something, you won't be able to switch to any KEM that
> does not output a public key as its encapsulated key value.
>
> So you won't be able to migrate to ML-KEM, without changing the protocol.
>
> It doesn't look like HPKE AuthEncap/AuthDecap are currently supported; would
> they help?
>
> OS
>
>
>
> On Tue, Jan 30, 2024, 7:20 PM David Schinazi <dschinazi.ietf@gmail.com>
> wrote:
>
>> Thanks Martin, I agree with all the facts you wrote, but differ in the
>> conclusions.
>>
>> If we focus on X25519, the request size is the same, the response size is
>> increased by 16 bytes, and the added CPU cost is not huge. While PFS
>> applies to both chunked and regular OHTTP, the fact that chunked enables
>> large responses significantly changes the tradeoffs, because in that
>> scenario the additional 16 bytes in the response and HPKE CPU costs are not
>> going to be noticeable compared to the cost of AEAD'ing the entire response.
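>>
>> (Illustrative arithmetic only, with a made-up response size, to show how small
>> the fixed overhead is next to a large chunked response:)
>>
>> response_body = 10 * 1024 * 1024   # hypothetical 10 MiB chunked response
>> extra_bytes = 16                    # 32-byte X25519 enc replacing the 16-byte nonce
>> print(f"size overhead: {extra_bytes / response_body:.6%}")   # ~0.000153%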
>>
>> If tomorrow a quantum computer breaks X25519 and we need to switch to
>> ML-KEM, the overheads will be more noticeable but I honestly don't think
>> they're going to be high on our list of things to fix compared to
>> everything else.
>>
>> I'm also not too worried about negotiation: if we reuse the client's
>> `enc` value, the client's flight is exactly the same with and without PFS.
>> Additionally, the client can indicate support via an HTTP header inside the
>> encryption.
>>
>> So, from my perspective, the costs are low and the benefits are medium. That
>> assessment rests on my intuition that we might see more use cases for OHTTP
>> with blinded tokens in the future, but that's not a guarantee.
>>
>> If the WG prefers to avoid this complexity, we'll instead have to add
>> some text stating that chunked OHTTP really shouldn't be used in cases
>> where the response is more privacy-sensitive than the request. That's not
>> my favorite path forward, but it's a reasonable one.
>>
>> David
>>
>> On Mon, Jan 29, 2024 at 7:28 PM Martin Thomson <mt@lowentropy.net> wrote:
>>
>>> On Tue, Jan 30, 2024, at 13:58, Jana Iyengar wrote:
>>> > (Adding PFS to OHTTP is a great idea that we should pursue,
>>> > but it increases this overlap.)
>>>
>>> I'm picking on Jana here as a starting point of a conversation, but this
>>> is really about David's idea.
>>>
>>> David's suggestion was to have both request and response use HPKE.  The
>>> client would send a request toward a static server key; the server would
>>> send a response toward an ephemeral client key.
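>>>
>>> (To make the shape of that concrete: a rough sketch using raw X25519 + HKDF +
>>> AES-GCM from the Python `cryptography` package as a stand-in for the full
>>> RFC 9180 HPKE key schedule; names and details here are illustrative, not from
>>> any draft:)
>>>
>>> from cryptography.hazmat.primitives.asymmetric.x25519 import (
>>>     X25519PrivateKey, X25519PublicKey)
>>> from cryptography.hazmat.primitives.kdf.hkdf import HKDF
>>> from cryptography.hazmat.primitives.hashes import SHA256
>>> from cryptography.hazmat.primitives.ciphers.aead import AESGCM
>>> from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
>>>
>>> def seal_to(recipient_pk, plaintext, info):
>>>     """Encrypt toward a public key; returns (enc, ciphertext)."""
>>>     eph = X25519PrivateKey.generate()
>>>     enc = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
>>>     key = HKDF(SHA256(), 16, salt=None, info=info).derive(eph.exchange(recipient_pk))
>>>     return enc, AESGCM(key).encrypt(b"\x00" * 12, plaintext, None)  # single-use key
>>>
>>> def open_from(recipient_sk, enc, ct, info):
>>>     key = HKDF(SHA256(), 16, salt=None, info=info).derive(
>>>         recipient_sk.exchange(X25519PublicKey.from_public_bytes(enc)))
>>>     return AESGCM(key).decrypt(b"\x00" * 12, ct, None)
>>>
>>> # Request: the client encrypts toward the gateway's long-lived static key, as today.
>>> gateway_static = X25519PrivateKey.generate()
>>> req_enc, req_ct = seal_to(gateway_static.public_key(), b"GET /", b"request")
>>>
>>> # Response: the gateway encrypts toward a per-request ephemeral client key that
>>> # the client discards afterwards, so a later compromise of the static key does
>>> # not expose the response.  (In the real proposal the client would either reuse
>>> # its request `enc` or carry this public key inside the encrypted request.)
>>> client_eph = X25519PrivateKey.generate()
>>> resp_enc, resp_ct = seal_to(client_eph.public_key(), b"200 OK", b"response")
>>> assert open_from(client_eph, resp_enc, resp_ct, b"response") == b"200 OK"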
>>>
>>> The advantage of this is that responses depend on purely ephemeral
>>> keys.  Only the client and server have the ability to decrypt them and only
>>> for as long as they retain those keys.  This would be an improvement in
>>> some situations and I'd be supportive of exploring those situations.
>>>
>>> There are, however, costs that need to be considered.
>>>
>>> 1. Additional request size.  To do this properly, the client would need
>>> to generate a fresh ephemeral key and send a public key with its request.
>>> For something like X25519, that's 32 extra bytes; ML-KEM or hybrid KEMs are
>>> quite a lot larger.
>>>
>>> David originally suggested using the client's `enc` value for this,
>>> which would save this cost.  That would be possible with the X25519-based
>>> KEM that is in common usage and with other DH-based KEMs.  However, that
>>> would remove some generality as other KEMs don't work like that (ML-KEM is
>>> one).
>>>
>>> 2. Additional response size.  The 16-byte nonce could be dropped and replaced
>>> with a freshly encapsulated secret.  This would be anywhere from 32 bytes
>>> (X25519) to well over a kilobyte (ML-KEM).
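>>>
>>> (Rough per-message deltas, for illustration; the ML-KEM-768 sizes are the
>>> FIPS 203 figures, and picking that parameter set is an assumption:)
>>>
>>> NONCE_TODAY = 16  # bytes of response nonce in the current scheme
>>> KEMS = {"X25519": (32, 32), "ML-KEM-768": (1184, 1088)}  # (public key, encapsulation)
>>>
>>> for name, (pk, ct) in KEMS.items():
>>>     extra_req = pk                 # fresh client public key carried in the request
>>>     extra_resp = ct - NONCE_TODAY  # fresh encapsulation replaces the nonce
>>>     print(f"{name:>10}: request +{extra_req} B, response +{extra_resp} B")
>>> # Reusing the client's `enc` (DH-based KEMs only) would bring the X25519
>>> # request delta back to zero.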
>>>
>>> 3. Additional compute cost.  The cost of generating separate key pairs
>>> and then using them is modest, but significant relative to the existing
>>> costs as it essentially doubles the largest CPU cost component of the
>>> scheme.
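>>>
>>> (One quick, unscientific way to put a number on the per-message DH cost, using
>>> the Python `cryptography` package; absolute figures will vary by machine:)
>>>
>>> import timeit
>>> from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
>>>
>>> peer = X25519PrivateKey.generate().public_key()
>>> one_encap = lambda: X25519PrivateKey.generate().exchange(peer)  # keygen + DH
>>> per_op = timeit.timeit(one_encap, number=2000) / 2000
>>> print(f"~{per_op * 1e6:.0f} us per encapsulation; adding a second one for the "
>>>       f"response roughly doubles the asymmetric-crypto cost per message")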
>>>
>>> 4. Negotiating use.  This is the one that might be the hardest.  An
>>> in-place upgrade is not easy.  It comes down to all of the same questions
>>> that Tommy went through at the last meeting around format negotiation.  You
>>> can make it optional through in-band negotiation or by having it be part of a
>>> selected key configuration, but that could divide anonymity sets in two.
>>> You can make it non-optional, but that risks excluding some peers who don't
>>> support the format.  Had this been part of the original design, it might
>>> have been easier to justify, but it gets a bit harder when there are
>>> deployment considerations in play.
>>>
>>> Then there are the benefits.  A lot of the uses for OHTTP I have seen
>>> depend on request privacy more than response privacy.  Often, this is
>>> because the client is the one with something to say that might have privacy
>>> implications.  The response might carry information that ties back to the
>>> request, but the request contains the really dangerous stuff that we're
>>> trying to protect.  Better protection for responses is not nothing, but nor
>>> does it necessarily improve the situation much.
>>>
>>> OHTTP clients are often anonymous.  A server will generally send the
>>> same response to a different client that makes the same request.  A
>>> compromise of the static server key will let an attacker decrypt requests.
>>> In that case, an attacker can decrypt the request and make that same
>>> request to get the same answer. For those arrangements, response protection
>>> has negligible added value.
>>>
>>> This changes with the inclusion of anonymous tokens, provided that you
>>> use anti-replay.  So it is not nothing, but it is still worth keeping in
>>> mind the limited applicability (and this is the limited applicability
>>> working group, after all).
>>>
>>> The other fact to consider is that encrypted messages are only ever handled
>>> by three entities: client, relay, and server, and of those only the relay
>>> cannot read the content.  You might not trust a relay with the content,
>>> but you do trust it to protect privacy and to help shield the server from
>>> abusive clients.  That leaves a gap where you might want to strengthen
>>> confidentiality protections against the relay, but it is a small one.  I
>>> don't know how to balance that gain against the drawbacks.
>>>
>