Re: [Ohai] The OHAI WG has placed draft-ohai-chunked-ohttp in state "Call For Adoption By WG Issued"

David Schinazi <dschinazi.ietf@gmail.com> Fri, 26 January 2024 01:08 UTC

To: Tommy Pauly <tpauly@apple.com>
Cc: ohai@ietf.org, Shivan Kaul Sahib <shivankaulsahib@gmail.com>

Thanks all, that was definitely the bit of context that I was missing. I do
find the "99% short responses, 1% long responses" use case to be quite
compelling. Based on that, and on the linkability of TLS 0-RTT, it makes
sense to build a solution at the OHTTP layer. I'd suggest getting rid of the
ability to chunk requests, but I now see the value of chunked responses.

The sticking point for me remains the lack of perfect forward secrecy for
large responses, though. If we want the request to be unlinkable, sent in
the first flight, and processed immediately, then there's no way around
losing replay-protection and PFS for the request. That's fine. The
response, however, doesn't have to be this way. You could toss in an extra
HPKE operation to make that happen:
// same as regular OHTTP
* server starts with static keys (skR, pkR); client knows pkR
* client starts by generating ephemeral keys (skE, pkE)
* client does the sender part of HPKE with (skE, pkR), sends enc_request
* server does the receiver part of HPKE with (skR, pkE), generates a fresh
  response_nonce, computes aead_key and aead_nonce
// new - instead of sending the response encrypted using aead_key and
// aead_nonce, the server does its own HPKE in the other direction with
// fresh ephemeral keys
* server generates ephemeral keys (skE2, pkE2)
* server sends pkE2, AEAD-sealed with aead_key and aead_nonce
* server does the sender part of HPKE with (skE2, pkE), and can now send the
  encrypted chunked response under the new set of keys
* client does the receiver part of HPKE with (skE, pkE2), and can decrypt
  the response
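
To make this concrete, here is the same flow in rough Python-style
pseudocode, using the RFC 9180 pseudo-API names (SetupBaseS / SetupBaseR,
Seal / Open). The hpke and aead modules, the info label, and the framing of
pkE2 are all invented for illustration; none of this is meant to match the
draft's actual encoding:

# Rough sketch only: the hpke/aead modules and the label below are invented
# for illustration; naming follows the RFC 9180 pseudo-API.
import hpke   # assumed binding exposing SetupBaseS / SetupBaseR per RFC 9180
import aead   # assumed one-shot AEAD: seal(key, nonce, aad, pt) / open(key, nonce, aad, ct)

RESP_INFO = b"chunked pfs response"   # invented info label for the new step

def server_send_response(enc_request, chunks, aead_key, aead_nonce):
    # aead_key / aead_nonce are derived exactly as in regular OHTTP
    # (response_nonce, context export, etc.); that part is unchanged and
    # omitted here. enc_request is the client's ephemeral public key pkE,
    # so the server can run HPKE in the other direction toward it.
    pkE = hpke.DeserializePublicKey(enc_request)
    enc2, send_ctx = hpke.SetupBaseS(pkE, RESP_INFO)     # generates (skE2, pkE2); enc2 = pkE2
    wire = [aead.seal(aead_key, aead_nonce, b"", enc2)]  # pkE2 sealed under the old keys
    for chunk in chunks:
        wire.append(send_ctx.Seal(b"", chunk))           # chunks sealed under the new keys
    return wire

def client_read_response(skE, wire, aead_key, aead_nonce):
    # Assumes the client still holds its ephemeral private key skE; a real
    # HPKE API may not expose it from the sender context -- one of the
    # details to iron out.
    enc2 = aead.open(aead_key, aead_nonce, b"", wire[0])
    recv_ctx = hpke.SetupBaseR(enc2, skE, RESP_INFO)
    return [recv_ctx.Open(b"", ct) for ct in wire[1:]]

If the client and server discard skE and skE2 once the exchange completes, a
later compromise of skR no longer exposes the response chunks, which is the
PFS property I'm after here.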

This has the same latency properties as the draft as currently written
(i.e., no additional round trips needed) but provides PFS for the response.
It involves a bit more CPU for crypto operations, but that cost is
negligible if this is only used for chunked (i.e., large) responses.

I'm sure this idea has flaws that'd need to be ironed out, and I'm sure the
adoption call is not the right place to do that, but I do think that we
should discuss, as part of adoption, whether this new protocol needs PFS.
If we were to say that this document is being adopted contingent on the
addition of PFS to the response, then I would support adoption.

David

On Thu, Jan 25, 2024 at 3:50 PM Tommy Pauly <tpauly@apple.com> wrote:

> Hi David,
>
> A few salient points I want to highlight from the meeting that will help
> with context (sorry they’re not in the document yet, that’s what needs to
> be done):
>
> - As Eric Rosenberg brought up, one of the main benefits here is the case
> where a client is making requests and 90% of them will get short/fast
> responses but another 10% may be slower to generate; it makes far more
> sense for clients to send all of those requests over OHTTP than to make a
> unique TLS connection for each request that *might* be slow.
>
> - As Jana brought up, the role of a relay doing OHTTP (essentially a
> specific kind of reverse proxy) is quite different from TLS forwarding /
> MASQUE proxying. A MASQUE-style proxy with per-request decoupling needs to
> establish a new connection to the next hop for every request, dealing with
> port allocation, IPs, etc. For OHTTP, it's just a normal reverse proxy
> model. This is one reason the proposal of doing short TLS connections for
> every single request doesn't really scale in practice.
>
> - While it's totally true that OHTTP doesn't come with PFS, it also has
> many privacy advantages: not exposing the client's latency, and being able
> to support 0-RTT data without incurring correlation. The sketch you include
> below would currently involve linkability between these 0-RTT requests and
> would also end up exposing the client's latency once the client finishes
> the full handshake.
>
> As I said in the meeting, I think we do need to make sure we are not
> reinventing TLS at a different layer, but there are problems that fit
> squarely within the OHTTP privacy model and are best solved by letting an
> OHTTP message come in multiple pieces. I’m certainly not advocating that
> “everything should be built on chunked OHTTP”, but rather that there is a
> (limited) place for it in the overall solution ecosystem.
>
> Thanks,
> Tommy
>
> On Jan 24, 2024, at 6:00 PM, David Schinazi <dschinazi.ietf@gmail.com>
> wrote:
>
> I'm opposed to adoption.
>
> This mechanism appears to be geared at use cases that would be better
> served by single-HTTP-request-over-TLS-over-CONNECT (which I'll
> conveniently abbreviate to SHROTOC for the rest of this email). The reason
> that OHTTP itself exists is that it provides better performance than
> SHROTOC for small requests and responses, because the TLS handshake
> overhead is quite noticeable when the application data is small. This
> performance win justified the weaker security that OHTTP provides compared
> to SHROTOC. In particular, OHTTP lacks perfect forward secrecy and is
> vulnerable to replay attacks. Extending OHTTP to large messages creates
> something that has performance similar to SHROTOC but with much weaker
> security. If early data is considered useful, SHROTOC can leverage TLS
> 0-RTT with much better security properties: only the early data lacks PFS
> and replay-protection; any data exchanged after the client's first flight
> gets those protections. I'm opposed to creating a new mechanism when there
> is already an available solution with better security.
>
> Apologies if this was covered in yesterday's meeting, I was unable to
> attend and did not find minutes or recordings for it.
>
> Thanks,
> David
>
> On Wed, Jan 24, 2024 at 2:10 PM Mark Nottingham <mnot=
> 40mnot.net@dmarc.ietf.org> wrote:
>
>> I support adoption.
>>
>> > On 24 Jan 2024, at 10:27 am, Shivan Kaul Sahib <
>> shivankaulsahib@gmail.com> wrote:
>> >
>> > ohai all,
>> >
>> > Thanks to folks who attended the interim today to discuss
>> https://www.ietf.org/archive/id/draft-ohai-chunked-ohttp-01.html.
>> Overall, there was interest in adopting and working on the document.
>> >
>> > This email starts a 2 week call for adoption for
>> https://datatracker.ietf.org/doc/draft-ohai-chunked-ohttp/. Please let
>> us know what you think about OHAI adopting this document by February 6.
>> >
>> > Thanks,
>> > Shivan & Richard
>> >
>> > On Tue, 23 Jan 2024 at 15:24, IETF Secretariat <
>> ietf-secretariat-reply@ietf.org> wrote:
>> >
>> > The OHAI WG has placed draft-ohai-chunked-ohttp in state
>> > Call For Adoption By WG Issued (entered by Shivan Sahib)
>> >
>> > The document is available at
>> > https://datatracker.ietf.org/doc/draft-ohai-chunked-ohttp/
>> >
>> >
>>
>> --
>> Mark Nottingham   https://www.mnot.net/
>>
>>