Re: [Ohai] The OHAI WG has placed draft-ohai-chunked-ohttp in state "Call For Adoption By WG Issued"

David Schinazi <dschinazi.ietf@gmail.com> Fri, 26 January 2024 18:40 UTC

To: Christopher Wood <caw@heapingbits.net>
Cc: Martin Thomson <mt@lowentropy.net>, Tommy Pauly <tpauly@apple.com>, ohai@ietf.org, "shivankaulsahib@gmail.com" <shivankaulsahib@gmail.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/ohai/-fd6rlM567ub4pDsFOF87DcOlHw>

In practice, OHTTP being non-chunked limits it to small requests and small
responses. Yes, in theory you could send big ones in one go - but the
performance properties are such that you're better off using TLS+CONNECT
instead for such use cases, so that's what people do. I agree that this
performance property isn't perfectly mapped to the corresponding privacy
property, but it's pretty close. If you remove this, then all of a sudden
chunked OHTTP becomes more attractive for other use cases. Here's an
example: imagine you want to use this for your favorite voice assistant.
You have a very cheap smartphone so you can't compute or store much on the
device, but you don't want the server to know where you happen to be in the
world right now. You generate a short-lived auth token for this specific
user, and send your query "please read out the top 5 emails in my inbox".
Now in this scenario PFS becomes quite important. If the server's static
HPKE private key accidentally leaks later, the auth token in the request is
now worthless because it's expired, but the contents of the response are
highly valuable. Of course, you might answer "well don't do that" but we've
now built an attractive high-performance footgun.
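
For concreteness, the extra-HPKE response scheme sketched further down this thread can be modeled in a few lines of Python. This is only a rough sketch under stated assumptions: raw X25519 + HKDF-SHA256 + AES-128-GCM (via the `cryptography` package) stand in for real HPKE (RFC 9180), the info labels and fixed nonce are illustrative, and this shows the key flow only, not the actual OHTTP key schedule:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared_secret: bytes, info: bytes) -> bytes:
    """HKDF-SHA256 down to a 128-bit AEAD key (stand-in for HPKE's key schedule)."""
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=info).derive(shared_secret)

NONCE = b"\x00" * 12  # each key is single-use here, so a fixed nonce is fine

# Server static key pair (skR, pkR); the client knows pkR out of band.
sk_r = X25519PrivateKey.generate()
pk_r = sk_r.public_key()

# Request: same shape as regular OHTTP (client ephemeral against server static).
sk_e = X25519PrivateKey.generate()
pk_e = sk_e.public_key()
req_key = derive_key(sk_e.exchange(pk_r), b"request")
enc_request = AESGCM(req_key).encrypt(NONCE, b"read out my top 5 emails", None)

# Server side: decrypt the request with (skR, pkE).
req_key_srv = derive_key(sk_r.exchange(pk_e), b"request")
assert AESGCM(req_key_srv).decrypt(NONCE, enc_request, None) == b"read out my top 5 emails"

# Response: the new step. The server generates its own ephemeral (skE2, pkE2)
# and encrypts to the client's ephemeral pkE, so skR never touches the response.
sk_e2 = X25519PrivateKey.generate()
pk_e2 = sk_e2.public_key()
resp_key_srv = derive_key(sk_e2.exchange(pk_e), b"response")
enc_response = AESGCM(resp_key_srv).encrypt(NONCE, b"email contents", None)

# Client side: decrypt with (skE, pkE2). Both shares are ephemeral, so a
# later compromise of the server static key cannot recover this response.
resp_key = derive_key(sk_e.exchange(pk_e2), b"response")
assert AESGCM(resp_key).decrypt(NONCE, enc_response, None) == b"email contents"
```

The point the sketch makes concrete: the response key is derived only from the two ephemeral shares, so recording the response and later obtaining the server's static private key does not let an attacker decrypt it.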

David

On Fri, Jan 26, 2024 at 10:24 AM Christopher Wood <caw@heapingbits.net>
wrote:

>
> On Jan 26, 2024, at 1:20 PM, David Schinazi <dschinazi.ietf@gmail.com>
> wrote:
>
> I agree that this is more generally applicable, the question is about what
> tools this WG wants to build. The original use cases for OHTTP (DNS, Safe
> Browsing, etc) had a very specific property that the response had no more
> privacy-sensitive information than the request (i.e., if your DNS exchange
> leaks, what you really care about is leaking your browsing history from the
> hostname, not what IP addresses were returned). The constraints of the
> original version of OHTTP (no chunking) impose a bound on what use cases
> it can serve, and in practice that comes pretty close to the
> response-is-no-more-private-than-the-request property. Now, if we're
> removing this constraint, we're opening OHTTP to new use cases that don't
> necessarily have this privacy property.
>
>
> I think this is where we may diverge. Can you say more about how chunked
> OHTTP removes this constraint?
>
> Sure, we can write "please don't point the footgun at your foot" in
> Security Considerations, but that doesn't always end well. We actually have
> a track record of removing footguns from protocols to improve the overall
> security of the ecosystem. So to me the important question here is: does
> the OHTTP WG believe that PFS is important for chunked mode? If the answer
> is yes, then I'm happy to help and can write a draft. If the answer is no,
> then I can live with being in the rough and I'll step out of the way. But
> that's the question we should be asking ourselves.
> David
>
> On Fri, Jan 26, 2024 at 9:35 AM Christopher Wood <caw@heapingbits.net>
> wrote:
>
>> I agree with Martin. This is a generally applicable mechanism that
>> applies to vanilla OHTTP and chunked OHTTP (if we care about PFS for
>> chunked responses then I think we ought to also care about it for
>> non-chunked responses). David, please write the draft so both can benefit!
>>
>> I support adoption of draft-ohai-chunked-ohttp.
>>
>> Best,
>> Chris
>>
>> > On Jan 25, 2024, at 10:00 PM, Martin Thomson <mt@lowentropy.net> wrote:
>> >
>> > That's an interesting idea David, but isn't that generally applicable
>> > to OHTTP?  Or maybe I should say it generalizes to any chained HPKE
>> > interaction that uses ECDHE (that last bit is critical, because it doesn't
>> > generalize to any use of HPKE without a tweak, more below).
>> >
>> > I think that you are looking to reuse the client key share, which is
>> > where that caveat comes from.  The client uses an ephemeral share paired
>> > with the server's static share to send the request, but the response takes
>> > the client ephemeral in place of the server's.  It's a little bigger overall,
>> > because the server can't just send a nonce and the response, it has to send
>> > a fresh share with its response.  The benefit is that the server static key
>> > is not a threat to that response.
>> >
>> > For general KEM usage, the client ephemeral key pair won't be usable
>> > for sending from the server, so you likely need a new key share for the
>> > response.  That makes the request bigger as well.
>> >
>> > Maybe you should write a draft.
>> >
>> > On Fri, Jan 26, 2024, at 12:08, David Schinazi wrote:
>> >> Thanks all, that was definitely the bit of context that I was missing. I
>> >> do find the "99% short responses, 1% long responses" use case to be
>> >> quite compelling. Based on that, and on the linkability of TLS 0-RTT,
>> >> it makes sense to build a solution at the OHTTP layer. I'd suggest
>> >> getting rid of the ability to chunk requests but I now see the value of
>> >> chunked responses.
>> >>
>> >> The sticking point for me remains the lack of perfect forward secrecy
>> >> for large responses though. If we want the request to be unlinkable,
>> >> sent in the first flight, and processed immediately, then there's no
>> >> way around losing replay-protection and PFS for the request. That's
>> >> fine. The response, however, doesn't have to be this way. You could
>> >> toss in an extra HPKE operation to make that happen:
>> >> // same as regular OHTTP
>> >> * server starts with skR, pkR static keys - client knows pkR
>> >> * client starts by generating skE, pkE ephemeral keys
>> >> * client does sender part of HPKE with (skE, pkR), sends enc_request
>> >> * server does receiver part of HPKE with (skR, pkE), generates a fresh
>> >> response_nonce, computes aead_key, aead_nonce
>> >> // new - instead of sending the response encrypted using aead_key,
>> >> aead_nonce, the server does its own HPKE in the other direction with
>> >> fresh ephemeral keys
>> >> * server generates skE2, pkE2 ephemeral keys
>> >> * server sends pkE2 but AEAD-sealed with aead_key, aead_nonce
>> >> * server does sender part of HPKE with (skE2, pkE), can now send
>> >> encrypted chunked response with new set of keys
>> >> * client does receiver part of HPKE with (skE, pkE2), can decrypt
>> >> response
>> >>
>> >> This has the same latency properties as the current draft
>> >> (i.e., no additional round trips needed) but provides PFS for the
>> >> response. It involves a bit more CPU for crypto operations, but those
>> >> are negligible if this is only used for chunked (i.e., large) responses.
>> >>
>> >> I'm sure this idea has flaws that'd need to be ironed out, and I'm sure
>> >> the adoption call is not the right place to do that, but I do think
>> >> that we should discuss whether this new protocol needs PFS or not
>> >> during adoption. If we were to say that this document is being adopted
>> >> contingent on addition of PFS to the response, then I would support
>> >> adoption.
>> >>
>> >> David
>> >>
>> >> On Thu, Jan 25, 2024 at 3:50 PM Tommy Pauly <tpauly@apple.com> wrote:
>> >>> Hi David,
>> >>>
>> >>> A few salient points I want to highlight from the meeting that will
>> >>> help with context (sorry they’re not in the document yet, that’s what needs
>> >>> to be done):
>> >>>
>> >>> - As Eric Rosenberg brought up, one of the main benefits here is when
>> >>> a client is making requests where 90% of them will get short/fast responses
>> >>> but there’s another 10% that may be slower to generate; it makes far more
>> >>> sense for clients to use OHTTP than to make unique TLS
>> >>> connections for each request that *might* be slow.
>> >>>
>> >>> - As Jana brought up, the role of a relay doing OHTTP (essentially a
>> >>> specific kind of reverse proxy) compared to TLS forwarding / MASQUE
>> >>> proxying is quite different. A MASQUE-style proxy with per-request
>> >>> decoupling needs to establish new connections to the next hop for every
>> >>> request, dealing with port allocation, IPs, etc. For OHTTP, it’s just a
>> >>> normal reverse proxy model. This is one reason the proposal of doing short
>> >>> TLS connections for every single request doesn’t really scale in practice.
>> >>>
>> >>> - While it’s totally true that OHTTP doesn’t come with PFS, it also
>> >>> has many privacy advantages: not exposing the latency to the client, and
>> >>> being able to support 0-RTT data without incurring correlation. The sketch
>> >>> you include below would currently involve linkability between these
>> >>> 0-RTT requests and would also end up exposing latency to the client as the
>> >>> client finishes the full handshake.
>> >>>
>> >>> As I said in the meeting, I think we do need to make sure we are not
>> >>> reinventing TLS at a different layer, but there are problems that fit
>> >>> squarely in the OHTTP privacy model and are best solved by letting an
>> >>> OHTTP message come in multiple pieces. I’m certainly not advocating that
>> >>> “everything should be built on chunked OHTTP”, but rather that there is a
>> >>> (limited) place for it in the overall solution ecosystem.
>> >>>
>> >>> Thanks,
>> >>> Tommy
>> >>>
>> >>>> On Jan 24, 2024, at 6:00 PM, David Schinazi <dschinazi.ietf@gmail.com> wrote:
>> >>>>
>> >>>> I'm opposed to adoption.
>> >>>>
>> >>>> This mechanism appears to be geared at use cases that would be
>> >>>> better served by single-HTTP-request-over-TLS-over-CONNECT (which I'll
>> >>>> conveniently abbreviate to SHROTOC for the rest of this email). The reason
>> >>>> that OHTTP itself exists is that it provides better performance than
>> >>>> SHROTOC for small requests and responses, because the TLS handshake
>> >>>> overhead is quite noticeable when the application data is small. This
>> >>>> performance win justified the weaker security that OHTTP provides compared
>> >>>> to SHROTOC. In particular, OHTTP lacks perfect forward secrecy and is
>> >>>> vulnerable to replay attacks. Extending OHTTP to large messages creates
>> >>>> something that has performance similar to SHROTOC but with much weaker
>> >>>> security. If early data is considered useful, SHROTOC can leverage TLS
>> >>>> 0-RTT with much better security properties: only the early data lacks PFS
>> >>>> and replay-protection; any data exchanged after the client's first flight
>> >>>> gets those protections. I'm opposed to creating a new mechanism when there
>> >>>> is already an available solution with better security.
>> >>>>
>> >>>> Apologies if this was covered in yesterday's meeting; I was unable
>> >>>> to attend and did not find minutes or recordings for it.
>> >>>>
>> >>>> Thanks,
>> >>>> David
>> >>>>
>> >>>> On Wed, Jan 24, 2024 at 2:10 PM Mark Nottingham <mnot=40mnot.net@dmarc.ietf.org> wrote:
>> >>>>> I support adoption.
>> >>>>>
>> >>>>>> On 24 Jan 2024, at 10:27 am, Shivan Kaul Sahib <shivankaulsahib@gmail.com> wrote:
>> >>>>>>
>> >>>>>> ohai all,
>> >>>>>>
>> >>>>>> Thanks to folks who attended the interim today to discuss
>> >>>>>> https://www.ietf.org/archive/id/draft-ohai-chunked-ohttp-01.html.
>> >>>>>> Overall, there was interest in adopting and working on the document.
>> >>>>>>
>> >>>>>> This email starts a 2-week call for adoption for
>> >>>>>> https://datatracker.ietf.org/doc/draft-ohai-chunked-ohttp/. Please let
>> >>>>>> us know what you think about OHAI adopting this document by February 6.
>> >>>>>>
>> >>>>>> Thanks,
>> >>>>>> Shivan & Richard
>> >>>>>>
>> >>>>>> On Tue, 23 Jan 2024 at 15:24, IETF Secretariat <ietf-secretariat-reply@ietf.org> wrote:
>> >>>>>>
>> >>>>>> The OHAI WG has placed draft-ohai-chunked-ohttp in state
>> >>>>>> Call For Adoption By WG Issued (entered by Shivan Sahib)
>> >>>>>>
>> >>>>>> The document is available at
>> >>>>>> https://datatracker.ietf.org/doc/draft-ohai-chunked-ohttp/
>> >>>>>>
>> >>>>>>
>> >>>>>> --
>> >>>>>> Ohai mailing list
>> >>>>>> Ohai@ietf.org
>> >>>>>> https://www.ietf.org/mailman/listinfo/ohai
>> >>>>>
>> >>>>> --
>> >>>>> Mark Nottingham   https://www.mnot.net/
>> >>>>>
>> >>>
>> >
>>
>>
>