Re: [Ohai] The OHAI WG has placed draft-ohai-chunked-ohttp in state "Call For Adoption By WG Issued"

Christopher Wood <caw@heapingbits.net> Fri, 26 January 2024 20:46 UTC

From: Christopher Wood <caw@heapingbits.net>
Date: Fri, 26 Jan 2024 15:45:08 -0500
Cc: Martin Thomson <mt@lowentropy.net>, Tommy Pauly <tpauly@apple.com>, ohai@ietf.org, "shivankaulsahib@gmail.com" <shivankaulsahib@gmail.com>
To: David Schinazi <dschinazi.ietf@gmail.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/ohai/bLD_klUzBwVHUPeXI4a5NRIaoT8>
List-Id: Oblivious HTTP Application Intermediation <ohai.ietf.org>

I hear you, but I don’t find this unique to chunked OHTTP. In any case, I’m supportive of the PFS extension, if that’s what you want to call it. =)

> On Jan 26, 2024, at 1:40 PM, David Schinazi <dschinazi.ietf@gmail.com> wrote:
> 
> In practice, OHTTP being non-chunked limits it to small requests and small responses. Yes, in theory you could send big ones in one go - but the performance properties are such that you're better off using TLS+CONNECT instead for such use cases, so that's what people do. I agree that this performance property isn't perfectly mapped to the corresponding privacy property, but it's pretty close. If you remove this, then all of a sudden chunked OHTTP becomes more attractive for other use cases. Here's an example: imagine you'd want to use this for your favorite voice assistant. You have a very cheap smartphone so you can't compute or store much on the device, but you don't want the server to know where you happen to be in the world right now. You generate a short-lived auth token for this specific user, and send your query "please read out the top 5 emails in my inbox". Now in this scenario PFS becomes quite important. If the server's static HPKE private key accidentally leaks later, the auth token in the request is now worthless because it's expired, but the contents of the response are highly valuable. Of course, you might answer "well don't do that" but we've now built an attractive high-performance footgun.
> 
> David
> 
> On Fri, Jan 26, 2024 at 10:24 AM Christopher Wood <caw@heapingbits.net <mailto:caw@heapingbits.net>> wrote:
>> 
>>> On Jan 26, 2024, at 1:20 PM, David Schinazi <dschinazi.ietf@gmail.com <mailto:dschinazi.ietf@gmail.com>> wrote:
>>> 
>>> I agree that this is more generally applicable; the question is about what tools this WG wants to build. The original use cases for OHTTP (DNS, Safe Browsing, etc.) had a very specific property: the response had no more privacy-sensitive information than the request (i.e., if your DNS exchange leaks, what you really care about is leaking your browsing history via the hostname, not what IP addresses were returned). The constraints of the original version of OHTTP (no chunking) impose a bound on what use cases it can serve, and in practice that comes fairly close to the response-is-no-more-sensitive-than-the-request property. Now, if we remove this constraint, we open OHTTP up to new use cases that don't necessarily have this privacy property.
>> 
>> I think this is where we may diverge. Can you say more about how chunked OHTTP removes this constraint?
>> 
>>> Sure, we can write "please don't point the footgun at your foot" in Security Considerations, but that doesn't always end well. We actually have a track record of removing footguns from protocols to improve the overall security of the ecosystem. So to me the important question here is: does the OHTTP WG believe that PFS is important for chunked mode? If the answer is yes, then I'm happy to help and can write a draft. If the answer is no, then I can live with being in the rough and I'll step out of the way. But that's the question we should be asking ourselves.
>>> David
>>> 
>>> On Fri, Jan 26, 2024 at 9:35 AM Christopher Wood <caw@heapingbits.net <mailto:caw@heapingbits.net>> wrote:
>>>> I agree with Martin. This is a generally applicable mechanism that applies to vanilla OHTTP and chunked OHTTP (if we care about PFS for chunked responses then I think we ought to also care about it for non-chunked responses). David, please write the draft so both can benefit!
>>>> 
>>>> I support adoption of draft-ohai-chunked-ohttp.
>>>> 
>>>> Best,
>>>> Chris
>>>> 
>>>> > On Jan 25, 2024, at 10:00 PM, Martin Thomson <mt@lowentropy.net <mailto:mt@lowentropy.net>> wrote:
>>>> > 
>>>> > That's an interesting idea David, but isn't that generally applicable to OHTTP?  Or maybe I should say it generalizes to any chained HPKE interaction that uses ECDHE (that last bit is critical, because it doesn't generalize to any use of HPKE without a tweak, more below).
>>>> > 
>>>> > I think that you are looking to reuse the client key share, which is where that caveat comes from.  The client uses an ephemeral share paired with the server's static share to send the request, but the response takes the client ephemeral in place of the server's static share.  It's a little bigger overall, because the server can't just send a nonce and the response; it has to send a fresh share with its response.  The benefit is that the server static key is not a threat to that response.
>>>> > 
>>>> > For general KEM usage, the client ephemeral key pair won't be usable for sending from the server, so you likely need a new key share for the response.  That makes the request bigger as well.
>>>> > 
>>>> > Maybe you should write a draft.
>>>> > 
>>>> > On Fri, Jan 26, 2024, at 12:08, David Schinazi wrote:
>>>> >> Thanks all, that was definitely the bit of context that I was missing. I 
>>>> >> do find the "99% short responses, 1% long responses" use case to be 
>>>> >> quite compelling. Based on that, and on the linkability of TLS 0-RTT, 
>>>> >> it makes sense to build a solution at the OHTTP layer. I'd suggest 
>>>> >> getting rid of the ability to chunk requests but I now see the value of 
>>>> >> chunked responses.
>>>> >> 
>>>> >> The sticking point for me remains the lack of perfect forward secrecy 
>>>> >> for large responses though. If we want the request to be unlinkable, 
>>>> >> sent in the first flight, and processed immediately, then there's no 
>>>> >> way around losing replay-protection and PFS for the request. That's 
>>>> >> fine. The response, however, doesn't have to be this way. You could 
>>>> >> toss in an extra HPKE operation to make that happen:
>>>> >> // same as regular OHTTP
>>>> >> * server starts with skR, pkR static keys - client knows pkR
>>>> >> * client starts by generating skE, pkE ephemeral keys
>>>> >> * client does sender part of HPKE with (skE, pkR), sends enc_request
>>>> >> * server does receiver part of HPKE with (skR, pkE), generates a fresh 
>>>> >> response_nonce, computes aead_key, aead_nonce
>>>> >> // new - instead of sending the response encrypted using aead_key, 
>>>> >> aead_nonce, the server does its own HPKE in the other direction with 
>>>> >> fresh ephemeral keys
>>>> >> * server generates skE2, pkE2 ephemeral keys
>>>> >> * server sends pkE2 but AEAD-sealed with aead_key, aead_nonce
>>>> >> * server does sender part of HPKE with (skE2, pkE), can now send 
>>>> >> encrypted chunked response with new set of keys
>>>> >> * client does receiver part of HPKE with (skE, pkE2), can decrypt 
>>>> >> response
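[Editor's note: the step list above can be sketched end to end. The following is a toy stand-in, not RFC 9180 HPKE and not secure code: it uses a bare Diffie-Hellman exchange over a small group plus a hash-derived XOR keystream, purely to illustrate how the extra reverse-direction ephemeral pair (skE2, pkE2) keeps the server's static key out of the response path. All names and primitives here are illustrative assumptions, not the draft's actual construction.]

```python
# Toy model of the PFS-for-responses idea (NOT real HPKE; insecure, illustration only).
import hashlib
import secrets

P = 2**127 - 1  # toy group modulus (a Mersenne prime; far too small for real use)
G = 5

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def derive_key(shared: int, info: bytes) -> bytes:
    # Hash-based KDF stand-in: bind the DH output to a direction label.
    return hashlib.sha256(shared.to_bytes(16, "big") + info).digest()

def seal(key: bytes, msg: bytes) -> bytes:
    # One-block XOR keystream (toy AEAD stand-in, messages <= 32 bytes).
    pad = hashlib.sha256(key).digest()
    return bytes(m ^ k for m, k in zip(msg, pad))

open_ = seal  # XOR stream: sealing and opening are the same operation

# --- same as regular OHTTP ---
skR, pkR = keygen()   # server static keys; client knows pkR
skE, pkE = keygen()   # client ephemeral keys
req_key = derive_key(pow(pkR, skE, P), b"request")
enc_request = seal(req_key, b"GET /inbox")
# server opens the request with (skR, pkE)
assert open_(derive_key(pow(pkE, skR, P), b"request"), enc_request) == b"GET /inbox"

# --- new: server encrypts the response under a fresh ephemeral pair ---
skE2, pkE2 = keygen()  # pkE2 would itself be sealed under the request-derived keys
resp_key = derive_key(pow(pkE, skE2, P), b"response")
enc_response = seal(resp_key, b"top 5 emails")
# client opens the response with (skE, pkE2)
assert open_(derive_key(pow(pkE2, skE, P), b"response"), enc_response) == b"top 5 emails"
```

The point of the sketch is the last block: resp_key depends only on the two ephemeral exchanges, so a later compromise of the static skR alone no longer yields the response key.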
>>>> >> 
>>>> >> This has the same latency properties as the current draft 
>>>> >> (i.e., no additional round trips needed) but provides PFS for the 
>>>> >> response. It involves a bit more CPU for crypto operations, but those 
>>>> >> are negligible if this is only used for chunked (i.e., large) responses.
>>>> >> 
>>>> >> I'm sure this idea has flaws that'd need to be ironed out, and I'm sure 
>>>> >> the adoption call is not the right place to do that, but I do think 
>>>> >> that we should discuss whether this new protocol needs PFS or not 
>>>> >> during adoption. If we were to say that this document is being adopted 
>>>> >> contingent on addition of PFS to the response, then I would support 
>>>> >> adoption.
>>>> >> 
>>>> >> David
>>>> >> 
>>>> >> On Thu, Jan 25, 2024 at 3:50 PM Tommy Pauly <tpauly@apple.com <mailto:tpauly@apple.com>> wrote:
>>>> >>> Hi David,
>>>> >>> 
>>>> >>> A few salient points I want to highlight from the meeting that will help with context (sorry they’re not in the document yet, that’s what needs to be done):
>>>> >>> 
>>>> >>> - As Eric Rosenberg brought up, one of the main benefits here is when a client is making requests where 90% of them will get short/fast responses but another 10% may be slower to generate; it makes far more sense for clients to send those requests with OHTTP than to make unique TLS connections for each request that *might* be slow.
>>>> >>> 
>>>> >>> - As Jana brought up, the role of a relay doing OHTTP (essentially a specific kind of reverse proxy) is quite different from TLS forwarding / MASQUE proxying. A MASQUE-style proxy with per-request decoupling needs to establish new connections to the next hop for every request, dealing with port allocation, IPs, etc. For OHTTP, it’s just a normal reverse proxy model. This is one reason the proposal of doing short TLS connections for every single request doesn’t really scale in practice.
>>>> >>> 
>>>> >>> - While it’s totally true that OHTTP doesn’t come with PFS, it also has many privacy advantages: not exposing the latency to the client, and being able to support 0-RTT data without incurring correlation. The sketch you include below would currently involve linkability between these 0-RTT requests and would also end up exposing latency to the client as the client finishes the full handshake.
>>>> >>> 
>>>> >>> As I said in the meeting, I think we do need to make sure we are not reinventing TLS at a different layer, but there are solutions that fit squarely in the OHTTP privacy model that are best solved by letting an OHTTP message come in multiple pieces. I’m certainly not advocating that “everything should be built on chunked OHTTP”, but rather that there is a (limited) place for it in the overall solution ecosystem.
>>>> >>> 
>>>> >>> Thanks,
>>>> >>> Tommy
>>>> >>> 
>>>> >>>> On Jan 24, 2024, at 6:00 PM, David Schinazi <dschinazi.ietf@gmail.com <mailto:dschinazi.ietf@gmail.com>> wrote:
>>>> >>>> 
>>>> >>>> I'm opposed to adoption.
>>>> >>>> 
>>>> >>>> This mechanism appears to be geared at use cases that would be better served by single-HTTP-request-over-TLS-over-CONNECT (which I'll conveniently abbreviate to SHROTOC for the rest of this email). The reason that OHTTP itself exists is that it provides better performance than SHROTOC for small requests and responses, because the TLS handshake overhead is quite noticeable when the application data is small. This performance win justified the weaker security that OHTTP provides compared to SHROTOC. In particular, OHTTP lacks perfect forward secrecy and is vulnerable to replay attacks. Extending OHTTP to large messages creates something that has performance similar to SHROTOC but with much weaker security. If early data is considered useful, SHROTOC can leverage TLS 0-RTT with much better security properties: only the early data lacks PFS and replay protection; any data exchanged after the client's first flight gets those protections. I'm opposed to creating a new mechanism when there is already an available solution with better security.
>>>> >>>> 
>>>> >>>> Apologies if this was covered in yesterday's meeting, I was unable to attend and did not find minutes or recordings for it.
>>>> >>>> 
>>>> >>>> Thanks,
>>>> >>>> David
>>>> >>>> 
>>>> >>>> On Wed, Jan 24, 2024 at 2:10 PM Mark Nottingham <mnot=40mnot.net@dmarc.ietf.org <mailto:40mnot.net@dmarc.ietf.org>> wrote:
>>>> >>>>> I support adoption.
>>>> >>>>> 
>>>> >>>>>> On 24 Jan 2024, at 10:27 am, Shivan Kaul Sahib <shivankaulsahib@gmail.com <mailto:shivankaulsahib@gmail.com>> wrote:
>>>> >>>>>> 
>>>> >>>>>> ohai all, 
>>>> >>>>>> 
>>>> >>>>>> Thanks to folks who attended the interim today to discuss https://www.ietf.org/archive/id/draft-ohai-chunked-ohttp-01.html. Overall, there was interest in adopting and working on the document. 
>>>> >>>>>> 
>>>> >>>>>> This email starts a 2 week call for adoption for https://datatracker.ietf.org/doc/draft-ohai-chunked-ohttp/. Please let us know what you think about OHAI adopting this document by February 6.
>>>> >>>>>> 
>>>> >>>>>> Thanks,
>>>> >>>>>> Shivan & Richard
>>>> >>>>>> 
>>>> >>>>>> On Tue, 23 Jan 2024 at 15:24, IETF Secretariat <ietf-secretariat-reply@ietf.org <mailto:ietf-secretariat-reply@ietf.org>> wrote:
>>>> >>>>>> 
>>>> >>>>>> The OHAI WG has placed draft-ohai-chunked-ohttp in state
>>>> >>>>>> Call For Adoption By WG Issued (entered by Shivan Sahib)
>>>> >>>>>> 
>>>> >>>>>> The document is available at
>>>> >>>>>> https://datatracker.ietf.org/doc/draft-ohai-chunked-ohttp/
>>>> >>>>>> 
>>>> >>>>>> 
>>>> >>>>>> -- 
>>>> >>>>>> Ohai mailing list
>>>> >>>>>> Ohai@ietf.org <mailto:Ohai@ietf.org>
>>>> >>>>>> https://www.ietf.org/mailman/listinfo/ohai
>>>> >>>>> 
>>>> >>>>> --
>>>> >>>>> Mark Nottingham   https://www.mnot.net/
>>>> >>>>> 
>>>> >>> 
>>>> > 
>>>> 
>>