Re: Ambiguity on HTTP/3 HEADERS and QUIC STREAM FIN requirement

Willy Tarreau <> Fri, 17 June 2022 10:20 UTC

Date: Fri, 17 Jun 2022 12:17:22 +0200
From: Willy Tarreau <>
To: Martin Thomson <>

On Fri, Jun 17, 2022 at 05:05:14PM +1000, Martin Thomson wrote:
> On Fri, Jun 17, 2022, at 15:37, Willy Tarreau wrote:
> >> The HEADERS frame has ended in this case, so you have a clear indication that
> >> you have all the headers.
> >
> > Not exactly. In HTTP/1 we used to have Transfer-Encoding which was a
> > connection-level header field to bridge the gap between what was explicitly
> > advertised in headers and what could have been ambiguous at the connection
> > level (such as receiving a FIN late). 
> Ah, that is a property of HTTP/1.1 that ensures that - when using
> Content-Length - once you have the headers, you know if you have the whole
> thing (or at least how much more stuff to expect).  That's not true with
> Transfer-Encoding: chunked or those nasty requests that end when the
> connection closes.

No, but with Transfer-Encoding, you know that the client will send
something (even if only 0 CRLF CRLF), so you know whether or not you
can proceed.
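The "0 CRLF CRLF" terminator mentioned above is the explicit end-of-body
signal in HTTP/1.1 chunked encoding: even an empty body ends with a
zero-length last-chunk. A minimal sketch of what those wire bytes look
like (the `encode_chunked` helper is an illustration, not any real
library's API):

```python
# Sketch: the explicit end-of-body marker in HTTP/1.1 chunked encoding.
# Even an empty body is terminated by a zero-length chunk: "0\r\n\r\n".
def encode_chunked(chunks):
    """Encode an iterable of byte chunks as an HTTP/1.1 chunked body."""
    out = b""
    for chunk in chunks:
        if chunk:  # zero-length chunks are reserved for the terminator
            out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    return out + b"0\r\n\r\n"  # last-chunk: "nothing else is coming"

# An empty body still produces an explicit terminator:
assert encode_chunked([]) == b"0\r\n\r\n"
# A one-chunk body:
assert encode_chunked([b"hello"]) == b"5\r\nhello\r\n0\r\n\r\n"
```

This is why a recipient of a chunked request always knows something more
is coming until the terminator arrives, rather than having to infer
completion from a lower-layer close.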

> HTTP/3 uses something like TCP connection closing to terminate requests,

Yes, absolutely.

> for which I can see how that might seem awkward, but the differences
> between how you deal with TCP closures and QUIC stream ending probably
> make the latter easier to deal with.

The latter is definitely easier to deal with from the stream-closure
perspective, but it lacks the signal that says "Do not start processing
right now, I intend to send something else".

> > And in H2, it's not the
> > same to send a HEADERS + ES and a HEADERS followed by DATA+ES. The first
> > one doesn't have a body, the second one has an empty body. 
> I never thought that that distinction mattered.  I'm surprised to see you
> claim that this is the case. I understand that you might use different
> strategies for forwarding the two, but semantically, I don't think there
> is a difference.

It's not me, I personally don't care, it's all the servers that we're
facing :-)  It's not uncommon to see a server reject a GET request that
carries a Transfer-Encoding header field. And that's understandable because
in the early days of HTTP/1.1 it wasn't very clear whether a GET would
support this or not, and lots of implementations relying on mechanisms
inherited from CGI needed to know early whether they would have to deal
with a payload (sometimes the code to handle that wasn't implemented
at all, so the request had to be rejected). Remember all the trouble we
faced when trying to design a working Upgrade scheme for WebSocket, and
that's not *that* long ago.

So that's what we're seeing here: two clients speaking H3 to haproxy,
which speaks H1 to an origin server, getting a 400 Bad Request in response,
because we had no better guess than "the client didn't close the stream
after the headers, so it intends to send a payload; let's put a
Transfer-Encoding there".
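The guess the gateway is forced to make can be sketched as a small
decision function (a simplification for illustration only, not haproxy's
actual logic; the function name and return strings are invented here):

```python
# Sketch of the gateway's dilemma when translating an H3 request to H1
# (an illustration, not haproxy's real code).
def h1_framing_for_h3_request(headers_received: bool, fin_received: bool):
    """Decide how to frame the body of the forwarded HTTP/1.1 request."""
    if not headers_received:
        return "wait"                    # nothing to forward yet
    if fin_received:
        return "no body"                 # stream closed: request complete
    # HEADERS are done but no FIN yet: the client *may* still send a
    # body, so the safest H1 translation is Transfer-Encoding: chunked,
    # which some origin servers reject on a GET with a 400.
    return "Transfer-Encoding: chunked"

assert h1_framing_for_h3_request(True, True) == "no body"
assert h1_framing_for_h3_request(True, False) == "Transfer-Encoding: chunked"
```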

> > With H3 you have neither the transfer-encoding header nor the ES bit on
> > the frame to indicate that presence/absence. The only indication that
> > matches the H2 ES is the QUIC FIN that also signals the end of stream,
> albeit at a lower level. That's why I think we've slowly deviated from
> something very explicit (H1) to something subtly explicit (H2) then
> something ambiguous (H3).
> I see this differently.  While I can see how you might find this annoying,
> this is much the same as the HTTP/2 case.  Sometimes you know the stream
> ended after HEADERS is done, sometimes you don't.

Yes, but given that in H2 the frame is produced at a high level, the info
is already known by the sender when the frame is assembled. And yes, we do
have that case in H2 and it will result in such a Transfer-Encoding header
as well, but then it really reflects the client's intent.
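The H2 distinction discussed above (HEADERS+ES versus HEADERS followed by
DATA+ES) can be sketched as two frame sequences. Frame and flag names
follow RFC 9113; the `Frame` class itself is an illustration, not a real
library API:

```python
# Sketch of the two HTTP/2 framings being contrasted (names per
# RFC 9113; the Frame class is invented for illustration).
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str          # "HEADERS" or "DATA"
    end_stream: bool   # the ES (END_STREAM) flag

# Request with no body: the sender already knows there is no body when
# it assembles the HEADERS frame, so ES rides on HEADERS itself.
no_body = [Frame("HEADERS", end_stream=True)]

# Request with an empty body: ES arrives on a zero-length DATA frame.
empty_body = [Frame("HEADERS", end_stream=False),
              Frame("DATA", end_stream=True)]

# Either way, the receiver gets an explicit in-band end-of-request
# signal, carried by a frame the sender assembled deliberately.
assert any(f.end_stream for f in no_body)
assert any(f.end_stream for f in empty_body)
```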

> That this comes from a
> different layer of the stack adds a bit of complexity (you basically have to
> peek to see if the FIN is there),

My understanding is that in H3/QUIC it can arrive separately, hence
possibly much later. During this time the client's timeout ticks, and
it believes the request is being processed, while the gateway is still
waiting for the client to indicate whether or not it's complete.
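The race described here, where HEADERS and the stream FIN arrive in
separate QUIC packets and the gateway sits between them, can be sketched
with a small event reducer (event names are invented for illustration,
not a real QUIC API):

```python
# Sketch of the timing problem: HEADERS and the QUIC stream FIN can
# arrive separately, leaving the gateway in limbo in between.
# Event names are illustrative, not any real QUIC implementation's API.
def gateway_state(events):
    """Reduce a sequence of per-stream events to what the gateway knows."""
    headers_done = fin_seen = False
    for ev in events:
        if ev == "HEADERS_COMPLETE":
            headers_done = True
        elif ev == "STREAM_FIN":
            fin_seen = True
    if headers_done and fin_seen:
        return "complete: safe to process"
    if headers_done:
        # The client believes the request is in flight while the gateway
        # is still waiting to learn whether a body will follow.
        return "ambiguous: waiting for body or FIN"
    return "incomplete"

assert gateway_state(["HEADERS_COMPLETE"]) \
    == "ambiguous: waiting for body or FIN"
assert gateway_state(["HEADERS_COMPLETE", "STREAM_FIN"]) \
    == "complete: safe to process"
```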

> but if the request is done, you should be able to access that information.

If it's done, yes, it's not a problem at all; that's why it works well
with the vast majority of the clients in the interop test matrix.

> > Paying the cost of making two ends understand each other is the daily
> > job of a gateway :-)  Regardless it's also the one that takes all the
> > dirty stuff in the face and it needs to be robust by design. My concern
> > here precisely is that waiting will both make it less robust *and* will
> > possibly not work with some clients which forget to send their FIN.
> It's not OK to forget entirely, so I would support taking action against
> those clients that forget (maybe constant connection drops or ending up in a
> tar pit will motivate them to fix that problem).  This is more about sending
> the two pieces separately, which is legitimate, but annoying.

That's exactly my point. We all know that there are rough edges in specs
that need a bit of effort. We do know that a number of implementations
will not like receiving one byte at a time, or thousands of tiny H2
CONTINUATION frames, etc. That is still legitimate, but nobody does it
because it hurts the whole ecosystem.

Here it's about the same, except that it's far less obvious to an
implementer that this tiny trick can make a big difference in the field,
which is why I think it ought to be made a bit more explicit.

> For me, I'd say "send an email to the developers of that client" if you start
> seeing problems often.

I don't know the details here, but I heard by word of mouth that one
response was along the lines of "I don't see the problem, we're allowed
to". Similarly, we're allowed to wait, or even to punish bad actors by
inflicting a 3-second delay when seeing this, but that's not the best way
to deal with interop issues, and my best guess is that an extra paragraph
explaining the corner case would serve the cause better.

> >> But if you take the fact that you have a clear signal that the headers are
> >> done, you can - even as a gateway - make some decisions.  It might not be
> >> 100% safe, but I can't see any origin servers complaining if you started
> >> processing from that point, for GET and HEAD requests at least.
> >
> > Sadly that's not even true :-(  We've seen recently, I think it was
> > Elastic Search that takes JSON requests sent as the body of a GET
> > request. So now that we managed to better define the presence/absence
> > of a body in a request, we're back trying to guess it with a certain
> > probability based on a method, and I'd definitely not encourage
> > implementations to start to guess again.
> I would think it entirely reasonable if your default configuration started
> processing GET/HEAD requests after the header.  Maybe you can have a toggle
> that turns off that capability for people who want to go off-script and lose
> the performance gains, but I'd imagine most people would find the default to
> be very much acceptable.

Actually, if we had to choose, we'd do the opposite. People always
complain first about breakage: "I inserted your component here and nothing
works anymore". So by default we would have to wait. Then we can add a
warning that waiting in order to support poorly behaving clients has
nasty security consequences and makes DoS much easier, and provide an
option to simply not wait.

But given that we're still in the early days of such implementations, I'd
rather see the pieces arranged correctly so that everything works as
smoothly as it can. That's why, if there's no strong objection to this,
I'd file an erratum to at least mention the issue and how best to deal
with it from both sides, so that newcomers have a chance to notice it
before they're engaged too deeply with their code.