Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)

Lucas Pardue <> Fri, 19 June 2020 21:47 UTC

From: Lucas Pardue <>
Date: Fri, 19 Jun 2020 22:44:05 +0100
To: Tom Bergan <>
Cc: HTTP Working Group <>

On Fri, Jun 19, 2020 at 7:10 PM Tom Bergan <> wrote:

> I didn't follow those details. I think it would be helpful to summarize
> the API you're referring to.
> This might be naive: While there are potentially tricky implementation
> issues, as discussed by Kazuho and Patrick earlier, and potentially tricky
> scheduling decisions, as discussed by Stefan, I'm not seeing how those
> translate into API problems. Generally speaking, at the HTTP application
> level, a request doesn't really exist until the HEADERS arrive (example
> <>; all other HTTP libraries I'm
> familiar with work in basically the same way). At that point, the request
> has an initial priority, defined either by the HEADERS, or by the
> PRIORITY_UPDATE, if one arrived before HEADERS and there's no Priority
> field. Further PRIORITY_UPDATEs can be delivered with whatever event
> mechanism is most convenient (callbacks, channels, etc).

The quiche HTTP/3 library layer provides a poll() method that an
application calls. This queries readable transport streams and tries to
read requests (a complete headers list). Today, communicating the initial
priority is easy: I just pass through the Priority header that I received.
The application chooses the priority for sending responses by providing a
desired priority in the send_request() method; there is no coupling between
the request and response priority. send_request() more or less passes
everything through to the QUIC library, which manages the packetization of
STREAM frames once data is queued up. Minimal state is required in the
HTTP/3 library layer; once a response is started, the application just
tries to fill the transport stream send buffer (via the H3 layer) as
quickly as the client drains it. When the application is ready to complete
the response, it sends the last piece of data with a FIN. The application
can then forget about the stream. At this stage only the transport
maintains stream state, because the stream is not complete until the client
reads the remaining data. If we deem reprioritization useful, then it needs
to be supported through the full lifetime of the stream.
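To make the shape of that flow concrete, here is a minimal Rust sketch of
the poll-then-respond pattern described above. Every type and method name
here (H3Conn, H3Event, send_response, the Priority struct) is a
hypothetical stand-in for illustration, not quiche's actual API:

```rust
// Hypothetical stand-ins for the H3 layer described above; not quiche's API.
#[derive(Debug, Clone, PartialEq)]
struct Priority {
    urgency: u8,
    incremental: bool,
}

enum H3Event {
    // A request only exists once a complete header list has been read,
    // at which point it carries whatever initial priority was seen.
    Request {
        stream_id: u64,
        priority: Option<Priority>,
    },
}

struct H3Conn {
    pending: Vec<H3Event>,
}

impl H3Conn {
    // poll() surfaces complete requests read from readable transport streams.
    fn poll(&mut self) -> Option<H3Event> {
        self.pending.pop()
    }

    // The response priority is chosen by the application and is not coupled
    // to the priority the request arrived with. Everything is handed through
    // to the transport; once `fin` is sent the application can forget the
    // stream, though the transport keeps state until the client drains it.
    // Returns the number of body bytes queued, for illustration.
    fn send_response(
        &mut self,
        _stream_id: u64,
        _priority: Priority,
        body: &[u8],
        _fin: bool,
    ) -> usize {
        body.len()
    }
}
```

The point of the sketch is the asymmetry: the request's priority arrives as
data attached to a polled event, while the response's priority is a
parameter the application supplies when sending.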

Adding PRIORITY_UPDATE requires some more work in my HTTP/3 library layer.
One question that comes to mind is whether the application cares about the
full sequence of PRIORITY_UPDATEs, or whether it is fine to skip/collapse
them. Before the request has been poll()ed out, it seems sensible to buffer
PRIORITY_UPDATEs and then only present the most recent one. Calling this
the initial priority is a slight fib; "most recent priority at the time you
discovered the request existed" is more apt, but this is splitting hairs.
The one concern I would have is where the Priority header and the most
recent update disagree. By passing both priorities out to the application,
I'm making it responsible for picking.
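The buffer-and-collapse idea can be sketched in a few lines of Rust. Again,
the names here (UpdateBuffer, on_priority_update, take_initial, the
Priority struct) are hypothetical, not part of quiche:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
struct Priority {
    urgency: u8,
    incremental: bool,
}

// Buffers PRIORITY_UPDATEs received before the request is poll()ed out,
// keeping only the most recent one per stream. Hypothetical sketch.
#[derive(Default)]
struct UpdateBuffer {
    latest: HashMap<u64, Priority>,
}

impl UpdateBuffer {
    // Each new PRIORITY_UPDATE overwrites the previous one, collapsing the
    // full sequence down to its most recent element.
    fn on_priority_update(&mut self, stream_id: u64, p: Priority) {
        self.latest.insert(stream_id, p);
    }

    // At poll() time, surface both the Priority header value (if any) and
    // the most recent buffered update (if any). Where the two disagree, the
    // application is responsible for picking between them.
    fn take_initial(
        &mut self,
        stream_id: u64,
        header: Option<Priority>,
    ) -> (Option<Priority>, Option<Priority>) {
        (header, self.latest.remove(&stream_id))
    }
}
```

Collapsing here is just last-writer-wins per stream, which matches the
intuition that only the most recent update matters by the time the
application first learns the request exists.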

Withholding a reprioritization event until after the request has been
poll()ed helps a bit. But I think there is no clean way to deal with
reprioritization events after the application is done with the stream; if
the application is nominally done processing the request, all it can do is
tell the transport to behave differently. What's the point in that?
Attempting to explain the oddities caused by QUIC's behavior is part of the
API problem, IMO.

> We also haven't mentioned reprioritization of server push. The client
>> cannot control the initial priority of a pushed response and there is an
>> open issue about the default priority of a push [2]. In that thread we are
>> leaning towards defining no default priority and letting a server pick
>> based on information *it* has. However, Mike Bishop's point about
>> reprioritizing pushes is interesting [3]. To paraphrase, if you consider
>> the RTT of the connection, there are three conditions:
>> a) the push priority was low: so no data was sent by the time a
>> reprioritization was received at the server. It is possible to apply the
>> reprioritization but importantly, the push was pointless and we may as well
>> have waited for the client to make the request.
>> b) the push priority was high, response size "small": so all data was
>> sent by the time a reprioritization was received at the server. The
>> reprioritization was useless.
>> c) the push priority was high, response size "large": some data sent at
>> initial priority but at the time a reprioritization is received at the
>> server, the remaining data can be sent appropriately. However, anecdotally
>> we know that pushing large objects is not a good idea.
>> If we agree to those conditions, it makes for a poor argument to keep
>> reprioritization of server push. But maybe there is data that disagrees.
> FWIW, I have the opposite interpretation. We can't ignore case (a) by
> simply saying that "the push was pointless and we may as well have waited
> for the client". That assumes the server should have known the push would
> be pointless, but in practice that conclusion depends on a number of
> factors that can be difficult to predict (size of other responses,
> congestion control state, network BDP). Sometimes push is useful, sometimes
> it's not, and when it's not, we should gracefully fallback to behavior that
> is equivalent to not using push at all. From that perspective, case (a) is
> WAI.
> This lack of a graceful fallback is a big reason why push can be such a
> footgun. Frankly, if pushes cannot be reprioritized in this way, then IMO
> push is essentially dead as a feature (and it's already on rocky ground, as
> it's so hard to find cases where it works well in the first place).

That's a fair opinion too. Have you any thoughts about server push
reprioritization being a motivating factor for maintaining the feature?

Unfortunately I don't have a server push API that I can use to speculate
about reprioritization, although I suspect that I'd have similar problems
determining when a pushed response was "done", with the added complication
that something would need to maintain state to map push IDs to stream IDs.