Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)

Lucas Pardue <> Fri, 19 June 2020 16:26 UTC

From: Lucas Pardue <>
Date: Fri, 19 Jun 2020 17:22:30 +0100
To: HTTP Working Group <>

Thanks for the different perspectives on this. Quoting isn't going to work
so well, so I'll pick out some points from this and the parent thread:

Google has added a flag [1] to Chrome that allows toggling of H2
reprioritization; some experimental work is happening with this. Thanks!

We've talked a bit about how priorities might affect both scheduling server
work, and selecting the bytes to emit from the server's response send
queue. I agree with Kazuho that we don't want to specify much about the
internals of server applications. However, there are some DoS
considerations (depending on the outcome of the reprioritization
discussion), so looking ahead we might find it useful to capture anything
not already covered in the spec.

The server's role in optimizing the use of available bandwidth is an
interesting perspective to take, especially considering the client's
responsibility for providing flow control updates. In the basic HTTP/3
priority implementation of the quiche library, the application processes
the Priority header and provides that information when sending response
data. Internally, the library uses an implementation-specific API method to
set the priority of the transport stream; this does account for the
properties Kazuho mentioned: i) it returns an error if the stream ID
exceeds the current maximum, and ii) it no-ops if the stream ID refers to a
closed stream. HTTP response payload data is written to each stream until
the QUIC layer tells me it cannot take any more because the send window is
full (phase 1, local buffering). In a separate loop, the QUIC layer selects
stream data to transmit based on the priority (phase 2, emission). The
client's expedience in providing flow control updates affects phase 1 but
not phase 2, while a client reprioritization would affect phase 2 but not
phase 1.
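To make the two properties concrete, here is a self-contained Rust sketch
of a transport-level priority setter with that behaviour. This is my own
illustration, not quiche's actual API; all type and method names here are
invented for the example.

```rust
// Sketch of a transport stream priority setter with the two properties
// described above (invented names, not quiche's real API):
//   (i)  error if the stream ID exceeds the current maximum
//   (ii) no-op if the stream ID refers to a closed stream
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum PriorityError {
    StreamLimit, // stream ID beyond the currently allowed maximum
}

#[derive(Clone, Copy, Debug, PartialEq)]
struct Priority {
    urgency: u8,       // 0 (most urgent) .. 7 (least), per Extensible Priorities
    incremental: bool,
}

struct Transport {
    max_stream_id: u64,
    closed: Vec<u64>,
    priorities: HashMap<u64, Priority>,
}

impl Transport {
    fn stream_priority(&mut self, id: u64, prio: Priority) -> Result<(), PriorityError> {
        if id > self.max_stream_id {
            // Property (i): reject stream IDs above the current maximum.
            return Err(PriorityError::StreamLimit);
        }
        if self.closed.contains(&id) {
            // Property (ii): silently ignore closed streams.
            return Ok(());
        }
        self.priorities.insert(id, prio);
        Ok(())
    }
}
```

The emission loop (phase 2) would then consult `priorities` when picking
which stream's buffered data to packetize next.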

The funnies will happen with trying to accommodate reprioritization:

   - Exposing the reception of a reprioritization signal (PRIORITY_UPDATE
   frame) to the application might be useful, or useless if we consider some
   of Stefan's points.
   - Reordering can cause the reprioritization to arrive before the initial
   priority. Exposing an event to the application just makes things harder.
      - Reordering isn't the only concern. In quiche, when an application
      asks us to read from the transport, we internally always read from
      the control stream and QPACK streams before request streams. So we'd
      always pull out the PRIORITY_UPDATE first.
      - Exposing this kind of reprioritization event to the application is
      mostly useless because the application has no idea what is being
      reprioritized. If the priority is used for deciding server work, one
      of the layers above the transport needs to first validate and then
      remember the details. This means that the library needs to expose a
      broader API surface than it already does (e.g. exposing Kazuho's
      properties).
      - If the transport layer API simply actions the last invoked
      priority, naively calling it when the signals were received in the
      "wrong" order means that reprioritization might be ignored.
   - If a reprioritization event is simply hairpinned back into the quiche
   library, there is an argument for not exposing it.
   - I could simply accommodate things by modifying the transport priority
   method to take a bool, is_initial. This would prevent an initial priority
   from being applied after a reprioritization. In conjunction, defining in
   the spec that the initial priority is *always the header* would remove
   some of the complexity of buffering data above the transport layer.
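The is_initial idea from the last bullet could look roughly like this.
Again, a self-contained sketch with invented names rather than quiche's
API: once a reprioritization has been applied to a stream, a late-arriving
initial (header) priority is ignored instead of clobbering it.

```rust
// Sketch of "last signal wins, unless the loser is the initial priority
// arriving late" (invented names, purely illustrative).
use std::collections::{HashMap, HashSet};

#[derive(Clone, Copy, Debug, PartialEq)]
struct Priority {
    urgency: u8,
    incremental: bool,
}

struct Scheduler {
    current: HashMap<u64, Priority>,
    reprioritized: HashSet<u64>, // streams that have seen a PRIORITY_UPDATE
}

impl Scheduler {
    fn set_priority(&mut self, id: u64, prio: Priority, is_initial: bool) {
        if is_initial && self.reprioritized.contains(&id) {
            // The initial (header) priority was reordered behind a
            // PRIORITY_UPDATE: drop it rather than clobber the update.
            return;
        }
        if !is_initial {
            self.reprioritized.insert(id);
        }
        self.current.insert(id, prio);
    }
}
```

Without the flag, a naive "last call wins" setter would silently undo the
reprioritization whenever the signals arrived in the "wrong" order.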

All of this is additional consideration and speculation specific to my
implementation; applicability to others may vary. I can see how things
would be harder for implementers that attempt to manage more of the
priority scheme in the HTTP/3 layer than the QUIC one.

We also haven't mentioned reprioritization of server push. The client
cannot control the initial priority of a pushed response and there is an
open issue about the default priority of a push [2]. In that thread we are
leaning towards defining no default priority and letting a server pick
based on information *it* has. However, Mike Bishop's point about
reprioritizing pushes is interesting [3]. To paraphrase, if you consider
the RTT of the connection, there are three conditions:

a) the push priority was low: no data was sent by the time the
reprioritization was received at the server. It is possible to apply the
reprioritization but, importantly, the push was pointless and we may as
well have waited for the client to make the request.
b) the push priority was high, response size "small": all data was sent by
the time the reprioritization was received at the server. The
reprioritization was useless.
c) the push priority was high, response size "large": some data was sent at
the initial priority, but by the time the reprioritization is received at
the server, the remaining data can still be sent at the new priority.
However, anecdotally we know that pushing large objects is not a good idea.
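Those three conditions boil down to how much of the pushed response had
already left the server when the reprioritization arrived. A toy
classification, with names of my own invention:

```rust
// Toy mapping of the three push-reprioritization conditions above,
// keyed on bytes sent versus total response length (illustrative only).
#[derive(Debug, PartialEq)]
enum PushReprioOutcome {
    PushWasPointless, // (a) nothing sent yet: might as well have waited for the request
    ReprioWasUseless, // (b) everything already sent
    ReprioApplies,    // (c) remaining data can go out at the new priority
}

fn classify(bytes_sent: u64, response_len: u64) -> PushReprioOutcome {
    if bytes_sent == 0 {
        PushReprioOutcome::PushWasPointless
    } else if bytes_sent >= response_len {
        PushReprioOutcome::ReprioWasUseless
    } else {
        PushReprioOutcome::ReprioApplies
    }
}
```

Only case (c) actually benefits from the reprioritization, and that is the
case we discourage anyway.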

If we agree to those conditions, it makes for a poor argument to keep
reprioritization of server push. But maybe there is data that disagrees.


[1] -
[2] -
[3] -