Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)

Kazuho Oku <kazuhooku@gmail.com> Thu, 11 June 2020 08:45 UTC

From: Kazuho Oku <kazuhooku@gmail.com>
Date: Thu, 11 Jun 2020 17:41:51 +0900
Message-ID: <CANatvzyv03VH9=+J=M2yY0EwCXp7HMWsXYaXOE=WYGDKBHdaVA@mail.gmail.com>
To: Yoav Weiss <yoav@yoav.ws>
Cc: Patrick Meenan <patmeenan@gmail.com>, Lucas Pardue <lucaspardue.24.7@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Bence Béky <bnc@chromium.org>
Subject: Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)
Archived-At: <https://www.w3.org/mid/CANatvzyv03VH9=+J=M2yY0EwCXp7HMWsXYaXOE=WYGDKBHdaVA@mail.gmail.com>

On Thu, Jun 11, 2020 at 4:51 PM Yoav Weiss <yoav@yoav.ws> wrote:

>
>
> On Thu, Jun 11, 2020 at 1:57 AM Kazuho Oku <kazuhooku@gmail.com> wrote:
>
>>
>>
>> On Wed, Jun 10, 2020 at 12:20 AM Patrick Meenan <patmeenan@gmail.com> wrote:
>>
>>> Maybe I'm missing something but the priority updates don't need to
>>> coordinate across multiple data streams, just between the one stream that
>>> is being reprioritized and the control stream.
>>>
>>> Would something like this not work?
>>> - Control stream gets priority update for stream X
>>> - If stream X is known and the request side is complete/closed then
>>> update the priority as requested
>>>
>>
>> The problem is that when an H3 server receives a reprioritization frame
>> and fails to find the state of the stream designated by that frame, it has
>> to decide whether to queue or drop the frame. As you correctly point out,
>> the size of the queue has to be bounded.
>>
>> To determine if it should queue or drop, a server needs to have access to
>> both of the following states maintained by the QUIC stack:
>> * i) current maximum stream ID permitted to the peer
>> * ii) list of stream IDs that have not been closed yet
>>
>> Without having access to (i), a server cannot reject reprioritization
>> frames specifying an unreasonably large stream ID. Without having access to
>> (ii), a server might start remembering information for streams that have
>> already been closed.
>>
>> The question is whether we think it is okay to require all QUIC stacks to
>> provide access to this information (or to provide an API that allows an
>> application to query whether a given stream ID meets the two criteria).
>>
>> I would also point out that the size of the queue should not be restricted
>> any further. This is because, when reprioritization is considered an
>> indispensable part of Extensible Priorities, a client might use the
>> reprioritization frame for sending initial priorities too, instead of using
>> the header field to indicate the initial priority.
>>
>> That is what Chrome does today. If an HTML document contains 100 images,
>> and if Chrome discovers them all at once, it sends 100 PRIORITY_UPDATE
>> frames and then sends the requests for all those images, assuming that 100
>> is the maximum stream concurrency permitted by the server.
>>
>
> If what Chrome is doing today is sub-optimal, it would be great if you
> could file an issue on that front.
>

Whether this is an issue with Chrome depends on whether we think that the
specification can require all servers to implement reprioritization
correctly. That's why this is a specification issue.
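
To make concrete what the specification would be asking servers to do, here
is a rough sketch of the queue-or-drop decision (Python, purely
illustrative; peer_max_stream_id(), is_stream_closed() and get_stream() are
hypothetical stand-ins for the QUIC-stack state described in the quoted
text, not any real stack's API):

    from collections import OrderedDict

    class PendingPriorityUpdates:
        # Bounded buffer for PRIORITY_UPDATE frames that arrive before the
        # request stream they refer to (a sketch, not a real server).
        def __init__(self, max_entries):
            self.max_entries = max_entries   # e.g. the max stream concurrency
            self.entries = OrderedDict()     # stream id -> priority field value

        def on_priority_update(self, quic, stream_id, priority_field_value):
            if stream_id > quic.peer_max_stream_id():   # criterion (i)
                return "reject"   # unreasonably large stream ID
            if quic.is_stream_closed(stream_id):        # criterion (ii)
                return "drop"     # stream already gone; nothing to remember
            stream = quic.get_stream(stream_id)
            if stream is not None:
                stream.priority = priority_field_value  # apply immediately
                return "applied"
            # Stream not seen yet: buffer the update, replacing any older one
            # for the same stream, and evict the oldest entry when full.
            self.entries.pop(stream_id, None)
            if len(self.entries) >= self.max_entries:
                self.entries.popitem(last=False)
            self.entries[stream_id] = priority_field_value
            return "queued"

        def on_request_stream_received(self, stream):
            # Apply a buffered update, if any, once the request stream shows up.
            pfv = self.entries.pop(stream.stream_id, None)
            if pfv is not None:
                stream.priority = pfv

Only the first two checks need state that lives inside the QUIC stack;
everything below them is the bounded queue that Patrick described.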


>
>>
>> If some servers fail to implement reprioritization correctly, and if
>> clients rely too heavily on reprioritization, the negative impact on
>> performance could be far greater than *not* having reprioritization at all.
>> I think that is the concern that some of us have, and the reason why they
>> (I) think defining reprioritization as an optional feature would be a safer
>> approach.
>>
>
> If that's indeed the case, we shouldn't define reprioritization *at all*.
> Making it optional is a cop-out, which would result in inconsistencies,
> meaning clients can't rely on it and need to work around its absence.
>

That depends on how much clients would rely on reprioritization. Unlike H2
priorities, Extensible Priorities does not have inter-stream dependencies.
Therefore, losing *some* prioritization signals is less of an issue than it
is with H2 priorities.
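
To illustrate, under Extensible Priorities the per-stream scheduling state
boils down to the urgency and incremental parameters, with the defaults
(urgency 3, non-incremental) applying whenever no signal was received. A
minimal sketch of what that means for a server (my own illustration, not
taken from the draft):

    DEFAULT_URGENCY = 3          # defaults defined by Extensible Priorities
    DEFAULT_INCREMENTAL = False

    def response_order(stream_ids, signals):
        # `signals` maps stream id -> (urgency, incremental) for the streams
        # whose priority signal was actually received; anything missing falls
        # back to the defaults. A lost signal therefore degrades only that one
        # stream; there is no dependency tree to corrupt, unlike H2 priorities.
        def key(stream_id):
            urgency, _incremental = signals.get(
                stream_id, (DEFAULT_URGENCY, DEFAULT_INCREMENTAL))
            return (urgency, stream_id)   # lower urgency value is served first
        return sorted(stream_ids, key=key)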

Assuming that reprioritization is used mostly for refining the initial
priorities of a fraction of all the requests, I think there'd be benefit in
defining reprioritization as an optional feature, though I can see that some
might argue for not having reprioritization even as an optional feature
unless there is proof that it would be useful.
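
As a rough illustration of that usage pattern (the client calls below are
hypothetical, not Chrome's actual API): the initial priority travels on the
request's "priority" header field, and a PRIORITY_UPDATE frame is sent only
for the few requests whose priority changes later.

    def fetch_image(client, url, in_viewport):
        # Initial priority goes on the request itself, e.g. "priority: u=5, i"
        # for an out-of-viewport image that can be served incrementally.
        urgency = 3 if in_viewport else 5
        return client.request("GET", url,
                              headers={"priority": "u=%d, i" % urgency})

    def on_scrolled_into_view(client, stream):
        # Reprioritization is needed only for the fraction of requests whose
        # priority changes after the fact; everything else never generates a
        # PRIORITY_UPDATE frame at all.
        client.send_priority_update(stream.stream_id, "u=2, i")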

> We should decide if reprioritization is good or bad, based on as much data
> as we can pull, and make sure it's implemented only if we see benefits for
> it in some cases, and then make sure it's only used in those cases.
>
>
>>
>>
>>> - If stream X is either not known or still in the process of receiving
>>> request details, store the priority update for stream X in a fixed
>>> queue/map (size can be small but a safe size would be the max number of
>>> streams supported)
>>> - If there is already a pending priority update for stream X, discard it
>>> and replace it with the current priority update
>>> - If the pending priority update queue is full, drop the oldest and
>>> insert the new update
>>> - When a new request stream closes, check the pending priority update
>>> queue to see if there is an update waiting for the stream. If so, remove it
>>> from the queue and apply the new priority
>>>
>>> There should be no DOS concerns since the queue is fixed and small. The
>>> performance overhead would be trivial if we assume that out-of-order
>>> reprioritizations are rare (i.e. the list will almost always be empty).
>>>
>>> On Tue, Jun 9, 2020 at 10:48 AM Dmitri Tikhonov <
>>> dtikhonov@litespeedtech.com> wrote:
>>>
>>>> On Tue, Jun 09, 2020 at 03:15:44PM +0100, Lucas Pardue wrote:
>>>> > I can hypothesize that an implementation with QPACK dynamic support has
>>>> > already crossed the threshold of complexity that means implementing
>>>> > reprioritization is not burdensome. I'd like to hear from other
>>>> > implementers if they agree or disagree with this.
>>>>
>>>> I don't think we can judge either way.  If Alice implements QPACK and
>>>> Bob implements reprioritization, results will vary based on their level
>>>> of competence.  The degree of burden will also vary for each
>>>> particular implementation.  Speaking for lsquic, reprioritization
>>>> had to [1] touch more code and was much more tightly coupled than
>>>> QPACK; on the other hand, QPACK encoder logic was a lot more code.
>>>>
>>>> At a higher level, I don't understand the concern with complexity.
>>>> If you look up "complexity" in the dictionary, you will see
>>>>
>>>>     complexity (n), see QUIC.
>>>>
>>>>   - Dmitri.
>>>>
>>>> 1. Before it was ripped out of the spec, that is, thanks a lot...
>>>>
>>>>
>>
>> --
>> Kazuho Oku
>>
>

-- 
Kazuho Oku