Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?

Kazuho Oku <kazuhooku@gmail.com> Wed, 25 January 2017 02:03 UTC

From: Kazuho Oku <kazuhooku@gmail.com>
Date: Wed, 25 Jan 2017 10:59:58 +0900
Message-ID: <CANatvzxviLMMvn3VLJDn3oYJACSBwB8Qhy_43uQ9uBDnnc1s0Q@mail.gmail.com>
To: Tom Bergan <tombergan@chromium.org>
Cc: Stefan Eissing <stefan.eissing@greenbytes.de>, "Kazu Yamamoto (山本和彦)" <kazu@iij.ad.jp>, HTTP Working Group <ietf-http-wg@w3.org>
Subject: Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?
Archived-At: <http://www.w3.org/mid/CANatvzxviLMMvn3VLJDn3oYJACSBwB8Qhy_43uQ9uBDnnc1s0Q@mail.gmail.com>

2017-01-25 7:22 GMT+09:00 Tom Bergan <tombergan@chromium.org>:
> Thanks for the feedback. Sounds like the worst-case time really is O(n).
>
>> kazuhooku@
>> the constant for O(n) would be small enough so that it cannot be used as
>> an attack vector
>
> Don't you need to ensure that n is small? Otherwise even O(n) with a small
> constant factor can lead to degraded performance. E.g., if
> SETTINGS_MAX_CONCURRENT_STREAMS = 20,000, I may end up computing a sum of
> 20,000 numbers every few frames, which is not terribly slow, but not
> terribly fast either.

I agree. For H2O, the maximum number of concurrent streams is hard-coded
to 100, partly to avoid that kind of attack.
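
To make the cost concrete, here is a rough, runnable Go translation of the
allocateBandwidth pseudocode quoted below (a sketch only; the node type and
field names are illustrative and not taken from Go's or H2O's actual code).
On a flat tree with n ready children, both loops over root.children are O(n),
which is exactly the cost being discussed:

```go
package main

import "fmt"

// node is an illustrative stand-in for a stream in the dependency tree.
type node struct {
	ready          bool
	weight         float64
	children       []*node
	bandwidthShare float64
}

// descendantIsReady reports whether any descendant of n is ready.
func descendantIsReady(n *node) bool {
	for _, c := range n.children {
		if c.ready || descendantIsReady(c) {
			return true
		}
	}
	return false
}

// allocateBandwidth distributes bw per RFC 7540 Section 5.3.2: a ready
// node takes the whole share; otherwise the share is split among the
// relevant children in proportion to their weights.
func allocateBandwidth(n *node, bw float64) {
	if n.ready {
		n.bandwidthShare = bw
		return
	}
	total := 0.0
	for _, c := range n.children {
		if c.ready || descendantIsReady(c) {
			total += c.weight
		}
	}
	for _, c := range n.children {
		if c.ready || descendantIsReady(c) {
			allocateBandwidth(c, bw*c.weight/total)
		}
	}
}

func main() {
	// A flat tree with two ready children, weights 1 and 3.
	a := &node{ready: true, weight: 1}
	b := &node{ready: true, weight: 3}
	root := &node{children: []*node{a, b}}
	allocateBandwidth(root, 1.0)
	fmt.Println(a.bandwidthShare, b.bandwidthShare) // prints 0.25 0.75
}
```

With the maximum capped at 100 streams, those O(n) loops stay cheap even
when re-run every few frames.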

>> kazu@
>> http://www.mew.org/~kazu/material/2015-http2-priority2.pdf
>> http://www.mew.org/~kazu/doc/paper/http2-haskell-2016.pdf
>
> Thanks. IIUC, the algorithms described in both links are still at least
> O(depth), which can be O(n) for dependency trees generated by certain
> clients such as Chrome.
>
>> stefan.eissing@
>> The very same problem exists for stream processing in order to generate
>> response data.
>
> What did you mean by "stream processing"?
>
> On Tue, Jan 24, 2017 at 12:26 AM, Stefan Eissing
> <stefan.eissing@greenbytes.de> wrote:
>>
>> The very same problem exists for stream processing in order to generate
>> response data.
>>
>> > Am 24.01.2017 um 08:53 schrieb Kazu Yamamoto (山本和彦) <kazu@iij.ad.jp>:
>> >
>> > Hi Tom,
>> >
>> > Probably, this material would help you:
>> >
>> > http://www.mew.org/~kazu/material/2015-http2-priority2.pdf
>> > http://www.mew.org/~kazu/doc/paper/http2-haskell-2016.pdf
>> >
>> > --Kazu
>> >
>> >> Regarding your question, I am unaware of a scheduler design that is
>> >> better than O(n) for both dequeuing (i.e. selecting the stream to
>> >> send) and for rescheduling a stream (especially a stream with many
>> >> descendants).
>> >>
>> >> While designing the scheduler of H2O (you are right in pointing out
>> >> the fact that it is O(depth)), I came up with two options. One was to
>> >> retain the vertical (i.e. parent-children) relationship between the
>> >> streams. The other was to squash the vertical relationships to
>> >> generate a one-dimensional list of streams ordered by priority.
>> >>
>> >> By taking the latter approach, you could create a scheduler that
>> >> dequeues at O(1). But such a scheduler would need to perform an O(N)
>> >> operation when receiving a priority frame or a stream-level window
>> >> update (in this case N is the number of direct and indirect
>> >> descendants of the reprioritized stream).
>> >>
>> >> Considering this, we chose to implement the scheduler of H2O as O(1)
>> >> weight-wise and O(n) depth-wise, judging that the constant for O(n)
>> >> would be small enough that it cannot be used as an attack vector.
>> >>
>> >> 2017-01-24 9:39 GMT+09:00 Tom Bergan <tombergan@chromium.org>:
>> >>> I implemented the HTTP/2 response scheduler in Go's HTTP/2 server
>> >>> library.
>> >>> I'm trying to understand the worst-case behavior of that scheduler. I
>> >>> believe the worst-case behavior is necessarily O(n) operations per
>> >>> frame
>> >>> sent on the wire, where n is the number of streams in the dependency
>> >>> tree.
>> >>> Here is my rationale.
>> >>>
>> >>> The key operation is finding the highest-priority stream that is ready
>> >>> to
>> >>> send data.
>> >>>
>> >>> If we don't care about weights, and we don't care about balancing
>> >>> bandwidth
>> >>> usage across sibling nodes in a tree, then we can label each node with
>> >>> two
>> >>> fields: "ready" (true if the stream is ready to send data) and "depth"
>> >>> (the
>> >>> node's distance from the root of the tree). The scheduler must find a
>> >>> node
>> >>> with the smallest depth over all nodes with ready = true. It is fairly
>> >>> trivial to implement this in O(log n).
>> >>>
>> >>> Now, let's introduce weights. The scheduler must allocate bandwidth to
>> >>> all
>> >>> ready nodes, which happens recursively as follows:
>> >>>
>> >>>  func allocateBandwidth(node, bw) {
>> >>>    if (node.ready) {
>> >>>      node.bandwidthShare = bw
>> >>>      return
>> >>>    }
>> >>>    totalWeight = 0
>> >>>    for (n in node.children) {
>> >>>      if (n.ready || descendantIsReady(n)) {
>> >>>        totalWeight += n.weight
>> >>>      }
>> >>>    }
>> >>>    for (n in node.children) {
>> >>>      if (n.ready || descendantIsReady(n)) {
>> >>>        allocateBandwidth(n, bw * n.weight / totalWeight)
>> >>>      }
>> >>>    }
>> >>>  }
>> >>>  allocateBandwidth(root, 1.0)
>> >>>
>> >>> I believe the above definition is a direct translation of RFC 7540
>> >>> Section
>> >>> 5.3.2 (also see this thread, which discussed bandwidth allocation).
>> >>> Given a
>> >>> complete bandwidth allocation, the server can translate each node's
>> >>> bandwidthShare to a number of tokens that are updated using a token
>> >>> bucket
>> >>> algorithm (or similar). The scheduler would then pick the node with
>> >>> the most
>> >>> available tokens. The scheduler looks something like:
>> >>>
>> >>>  func scheduleNextFrame() {
>> >>>    if (tree changed since last frame written) {
>> >>>      allocateBandwidth(root, 1.0)
>> >>>      assign tokens and build a priority queue
>> >>>        containing all nodes with allocated bandwidth
>> >>>    }
>> >>>    node = priorityQueue.head()
>> >>>    // consume up to frame size or flow-control limit
>> >>>    node.consumeTokens()
>> >>>    priorityQueue.update(node)
>> >>>    return node
>> >>>  }
>> >>>
>> >>> There are two steps in scheduleNextFrame. The first step updates the
>> >>> bandwidth allocation if the tree changed since the last frame was
>> >>> written.
>> >>> This is the most expensive step. I don't believe it's possible to
>> >>> implement
>> >>> allocateBandwidth using fewer than O(n) worst-case steps. For example,
>> >>> if
>> >>> the tree is mostly flat, meaning that most nodes are children of the
>> >>> same
>> >>> node, then the loop to compute totalWeight is O(n). I believe Firefox
>> >>> creates mostly-flat trees. Further, allocateBandwidth makes O(depth)
>> >>> recursive steps, where "depth" is the maximum depth of the dependency
>> >>> tree.
>> >>> If the tree is mostly-linear, then O(depth) becomes O(n). Chrome
>> >>> creates
>> >>> mostly-linear trees.
>> >>>
>> >>> The overall runtime depends on how often the tree changes. If the tree
>> >>> changes rarely, the scheduler is cheap. If the tree changes
>> >>> frequently, the
>> >>> scheduler is worst-case O(n). A tree has "changed" if it changed shape
>> >>> (nodes are added/moved/removed) or if any node's ready state has
>> >>> changed.
>> >>> Both kinds of changes can happen often in practice, suggesting that
>> >>> the
>> >>> overall scheduler is worst-case O(n). For example, consider a server
>> >>> that
>> >>> sends small responses (one or two DATA frames per response) -- each
>> >>> stream
>> >>> will be closed after one or two DATA frames, so on average, the tree
>> >>> will
>> >>> change shape every few frames. Further, it's possible for ready states
>> >>> to
>> >>> change more frequently than you might think. In Go's HTTP/2
>> >>> implementation,
>> >>> a stream does not become ready until the application handler calls
>> >>> Write to
>> >>> buffer response data. After that buffered data is serialized on the
>> >>> wire,
>> >>> the stream transitions to "not ready" because the scheduler cannot
>> >>> know when
>> >>> the next Write call will happen. The stream will transition back to
>> >>> "ready"
>> >>> during the next Write call. Each Write call typically buffers about
>> >>> 32KB of
>> >>> data. This is a "push" model, where the application handler "pushes"
>> >>> data
>> >>> into the scheduler. I'm aware of one other HTTP/2 server that works
>> >>> similarly. I suspect that frequent ready/not-ready transitions are
>> >>> common to
>> >>> most HTTP/2 servers that use a "push" model. These servers will be
>> >>> more
>> >>> susceptible to the O(n) worst case.
>> >>>
>> >>> Questions:
>> >>>
>> >>> 1. Am I missing a clever implementation, or is it true that a faithful
>> >>> HTTP/2 scheduler necessarily requires O(n) operations per frame sent
>> >>> on the
>> >>> wire, in the worst case? I could not find much discussion of this
>> >>> question
>> >>> after a quick search. H2O claims to implement an O(1) scheduler;
>> >>> however, the code seems to be worst-case O(depth) or O(n) -- see
>> >>> here, here, and here.
>> >>>
>> >>> 2. If the above is correct, should I be concerned about the O(n) worst
>> >>> case?
>> >>> I doubt that a typical web browsing session will trigger O(n) behavior
>> >>> frequently, so I'm less concerned about the average case; I'm more
>> >>> concerned
>> >>> about pathological cases or possible DoS vectors. Also, think about
>> >>> cases
>> >>> where the "client" is actually a proxy server, meaning the HTTP/2
>> >>> connection
>> >>> may have many more concurrent streams than a typical browsing session.
>> >>> For
>> >>> comparison, if you recall the predecessor to HTTP/2 (SPDY), a SPDY
>> >>> scheduler
>> >>> could be trivially implemented in O(1), since SPDY used just eight
>> >>> priority
>> >>> buckets.
>> >>>
>> >>> 3. If I should be concerned about an O(n) worst case, are there any
>> >>> suggested mitigations beyond setting SETTINGS_MAX_CONCURRENT_STREAMS
>> >>> to a
>> >>> smallish constant?
>> >>
>> >>
>> >>
>> >> --
>> >> Kazuho Oku
>> >>
>> >
>>
>>
>



-- 
Kazuho Oku