Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?

Kazuho Oku <kazuhooku@gmail.com> Tue, 24 January 2017 07:40 UTC

From: Kazuho Oku <kazuhooku@gmail.com>
Date: Tue, 24 Jan 2017 16:36:44 +0900
Message-ID: <CANatvzz0AYxBDDyEbpvnq1xPd1-vrFazbnVt2usanXS0CTKbeA@mail.gmail.com>
In-Reply-To: <CA+3+x5Ft8MfRduWp1RzJ9_qAgQiCujac8f5FLz81Xpyu3cTDxw@mail.gmail.com>
To: Tom Bergan <tombergan@chromium.org>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Subject: Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?
Archived-At: <http://www.w3.org/mid/CANatvzz0AYxBDDyEbpvnq1xPd1-vrFazbnVt2usanXS0CTKbeA@mail.gmail.com>

Regarding your question, I am unaware of a scheduler design that is
better than O(n) for both dequeuing (i.e. selecting the stream to
send) and rescheduling a stream (especially a stream with many
descendants).

While designing the scheduler of H2O (you are right in pointing out
that it is O(depth)), I came up with two options. One was to retain
the vertical (i.e. parent-child) relationships between the streams.
The other was to squash the vertical relationships into a
one-dimensional list of streams ordered by priority.

By taking the latter approach, you could create a scheduler that
dequeues at O(1). But such a scheduler would need to perform an O(N)
operation when receiving a priority frame or a stream-level window
update (in this case N is the number of direct and indirect
descendants of the reprioritized stream).
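
To make the trade-off concrete, here is a rough sketch in Go of that
second approach (the types and names are my own, for illustration
only; this is not code from H2O or from Go's HTTP/2 server): the tree
is flattened into a single priority-ordered list of ready streams, so
dequeuing is O(1), while a PRIORITY frame forces every descendant of
the moved stream to be repositioned.

  // Rough sketch of the "flattened" design: the dependency tree is
  // collapsed into one list of ready streams kept in effective-priority
  // order, so dequeuing is O(1) but reprioritizing a subtree is O(N).
  package sched

  import "sort"

  type Stream struct {
    ID       uint32
    Priority float64   // effective priority derived from the tree
    Children []*Stream // direct dependents
  }

  type FlatScheduler struct {
    ready []*Stream // ready streams, highest Priority first
  }

  // Next pops the highest-priority ready stream: O(1).
  func (s *FlatScheduler) Next() *Stream {
    if len(s.ready) == 0 {
      return nil
    }
    st := s.ready[0]
    s.ready = s.ready[1:]
    return st
  }

  // Reprioritize handles a PRIORITY frame (or a window update that
  // unblocks a subtree): the effective priority of the moved stream
  // and of every direct and indirect descendant changes, so all of
  // them must be recomputed and the list reordered -- O(N) in the
  // size of the subtree.
  func (s *FlatScheduler) Reprioritize(st *Stream, newPriority float64) {
    var walk func(n *Stream, p float64)
    walk = func(n *Stream, p float64) {
      n.Priority = p
      for _, c := range n.Children {
        walk(c, p/2) // placeholder rule; a real derivation would use weights
      }
    }
    walk(st, newPriority)
    sort.Slice(s.ready, func(i, j int) bool {
      return s.ready[i].Priority > s.ready[j].Priority
    })
  }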

Considering this, we chose to implement the scheduler of H2O to be
O(1) weight-wise and O(n) depth-wise, keeping the constant factor of
the O(n) part small enough that it cannot be used as an attack vector.
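
Roughly, that design can be sketched as follows (the Go below is only
an illustration of the idea; H2O is written in C and the actual data
structures differ): every node keeps the children that are ready, or
have ready descendants, in a weighted round-robin structure, and
dequeuing walks one level per step from the root down to a ready
stream, i.e. O(depth).

  // Illustrative sketch of a scheduler that is O(1) with respect to
  // weight and O(depth) per dequeue.
  package sched

  type node struct {
    id      uint32
    weight  uint8   // 1..256, taken from the PRIORITY frame
    deficit float64 // round-robin credit accumulated per pass
    ready   bool    // the stream itself has data to send
    active  []*node // children that are ready or have ready descendants
  }

  // next descends the tree, choosing at each level the active child
  // with the largest credit. The linear scan below is O(children) per
  // level and is used only to keep the sketch short; with per-weight
  // queues the choice can be made in O(1), leaving O(depth) as the
  // per-dequeue cost.
  func next(n *node) *node {
    for !n.ready {
      if len(n.active) == 0 {
        return nil
      }
      best := n.active[0]
      for _, c := range n.active {
        c.deficit += float64(c.weight)
        if c.deficit > best.deficit {
          best = c
        }
      }
      best.deficit -= 256 // charge the chosen child; the constant is illustrative
      n = best
    }
    return n
  }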

2017-01-24 9:39 GMT+09:00 Tom Bergan <tombergan@chromium.org>:
> I implemented the HTTP/2 response scheduler in Go's HTTP/2 server library.
> I'm trying to understand the worst-case behavior of that scheduler. I
> believe the worst-case behavior is necessarily O(n) operations per frame
> sent on the wire, where n is the number of streams in the dependency tree.
> Here is my rationale.
>
> The key operation is finding the highest-priority stream that is ready to
> send data.
>
> If we don't care about weights, and we don't care about balancing bandwidth
> usage across sibling nodes in a tree, then we can label each node with two
> fields: "ready" (true if the stream is ready to send data) and "depth" (the
> node's distance from the root of the tree). The scheduler must find a node
> with the smallest depth over all nodes with ready = true. It is fairly
> trivial to implement this in O(log n).
>
> Now, let's introduce weights. The scheduler must allocate bandwidth to all
> ready nodes, which happens recursively as follows:
>
>   func allocateBandwidth(node, bw) {
>     if (node.ready) {
>       node.bandwidthShare = bw
>       return
>     }
>     totalWeight = 0
>     for (n in node.children) {
>       if (n.ready || descendantIsReady(n)) {
>         totalWeight += n.weight
>       }
>     }
>     for (n in node.children) {
>       if (n.ready || descendantIsReady(n)) {
>         allocateBandwidth(n, bw * n.weight / totalWeight)
>       }
>     }
>   }
>   allocateBandwidth(root, 1.0)
>
> I believe the above definition is a direct translation of RFC 7540 Section
> 5.3.2 (also see this thread, which discussed bandwidth allocation). Given a
> complete bandwidth allocation, the server can translate each node's
> bandwidthShare to a number of tokens that are updated using a token bucket
> algorithm (or similar). The scheduler would then pick the node with the most
> available tokens. The scheduler looks something like:
>
>   func scheduleNextFrame() {
>     if (tree changed since last frame written) {
>       allocateBandwidth(root, 1.0)
>       assign tokens and build a priority queue containing all nodes with
> allocated bandwidth
>     }
>     node = priorityQueue.head()
>     node.consumeTokens() // consume up to frame size or flow-control limit
>     priorityQueue.update(node)
>     return node
>   }
>
> There are two steps in scheduleNextFrame. The first step updates the
> bandwidth allocation if the tree changed since the last frame was written.
> This is the most expensive step. I don't believe it's possible to implement
> allocateBandwidth using fewer than O(n) worst-case steps. For example, if
> the tree is mostly flat, meaning that most nodes are children of the same
> node, then the loop to compute totalWeight is O(n). I believe Firefox
> creates mostly-flat trees. Further, allocateBandwidth makes O(depth)
> recursive steps, where "depth" is the maximum depth of the dependency tree.
> If the tree is mostly-linear, then O(depth) becomes O(n). Chrome creates
> mostly-linear trees.
>
> The overall runtime depends on how often the tree changes. If the tree
> changes rarely, the scheduler is cheap. If the tree changes frequently, the
> scheduler is worst-case O(n). A tree has "changed" if it changed shape
> (nodes are added/moved/removed) or if any node's ready state has changed.
> Both kinds of changes can happen often in practice, suggesting that the
> overall scheduler is worst-case O(n). For example, consider a server that
> sends small responses (one or two DATA frames per response) -- each stream
> will be closed after one or two DATA frames, so on average, the tree will
> change shape every few frames. Further, it's possible for ready states to
> change more frequently than you might think. In Go's HTTP/2 implementation,
> a stream does not become ready until the application handler calls Write to
> buffer response data. After that buffered data is serialized on the wire,
> the stream transitions to "not ready" because the scheduler cannot know when
> the next Write call will happen. The stream will transition back to "ready"
> during the next Write call. Each Write call typically buffers about 32KB of
> data. This is a "push" model, where the application handler "pushes" data
> into the scheduler. I'm aware of one other HTTP/2 server that works
> similarly. I suspect that frequent ready/not-ready transitions are common to
> most HTTP/2 servers that use a "push" model. These servers will be more
> susceptible to the O(n) worst case.
>
> Questions:
>
> 1. Am I missing a clever implementation, or is it true that a faithful
> HTTP/2 scheduler necessarily requires O(n) operations per frame sent on the
> wire, in the worst case? I could not find much discussion of this question
> after a quick search. H2O claims to implement an O(1) scheduler, however,
> the code seems to be worst-case O(depth) or O(n) -- see here, here, and
> here.
>
> 2. If the above is correct, should I be concerned about the O(n) worst case?
> I doubt that a typical web browsing session will trigger O(n) behavior
> frequently, so I'm less concerned about the average case; I'm more concerned
> about pathological cases or possible DoS vectors. Also, think about cases
> where the "client" is actually a proxy server, meaning the HTTP/2 connection
> may have many more concurrent streams than a typical browsing session. For
> comparison, if you recall the predecessor to HTTP/2 (SPDY), a SPDY scheduler
> could be trivially implemented in O(1), since SPDY used just eight priority
> buckets.
>
> 3. If I should be concerned about an O(n) worst case, are there any
> suggested mitigations beyond setting SETTINGS_MAX_CONCURRENT_STREAMS to a
> smallish constant?



-- 
Kazuho Oku