Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?
Kazu Yamamoto ( 山本和彦 ) <kazu@iij.ad.jp> Tue, 24 January 2017 07:56 UTC
Date: Tue, 24 Jan 2017 16:53:56 +0900
Message-Id: <20170124.165356.870174430965764062.kazu@iij.ad.jp>
To: ietf-http-wg@w3.org
From: Kazu Yamamoto <kazu@iij.ad.jp>
In-Reply-To: <CANatvzz0AYxBDDyEbpvnq1xPd1-vrFazbnVt2usanXS0CTKbeA@mail.gmail.com>
References: <CA+3+x5Ft8MfRduWp1RzJ9_qAgQiCujac8f5FLz81Xpyu3cTDxw@mail.gmail.com> <CANatvzz0AYxBDDyEbpvnq1xPd1-vrFazbnVt2usanXS0CTKbeA@mail.gmail.com>
Subject: Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?
Archived-At: <http://www.w3.org/mid/20170124.165356.870174430965764062.kazu@iij.ad.jp>
Hi Tom,

Perhaps these materials will help you:

http://www.mew.org/~kazu/material/2015-http2-priority2.pdf
http://www.mew.org/~kazu/doc/paper/http2-haskell-2016.pdf

--Kazu

> Regarding your question, I am unaware of a scheduler design that is
> better than O(n) for both dequeuing (i.e. selecting the stream to
> send) and for rescheduling a stream (especially a stream with many
> descendants).
>
> While designing the scheduler of H2O (you are right in pointing out
> the fact that it is O(depth)), I came up with two options. One was to
> retain the vertical (i.e. parent-children) relationship between the
> streams. The other was to squash the vertical relationships to
> generate a one-dimensional list of streams ordered by priority.
>
> By taking the latter approach, you could create a scheduler that
> dequeues in O(1). But such a scheduler would need to perform an O(N)
> operation when receiving a priority frame or a stream-level window
> update (in this case N is the number of direct and indirect
> descendants of the reprioritized stream).
>
> Considering this, we chose to implement the scheduler of H2O as O(1)
> weight-wise and O(n) depth-wise, but with a constant for the O(n)
> part small enough that it cannot be used as an attack vector.
>
> 2017-01-24 9:39 GMT+09:00 Tom Bergan <tombergan@chromium.org>:
>> I implemented the HTTP/2 response scheduler in Go's HTTP/2 server library.
>> I'm trying to understand the worst-case behavior of that scheduler. I
>> believe the worst-case behavior is necessarily O(n) operations per frame
>> sent on the wire, where n is the number of streams in the dependency tree.
>> Here is my rationale.
>>
>> The key operation is finding the highest-priority stream that is ready to
>> send data.
>>
>> If we don't care about weights, and we don't care about balancing bandwidth
>> usage across sibling nodes in a tree, then we can label each node with two
>> fields: "ready" (true if the stream is ready to send data) and "depth" (the
>> node's distance from the root of the tree). The scheduler must find a node
>> with the smallest depth over all nodes with ready = true. It is fairly
>> trivial to implement this in O(log n).
>>
>> Now, let's introduce weights. The scheduler must allocate bandwidth to all
>> ready nodes, which happens recursively as follows:
>>
>> func allocateBandwidth(node, bw) {
>>   if (node.ready) {
>>     node.bandwidthShare = bw
>>     return
>>   }
>>   totalWeight = 0
>>   for (n in node.children) {
>>     if (n.ready || descendantIsReady(n)) {
>>       totalWeight += n.weight
>>     }
>>   }
>>   for (n in node.children) {
>>     if (n.ready || descendantIsReady(n)) {
>>       allocateBandwidth(n, bw * n.weight / totalWeight)
>>     }
>>   }
>> }
>> allocateBandwidth(root, 1.0)
>>
>> I believe the above definition is a direct translation of RFC 7540 Section
>> 5.3.2 (also see this thread, which discussed bandwidth allocation). Given a
>> complete bandwidth allocation, the server can translate each node's
>> bandwidthShare into a number of tokens that are updated using a token-bucket
>> algorithm (or similar). The scheduler would then pick the node with the most
>> available tokens.
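For concreteness, the allocateBandwidth pseudocode above can be rendered in Go roughly as follows. This is only a sketch: the node type, its fields, and the descendantIsReady helper are illustrative assumptions, not types from Go's net/http2 package.

    package scheduler

    // node is an assumed type mirroring the pseudocode above.
    type node struct {
        ready          bool
        weight         int
        children       []*node
        bandwidthShare float64
    }

    // descendantIsReady reports whether any descendant of n is ready to
    // send. (A real server would likely cache this bit rather than
    // recompute it on every allocation.)
    func descendantIsReady(n *node) bool {
        for _, c := range n.children {
            if c.ready || descendantIsReady(c) {
                return true
            }
        }
        return false
    }

    // allocateBandwidth splits bw across the subtree rooted at n: a ready
    // node keeps the whole share; otherwise the share is divided by weight
    // among children that have a ready stream somewhere beneath them.
    func allocateBandwidth(n *node, bw float64) {
        if n.ready {
            n.bandwidthShare = bw
            return
        }
        totalWeight := 0
        for _, c := range n.children {
            if c.ready || descendantIsReady(c) {
                totalWeight += c.weight
            }
        }
        if totalWeight == 0 {
            return // nothing in this subtree can send
        }
        for _, c := range n.children {
            if c.ready || descendantIsReady(c) {
                allocateBandwidth(c, bw*float64(c.weight)/float64(totalWeight))
            }
        }
    }

Re-running allocateBandwidth(root, 1.0) after every tree change is the step whose worst case is analyzed below: the totalWeight loop is linear in a node's fan-out, and the recursion is linear in the tree depth.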
>> The scheduler looks something like:
>>
>> func scheduleNextFrame() {
>>   if (tree changed since last frame written) {
>>     allocateBandwidth(root, 1.0)
>>     assign tokens and build a priority queue containing all nodes with
>>     allocated bandwidth
>>   }
>>   node = priorityQueue.head()
>>   node.consumeTokens() // consume up to frame size or flow-control limit
>>   priorityQueue.update(node)
>>   return node
>> }
>>
>> There are two steps in scheduleNextFrame. The first step updates the
>> bandwidth allocation if the tree changed since the last frame was written.
>> This is the most expensive step. I don't believe it's possible to implement
>> allocateBandwidth using fewer than O(n) worst-case steps. For example, if
>> the tree is mostly flat, meaning that most nodes are children of the same
>> node, then the loop to compute totalWeight is O(n). I believe Firefox
>> creates mostly-flat trees. Further, allocateBandwidth makes O(depth)
>> recursive steps, where "depth" is the maximum depth of the dependency tree.
>> If the tree is mostly linear, then O(depth) becomes O(n). Chrome creates
>> mostly-linear trees.
>>
>> The overall runtime depends on how often the tree changes. If the tree
>> changes rarely, the scheduler is cheap. If the tree changes frequently, the
>> scheduler is worst-case O(n). A tree has "changed" if it changed shape
>> (nodes are added/moved/removed) or if any node's ready state has changed.
>> Both kinds of changes can happen often in practice, suggesting that the
>> overall scheduler is worst-case O(n). For example, consider a server that
>> sends small responses (one or two DATA frames per response) -- each stream
>> will be closed after one or two DATA frames, so on average, the tree will
>> change shape every few frames. Further, it's possible for ready states to
>> change more frequently than you might think. In Go's HTTP/2 implementation,
>> a stream does not become ready until the application handler calls Write to
>> buffer response data. After that buffered data is serialized on the wire,
>> the stream transitions to "not ready" because the scheduler cannot know when
>> the next Write call will happen. The stream will transition back to "ready"
>> during the next Write call. Each Write call typically buffers about 32KB of
>> data. This is a "push" model, where the application handler "pushes" data
>> into the scheduler. I'm aware of one other HTTP/2 server that works
>> similarly. I suspect that frequent ready/not-ready transitions are common to
>> most HTTP/2 servers that use a "push" model. These servers will be more
>> susceptible to the O(n) worst case.
>>
>> Questions:
>>
>> 1. Am I missing a clever implementation, or is it true that a faithful
>> HTTP/2 scheduler necessarily requires O(n) operations per frame sent on the
>> wire, in the worst case? I could not find much discussion of this question
>> after a quick search. H2O claims to implement an O(1) scheduler; however,
>> the code seems to be worst-case O(depth) or O(n) -- see here, here, and
>> here.
>>
>> 2. If the above is correct, should I be concerned about the O(n) worst case?
>> I doubt that a typical web browsing session will trigger O(n) behavior
>> frequently, so I'm less concerned about the average case; I'm more concerned
>> about pathological cases or possible DoS vectors. Also, think about cases
>> where the "client" is actually a proxy server, meaning the HTTP/2 connection
>> may have many more concurrent streams than a typical browsing session.
>> For comparison, if you recall the predecessor to HTTP/2 (SPDY), a SPDY
>> scheduler could be trivially implemented in O(1), since SPDY used just
>> eight priority buckets.
>>
>> 3. If I should be concerned about an O(n) worst case, are there any
>> suggested mitigations beyond setting SETTINGS_MAX_CONCURRENT_STREAMS to a
>> smallish constant?
>
> --
> Kazuho Oku
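As an aside on the SPDY comparison in question 2: with a fixed number of priority buckets, dequeuing is O(1) because the scheduler scans only a constant number of buckets per frame. A rough sketch of such a bucket scheduler in Go follows; the stream type, its fields, and the method names are illustrative assumptions, not SPDY or Go net/http2 code.

    package scheduler

    // stream is an assumed, illustrative type.
    type stream struct {
        id       uint32
        priority int // 0 (highest) .. numBuckets-1 (lowest)
    }

    const numBuckets = 8 // matches the eight SPDY priority levels mentioned above

    // bucketScheduler keeps one FIFO queue per priority level.
    type bucketScheduler struct {
        buckets [numBuckets][]*stream
    }

    // push enqueues a stream that has data to send.
    func (s *bucketScheduler) push(st *stream) {
        s.buckets[st.priority] = append(s.buckets[st.priority], st)
    }

    // pop returns the next stream to send a frame for, or nil if none is
    // ready. The loop is over a constant number of buckets, so the cost is
    // O(1) in the number of streams.
    func (s *bucketScheduler) pop() *stream {
        for p := 0; p < numBuckets; p++ {
            if q := s.buckets[p]; len(q) > 0 {
                st := q[0]
                s.buckets[p] = q[1:]
                return st
            }
        }
        return nil
    }

The trade-off, which is the subject of this thread, is that fixed buckets cannot express HTTP/2's dependency tree or weights, so a faithful HTTP/2 scheduler cannot simply reuse this structure.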