Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?
Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> Wed, 25 January 2017 02:04 UTC
From: Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com>
Date: Wed, 25 Jan 2017 11:01:26 +0900
Message-ID: <CAPyZ6=LYEufwdqnXZGrP9o3dfuNVXkyL=Mj04s=PhQG9v0oLGg@mail.gmail.com>
To: Tom Bergan <tombergan@chromium.org>
Cc: Stefan Eissing <stefan.eissing@greenbytes.de>, "Kazu Yamamoto (山本和彦)" <kazu@iij.ad.jp>, HTTP Working Group <ietf-http-wg@w3.org>
Archived-At: <http://www.w3.org/mid/CAPyZ6=LYEufwdqnXZGrP9o3dfuNVXkyL=Mj04s=PhQG9v0oLGg@mail.gmail.com>
Hi,

On Wed, Jan 25, 2017 at 7:22 AM, Tom Bergan <tombergan@chromium.org> wrote:

> Thanks for the feedback. Sounds like the worst-case time really is O(n).
>
> > kazuhooku@
> > the constant for O(n) would be small enough so that it cannot be used
> > as an attack vector
>
> Don't you need to ensure that n is small? Otherwise even O(n) with a
> small constant factor can lead to degraded performance. For example, if
> SETTINGS_MAX_CONCURRENT_STREAMS = 20,000, I may end up computing a sum
> of 20,000 numbers every few frames, which is not terribly slow, but not
> terribly fast either.
>

If large n is a concern, the server can limit
SETTINGS_MAX_CONCURRENT_STREAMS to a reasonable number. We usually use
100, and it is pretty fast.

Best regards,
Tatsuhiro Tsujikawa

> > kazu@
> > http://www.mew.org/~kazu/material/2015-http2-priority2.pdf
> > http://www.mew.org/~kazu/doc/paper/http2-haskell-2016.pdf
>
> Thanks. IIUC, the algorithms described in both links are still at least
> O(depth), which can be O(n) for dependency trees generated by certain
> clients such as Chrome.
>
> > stefan.eissing@
> > The very same problem exists for stream processing in order to
> > generate response data.
>
> What did you mean by "stream processing"?
>
> On Tue, Jan 24, 2017 at 12:26 AM, Stefan Eissing
> <stefan.eissing@greenbytes.de> wrote:
>
>> The very same problem exists for stream processing in order to generate
>> response data.
>>
>> > Am 24.01.2017 um 08:53 schrieb Kazu Yamamoto (山本和彦) <kazu@iij.ad.jp>:
>> >
>> > Hi Tom,
>> >
>> > Probably, this material would help you:
>> >
>> > http://www.mew.org/~kazu/material/2015-http2-priority2.pdf
>> > http://www.mew.org/~kazu/doc/paper/http2-haskell-2016.pdf
>> >
>> > --Kazu
>> >
>> >> Regarding your question, I am unaware of a scheduler design that is
>> >> better than O(n) for both dequeuing (i.e. selecting the stream to
>> >> send) and for rescheduling a stream (especially a stream with many
>> >> descendants).
>> >>
>> >> While designing the scheduler of H2O (you are right in pointing out
>> >> the fact that it is O(depth)), I came up with two options. One was
>> >> to retain the vertical (i.e. parent-children) relationship between
>> >> the streams. The other was to squash the vertical relationships to
>> >> generate a one-dimensional list of streams ordered by priority.
>> >>
>> >> By taking the latter approach, you could create a scheduler that
>> >> dequeues at O(1). But such a scheduler would need to perform an O(N)
>> >> operation when receiving a priority frame or a stream-level window
>> >> update (in this case N is the number of direct and indirect
>> >> descendants of the reprioritized stream).
>> >>
>> >> Considering this, we chose to implement the scheduler of H2O as O(1)
>> >> weight-wise and O(n) depth-wise, but with a constant for O(n) small
>> >> enough that it cannot be used as an attack vector.
>> >>
>> >> 2017-01-24 9:39 GMT+09:00 Tom Bergan <tombergan@chromium.org>:
>> >>> I implemented the HTTP/2 response scheduler in Go's HTTP/2 server
>> >>> library. I'm trying to understand the worst-case behavior of that
>> >>> scheduler. I believe the worst-case behavior is necessarily O(n)
>> >>> operations per frame sent on the wire, where n is the number of
>> >>> streams in the dependency tree. Here is my rationale.
>> >>>
>> >>> The key operation is finding the highest-priority stream that is
>> >>> ready to send data.
>> >>>
>> >>> If we don't care about weights, and we don't care about balancing
>> >>> bandwidth usage across sibling nodes in a tree, then we can label
>> >>> each node with two fields: "ready" (true if the stream is ready to
>> >>> send data) and "depth" (the node's distance from the root of the
>> >>> tree). The scheduler must find a node with the smallest depth over
>> >>> all nodes with ready = true. It is fairly trivial to implement this
>> >>> in O(log n).
>> >>>
>> >>> Now, let's introduce weights.
>> >>> The scheduler must allocate bandwidth to all ready nodes, which
>> >>> happens recursively as follows:
>> >>>
>> >>> func allocateBandwidth(node, bw) {
>> >>>   if (node.ready) {
>> >>>     node.bandwidthShare = bw
>> >>>     return
>> >>>   }
>> >>>   totalWeight = 0
>> >>>   for (n in node.children) {
>> >>>     if (n.ready || descendantIsReady(n)) {
>> >>>       totalWeight += n.weight
>> >>>     }
>> >>>   }
>> >>>   for (n in node.children) {
>> >>>     if (n.ready || descendantIsReady(n)) {
>> >>>       allocateBandwidth(n, bw * n.weight / totalWeight)
>> >>>     }
>> >>>   }
>> >>> }
>> >>> allocateBandwidth(root, 1.0)
>> >>>
>> >>> I believe the above definition is a direct translation of RFC 7540
>> >>> Section 5.3.2 (also see this thread, which discussed bandwidth
>> >>> allocation). Given a complete bandwidth allocation, the server can
>> >>> translate each node's bandwidthShare to a number of tokens that are
>> >>> updated using a token bucket algorithm (or similar). The scheduler
>> >>> would then pick the node with the most available tokens. The
>> >>> scheduler looks something like:
>> >>>
>> >>> func scheduleNextFrame() {
>> >>>   if (tree changed since last frame written) {
>> >>>     allocateBandwidth(root, 1.0)
>> >>>     assign tokens and build a priority queue containing all nodes
>> >>>     with allocated bandwidth
>> >>>   }
>> >>>   node = priorityQueue.head()
>> >>>   node.consumeTokens() // consume up to frame size or flow-control limit
>> >>>   priorityQueue.update(node)
>> >>>   return node
>> >>> }
>> >>>
>> >>> There are two steps in scheduleNextFrame. The first step updates the
>> >>> bandwidth allocation if the tree changed since the last frame was
>> >>> written. This is the most expensive step. I don't believe it's
>> >>> possible to implement allocateBandwidth using fewer than O(n)
>> >>> worst-case steps. For example, if the tree is mostly flat, meaning
>> >>> that most nodes are children of the same node, then the loop to
>> >>> compute totalWeight is O(n).
>> >>> I believe Firefox creates mostly-flat trees. Further,
>> >>> allocateBandwidth makes O(depth) recursive steps, where "depth" is
>> >>> the maximum depth of the dependency tree. If the tree is
>> >>> mostly-linear, then O(depth) becomes O(n). Chrome creates
>> >>> mostly-linear trees.
>> >>>
>> >>> The overall runtime depends on how often the tree changes. If the
>> >>> tree changes rarely, the scheduler is cheap. If the tree changes
>> >>> frequently, the scheduler is worst-case O(n). A tree has "changed"
>> >>> if it changed shape (nodes are added/moved/removed) or if any
>> >>> node's ready state has changed. Both kinds of changes can happen
>> >>> often in practice, suggesting that the overall scheduler is
>> >>> worst-case O(n). For example, consider a server that sends small
>> >>> responses (one or two DATA frames per response) -- each stream will
>> >>> be closed after one or two DATA frames, so on average, the tree
>> >>> will change shape every few frames. Further, it's possible for
>> >>> ready states to change more frequently than you might think. In
>> >>> Go's HTTP/2 implementation, a stream does not become ready until
>> >>> the application handler calls Write to buffer response data. After
>> >>> that buffered data is serialized on the wire, the stream
>> >>> transitions to "not ready" because the scheduler cannot know when
>> >>> the next Write call will happen. The stream will transition back to
>> >>> "ready" during the next Write call. Each Write call typically
>> >>> buffers about 32KB of data. This is a "push" model, where the
>> >>> application handler "pushes" data into the scheduler. I'm aware of
>> >>> one other HTTP/2 server that works similarly. I suspect that
>> >>> frequent ready/not-ready transitions are common to most HTTP/2
>> >>> servers that use a "push" model. These servers will be more
>> >>> susceptible to the O(n) worst case.
>> >>>
>> >>> Questions:
>> >>>
>> >>> 1.
>> >>> Am I missing a clever implementation, or is it true that a faithful
>> >>> HTTP/2 scheduler necessarily requires O(n) operations per frame
>> >>> sent on the wire, in the worst case? I could not find much
>> >>> discussion of this question after a quick search. H2O claims to
>> >>> implement an O(1) scheduler; however, the code seems to be
>> >>> worst-case O(depth) or O(n) -- see here, here, and here.
>> >>>
>> >>> 2. If the above is correct, should I be concerned about the O(n)
>> >>> worst case? I doubt that a typical web browsing session will
>> >>> trigger O(n) behavior frequently, so I'm less concerned about the
>> >>> average case; I'm more concerned about pathological cases or
>> >>> possible DoS vectors. Also, think about cases where the "client" is
>> >>> actually a proxy server, meaning the HTTP/2 connection may have
>> >>> many more concurrent streams than a typical browsing session. For
>> >>> comparison, if you recall the predecessor to HTTP/2 (SPDY), a SPDY
>> >>> scheduler could be trivially implemented in O(1), since SPDY used
>> >>> just eight priority buckets.
>> >>>
>> >>> 3. If I should be concerned about an O(n) worst case, are there any
>> >>> suggested mitigations beyond setting
>> >>> SETTINGS_MAX_CONCURRENT_STREAMS to a smallish constant?
>> >>
>> >> --
>> >> Kazuho Oku
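[Editor's note: Tom's allocateBandwidth pseudocode translates almost directly into runnable Go. The sketch below is illustrative only -- the Node struct, its field names, and the example tree are assumptions made here, not the actual golang.org/x/net/http2 scheduler -- but it makes the O(n) cost concrete: descendantIsReady re-walks subtrees, and the weight sum touches every qualifying child.]

```go
package main

import (
	"fmt"
	"math"
)

// Node is one stream in the HTTP/2 dependency tree. RFC 7540 defines only
// the tree shape and the weight (1..256); the other fields are assumptions
// of this sketch.
type Node struct {
	id       uint32
	weight   float64 // stream weight (RFC 7540 allows 1..256)
	ready    bool    // stream has data queued and may send
	children []*Node
	share    float64 // computed bandwidth share in [0, 1]
}

// descendantIsReady reports whether any node below n is ready to send.
// This helper is itself O(subtree size), which is part of why the whole
// allocation pass is O(n) in the worst case.
func descendantIsReady(n *Node) bool {
	for _, c := range n.children {
		if c.ready || descendantIsReady(c) {
			return true
		}
	}
	return false
}

// allocateBandwidth splits bw among ready streams: a ready node keeps all
// of bw (starving its descendants), otherwise bw is divided among children
// in proportion to their weights, per RFC 7540 Section 5.3.2.
func allocateBandwidth(n *Node, bw float64) {
	if n.ready {
		n.share = bw
		return
	}
	total := 0.0
	for _, c := range n.children {
		if c.ready || descendantIsReady(c) {
			total += c.weight
		}
	}
	for _, c := range n.children {
		if c.ready || descendantIsReady(c) {
			allocateBandwidth(c, bw*c.weight/total)
		}
	}
}

func main() {
	// Example tree: root -> a (weight 3, ready)
	//                    -> b (weight 1, not ready) -> c (weight 16, ready)
	c := &Node{id: 5, weight: 16, ready: true}
	a := &Node{id: 1, weight: 3, ready: true}
	b := &Node{id: 3, weight: 1, children: []*Node{c}}
	root := &Node{children: []*Node{a, b}}

	allocateBandwidth(root, 1.0)
	fmt.Printf("a=%.2f c=%.2f\n", a.share, c.share) // a=0.75 c=0.25
	if math.Abs(a.share-0.75) > 1e-9 || math.Abs(c.share-0.25) > 1e-9 {
		panic("unexpected shares")
	}
}
```

A naive translation like this calls descendantIsReady repeatedly, so a real implementation would likely cache a per-node "has ready descendant" bit (and the per-parent weight sum) to shave the constant factor -- which is consistent with the small-constant O(n) tradeoff Kazuho describes above.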
- Is a faithful HTTP/2 response scheduler necessari… Tom Bergan
- Re: Is a faithful HTTP/2 response scheduler neces… Kazuho Oku
- Re: Is a faithful HTTP/2 response scheduler neces… Kazu Yamamoto ( 山本和彦 )
- Re: Is a faithful HTTP/2 response scheduler neces… Stefan Eissing
- Re: Is a faithful HTTP/2 response scheduler neces… Tom Bergan
- Re: Is a faithful HTTP/2 response scheduler neces… Kazu Yamamoto ( 山本和彦 )
- Re: Is a faithful HTTP/2 response scheduler neces… Tom Bergan
- Re: Is a faithful HTTP/2 response scheduler neces… Patrick McManus
- Re: Is a faithful HTTP/2 response scheduler neces… Kazuho Oku
- Re: Is a faithful HTTP/2 response scheduler neces… Tatsuhiro Tsujikawa
- Re: Is a faithful HTTP/2 response scheduler neces… Kazu Yamamoto ( 山本和彦 )
- Re: Is a faithful HTTP/2 response scheduler neces… Cory Benfield
- Re: Is a faithful HTTP/2 response scheduler neces… Stefan Eissing
- Re: Is a faithful HTTP/2 response scheduler neces… Amos Jeffries