Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?

Stefan Eissing <stefan.eissing@greenbytes.de> Tue, 24 January 2017 08:30 UTC

From: Stefan Eissing <stefan.eissing@greenbytes.de>
In-Reply-To: <20170124.165356.870174430965764062.kazu@iij.ad.jp>
Date: Tue, 24 Jan 2017 09:26:47 +0100
Cc: ietf-http-wg@w3.org
Message-Id: <900A5D6B-0752-470E-840C-4518D933DD09@greenbytes.de>
References: <CA+3+x5Ft8MfRduWp1RzJ9_qAgQiCujac8f5FLz81Xpyu3cTDxw@mail.gmail.com> <CANatvzz0AYxBDDyEbpvnq1xPd1-vrFazbnVt2usanXS0CTKbeA@mail.gmail.com> <20170124.165356.870174430965764062.kazu@iij.ad.jp>
To: "Kazu Yamamoto (山本和彦)" <kazu@iij.ad.jp>
Subject: Re: Is a faithful HTTP/2 response scheduler necessarily O(n) in the worst case?
Archived-At: <http://www.w3.org/mid/900A5D6B-0752-470E-840C-4518D933DD09@greenbytes.de>

The very same problem exists for stream processing in order to generate response data.

> On 24.01.2017 at 08:53, Kazu Yamamoto (山本和彦) <kazu@iij.ad.jp> wrote:
> 
> Hi Tom,
> 
> This material will probably help you:
> 
> http://www.mew.org/~kazu/material/2015-http2-priority2.pdf
> http://www.mew.org/~kazu/doc/paper/http2-haskell-2016.pdf
> 
> --Kazu
> 
>> Regarding your question, I am unaware of a scheduler design that is
>> better than O(n) for both dequeuing (i.e. selecting the stream to
>> send) and for rescheduling a stream (especially a stream with many
>> descendants).
>> 
>> While designing the scheduler of H2O (you are right in pointing out
>> the fact that it is O(depth)), I came up with two options. One was to
>> retain the vertical (i.e. parent-children) relationship between the
>> streams. The other was to squash the vertical relationships to
>> generate a one-dimensional list of streams ordered by priority.
>> 
>> By taking the latter approach, you could create a scheduler that
>> dequeues in O(1). But such a scheduler would need to perform an O(N)
>> operation when receiving a priority frame or a stream-level window
>> update (in this case, N is the number of direct and indirect
>> descendants of the reprioritized stream).
>> 
>> Considering this, we chose to implement the scheduler of H2O as O(1)
>> weight-wise and O(n) depth-wise, on the assumption that the constant
>> for O(n) would be small enough that it cannot be used as an attack
>> vector.
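>>
>> As a rough illustration of the squashed-list option above (the type and
>> field names and the child-priority rule are invented here, not H2O's
>> actual code): dequeuing just pops the head of one sorted list, while
>> reprioritization has to walk every descendant of the affected stream.
>>
>>   package flatsched
>>
>>   import "sort"
>>
>>   // A stream with an effective priority derived from the dependency tree.
>>   type stream struct {
>>       id       uint32
>>       priority float64   // higher value means send sooner
>>       children []*stream // kept only to find descendants when reprioritizing
>>   }
>>
>>   type flatScheduler struct {
>>       ordered []*stream // the one-dimensional list, kept sorted by priority
>>   }
>>
>>   // Dequeuing is O(1): the head of the list is the next stream to send.
>>   func (s *flatScheduler) next() *stream { return s.ordered[0] }
>>
>>   // Reprioritizing must visit the stream and every direct and indirect
>>   // descendant, so it is O(N) in the size of that subtree.
>>   func (s *flatScheduler) reprioritize(st *stream, prio float64) {
>>       var update func(*stream, float64)
>>       update = func(n *stream, p float64) {
>>           n.priority = p
>>           for _, c := range n.children {
>>               update(c, p/2) // placeholder rule for a child's priority
>>           }
>>       }
>>       update(st, prio)
>>       sort.Slice(s.ordered, func(i, j int) bool {
>>           return s.ordered[i].priority > s.ordered[j].priority
>>       })
>>   }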
>> 
>> 2017-01-24 9:39 GMT+09:00 Tom Bergan <tombergan@chromium.org>:
>>> I implemented the HTTP/2 response scheduler in Go's HTTP/2 server library.
>>> I'm trying to understand the worst-case behavior of that scheduler. I
>>> believe the worst-case behavior is necessarily O(n) operations per frame
>>> sent on the wire, where n is the number of streams in the dependency tree.
>>> Here is my rationale.
>>> 
>>> The key operation is finding the highest-priority stream that is ready to
>>> send data.
>>> 
>>> If we don't care about weights, and we don't care about balancing bandwidth
>>> usage across sibling nodes in a tree, then we can label each node with two
>>> fields: "ready" (true if the stream is ready to send data) and "depth" (the
>>> node's distance from the root of the tree). The scheduler must find a node
>>> with the smallest depth over all nodes with ready = true. It is fairly
>>> trivial to implement this in O(log n).
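>>>
>>> As a minimal sketch of that O(log n) step (the types and names below are
>>> invented for illustration), the ready streams can sit in a min-heap keyed
>>> by depth, so the shallowest ready stream is always at the top:
>>>
>>>   package depthsched
>>>
>>>   import "container/heap"
>>>
>>>   type stream struct {
>>>       depth int // distance from the root of the dependency tree
>>>       index int // position in the heap, maintained by Swap/Push
>>>   }
>>>
>>>   // byDepth is a min-heap of ready streams ordered by depth.
>>>   type byDepth []*stream
>>>
>>>   func (h byDepth) Len() int           { return len(h) }
>>>   func (h byDepth) Less(i, j int) bool { return h[i].depth < h[j].depth }
>>>   func (h byDepth) Swap(i, j int) {
>>>       h[i], h[j] = h[j], h[i]
>>>       h[i].index, h[j].index = i, j
>>>   }
>>>   func (h *byDepth) Push(x interface{}) {
>>>       s := x.(*stream)
>>>       s.index = len(*h)
>>>       *h = append(*h, s)
>>>   }
>>>   func (h *byDepth) Pop() interface{} {
>>>       old := *h
>>>       s := old[len(old)-1]
>>>       *h = old[:len(old)-1]
>>>       return s
>>>   }
>>>
>>>   // Both ready-state transitions are O(log n); the next stream to send
>>>   // is always h[0].
>>>   func markReady(h *byDepth, s *stream)    { heap.Push(h, s) }
>>>   func markNotReady(h *byDepth, s *stream) { heap.Remove(h, s.index) }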
>>> 
>>> Now, let's introduce weights. The scheduler must allocate bandwidth to all
>>> ready nodes, which happens recursively as follows:
>>> 
>>>  func allocateBandwidth(node, bw) {
>>>    if (node.ready) {
>>>      node.bandwidthShare = bw
>>>      return
>>>    }
>>>    totalWeight = 0
>>>    for (n in node.children) {
>>>      if (n.ready || descendantIsReady(n)) {
>>>        totalWeight += n.weight
>>>      }
>>>    }
>>>    for (n in node.children) {
>>>      if (n.ready || descendantIsReady(n)) {
>>>        allocateBandwidth(n, bw * n.weight / totalWeight)
>>>      }
>>>    }
>>>  }
>>>  allocateBandwidth(root, 1.0)
>>> 
>>> I believe the above definition is a direct translation of RFC 7540 Section
>>> 5.3.2 (also see this thread, which discussed bandwidth allocation). Given a
>>> complete bandwidth allocation, the server can translate each node's
>>> bandwidthShare to a number of tokens that are updated using a token bucket
>>> algorithm (or similar). The scheduler would then pick the node with the most
>>> available tokens. The scheduler looks something like:
>>> 
>>>  func scheduleNextFrame() {
>>>    if (tree changed since last frame written) {
>>>      allocateBandwidth(root, 1.0)
>>>      assign tokens and build a priority queue containing all nodes with allocated bandwidth
>>>    }
>>>    node = priorityQueue.head()
>>>    node.consumeTokens() // consume up to frame size or flow-control limit
>>>    priorityQueue.update(node)
>>>    return node
>>>  }
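>>>
>>> To make the token bookkeeping concrete, here is a rough sketch of the two
>>> steps the pseudocode above leaves abstract (the names are invented and
>>> this is not the actual Go server code); the priority queue itself would
>>> simply be keyed by each node's token count:
>>>
>>>   package tokens
>>>
>>>   type node struct {
>>>       bandwidthShare float64 // result of allocateBandwidth
>>>       tokens         float64 // bytes this node may currently send
>>>   }
>>>
>>>   const refillBytes = 64 * 1024 // arbitrary refill quantum for this sketch
>>>
>>>   // assignTokens runs only when the tree has changed: each node receives
>>>   // a share of the refill quantum proportional to its allocated bandwidth.
>>>   func assignTokens(nodes []*node) {
>>>       for _, n := range nodes {
>>>           n.tokens += n.bandwidthShare * refillBytes
>>>       }
>>>   }
>>>
>>>   // consumeTokens charges a node for one DATA frame, capped by the frame
>>>   // size and the stream's flow-control window, and returns how many bytes
>>>   // may actually be sent.
>>>   func consumeTokens(n *node, frameSize, flowWindow int) int {
>>>       send := frameSize
>>>       if flowWindow < send {
>>>           send = flowWindow
>>>       }
>>>       if int(n.tokens) < send {
>>>           send = int(n.tokens)
>>>       }
>>>       n.tokens -= float64(send)
>>>       return send
>>>   }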
>>> 
>>> There are two steps in scheduleNextFrame. The first step updates the
>>> bandwidth allocation if the tree changed since the last frame was written.
>>> This is the most expensive step. I don't believe it's possible to implement
>>> allocateBandwidth using fewer than O(n) worst-case steps. For example, if
>>> the tree is mostly flat, meaning that most nodes are children of the same
>>> node, then the loop to compute totalWeight is O(n). I believe Firefox
>>> creates mostly-flat trees. Further, allocateBandwidth makes O(depth)
>>> recursive steps, where "depth" is the maximum depth of the dependency tree.
>>> If the tree is mostly-linear, then O(depth) becomes O(n). Chrome creates
>>> mostly-linear trees.
>>> 
>>> The overall runtime depends on how often the tree changes. If the tree
>>> changes rarely, the scheduler is cheap. If the tree changes frequently, the
>>> scheduler is worst-case O(n). A tree has "changed" if it changed shape
>>> (nodes are added/moved/removed) or if any node's ready state has changed.
>>> Both kinds of changes can happen often in practice, suggesting that the
>>> overall scheduler is worst-case O(n). For example, consider a server that
>>> sends small responses (one or two DATA frames per response) -- each stream
>>> will be closed after one or two DATA frames, so on average, the tree will
>>> change shape every few frames. Further, it's possible for ready states to
>>> change more frequently than you might think. In Go's HTTP/2 implementation,
>>> a stream does not become ready until the application handler calls Write to
>>> buffer response data. After that buffered data is serialized on the wire,
>>> the stream transitions to "not ready" because the scheduler cannot know when
>>> the next Write call will happen. The stream will transition back to "ready"
>>> during the next Write call. Each Write call typically buffers about 32KB of
>>> data. This is a "push" model, where the application handler "pushes" data
>>> into the scheduler. I'm aware of one other HTTP/2 server that works
>>> similarly. I suspect that frequent ready/not-ready transitions are common to
>>> most HTTP/2 servers that use a "push" model. These servers will be more
>>> susceptible to the O(n) worst case.
>>> 
>>> Questions:
>>> 
>>> 1. Am I missing a clever implementation, or is it true that a faithful
>>> HTTP/2 scheduler necessarily requires O(n) operations per frame sent on the
>>> wire, in the worst case? I could not find much discussion of this question
>>> after a quick search. H2O claims to implement an O(1) scheduler; however,
>>> the code seems to be worst-case O(depth) or O(n) -- see here, here, and
>>> here.
>>> 
>>> 2. If the above is correct, should I be concerned about the O(n) worst case?
>>> I doubt that a typical web browsing session will trigger O(n) behavior
>>> frequently, so I'm less concerned about the average case; I'm more concerned
>>> about pathological cases or possible DoS vectors. Also, think about cases
>>> where the "client" is actually a proxy server, meaning the HTTP/2 connection
>>> may have many more concurrent streams than a typical browsing session. For
>>> comparison, if you recall the predecessor to HTTP/2 (SPDY), a SPDY
>>> scheduler could be trivially implemented in O(1), since SPDY used just
>>> eight priority buckets (a rough sketch of such a bucket scheduler follows
>>> these questions).
>>> 
>>> 3. If I should be concerned about an O(n) worst case, are there any
>>> suggested mitigations beyond setting SETTINGS_MAX_CONCURRENT_STREAMS to a
>>> smallish constant?
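>>>
>>> For reference, the kind of fixed-bucket scheduler mentioned in question 2
>>> could be sketched as follows (the names here are invented):
>>>
>>>   package buckets
>>>
>>>   // With only eight priority levels there is one FIFO per level, and
>>>   // both enqueue and dequeue take a constant amount of work.
>>>   type bucketScheduler struct {
>>>       buckets [8][]uint32 // ready stream IDs, indexed by priority level
>>>   }
>>>
>>>   func (s *bucketScheduler) enqueue(streamID uint32, prio int) {
>>>       s.buckets[prio] = append(s.buckets[prio], streamID)
>>>   }
>>>
>>>   // dequeue returns the next ready stream from the highest non-empty
>>>   // bucket; scanning eight buckets is O(1).
>>>   func (s *bucketScheduler) dequeue() (uint32, bool) {
>>>       for p := range s.buckets {
>>>           if len(s.buckets[p]) > 0 {
>>>               id := s.buckets[p][0]
>>>               s.buckets[p] = s.buckets[p][1:]
>>>               return id, true
>>>           }
>>>       }
>>>       return 0, false
>>>   }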
>> 
>> 
>> 
>> -- 
>> Kazuho Oku
>> 
>