Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams

James M Snell <> Mon, 29 April 2013 21:18 UTC

From: James M Snell <>
To: Roberto Peon <>
Cc: ChanWilliam(陈智昌) <>,, Martin Thomson <>

+1 on this.  I like this approach.
On Apr 29, 2013 2:15 PM, "Roberto Peon" <> wrote:

> I had thought to provide no explicit limit for PUSH_PROMISE, just as there
> is no limit to the size of a webpage, or the number of links upon it.
> The memory requirements for PUSH are similar or the same (push should
> consume a single additional bit of overhead per URL, when one considers
> that the URL must be parsed, enqueued, etc. anyway).
> If the browser implementation isn't efficient, or the server is for some
> unknown reason being stupid and attempting to DoS the browser with many
> resources that it will never use, then the client sends RST_STREAM for the
> ones it doesn't want and makes the requests on its own. All tidy.
> As for PUSH'd streams, the easiest solution is likely to assume that the
> stream starts out in a half-closed state.
> -=R
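
The RST_STREAM escape hatch Roberto describes can be sketched roughly like this. This is a hypothetical client model, not any real HTTP/2 library API: frames are reduced to plain method calls, and the class and attribute names are invented for illustration.

```python
# Hypothetical sketch: accept pushed resources we can use, and answer
# unwanted PUSH_PROMISEs with RST_STREAM. Frames are modeled as method
# calls; no real I/O happens here.

REFUSED_STREAM = 0x7  # RST_STREAM error code meaning "I won't use this"

class PushAwareClient:
    def __init__(self, wanted_paths):
        self.wanted = set(wanted_paths)
        self.accepted = {}     # promised stream id -> path we will consume
        self.sent_resets = []  # (stream_id, error_code) RST_STREAMs "sent"

    def on_push_promise(self, promised_stream_id, path):
        """Handle an incoming PUSH_PROMISE for `path`."""
        if path in self.wanted:
            self.accepted[promised_stream_id] = path
        else:
            # Server pushed something we'll never use: reset it, and the
            # client is free to make its own request instead.
            self.sent_resets.append((promised_stream_id, REFUSED_STREAM))

client = PushAwareClient(wanted_paths={"/style.css"})
client.on_push_promise(2, "/style.css")    # kept
client.on_push_promise(4, "/unwanted.js")  # reset
```

The point of the sketch is only that rejecting an unwanted promise is cheap and local: no coordination with the server is needed beyond the reset itself.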
> On Mon, Apr 29, 2013 at 12:33 PM, William Chan (陈智昌) <
>> wrote:
>> On Mon, Apr 29, 2013 at 3:46 PM, James M Snell <> wrote:
>>> On Apr 29, 2013 11:36 AM, "William Chan (陈智昌)" <>
>>> wrote:
>>> >
>>> [snip]
>>> >
>>> >
>>> > Oops, forgot about that. See, the issue with that is now we've made
>>> PUSH_PROMISE potentially as expensive as a HEADERS frame, since it does
>>> more than just simple stream id allocation. I guess it's not really a huge
>>> issue, since if it's used correctly (in the manner you described), then it
>>> shouldn't be too expensive. If clients attempt to abuse it, then servers
>>> should probably treat it the same way they treat people trying to abuse
>>> header compression in all other frames with a header block, and kill the
>>> connection accordingly.
>>> >
>>> Not just "potentially" as expensive. As soon as we get a push promise,
>>> we need to allocate state and hold onto it for an indefinite period of
>>> time. We do not yet know exactly when that compression context can be let
>>> go, because it has not yet been bound to stream state. Do push streams all
>>> share the same compression state? Do those share the same compression state
>>> as the originating stream? The answers might be obvious, but they haven't
>>> yet been written down.
>> I guess I don't see per-stream state as being that expensive. Compression
>> contexts are a fixed cost on a per-connection basis, meaning that
>> additional streams don't add to that state. The main cost, as I see it, is
>> the decompressed headers. I said "potentially" since that basically only
>> means the URL (unless there are other headers important for caching due to
>> Vary), and additional headers can come in the HEADERS frame. Also,
>> PUSH_PROMISE doesn't require allocating other state, like backend/DB
>> connections, if you only want to be able to handle
>> MAX_CONCURRENT_STREAMS of those backend connections in parallel.
>> If this isn't specified, then we should specify it, but I've always
>> understood the header compression contexts to be directional and to apply
>> to all frames carrying headers in a given direction. Therefore there should
>> be two compression contexts per connection, one for header blocks being
>> sent and one for header blocks being received. If this is controversial,
>> let's fork a thread and discuss it.
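
That directional model can be illustrated as follows: exactly two compression contexts per connection, one per direction, shared by every frame that carries a header block. Here zlib stands in for the actual header compressor (as SPDY used); the class and method names are invented for the example.

```python
# Sketch: per-connection, per-direction header compression contexts.
# Adding streams does not add compression state, since HEADERS,
# PUSH_PROMISE, etc. all run through the same two contexts.
import zlib

class ConnectionHeaderCodec:
    def __init__(self):
        # Two contexts for the whole connection, however many streams open.
        self._send_ctx = zlib.compressobj()
        self._recv_ctx = zlib.decompressobj()

    def compress_block(self, header_block: bytes) -> bytes:
        # Sync-flush so each header block is decodable on arrival while
        # the shared context (and its history) persists across blocks.
        out = self._send_ctx.compress(header_block)
        return out + self._send_ctx.flush(zlib.Z_SYNC_FLUSH)

    def decompress_block(self, data: bytes) -> bytes:
        return self._recv_ctx.decompress(data)

# Each endpoint holds one codec per connection; blocks decode in order.
a, b = ConnectionHeaderCodec(), ConnectionHeaderCodec()
wire = a.compress_block(b":path: /index.html\n")
assert b.decompress_block(wire) == b":path: /index.html\n"
```

A consequence of the shared context, as noted upthread, is that header blocks must be decompressed in the order sent, which is part of why an unbounded flood of PUSH_PROMISE frames cannot simply be skipped.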
>>>  >>
>>> >>
>>> >> > As far as the potential problem above, the root problem is that
>>> when you
>>> >> > have limits you can have hangs. We see this all the time today with
>>> browsers
>>> >> > (it's the only reason people do domain sharding: so they can bypass
>>> limits). I'm
>>> >> > not sure I see the value of introducing the new proposed limits.
>>> They don't
>>> >> > solve the hangs, and I don't think the granularity addresses any of
>>> the
>>> >> > costs in a finer grained manner. I'd like to hear clarification on
>>> what
>>> >> > costs the new proposed limits will address.
>>> >>
>>> >> I don't believe that the proposal improves the situation enough (or at
>>> >> all) to justify the additional complexity.  That's something that you
>>> >> need to assess for yourself.  This proposal provides more granular
>>> >> control, but it doesn't address the core problem, which is that you
>>> >> and I can only observe each other's actions after some delay, which
>>> >> means that we can't coordinate those actions perfectly. Nor can we
>>> >> build a perfect model of the other upon which to observe and act.
>>> >> The usual protocol issue.
>>> >
>>> >
>>> > OK then. My proposal is to add a new limit for PUSH_PROMISE frames
>>> though, separately from the MAX_CONCURRENT_STREAMS limit, since
>>> PUSH_PROMISE exists as a promise to create a stream, explicitly so we don't
>>> have to count it toward the existing MAX_CONCURRENT_STREAMS limit (I
>>> searched the spec and this seems to be inadequately specced). Roberto and I
>>> discussed that before and may have written an email somewhere in spdy-dev@,
>>> but I don't think we've ever raised it here.
>>> >
>>> Well, there is an issue tracking it in the GitHub repo now, at least.
>>> As currently defined in the spec, it definitely needs to be addressed.
>> Great. You guys are way better than I am about tracking all known issues.
>> I just have it mapped fuzzily in my head :)
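
For illustration, the separate-limit accounting William proposes might look like the sketch below. MAX_CONCURRENT_PROMISES is an invented name (no such setting exists in the spec as discussed here), and real frame handling is omitted; the point is only that a promise consumes its own budget and counts toward MAX_CONCURRENT_STREAMS only once it becomes an actual stream.

```python
# Hypothetical accounting: promised-but-unopened pushes tracked against
# their own limit, separate from MAX_CONCURRENT_STREAMS.

class StreamLimits:
    def __init__(self, max_streams, max_promises):
        self.max_streams = max_streams    # MAX_CONCURRENT_STREAMS
        self.max_promises = max_promises  # invented MAX_CONCURRENT_PROMISES
        self.open_streams = set()
        self.promised = set()

    def can_open_stream(self):
        return len(self.open_streams) < self.max_streams

    def promise(self, stream_id):
        # PUSH_PROMISE consumes only the promise budget.
        if len(self.promised) >= self.max_promises:
            raise RuntimeError("too many outstanding promises")
        self.promised.add(stream_id)

    def fulfill(self, stream_id):
        # The promise becomes a real stream only when its headers arrive;
        # only then does it count toward MAX_CONCURRENT_STREAMS.
        self.promised.discard(stream_id)
        self.open_streams.add(stream_id)

limits = StreamLimits(max_streams=100, max_promises=10)
limits.promise(2)
assert limits.can_open_stream()  # promises don't consume stream slots
limits.fulfill(2)
assert 2 in limits.open_streams
```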