Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams

Roberto Peon <> Mon, 29 April 2013 21:40 UTC

From: Roberto Peon <>
To: William Chan (陈智昌) <>
Cc: James M Snell <>, HTTP Working Group <>, Martin Thomson <>

At worst, we burn a flag which states it is half-closed or unidirectional,
or provide some other information which identifies the IANA port number for
the overlaid protocol, or something.
Anyway, *shrug*.

On Mon, Apr 29, 2013 at 2:32 PM, William Chan (陈智昌) wrote:

> On Mon, Apr 29, 2013 at 6:17 PM, James M Snell <> wrote:
>> +1 on this.  I like this approach.
>>  On Apr 29, 2013 2:15 PM, "Roberto Peon" <> wrote:
>>> I had thought to provide no explicit limit for PUSH_PROMISE, just as
>>> there is no limit to the size of a webpage, or the number of links upon it.
>>> The memory requirements for PUSH are similar or the same (push should
>>> consume a single additional bit of overhead per url, when one considers
>>> that the URL should be parsed, enqueued, etc.).
>>> If the browser isn't implemented efficiently, or the server is for some
>>> unknown reason being stupid and attempting to DoS the browser with many
>>> resources that it will never use, then the client sends RST_STREAM for the
>>> ones it doesn't want, and makes the requests on its own. All tidy.
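[Editor's note: the client-side behavior Roberto describes, cancelling unwanted pushes with RST_STREAM, can be sketched as below. The class, callback shape, and frame tuples are illustrative only, not taken from any real HTTP/2 library.]

```python
# Sketch: on receiving a PUSH_PROMISE for a resource it does not want,
# the client cancels the promised stream with RST_STREAM and is free to
# issue its own request later. All names here are hypothetical.

CANCEL = 0x8  # RST_STREAM error code meaning "stream no longer needed"

class PushAwareClient:
    def __init__(self, send_frame, wanted):
        self.send_frame = send_frame  # callback that writes a frame tuple
        self.wanted = wanted          # predicate: do we want this URL?
        self.accepted = {}            # promised stream id -> URL

    def on_push_promise(self, promised_stream_id, url):
        if self.wanted(url):
            # Keep the promise; data will arrive on this stream.
            self.accepted[promised_stream_id] = url
        else:
            # Reject the push; the server stops sending data for it.
            self.send_frame(("RST_STREAM", promised_stream_id, CANCEL))
```

A client that only wants stylesheets would simply pass `wanted=lambda u: u.endswith(".css")` and let everything else be reset.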
> I don't feel too strongly here. I do feel like this is more of an edge
> case, possibly important for forward proxies (or reverse proxies speaking
> to backends over a multiplexed channel like HTTP/2). It doesn't really
> matter for my browser, so unless servers chime in and say they'd prefer a
> limit, I'm fine with this.
>>> As for PUSH'd streams, the easiest solution is likely to assume that the
>>> stream starts out in a half-closed state.
> I looked into our earlier email threads and indeed this is what we agreed
> on (
> I voiced some mild objection since if you view the HTTP/2 framing layer as
> a transport for another application protocol, then bidirectional server
> initiated streams might be nice. But in absence of any such protocol, this
> is a nice simplification.
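[Editor's note: the simplification agreed on above, pushed streams starting out half-closed, can be sketched as below. The state names are mine, loosely modeled on the draft's terminology, not quoted from it.]

```python
# Sketch: from the client's point of view, a server-pushed stream is
# unidirectional -- it begins life half-closed on the client's sending
# side, so the client may only receive on it, never send.

from enum import Enum

class StreamState(Enum):
    OPEN = "open"
    HALF_CLOSED_LOCAL = "half-closed (local)"  # we may receive, not send
    CLOSED = "closed"

def initial_state(initiated_by_peer_push: bool) -> StreamState:
    # Client-initiated streams start fully open; pushed streams do not.
    if initiated_by_peer_push:
        return StreamState.HALF_CLOSED_LOCAL
    return StreamState.OPEN
```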
>> -=R
>>> On Mon, Apr 29, 2013 at 12:33 PM, William Chan (陈智昌) <
>>>> wrote:
>>>> On Mon, Apr 29, 2013 at 3:46 PM, James M Snell <> wrote:
>>>>> On Apr 29, 2013 11:36 AM, "William Chan (陈智昌)" <>
>>>>> wrote:
>>>>> >
>>>>> [snip]
>>>>> >
>>>>> >
>>>>> > Oops, forgot about that. See, the issue with that is now we've made
>>>>> PUSH_PROMISE as potentially expensive as a HEADERS frame, since it does
>>>>> more than just simple stream id allocation. I guess it's not really a huge
>>>>> issue, since if it's used correctly (in the manner you described), then it
>>>>> shouldn't be too expensive. If clients attempt to abuse it, then servers
>>>>> should probably treat it in a similar manner as they treat people trying to
>>>>> abuse header compression in all other frames with the header block, and
>>>>> kill the connection accordingly.
>>>>> >
>>>>> Not just "potentially" as expensive. As soon as we get a push
>>>>> promise we need to allocate state and hold onto it for an indefinite period
>>>>> of time. We do not yet know exactly when that compression context can be
>>>>> let go because it has not yet been bound to stream state.  Do push streams
>>>>> all share the same compression state? Do those share the same compression
>>>>> state as the originating stream? The answers might be obvious but they
>>>>> haven't yet been written down.
>>>> I guess I don't see per-stream state as being that expensive.
>>>> Compression contexts are a fixed state on a per-connection basis, meaning
>>>> that additional streams don't add to that state. The main cost, as I see
>>>> it, is the decompressed headers. I said potentially since that basically
>>>> only means the URL (unless there are other headers important for caching
>>>> due to Vary), and additional headers can come in the HEADERS frame. Also,
>>>> PUSH_PROMISE doesn't require allocating other state, like backend/DB
>>>> connections, if you only want to be able to handle
>>>> (#MAX_CONCURRENT_STREAMS) of those backend connections in parallel.
>>>> If they're not specified, then we should specify them, but I've always
>>>> understood the header compression contexts to be directional and apply to
>>>> all frames sending headers in a direction. Therefore there should be two
>>>> compression contexts in a connection, one for header blocks being sent and
>>>> one for header blocks being received. If this is controversial, let's fork
>>>> a thread and discuss it.
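[Editor's note: William's reading, two per-connection directional compression contexts shared by all streams, can be sketched as below. zlib stands in for whatever compressor the framing layer actually uses; the class is hypothetical.]

```python
# Sketch: compression contexts are per-connection and directional.
# One context compresses every outgoing header block, another
# decompresses every incoming one, regardless of which stream the
# block belongs to -- so additional streams add no compression state.

import zlib

class HeaderCompressionContexts:
    def __init__(self):
        self._send = zlib.compressobj()    # shared by all outgoing header blocks
        self._recv = zlib.decompressobj()  # shared by all incoming header blocks

    def compress_outgoing(self, header_block: bytes) -> bytes:
        # Sync-flush so each block is decodable as soon as it arrives.
        return self._send.compress(header_block) + self._send.flush(zlib.Z_SYNC_FLUSH)

    def decompress_incoming(self, data: bytes) -> bytes:
        return self._recv.decompress(data)
```

Because the context carries over between blocks, later header blocks on any stream benefit from (and depend on) the history of earlier ones in the same direction.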
>>>>>  >>
>>>>> >>
>>>>> >> > As far as the potential problem above, the root problem is that
>>>>> >> > when you have limits you can have hangs. We see this all the time
>>>>> >> > today with browsers (it's the only reason people do domain sharding,
>>>>> >> > so they can bypass limits). I'm not sure I see the value of
>>>>> >> > introducing the new proposed limits. They don't solve the hangs, and
>>>>> >> > I don't think the granularity addresses any of the costs in a
>>>>> >> > finer-grained manner. I'd like to hear clarification on what costs
>>>>> >> > the new proposed limits will address.
>>>>> >>
>>>>> >> I don't believe that the proposal improves the situation enough (or
>>>>> >> at all) to justify the additional complexity. That's something that
>>>>> >> you need to assess for yourself. This proposal provides more granular
>>>>> >> control, but it doesn't address the core problem, which is that you
>>>>> >> and I can only observe each other's actions after some delay, which
>>>>> >> means that we can't coordinate those actions perfectly. Nor can we
>>>>> >> build a perfect model of the other upon which to observe and act.
>>>>> >> The usual protocol issue.
>>>>> >
>>>>> >
>>>>> > OK then. My proposal is to add a new limit for PUSH_PROMISE frames,
>>>>> > separate from the MAX_CONCURRENT_STREAMS limit, since PUSH_PROMISE
>>>>> > exists as a promise to create a stream, explicitly so we don't have
>>>>> > to count it toward the existing MAX_CONCURRENT_STREAMS limit (I
>>>>> > searched the spec and this seems to be inadequately specced). Roberto
>>>>> > and I discussed this before and may have written an email somewhere
>>>>> > in spdy-dev@, but I don't think we've ever raised it here.
>>>>> >
>>>>> Well, there is an issue tracking it in the GitHub repo now, at
>>>>> least. As currently defined in the spec, it definitely needs to be
>>>>> addressed.
>>>> Great. You guys are way better than I am about tracking all known
>>>> issues. I just have it mapped fuzzily in my head :)
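[Editor's note: the proposal discussed above, a limit on outstanding PUSH_PROMISEs tracked separately from MAX_CONCURRENT_STREAMS, might look like the sketch below on the receiving side. The class and the separate promise limit are hypothetical; no such setting exists in the draft.]

```python
# Sketch: promises count against their own budget until the promised
# stream actually opens, at which point the stream consumes a slot
# under MAX_CONCURRENT_STREAMS instead.

class PromiseBudget:
    def __init__(self, max_promises: int, max_streams: int):
        self.max_promises = max_promises  # hypothetical separate limit
        self.max_streams = max_streams    # MAX_CONCURRENT_STREAMS
        self.pending_promises = set()     # promised but not yet opened
        self.open_streams = set()

    def on_push_promise(self, stream_id: int) -> bool:
        # A promise reserves only the cheap per-promise state.
        if len(self.pending_promises) >= self.max_promises:
            return False  # over budget; receiver may reset or error
        self.pending_promises.add(stream_id)
        return True

    def on_headers(self, stream_id: int) -> bool:
        # The promised stream opens for real and moves to the stream limit.
        if len(self.open_streams) >= self.max_streams:
            return False
        self.pending_promises.discard(stream_id)
        self.open_streams.add(stream_id)
        return True
```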