Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams

William Chan (陈智昌) <> Mon, 29 April 2013 19:35 UTC

To: James M Snell <>
Cc: HTTP Working Group <>, Martin Thomson <>

On Mon, Apr 29, 2013 at 3:46 PM, James M Snell <> wrote:

> On Apr 29, 2013 11:36 AM, "William Chan (陈智昌)" <>
> wrote:
> >
> [snip]
> >
> >
> > Oops, forgot about that. See, the issue with that is now we've made
> > PUSH_PROMISE as potentially expensive as a HEADERS frame, since it does
> > more than just simple stream id allocation. I guess it's not really a
> > huge issue, since if it's used correctly (in the manner you described),
> > then it shouldn't be too expensive. If clients attempt to abuse it, then
> > servers should probably treat it in a similar manner as they treat
> > people trying to abuse header compression in all other frames with the
> > header block, and kill the connection accordingly.
> >
> Not just "potentially" as expensive. As soon as we get a push promise
> we need to allocate state and hold onto it for an indefinite period of
> time. We do not yet know exactly when that compression context can be let
> go because it has not yet been bound to stream state. Do push streams all
> share the same compression state? Do those share the same compression state
> as the originating stream? The answers might be obvious but they haven't
> yet been written down.

I guess I don't see per-stream state as being that expensive. Compression
contexts are fixed per-connection state, so additional streams don't add to
it. The main cost, as I see it, is the decompressed headers. I said
"potentially" because a PUSH_PROMISE basically only carries the URL (unless
there are other headers important for caching due to Vary); any additional
headers can come in the HEADERS frame. Also, PUSH_PROMISE doesn't require
allocating other state, like backend/DB connections, if you only want to be
able to handle MAX_CONCURRENT_STREAMS of those backend connections in
parallel.
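To make the asymmetry concrete, here is a minimal Python sketch of the
accounting described above. All names (PushState, on_push_promise,
on_stream_open, the backend-connection placeholder) are hypothetical, not
from any draft: the point is only that a PUSH_PROMISE records cheap state
(the reserved stream id and the promised URL), while the expensive
per-stream resources are allocated only when the promised stream actually
opens, and only up to the concurrency limit.

```python
class PushState:
    """Sketch (hypothetical names): cheap promise-time state vs.
    expensive open-time state, capped at MAX_CONCURRENT_STREAMS."""

    MAX_CONCURRENT_STREAMS = 100

    def __init__(self):
        self.promised = {}  # stream_id -> promised URL (cheap to hold)
        self.active = {}    # stream_id -> backend resource (expensive)

    def on_push_promise(self, stream_id, url):
        # Cheap: just the reserved id plus the URL needed for cache
        # matching; no backend/DB connection is allocated yet.
        self.promised[stream_id] = url

    def on_stream_open(self, stream_id):
        # Expensive state is deferred to this point, and capped.
        if len(self.active) >= self.MAX_CONCURRENT_STREAMS:
            raise RuntimeError("concurrency limit reached")
        url = self.promised.pop(stream_id)
        self.active[stream_id] = f"backend-conn-for:{url}"  # placeholder
```

Under this model, an arbitrary number of promises costs only one small dict
entry each, which is why the promise itself can stay cheap.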

If the compression contexts aren't specified, then we should specify them,
but I've always understood header compression contexts to be directional,
applying to all frames that carry a header block in a given direction.
Therefore there should be two compression contexts per connection: one for
header blocks being sent and one for header blocks being received. If this
is controversial, let's fork a thread and discuss it.
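The claim that compression state is per-connection rather than per-stream
can be sketched as follows. This is illustrative only: the class and method
names are hypothetical, and zlib stands in for whatever header compressor
the spec ends up with. The point is that the two context objects below are
the only compression state on the connection, shared by every stream's
header blocks in that direction, so opening more streams adds nothing to it.

```python
import zlib


class Connection:
    """Sketch (hypothetical names): exactly two header-compression
    contexts per connection, one per direction, shared by all streams."""

    def __init__(self):
        # zlib is a stand-in for the real header compressor; these two
        # objects are the ONLY compression state, regardless of how many
        # streams the connection carries.
        self._send_ctx = zlib.compressobj()
        self._recv_ctx = zlib.decompressobj()

    def compress_header_block(self, block: bytes) -> bytes:
        # Every outbound HEADERS / PUSH_PROMISE header block goes
        # through the same send-direction context.
        out = self._send_ctx.compress(block)
        return out + self._send_ctx.flush(zlib.Z_SYNC_FLUSH)

    def decompress_header_block(self, block: bytes) -> bytes:
        # Every inbound header block shares the receive-direction context.
        return self._recv_ctx.decompress(block)
```

Because the contexts are stateful and shared, both endpoints must process
header blocks for a direction in order, which is exactly why binding them
to individual streams wouldn't work.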

> >>
> >>
> >> > As far as the potential problem above, the root problem is that when
> >> > you have limits you can have hangs. We see this all the time today
> >> > with browsers (it's the only reason people do domain sharding: so
> >> > they can bypass limits). I'm not sure I see the value of introducing
> >> > the new proposed limits. They don't solve the hangs, and I don't
> >> > think the granularity addresses any of the costs in a finer grained
> >> > manner. I'd like to hear clarification on what costs the new proposed
> >> > limits will address.
> >>
> >> I don't believe that the proposal improves the situation enough (or at
> >> all) to justify the additional complexity. That's something that you
> >> need to assess for yourself. This proposal provides more granular
> >> control, but it doesn't address the core problem, which is that you
> >> and I can only observe each other's actions after some delay, which
> >> means that we can't coordinate those actions perfectly. Nor can we
> >> build a perfect model of the other upon which to observe and act.
> >> The usual protocol issue.
> >
> >
> > OK then. My proposal is to add a new limit for PUSH_PROMISE frames,
> > though, separately from the MAX_CONCURRENT_STREAMS limit, since
> > PUSH_PROMISE exists as a promise to create a stream, explicitly so we
> > don't have to count it toward the existing MAX_CONCURRENT_STREAMS limit
> > (I searched the spec and this seems to be inadequately specced). Roberto
> > and I discussed this before and may have written an email somewhere in
> > spdy-dev@, but I don't think we've ever raised it here.
> >
> Well, there is an issue tracking it in the github repo now, at least. As
> currently defined in the spec, it definitely needs to be addressed.
Great. You guys are way better than I am at tracking all known issues. I
just have it mapped fuzzily in my head :)