Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams

William Chan (陈智昌) <willchan@chromium.org> Sat, 04 May 2013 02:00 UTC

In-Reply-To: <CAP+FsNf6p1fK_MH2P4vZTpt_CFWttDagY1nVdPvsdXm3tp_XuA@mail.gmail.com>
Date: Fri, 03 May 2013 22:59:38 -0300
Message-ID: <CAA4WUYi3tNyF0aND-o_hwPv-kwVTaUyfLOf-_1JEOq7F_3e4Tg@mail.gmail.com>
From: "William Chan (陈智昌)" <willchan@chromium.org>
To: Roberto Peon <grmocg@gmail.com>
Cc: James M Snell <jasnell@gmail.com>, Martin Thomson <martin.thomson@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
Subject: Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams
Archived-At: <http://www.w3.org/mid/CAA4WUYi3tNyF0aND-o_hwPv-kwVTaUyfLOf-_1JEOq7F_3e4Tg@mail.gmail.com>

I think there's some missing context not being stated in the emails, which
is leading to confusion. Can we revisit why decoupling the stream
directions will lead to a longer header lifetime? I would think that when
the streaming layer encounters headers for a stream, it would hand them up
to the HTTP semantic layer. As long as the HTTP semantic layer understands
the request-response coupling, it knows to keep the request headers around
until the response arrives, right?
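
To make the bookkeeping I have in mind concrete, here is a minimal sketch in
Go; the types and method names are mine, purely illustrative:

  package main

  import "fmt"

  // A semantic layer that understands the request-response coupling:
  // request headers live exactly until the matching response shows up.
  type semanticLayer struct {
      pending map[uint32][]string // stream ID -> request headers
  }

  func (s *semanticLayer) onRequestSent(id uint32, hdrs []string) {
      s.pending[id] = hdrs // keep around for correlation
  }

  func (s *semanticLayer) onResponseHeaders(id uint32) []string {
      req := s.pending[id]
      delete(s.pending, id) // response seen; the request headers can go
      return req
  }

  func main() {
      s := &semanticLayer{pending: make(map[uint32][]string)}
      s.onRequestSent(1, []string{":method: GET", ":path: /"})
      fmt.Println(s.onResponseHeaders(1)) // correlate, then release
  }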


On Fri, May 3, 2013 at 10:49 PM, Roberto Peon <grmocg@gmail.com> wrote:

> Hey, you're the one worried about the size of the compressor state (which
> would be ~4 or 8k)! :)
> Headers are sometimes larger individually, and the upper limit to the size
> of this state is the sum of 10s to 100s of these.
>
> I think that, pragmatically, since there is a framing-layer solution which
> ensures that one must not store headers for longer than necessary, and
> which is semantic-layer agnostic, it is a decent bet.
> Any approach other than declaring it as unidirectional (or equivalent)
> either requires the caching of much state, or a NACK from the remote side.
>
> That isn't to say that Martin's approach doesn't have appeal. I want to
> like it, but unless we are to have unlimited streams in some limbo state,
> it would require a NACK, else I won't be able to correlate sets of headers
> on a stream; and the NACK both requires more machinery at both ends and
> consumes more bytes on the wire.
> -=R
>
>
>
> On Fri, May 3, 2013 at 6:19 PM, James M Snell <jasnell@gmail.com> wrote:
>
>> Ok.. going back over the thread in detail and over the spec again, one
>> approach to addressing the overall concern here (and hopefully bringing
>> a bit more rigor to the overall design) is to redefine the stream states
>> slightly, along the same lines already suggested by Martin. Each
>> endpoint would maintain its own view of the current activity state of
>> every stream in a session; however, that state would only reflect the
>> actions taken by the peer endpoint. There are five possible activity
>> states:
>>
>> Unused
>>   The peer endpoint has not reserved or used the stream in any way.
>>
>> Open
>>   The endpoint has received frames on the stream from the peer, none
>> of which are type RST_STREAM or have the FINAL flag set.
>>
>> Closed
>>   The endpoint has received an RST_STREAM frame, or any frame with the
>> FINAL flag set, from the peer.
>>
>> Reserved-Open
>>   The peer has reserved the stream identifier for future use but
>> frames have not yet been received on that stream. The receiving
>> endpoint is expected to send its own frames on the same stream.
>>
>> Reserved-Closed
>>   The peer has reserved the stream identifier for future use but
>> frames have not yet been received on that stream. The receiving
>> endpoint is not expected to send its own frames on the same stream.
>>
>> MAX_CONCURRENT_STREAMS == The number of streams in the Open state the
>> endpoint will permit the peer to initiate at any given time. Once that
>> limit is reached, the receiving endpoint will likely begin rejecting
>> new streams using RST_STREAM. In other words, right now,
>> MAX_CONCURRENT_STREAMS is defined in terms of what the sending
>> endpoint must not do. This changes the definition to an indication of
>> what the receiving endpoint will do once a particular threshold is
>> reached. Any endpoint that wants to be able to keep creating streams
>> must be diligent about sending FINAL frames, etc.
>>
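To check that I'm reading this the same way, here is a rough Go sketch of
that per-endpoint state tracking and the receiver-enforced limit; the names
are illustrative, not from the draft:

  package main

  import "fmt"

  type activityState int

  const (
      Unused activityState = iota
      Open
      Closed
      ReservedOpen
      ReservedClosed
  )

  type endpoint struct {
      states        map[uint32]activityState
      maxConcurrent int
  }

  // onPeerOpensStream applies the redefined limit: the receiver counts
  // peer-opened streams and starts refusing (via RST_STREAM) once the
  // threshold is crossed, rather than trusting the sender not to exceed it.
  func (e *endpoint) onPeerOpensStream(id uint32) bool {
      open := 0
      for _, st := range e.states {
          if st == Open {
              open++
          }
      }
      if open >= e.maxConcurrent {
          return false // reject: would send RST_STREAM
      }
      e.states[id] = Open
      return true
  }

  // onFinalOrReset marks the stream Closed, freeing a concurrency slot.
  func (e *endpoint) onFinalOrReset(id uint32) {
      e.states[id] = Closed
  }

  func main() {
      e := &endpoint{states: make(map[uint32]activityState), maxConcurrent: 1}
      fmt.Println(e.onPeerOpensStream(1)) // true
      fmt.Println(e.onPeerOpensStream(3)) // false: limit reached
      e.onFinalOrReset(1)
      fmt.Println(e.onPeerOpensStream(3)) // true again
  }
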
>> As for the Request-Response bounding issue, that's really an HTTP
>> semantic layer notion. I'm not fully convinced we really need to
>> handle that issue in the framing layer at all.
>>
>>
>> On Fri, May 3, 2013 at 2:20 PM, Roberto Peon <grmocg@gmail.com> wrote:
>> > The biggest rub in Martin's suggestion is that, as a stream initiator,
>> > I no longer know for how long I should keep the original "request"
>> > headers around.
>> > I view that as an annoying problem (I want every response to be
>> > attributable to a request).
>> >
>> > I also think it is a bit confusing-- how would it be used in cases
>> > where I've sent all my data on what I thought was a unidirectional
>> > stream, and then receive bytes from the other side on that stream?
>> > That'd be... weird.
>> >
>> > With the unidirectional bit (or similar declaration of half-closed
>> > start-state), I now know (by fiat, essentially) that I will not receive
>> > a response on that stream ID, and so I don't need to keep the "request"
>> > headers around after I've finished pushing the stream. Logging
>> > accomplished.
>> >
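A quick sketch of what that unidirectional declaration buys the sender, in
Go; the flag value and helper names here are made up for illustration:

  package main

  import "log"

  const flagUnidirectional = 0x2 // illustrative flag value, not from any draft

  // pushStream sends a pushed resource on a stream declared unidirectional:
  // by fiat, no response will arrive on this stream ID, so the "request"
  // headers can be logged and released the moment the push finishes,
  // instead of being held for request-response correlation.
  func pushStream(id uint32, hdrs []string, body []byte) {
      sendHeaders(id, hdrs, flagUnidirectional)
      sendData(id, body, true /* FINAL */)
      log.Printf("pushed stream %d: %v", id, hdrs) // logging accomplished
      // hdrs is not retained past this point; nothing waits on a response.
  }

  func sendHeaders(id uint32, hdrs []string, flags int) {} // framing-layer stub
  func sendData(id uint32, body []byte, final bool)     {} // framing-layer stub

  func main() { pushStream(2, []string{":path: /style.css"}, []byte("...")) }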
>> >
>> > I think this is an easy issue to solve by reinstating the
>> > unidirectional bit (for now). It is certainly minimal work to have
>> > servers which do server push set that bit.
>> >
>> > To Will's point, I agree that an "ENHANCE YOUR CALM" code seems
>> > redundant. In my case I believe it redundant because the remote side
>> > has already received my SETTINGS frame, or is sending without having
>> > known it (i.e. within the initial RTT), and will be receiving the
>> > SETTINGS frame before it could process this new code anyway (assuming
>> > I'm following the spec and sending SETTINGS immediately upon session
>> > establishment).
>> > -=R
>> >
>> >
>> >
>> >
>> >
>> > On Fri, May 3, 2013 at 11:28 AM, William Chan (陈智昌)
>> > <willchan@chromium.org> wrote:
>> >>
>> >> I guess I kinda think that we're worrying too much about this corner
>> >> of the spec. I don't view it as a big deal in practice. The problem
>> >> described happens when MAX_CONCURRENT_STREAMS is too low to allow
>> >> enough parallelism per roundtrip. I would advise people to simply
>> >> increase their MAX_CONCURRENT_STREAMS in that case. I kinda think this
>> >> is only problematic when we have very high latencies and devices that
>> >> can't handle high parallelism, like an interplanetary refrigerator
>> >> that speaks HTTP/2 for some reason. <shrug>
>> >>
>> >> I am unsure how to feel about an ENHANCE YOUR CALM code as it's not
>> >> well defined. I don't mind RST_STREAMs on exceeding limits, like the
>> >> initial MAX_CONCURRENT_STREAMS, since they're usually the result of a
>> >> race (the possible initial SETTINGS frame race) and we won't have to
>> >> keep continually sending RST_STREAMs to rate limit appropriately.
>> >>
>> >>
>> >> On Fri, May 3, 2013 at 3:02 PM, James M Snell <jasnell@gmail.com>
>> >> wrote:
>> >>>
>> >>> The impact on client-to-server initiated streams is another reason why
>> >>> I suggested the credit-based approach and why it would likely be good
>> >>> to have an RST_STREAM "ENHANCE YOUR CALM" error code [1]. If the
>> >>> client misbehaves and sends too much too quickly, we have flow
>> >>> control, settings, rst_stream and goaway options to deal with it.
>> >>>
>> >>> [1]
>> >>> http://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Server_Error
>> >>>
>> >>> On Fri, May 3, 2013 at 10:34 AM, William Chan (陈智昌)
>> >>> <willchan@chromium.org> wrote:
>> >>> > As I understand the proposal, which I believe ties into the issue
>> >>> > James raised at the beginning here, the goal is to be able to open
>> >>> > and close a directional stream without an ACK, which I am nervous
>> >>> > about, as I said above without much detail. Concretely speaking, an
>> >>> > HTTP GET is a HEADERS+PRIORITY frame with the FINAL flag, or an
>> >>> > extra DATA frame with the FINAL flag. This means that the request
>> >>> > effectively never gets counted against the directional stream
>> >>> > limit, as controlled by the receiver which sends a
>> >>> > MAX_CONCURRENT_STREAMS setting, since it opens and closes the
>> >>> > direction in the same frame (or closes in the subsequent empty DATA
>> >>> > frame).
>> >>> >
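A toy illustration of that accounting gap, in Go (purely illustrative):

  package main

  import "fmt"

  func main() {
      const limit = 100
      open := 0

      // Each GET opens and fully closes its direction within the same
      // HEADERS+PRIORITY frame (FINAL flag set), so from the receiver's
      // perspective the stream is never concurrently open:
      for i := 0; i < 1000; i++ {
          open++ // the stream opens...
          open-- // ...and closes in the very same frame
      }
      fmt.Printf("open=%d, limit=%d\n", open, limit) // the limit never binds
  }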
>> >>> >
>> >>> > On Fri, May 3, 2013 at 1:52 PM, Martin Thomson
>> >>> > <martin.thomson@gmail.com> wrote:
>> >>> >>
>> >>> >> On 3 May 2013 09:44, William Chan (陈智昌) <willchan@chromium.org>
>> >>> >> wrote:
>> >>> >> > I'd like server folks to chime in, but doing this makes me feel
>> >>> >> > a bit nervous. I feel this effectively disables the directional
>> >>> >> > concurrent streams limit. The bidirectional full-close
>> >>> >> > essentially acts like an ACK, so removing it might result in an
>> >>> >> > unbounded number of streams.
>> >>> >>
>> >>> >> I think that I know what you mean here, but can you try to expand
>> >>> >> a little? Do you refer to the possible gap between close on the
>> >>> >> initiating direction and the first frame on the responding
>> >>> >> direction; a gap that might cause the stream to escape accounting?
>> >>> >> I think that is a tractable problem - any unbounded-ness is under
>> >>> >> the control of the initiating peer.
>> >>> >
>> >>> >
>> >>
>> >>
>> >
>>
>
>