Re: HTTP/2 flow control <draft-ietf-httpbis-http2-17>

Bob Briscoe <> Fri, 20 March 2015 16:56 UTC

Date: Fri, 20 Mar 2015 16:50:41 +0000
To: Jason Greene <>
From: Bob Briscoe <>
CC: Patrick McManus <>, Roberto Peon <>, HTTP Working Group <>
Subject: Re: HTTP/2 flow control <draft-ietf-httpbis-http2-17>


At 16:00 19/03/2015, Jason Greene wrote:
>I think that's a good argument for why it’s not 
>suitable for rate-limiting relative to a 
>variable bandwidth product of a single 
>connection (which I took as your use-case a).

I believe there is no need for an intermediate 
node to do flow control for individual streams. 
It does need to control the whole envelope within 
which all the streams are flowing through the 
proxy's app-layer buffer memory (e.g. due to a 
thick incoming pipe feeding a thin outgoing 
pipe). The best mechanism for controlling the 
app-layer buffer consumption of the aggregate 
connection is for the intermediate node to 
control the TCP receive window of the incoming connection.
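To illustrate what I mean (a sketch of my own, not anything in the draft): an intermediary can bound that aggregate simply by capping the kernel receive buffer, since the advertised TCP receive window cannot outgrow it.

```python
import socket

def bounded_receive_socket(max_buffer: int) -> socket.socket:
    """Create a TCP socket whose kernel receive buffer is capped.

    The advertised TCP receive window cannot exceed this buffer, so it
    bounds how much in-flight data the peer can have outstanding for the
    whole (multiplexed) connection -- the 'envelope' described above.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before connect()/listen(), so the window scale negotiated in
    # the TCP handshake reflects the reduced buffer.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, max_buffer)
    return s
```

(Note that Linux reports back roughly double the requested value to account for bookkeeping overhead, so the effective cap is approximate.)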

That doesn't preclude the intermediate node 
passing on any per-stream flow control messages 
emanating from the ultimate receiver so that the 
ultimate sender controls each stream's rate, 
which will alter the balance between streams 
within the overall envelope at the proxy.
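As a toy model of that pass-through behaviour (mine, with a made-up frame tuple format, purely for illustration): the proxy forwards per-stream WINDOW_UPDATEs unchanged and leaves the aggregate to TCP.

```python
def relay_flow_control(frames_from_receiver):
    """Forward the ultimate receiver's per-stream WINDOW_UPDATE frames
    toward the ultimate sender unchanged, so the sender rebalances its
    streams itself. Frames are modelled as (type, stream_id, payload)
    tuples -- a made-up format for illustration only."""
    to_sender = []
    for ftype, stream_id, payload in frames_from_receiver:
        if ftype == "WINDOW_UPDATE":
            # Per-stream credit passes straight through the proxy.
            to_sender.append((ftype, stream_id, payload))
        # DATA frames are buffered and forwarded downstream elsewhere;
        # their aggregate is bounded by the proxy's TCP receive window,
        # not by anything this function does.
    return to_sender
```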

I explained all this in my review.

>  However, memory limits are typically constant, 
> and an HTTP/2 server can be configured to limit 
> the capacity of the server based on the 
> combination of the “window” size, the 
> number of streams allowed, and the number of 
> connections allowed. Without a hard data 
> credit, the server would have to either HOL 
> block, prevent valid large requests from 
> processing, or run out of resources and start dropping requests.

I think you've moved away from the intermediate 
node case here? Your suggestions are valid. They 
are the crude controls available because more 
fine-grained per-stream control seems 
unattainable in h2, even tho I think the 
impression has been given that this requirement had been satisfied.

Altho per-stream flow control seems unattainable, 
there is still stream priority.
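To make the asymmetry concrete, here is a minimal sketch (my own model, not spec text) of the receiver's position under a credit-only protocol: WINDOW_UPDATE increments are strictly positive, so credit already granted can only be consumed, never revoked; a change of heart can only withhold future grants.

```python
class StreamCredit:
    """Receiver-side model of h2-style credit flow control: increments
    are strictly positive, so granted credit cannot be clawed back."""

    def __init__(self, initial: int):
        self.available = initial  # bytes the sender may still send

    def grant(self, increment: int) -> None:
        # Models a WINDOW_UPDATE: the increment must be positive.
        if increment <= 0:
            raise ValueError("credit increments must be positive")
        self.available += increment

    def on_data(self, nbytes: int) -> None:
        # The sender consumes credit by sending DATA.
        if nbytes > self.available:
            raise RuntimeError("sender exceeded its credit")
        self.available -= nbytes


credit = StreamCredit(initial=256 * 1024 * 1024)  # a generous grant
# User focus changes: the receiver now wants this stream quiet, but its
# only lever is to stop calling grant(); the sender may still legally
# transmit the full 256 MB already credited.
```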


>>On Mar 19, 2015, at 10:38 AM, Bob Briscoe 
>><<>> wrote:
>>Yes, I agree this is what we /want/ per-stream 
>>flow control to do. My review explained why the 
>>flow control in h2 won't be able to do this.
>>>==Flow control problem summary==
>>>With only a credit signal in the protocol, a 
>>>receiver is going to have to allow generous 
>>>credit in the WINDOW_UPDATEs so as not to hurt 
>>>performance. But then, the receiver will not 
>>>be able to quickly close down one stream (e.g. 
>>>when the user's focus changes), because it 
>>>cannot claw back the generous credit it gave, it can only stop giving out more.
>>>IOW: Between a rock and a hard place,... but 
>>>don't tell them where the rock is.
>>At 15:08 19/03/2015, Jason Greene wrote:
>>>Hi Bob,
>>>I agree with you that HTTP/2 flow control is 
>>>not useful for your a) use-case, intermediate 
>>>buffer control. It is, however, necessary for 
>>>the b) use case, memory management of an endpoint.
>>>An HTTP server is often divided into a core 
>>>set of components which manage work scheduling 
>>>and HTTP protocol aspects, as well as a set of 
>>>independently developed “application” 
>>>components that are often, but not 
>>>necessarily, provided by the end-user of the 
>>>server. There is typically an abstraction 
>>>between application code and server code, 
>>>where the application is only ever aware of 
>>>its stream. The stream may be provided by a 
>>>single HTTP/1.1 TCP socket, or it might be 
>>>multiplexed over HTTP/2 or SPDY, or some other 
>>>protocol.  In all of these cases the 
>>>abstraction appears the same to the 
>>>application, as it need not care. The 
>>>application code typically isn’t required 
>>>(nor able) to read everything immediately; 
>>>instead, it is  allowed to read as it 
>>>processes the data, and this ultimately 
>>>requires the server to buffer the data in the 
>>>interim (since other streams should not be 
>>>blocked). HTTP/2 flow control allows the 
>>>server to limit the maximum memory usage of 
>>>this buffering, which would otherwise be 
>>>significant since HTTP data can be of arbitrary length.
>>>>On Mar 19, 2015, at 6:54 AM, Bob Briscoe 
>>>><<>> wrote:
>>>>Thank you for taking the time to read my 
>>>>review carefully. I've been away from mail a 
>>>>few days, which should have allowed time for 
>>>>anything substantive to come from the people with SPDY experience.
>>>>We only had the Rob/Greg exchange which 
>>>>merely talked about what some theoretical 
>>>>thing called flow control might be useful 
>>>>for, rather than really addressing my concern 
>>>>that the credit-based protocol provided in h2 
>>>>is unlikely to be able to provide control if 
>>>>it is needed. Nonetheless, Greg did summarise 
>>>>well the limited usefulness of h2's mechanism.
>>>>I should probably have provided a summary of 
>>>>my (long) review, which I try to do below.
>>>>I agree that there could be some use for the 
>>>>h2 credit-based protocol at the very start of 
>>>>a stream (in both the C->S and S->C PUSH 
>>>>cases you mentioned). And it might possibly 
>>>>be useful for the cases you mention about 
>>>>unsolicited data, which sound like they occur 
>>>>at the start too. Altho without knowing more 
>>>>about these DoS cases I'm not sure; RST_STREAM might have been sufficient.
>>>>However, once a stream has been allowed to 
>>>>open up its rate, your observation that we 
>>>>mostly see very large windows is what I 
>>>>predicted. It demonstrates that the 
>>>>credit-based protocol does not have 
>>>>sufficient information to be useful to 
>>>>regulate flow. It is effectively being 
>>>>treated like the human appendix - something 
>>>>that no longer serves any purpose but you 
>>>>have to continually put in effort to keep it 
>>>>healthy otherwise it could stop the rest of the system from working.
>>>>For this reason, I questioned why flow 
>>>>control has been made mandatory. And I 
>>>>suggested instead that the credit-based flow control in h2 could be
>>>>i) mandatory for a data sender to respond to 
>>>>incoming WINDOW_UPDATEs (and therefore a data 
>>>>receiver can gracefully protect itself from 
>>>>DoS by discarding data that exceeds the 
>>>>credit it has previously made available)
>>>>ii) optional for a data receiver to emit 
>>>>WINDOW_UPDATEs (i.e. does not even have to implement this part of the code).
>>>>At 21:12 10/03/2015, Patrick McManus wrote:
>>>>>Hi Bob - I think your comments are 
>>>>>appreciated. It's just one of those things 
>>>>>where people have dispersed to other things 
>>>>>and aren't necessarily in a place to revisit 
>>>>>all the ground work again at this stage for 
>>>>>a new go round. It was in large part the 
>>>>>operational feedback and needs of the 
>>>>><> team, who has 
>>>>>a lot of experience operating spdy at scale, 
>>>>>that created the flow control provisions. 
>>>>>Hopefully those folks will chime in more authoritatively than my musings below:
>>>>>I'm sure there is quite a bit to learn here 
>>>>>- indeed poorly configured use of the 
>>>>>window_update mechanism has been 
>>>>>(predictably) a source of unintended 
>>>>>bottlenecks during both spdy and h2 trials. 
>>>>>The spec does try and highlight that there 
>>>>>can be dragons here and implementations that 
>>>>>don't need the features it can bring should 
>>>>>provide essentially infinite credits to steer clear of them.
>>>>>During the various trials I've seen h2 per 
>>>>>stream flow control deployed successfully 
>>>>>for a couple of use cases - both of them 
>>>>>essentially deal with unsolicited data.
>>>>>The primary one is essentially a more 
>>>>>flexible version of h1's 100-continue. When 
>>>>>a client presents a large message body (e.g. 
>>>>>a file upload) a multiplexing server needs a 
>>>>>way of saying "these are how many buffers 
>>>>>I've got available while I figure out where 
>>>>>I'm going to sink this incoming data 
>>>>>(perhaps to another server I need to connect 
>>>>>to)". Presenting this on a per-stream basis 
>>>>>allows the server to limit one stream while 
>>>>>another (with a distinct sink) can proceed 
>>>>>independently.  IMO this value should 
>>>>>represent resources available and should be 
>>>>>independent of BDP. This is why in practice 
>>>>>you see clients with extremely large stream 
>>>>>windows - most circumstances just want the 
>>>>>data to flow at line rate (as you describe) 
>>>>>and aren't trying to invoke flow control. 
>>>>>The firefox default window is 256MB per 
>>>>>stream - that's not going to slow down the 
>>>>>sender nor require frequent window_update generation.
>>>>>The other use case is when the server pushes 
>>>>>resources at a client without them being 
>>>>>requested, which is a new feature of h2. 
>>>>>This is conceptually similar to the server 
>>>>>receiving a POST - suddenly there is a large 
>>>>>amount of inbound data that the 
>>>>>implementation might not have the resources 
>>>>>to store completely. We can't just let TCP 
>>>>>flow control take care of the situation 
>>>>>because the TCP session is being multiplexed 
>>>>>between multiple streams that need to be 
>>>>>serviced. In this case the client accepts 
>>>>>"some" of the stream based on policy and 
>>>>>resource availability and can leave the 
>>>>>stream in limbo until an event comes along 
>>>>>that tells it to resume the transfer by 
>>>>>issuing credits or reject it via cancel.
>>>>>hth at least a bit.
>>>>>On Tue, Mar 10, 2015 at 4:31 PM, Bob Briscoe 
>>>>><<>> wrote:
>>>>>HTTP/2 folks,
>>>>>I know extensibility had already been 
>>>>>discussed and put to bed, so the WG is 
>>>>>entitled to rule out re-opening healed wounds.
>>>>>But have points like those I've made about 
>>>>>flow control been raised before? Please 
>>>>>argue. I may be wrong. Discussion can go on 
>>>>>in parallel to the RFC publication process, 
>>>>>even tho the process doesn't /require/ you to talk to me.
>>>>>If I'm right, then implementers are being 
>>>>>mandated to write complex flow control code, 
>>>>>when it might have little bearing on the 
>>>>>performance benefits measured for http/2.
>>>>>Even if I'm right, and the WG goes ahead 
>>>>>anyway, /I/ will understand. My review came in after your deadline.
>>>>>However, bear in mind that the Webosphere 
>>>>>might not be so forgiving. If h2 goes ahead 
>>>>>when potential problems have been 
>>>>>identified, it could get a bad reputation 
>>>>>simply due to the uncertainty, just when you 
>>>>>want more people to take it up and try it 
>>>>>out. Given you've put in a few person-years 
>>>>>of effort, I would have thought you would not want to risk a reputation flop.
>>>>>I'm trying to help - I just can't go any faster.
>>>>>At 14:43 06/03/2015, Bob Briscoe wrote:
>>>>>>HTTP/2 folks,
>>>>>>As I said, consider this as a late review 
>>>>>>from a clueful but fresh pair of eyes.
>>>>>>My main concerns with the draft are:
>>>>>>* extensibility (previous posting)
>>>>>>* flow control (this posting - apologies 
>>>>>>for the length - I've tried to explain properly)
>>>>>>* numerous open issues left dangling (see subsequent postings)
>>>>>>===HTTP/2 FLOW CONTROL===
>>>>>>The term 'window' as used throughout is 
>>>>>>incorrect and highly confusing, in:
>>>>>>* 'flow control window' (44 occurrences),
>>>>>>* 'initial window size' (5),
>>>>>>* or just 'window size' (8)
>>>>>>The HTTP/2 WINDOW_UPDATE mechanism 
>>>>>>constrains HTTP/2 to use only credit-based 
>>>>>>flow control, not window-based. At one 
>>>>>>point, it actually says it is credit-based 
>>>>>>(in flow control principle #2 <>), but 
>>>>>>otherwise it incorrectly uses the term window.
>>>>>>This is not just an issue of terminology. 
>>>>>>The more I re-read the flow control 
>>>>>>sections the more I became convinced that 
>>>>>>this terminology is not just /confusing/, 
>>>>>>rather it's evidence of /confusion/. It raises the questions
>>>>>>* "Is HTTP/2 capable of the flow control it says it's capable of?"
>>>>>>* "What type of flow-control protocol ought HTTP/2 to be capable of?"
>>>>>>* "Can the WINDOW_UPDATE frame support the 
>>>>>>flow-control that HTTP/2 needs?"
>>>>>>To address these questions, it may help if 
>>>>>>I separate the two different cases HTTP/2 
>>>>>>flow control attempts to cover (my own separation, not from the draft):
>>>>>>a) Intermediate buffer control
>>>>>>Here, a stream's flow enters /and/ leaves a 
>>>>>>buffer (e.g. at the app-layer of an intermediate node).
>>>>>>b) Flow control by the ultimate client app.
>>>>>>Here flow never releases memory (at least 
>>>>>>not during the life of the connection). The 
>>>>>>flow is solely consuming more and more 
>>>>>>memory (e.g. data being rendered into a client app's memory).
>>>>>>==a) Intermediate buffer control==
>>>>>>For this, sliding window-based flow control 
>>>>>>would be appropriate, because the goal is 
>>>>>>to keep the e2e pipeline full without wasting buffer.
>>>>>>Let me prove HTTP/2 cannot do window flow 
>>>>>>control. For window flow control, the 
>>>>>>sender needs to be able to advance both the 
>>>>>>leading and trailing edges of the window. In the draft:
>>>>>>* WINDOW_UPDATE frames can only advance the 
>>>>>>leading edge of a 'window' (and they are constrained to positive values).
>>>>>>* To advance the trailing edge, window flow 
>>>>>>control would need a continuous stream of 
>>>>>>acknowledgements back to the sender (like 
>>>>>>TCP). The draft does not provide ACKs at 
>>>>>>the app-layer, and the app-layer cannot 
>>>>>>monitor ACKs at the transport layer, so the 
>>>>>>sending app-layer cannot advance the trailing edge of a 'window'.
>>>>>>So the protocol can only support 
>>>>>>credit-based flow control. It is incapable of supporting window flow control.
>>>>>>Next, I don't understand how a receiver can 
>>>>>>set the credit in 'WINDOW_UPDATE' to a 
>>>>>>useful value. If the sender needed the 
>>>>>>receiver to answer the question "How much 
>>>>>>more can I send than I have seen ACK'd?" 
>>>>>>that would be easy. But because the 
>>>>>>protocol is restricted to credit, the 
>>>>>>sender needs the receiver to answer the 
>>>>>>much harder open-ended question, "How much 
>>>>>>more can I send?" So the sender needs the 
>>>>>>receiver to know how many ACKs the sender 
>>>>>>has seen, but neither of them know that.
>>>>>>The receiver can try, by taking a guess at 
>>>>>>the bandwidth-delay product, and adjusting 
>>>>>>the guess up or down, depending on whether 
>>>>>>its buffer is growing or shrinking. But 
>>>>>>this only works if the unknown bandwidth-delay product stays constant.
>>>>>>However, BDP will usually be highly 
>>>>>>variable, as other streams come and go. So, 
>>>>>>in the time it takes to get a good estimate 
>>>>>>of the per-stream BDP, it will probably 
>>>>>>have changed radically, or the stream will 
>>>>>>most likely have finished anyway. This is 
>>>>>>why TCP bases flow control on a window, not 
>>>>>>credit. By complementing window updates 
>>>>>>with ACK stream info, a TCP sender has sufficient info to control the flow.
>>>>>>The draft is indeed correct when it says:
>>>>>>"   this can lead to suboptimal use of available
>>>>>>   network resources if flow control is 
>>>>>>enabled without knowledge of the
>>>>>>   bandwidth-delay product (see [RFC7323])."
>>>>>>Was this meant to be a veiled criticism of 
>>>>>>the protocol's own design? A credit-based 
>>>>>>flow control protocol like that in the 
>>>>>>draft does not provide sufficient 
>>>>>>information for either end to estimate the 
>>>>>>bandwidth-delay product, given it will be varying rapidly.
>>>>>>==b) Control by the ultimate client app==
>>>>>>For this case, I believe neither window nor 
>>>>>>credit-based flow control is appropriate:
>>>>>>* There is no memory management issue at 
>>>>>>the client end - even if there's a separate 
>>>>>>HTTP/2 layer of memory between TCP and the 
>>>>>>app, it would be pointless to limit the 
>>>>>>memory used by HTTP/2, because the data is 
>>>>>>still going to sit in the same user-space 
>>>>>>memory (or at least about the same amount 
>>>>>>of memory) when HTTP/2 passes it over for rendering.
>>>>>>* Nonetheless, the receiving client does 
>>>>>>need to send messages to the sender to 
>>>>>>supplement stream priorities, by notifying 
>>>>>>when the state of the receiving application 
>>>>>>has changed (e.g. if the user's focus 
>>>>>>switches from one browser tab to another).
>>>>>>* However, credit-based flow control would 
>>>>>>be very sluggish for such control, because 
>>>>>>credit cannot be taken back once it has 
>>>>>>been given (except HTTP/2 allows 
>>>>>>SETTINGS_INITIAL_WINDOW_SIZE to be reduced, 
>>>>>>but that's a drastic measure that hits all streams together).
>>>>>>==Flow control problem summary==
>>>>>>With only a credit signal in the protocol, 
>>>>>>a receiver is going to have to allow 
>>>>>>generous credit in the WINDOW_UPDATEs so as 
>>>>>>not to hurt performance. But then, the 
>>>>>>receiver will not be able to quickly close 
>>>>>>down one stream (e.g. when the user's focus 
>>>>>>changes), because it cannot claw back the 
>>>>>>generous credit it gave, it can only stop giving out more.
>>>>>>IOW: Between a rock and a hard place,... 
>>>>>>but don't tell them where the rock is.
>>>>>>==Towards a solution?==
>>>>>>I think 'type-a' flow control (for 
>>>>>>intermediate buffer control) does not need 
>>>>>>to be at stream-granularity. Indeed, I 
>>>>>>suspect a proxy could control its app-layer 
>>>>>>buffering by controlling the receive window 
>>>>>>of the incoming TCP connection. Has anyone 
>>>>>>assessed whether this would be sufficient?
>>>>>>I can understand the need for 'type-b' 
>>>>>>per-stream flow control (by the ultimate 
>>>>>>client endpoint). Perhaps it would be 
>>>>>>useful for the receiver to emit a new 
>>>>>>'PAUSE_HINT' frame on a stream? Or perhaps 
>>>>>>updating per-stream PRIORITY would be 
>>>>>>sufficient? Either would minimise the 
>>>>>>response time to a half round trip. Whereas 
>>>>>>credit flow-control will be much more 
>>>>>>sluggish (see 'Flow control problem summary').
>>>>>>Either approach would correctly propagate 
>>>>>>e2e. An intermediate node would naturally 
>>>>>>tend to prioritise incoming streams that 
>>>>>>fed into prioritised outgoing streams, so 
>>>>>>priority updates would tend to propagate 
>>>>>>from the ultimate receiver, through 
>>>>>>intermediate nodes, up to the ultimate sender.
>>>>>>==Flow control coverage==
>>>>>>The draft exempts all TCP payload bytes 
>>>>>>from flow control except HTTP/2 data 
>>>>>>frames. No rationale is given for this 
>>>>>>decision. The draft says it's important to 
>>>>>>manage per-stream memory, then it exempts 
>>>>>>all the frame types except data, even tho 
>>>>>>each byte of a non-data frame consumes no 
>>>>>>less memory than a byte of a data frame.
>>>>>>What message does this put out? "Flow 
>>>>>>control is not important for one type of 
>>>>>>bytes with unlimited total size, but flow 
>>>>>>control is so important that it has to be 
>>>>>>mandatory for the other type of bytes."
>>>>>>It is certainly critical that WINDOW_UPDATE 
>>>>>>messages are not covered by flow control, 
>>>>>>otherwise there would be a real risk of 
>>>>>>deadlock. It might be that there are 
>>>>>>dependencies on other frame types that 
>>>>>>would lead to a dependency loop and 
>>>>>>deadlock. It would be good to know what the rationale behind these rules was.
>>>>>>I am concerned that HTTP/2 flow control may 
>>>>>>have entered new theoretical territory, 
>>>>>>without suitable proof of safety. The only 
>>>>>>reassurance we have is one implementation 
>>>>>>of a flow control algorithm (SPDY), and the 
>>>>>>anecdotal non-evidence that no-one using 
>>>>>>SPDY has noticed a deadlock yet (however, 
>>>>>>is anyone monitoring for deadlocks?).
>>>>>>Whereas SPDY has been an existence proof 
>>>>>>that an approach like http/2 'works', so 
>>>>>>far all the flow control algos have been 
>>>>>>pretty much identical (I think that's 
>>>>>>true?). I am concerned that the draft takes 
>>>>>>the InterWeb into uncharted waters, because 
>>>>>>it allows unconstrained diversity in flow 
>>>>>>control algos, which is an untested degree of freedom.
>>>>>>The only constraints the draft sets are:
>>>>>>* per-stream flow control is mandatory
>>>>>>* the only protocol message for flow 
>>>>>>control algos to use is the WINDOW_UPDATE 
>>>>>>credit message, which cannot be negative
>>>>>>* no constraints on flow control algorithms.
>>>>>>* and all this must work within the outer 
>>>>>>flow control constraints of TCP.
>>>>>>Some algos might use priority messages to 
>>>>>>make flow control assumptions. While other 
>>>>>>algos might associate PRI and WINDOW_UPDATE 
>>>>>>with different meanings. What confidence do 
>>>>>>we have that everyone's optimisation 
>>>>>>algorithms will interoperate? Do we know 
>>>>>>there will not be certain types of application where deadlock is likely?
>>>>>>"   When using flow
>>>>>>   control, the receiver MUST read from the TCP receive buffer in a
>>>>>>   timely fashion.  Failure to do so could lead to a deadlock when
>>>>>>   critical frames, such as 
>>>>>>WINDOW_UPDATE, are not read and acted upon."
>>>>>>I've been convinced (offlist) that deadlock 
>>>>>>will not occur as long as the app consumes 
>>>>>>data 'greedily' from TCP. That has since 
>>>>>>been articulated in the above normative 
>>>>>>text. But how sure can we be that every 
>>>>>>implementer's different interpretations of 
>>>>>>'timely' will still prevent deadlock?
>>>>>>Until a good autotuning algorithm for TCP 
>>>>>>receive window management was developed, 
>>>>>>good window management code was nearly 
>>>>>>non-existent. Managing hundreds of 
>>>>>>interdependent stream buffers is a much 
>>>>>>harder problem. But implementers are being 
>>>>>>allowed to just 'Go forth and innovate'. 
>>>>>>This might work if everyone copies 
>>>>>>available open source algo(s). But they might not, and they don't have to.
>>>>>>This all seems like 'flying by the seat of the pants'.
>>>>>>==Mandatory Flow Control? ==
>>>>>>"     3. [...] A sender
>>>>>>      MUST respect flow 
>>>>>>control limits imposed by a receiver."
>>>>>>This ought to be a 'SHOULD' because it is 
>>>>>>contradicted later - if settings change.
>>>>>>"   6.  Flow control cannot be disabled."
>>>>>>Also effectively contradicted half a page later:
>>>>>>"   Deployments that do not require 
>>>>>>this capability can advertise a flow
>>>>>>   control window of the maximum size 
>>>>>>(2^31-1), and by maintaining this
>>>>>>   window by sending a WINDOW_UPDATE 
>>>>>>frame when any data is received.
>>>>>>   This effectively disables flow control for that receiver."
>>>>>>And contradicted in the definition of half closed (remote):
>>>>>>"   half closed (remote):
>>>>>>      [...] an endpoint is no longer
>>>>>>      obligated to maintain a receiver flow control window."
>>>>>>And contradicted in 
>>>>>>The CONNECT Method, which says:
>>>>>>"   Frame types other than DATA
>>>>>>   or stream management frames 
>>>>>>   MUST NOT be sent on a connected stream, and MUST be treated as a
>>>>>>   stream error (Section 5.4.2) if received."
>>>>>>Why is flow control so important that it's 
>>>>>>mandatory, but so unimportant that you MUST NOT do it when using TLS e2e?
>>>>>>Going back to the earlier quote about using 
>>>>>>the max window size, it seems perverse for 
>>>>>>the spec to require endpoints to go through 
>>>>>>the motions of flow control, even if they 
>>>>>>arrange for it to affect nothing, but to 
>>>>>>still require implementation complexity and 
>>>>>>bandwidth waste with a load of redundant WINDOW_UPDATE frames.
>>>>>>HTTP is used on a wide range of devices, 
>>>>>>down to the very small and challenged. 
>>>>>>HTTP/2 might be desirable in such cases, 
>>>>>>because of the improved efficiency (e.g. 
>>>>>>header compression), but in many cases the 
>>>>>>stream model may not be complex enough to need stream flow control.
>>>>>>So why not make flow control optional on 
>>>>>>the receiving side, but mandatory to 
>>>>>>implement on the sending side? Then an 
>>>>>>implementation could have no machinery for 
>>>>>>tuning window sizes, but it would respond 
>>>>>>correctly to those set by the other end, which requires much simpler code.
>>>>>>If a receiving implementation chose not to do 
>>>>>>stream flow control, it could still control 
>>>>>>flow at the connection (stream 0) level, or at least at the TCP level.
>>>>>>Flow Control
>>>>>>"Flow control is used for both individual
>>>>>>   streams and for the connection as a whole."
>>>>>>Does this mean that every WINDOW_UPDATE on 
>>>>>>a stream has to be accompanied by another 
>>>>>>WINDOW_UPDATE frame on stream zero? If so, 
>>>>>>this seems like 100% message redundancy. Surely I must have misunderstood.
>>>>>>==Flow Control Requirements===
>>>>>>I'm not convinced that clear understanding 
>>>>>>of flow control requirements has driven flow control design decisions.
>>>>>>The draft states various needs for 
>>>>>>flow-control without giving me a feel of 
>>>>>>confidence that it has separated out the 
>>>>>>different cases, and chosen a protocol 
>>>>>>suitable for each. I tried to go back to 
>>>>>>the early draft on flow control 
>>>>>>requirements <>, and I was not impressed.
>>>>>>I have quoted below the various sentences 
>>>>>>in the draft that state what flow control 
>>>>>>is believed to be for. Below that, I have 
>>>>>>attempted to crystalize out the different 
>>>>>>concepts, each of which I have tagged within the quotes.
>>>>>>HTTP/2 Protocol Overview says
>>>>>>  "Flow control and prioritization ensure 
>>>>>>that it is possible to efficiently use multiplexed streams. [Y]
>>>>>>   Flow control (Section 5.2) helps to 
>>>>>>ensure that only data that can be used by a receiver is transmitted. [X]"
>>>>>>Flow Control says:
>>>>>>  "Using streams for multiplexing 
>>>>>>introduces contention over use of the TCP 
>>>>>>connection [X], resulting in blocked 
>>>>>>streams [Z]. A flow control scheme ensures 
>>>>>>that streams on the same connection do not 
>>>>>>destructively interfere with each other [Z]."
>>>>>>Appropriate Use of Flow Control
>>>>>>"   Flow control is defined to protect 
>>>>>>endpoints that are operating under
>>>>>>   resource constraints.  For 
>>>>>>example, a proxy needs to share memory
>>>>>>   between many connections, and also might have a slow upstream
>>>>>>   connection and a fast downstream one 
>>>>>>[Y].  Flow control addresses cases
>>>>>>   where the receiver is unable to 
>>>>>>process data on one stream, yet wants
>>>>>>   to continue to process other streams in the same connection [X]."
>>>>>>"   Deployments with constrained resources (for example, memory) can
>>>>>>   employ flow control to limit the 
>>>>>>amount of memory a peer can consume. [Y]"
>>>>>>Each requirement has been tagged as follows:
>>>>>>[X] Notification of the receiver's changing utility for each stream
>>>>>>[Y] Prioritisation of streams due to 
>>>>>>contention over the streaming capacity available to the whole connection.
>>>>>>[Z] Ensuring one stream is not blocked by another.
>>>>>>[Z] might be a variant of [Y], but [Z] 
>>>>>>sounds more binary, whereas [Y] sounds more 
>>>>>>like optimisation across a continuous spectrum.
>>>>>>Bob Briscoe,                                                  BT
>>>>>Bob Briscoe,                                                  BT
>>>>Bob Briscoe,                                                  BT
>>>Jason T. Greene
>>>WildFly Lead / JBoss EAP Platform Architect
>>>JBoss, a division of Red Hat
>>Bob Briscoe,                                                  BT
>Jason T. Greene
>WildFly Lead / JBoss EAP Platform Architect
>JBoss, a division of Red Hat

Bob Briscoe,                                                  BT