Re: Deadlocking in the transport

Jana Iyengar <> Fri, 12 January 2018 00:52 UTC

From: Jana Iyengar <>
Date: Thu, 11 Jan 2018 16:52:52 -0800
Subject: Re: Deadlocking in the transport
To: Roberto Peon <>
Cc: Martin Thomson <>, QUIC WG <>


I'm not convinced that we need something as heavyweight as this...
specifically, flow control has so far been under the receiver's control,
and the receiver is also the party that owns the resource whose exhaustion
flow control is meant to limit. This proposal moves that control out to the
sender, which would require a receiver to commit memory based on what a
sender believes is important. Reasoning about this seems non-trivial. I
worry about how this degree of freedom will get abused, and about potential
sharp edges.

If we can ensure priority consumption of the shared resource, that seems
adequate to resolve the problem at hand. Do you think something more is
needed?
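As a rough sketch of what "priority consumption of the shared resource" could mean on the receiver side, here is one way a receiver might hand out connection-level credit strictly by stream priority (Python; the function, tuple layout, and numbers are illustrative assumptions, not anything from a draft):

```python
# Hypothetical sketch: grant connection-level flow-control credit in
# strict priority order, so an urgent stream (e.g. one carrying header
# table updates) is never starved by bulk streams.

def allocate_credit(total_credit, streams):
    """streams: list of (stream_id, priority, requested) tuples,
    where a lower priority number is more urgent.
    Returns a {stream_id: granted} mapping."""
    grants = {}
    remaining = total_credit
    # Most urgent streams are satisfied first; the rest share leftovers.
    for stream_id, _prio, requested in sorted(streams, key=lambda s: s[1]):
        granted = min(requested, remaining)
        grants[stream_id] = granted
        remaining -= granted
    return grants

# Stream 3 (priority 0) is fully satisfied before stream 7 (priority 2).
grants = allocate_credit(1000, [(7, 2, 800), (3, 0, 600)])
```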

- jana

On Wed, Jan 10, 2018 at 12:26 PM, Roberto Peon <> wrote:

> Another option:
> Allow a flow-control ‘override’ which allows a receiver to state that they
> really want data on a particular stream, and ignore the global flow control
> for this.
> How you’d do it:
> A receiver can send a flow-control override for a stream. This includes
> the stream id to which the global window temporarily does not apply, the
> receiver’s current stream flow-control offset, and the offset the receiver
> would wish to be able to receive.
> A receiver must continue to (re)send the override (i.e. rexmit) until it
> is ack’d. It cannot send other flow-control for that stream until the
> override is ack’d.
> Thus:
>   global-flow-control-override: <stream-id> <current-flow-control-offset>,
> <override-flow-control-offset>
> The sender (which receives the override) credits the global flow control
> with the amount of data it has sent beyond the receiver's
> currently-known flow-control offset, upon receipt of the override.
> This synchronizes the global state between the receiver and the sender.
> The sender can then send the data on the stream (without touching any
> other flow control data).
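As a rough illustration of the mechanism described above, the sender-side bookkeeping on receipt of an override might look like this sketch (Python; the `Sender` class and its field names are illustrative assumptions, not from any draft):

```python
# Hypothetical sketch: on receiving a global-flow-control-override for a
# stream, the sender refunds to the connection-level window any bytes it
# already sent on that stream beyond the receiver's last-known offset,
# which re-synchronizes global state on both ends. The stream may then
# advance to the override offset without touching the global window.

class Sender:
    def __init__(self, global_credit):
        self.global_credit = global_credit  # connection-level credit left
        self.sent_offset = {}               # stream_id -> bytes sent so far

    def on_override(self, stream_id, receiver_known_offset, override_offset):
        sent = self.sent_offset.get(stream_id, 0)
        # Bytes the receiver will now account for outside the global window.
        refund = max(0, sent - receiver_known_offset)
        self.global_credit += refund
        # Returns the new per-stream limit that applies outside the
        # global window.
        return override_offset
```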
> Why:
> This allows a receiver to resolve priority inversions which would
> otherwise lead to deadlock, even when the data dependency leading to the
> inversion was not known to the transport. This extends the fix beyond
> just header compression.
> Since the global flow-control exists to protect the app from resource
> exhaustion, this poses no additional risk to the application.
> Simply increasing the global flow-control limit provides weaker
> guarantees: any stream might consume the extra credit, which doesn't
> resolve the dependency inversion. Rejiggering priorities can help to
> resolve this, but would require the sender to send priorities to the
> client, which is problematic w.r.t. races and just a web of ick.
> Having a custom frame type is also a less strong guarantee as it requires
> the knowledge that the dep exists to be present at the time of sending,
> which is often impossible.
> -=R
> On 1/9/18, 10:17 PM, "QUIC on behalf of Martin Thomson" <
> on behalf of> wrote:
>     Building a complex application protocol on top of QUIC continues to
>     produce surprises.
>     Today in the header compression design team meeting we discussed a
>     deadlocking issue that I think warrants sharing with the larger group.
>     This has implications for how people build a QUIC transport layer.  It
>     might need changes to the API that is exposed by that layer.
>     This isn't really that new, but I don't think we've properly addressed
>     the problem.
>     ## The Basic Problem
>     If a protocol creates a dependency between streams, there is a
>     potential for flow control to deadlock.
>     Say that I send X on stream 3 and Y on stream 7.  Processing Y
>     requires that X is processed first.
>     X cannot be sent due to flow control but Y is sent.  This is always
>     possible even if X is appropriately prioritized.  The receiver then
>     leaves Y in its receive buffer until X is received.
>     The receiver cannot give flow control credit for consuming Y because
>     it can't consume Y until X is sent.  But the sender needs flow control
>     credit to send X.  We are deadlocked.
>     It doesn't matter whether stream or connection flow control is
>     causing the problem; either produces the same result.
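The deadlock above can be made concrete with a tiny sketch (Python; all sizes, stream numbers, and variable names are illustrative):

```python
# Hypothetical sketch of the deadlock: Y (stream 7) sits in the receive
# buffer awaiting X (stream 3), X is blocked on connection flow control,
# and no credit can be released because Y cannot be consumed.

conn_window = 0            # connection-level credit remaining at the sender
recv_buffer = {7: 100}     # Y already occupies the receiver's buffer
x_size = 50                # X still needs 50 bytes of credit to be sent

def can_send_x():
    # The sender needs flow-control credit to send X.
    return conn_window >= x_size

def can_release_credit():
    # The receiver only releases credit by consuming Y,
    # and it cannot consume Y until X arrives.
    return 7 not in recv_buffer

# Neither side can make progress: a flow-control deadlock.
deadlocked = not can_send_x() and not can_release_credit()
```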
>     (To give some background on this, we were considering a preface to
>     header blocks that identified the header table state that was
>     necessary to process the header block.  This would allow for
>     concurrent population of the header table and the sending of messages
>     that depend on header table state that is still under construction.  A
>     receiver would read the identifier and then leave the remainder of the
>     header block in the receive buffer until the header table was ready.)
>     ## Options
>     It seems like there are a few decent options for managing this.  These
>     are what occurred to me (there are almost certainly more options):
>     1. Don't do that.  We might concede in this case that seeking the
>     incremental improvement to compression efficiency isn't worth the
>     risk.  That is, we might make a general statement that this sort of
>     inter-stream blocking is a bad idea.
>     2. Force receivers to consume data or reset streams in the case of
>     unfulfilled dependencies.  The former seems like it might be too much
>     like magical thinking, in the sense that it requires that receivers
>     conjure more memory up, but if the receiver were required to read Y
>     and release the flow control credit, then all would be fine.  For
>     instance, we could require that the receiver reset a stream if it
>     couldn't read and handle data.  It seems like a bad arrangement
>     though: you either have to allocate more memory than you would like or
>     suffer the time and opportunity cost of having to do Y over.
>     3. Create an exception for flow control.  This is what Google QUIC
>     does for its headers stream.  Roberto observed that we could
>     alternatively create a frame type that was excluded from flow control.
>     If this were used for data that had dependencies, then it would be
>     impossible to deadlock.  It would be similarly difficult to account
>     for memory allocation, though if it were possible to process on
>     receipt, then this *might* work.  We'd have to do something to address
>     out-of-order delivery though.  It's possible that the stream
>     abstraction is not appropriate in this case.
>     4. Block the problem at the source.  It was suggested that in cases
>     where there is a potential dependency, then it can't be a problem if
>     the transport refused to accept data that it didn't have flow control
>     credit for.  Writes to the transport would consume flow control credit
>     immediately.  That way applications would only be able to write X if
>     there was a chance that it would be delivered.  Applications that have
>     ordering requirements can ensure that Y is written after X is accepted
>     by the transport and thereby avoid the deadlock.  Writes might block
>     rather than fail, if the API wasn't into the whole non-blocking I/O
>     thing.  The transport might still have to buffer X for other reasons,
>     like congestion control, but it can guarantee that flow control isn't
>     going to block delivery.
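Option 4 above can be sketched as a transport API that consumes flow-control credit at write time (Python; a non-blocking variant with illustrative names, not a real QUIC API):

```python
# Hypothetical sketch of option 4: the transport refuses writes it lacks
# flow-control credit for, so the application learns immediately whether
# X can ever be delivered and only writes Y after X has been accepted.

class Transport:
    def __init__(self, credit):
        self.credit = credit     # flow-control credit available for writes
        self.buffered = []       # accepted data awaiting e.g. congestion cwnd

    def write(self, stream_id, data):
        """Consume flow-control credit at write time; refuse otherwise."""
        if len(data) > self.credit:
            return False         # caller must wait for more credit
        self.credit -= len(data)
        # Data may still be buffered for congestion control, but flow
        # control is now guaranteed not to block its delivery.
        self.buffered.append((stream_id, data))
        return True

t = Transport(credit=10)
ok_x = t.write(3, b"XXXXX")   # accepted: credit consumed up front
ok_y = t.write(7, b"Y" * 20)  # refused: would exceed remaining credit
```

With this shape, an application with an X-before-Y ordering requirement simply waits for `write(3, ...)` to succeed before writing Y, and the deadlock cannot arise.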
>     ## My Preference
>     Right now, I'm inclined toward option 4. Option 1 seems a little too
>     much of a constraint.  Protocols create this sort of inter-dependency
>     naturally.
>     There's a certain purity in having the flow control exert back
>     pressure all the way to the next layer up.  Not being able to build a
>     transport with unconstrained writes is potentially creating
>     undesirable externalities on transport users.  Now they have to worry
>     about flow control as well.  Personally, I'm inclined to say that this
>     is something that application protocols and their users should be
>     exposed to.  We've seen with the JS streams API that it's valuable to
>     have back pressure available at the application layer and also how it
>     is possible to do that relatively elegantly.
>     I'm almost certain that I haven't thought about all the potential
>     alternatives.  I wonder if there isn't some experience with this
>     problem in SCTP that might lend some insights.