Deadlocking in the transport

Martin Thomson <martin.thomson@gmail.com> Wed, 10 January 2018 06:17 UTC

Building a complex application protocol on top of QUIC continues to
produce surprises.

Today in the header compression design team meeting we discussed a
deadlocking issue that I think warrants sharing with the larger group.
This has implications for how people build a QUIC transport layer.  It
might need changes to the API that is exposed by that layer.

This isn't really that new, but I don't think we've properly addressed
the problem.


## The Basic Problem

If a protocol creates a dependency between streams, there is a
potential for flow control to deadlock.

Say that I send X on stream 3 and Y on stream 7.  Processing Y
requires that X is processed first.

X cannot be sent due to flow control but Y is sent.  This is always
possible even if X is appropriately prioritized.  The receiver then
leaves Y in its receive buffer until X is received.

The receiver cannot give back flow control credit for Y because it
can't consume Y until X arrives.  But the sender needs flow control
credit to send X.  We are deadlocked.

It doesn't matter whether stream-level or connection-level flow
control is causing the problem; either produces the same result.
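
To make that concrete, here is a toy model of the exchange that wedges
itself in exactly this way.  All of the names and numbers are made up;
this isn't any real QUIC API, just the shape of the problem:

```go
package main

import "fmt"

// One flow control credit pool for the connection, a receive buffer
// per stream, and a receiver that refuses to consume Y (stream 7)
// before X (stream 3) has arrived.
type conn struct {
	credit  int            // sender's remaining flow control credit
	recvBuf map[int][]byte // receiver's per-stream buffers
}

// send succeeds only if flow control credit covers the data.
func (c *conn) send(stream int, data []byte) bool {
	if len(data) > c.credit {
		return false // blocked on flow control
	}
	c.credit -= len(data)
	c.recvBuf[stream] = append(c.recvBuf[stream], data...)
	return true
}

// consume returns credit only for data the receiver actually
// processes, and Y can't be processed until X is in the buffer.
func (c *conn) consume(stream int) {
	if stream == 7 && len(c.recvBuf[3]) == 0 {
		return // Y stays parked; no credit goes back
	}
	c.credit += len(c.recvBuf[stream])
	c.recvBuf[stream] = nil
}

func main() {
	c := &conn{credit: 10, recvBuf: map[int][]byte{}}

	c.send(7, []byte("YYYYYYYY"))      // Y goes first and eats the credit
	sentX := c.send(3, []byte("XXXX")) // X no longer fits
	c.consume(7)                       // receiver can't process Y, keeps it

	fmt.Println(sentX, c.credit) // false 2: nobody can make progress
}
```

Once Y has eaten the credit, better prioritization of X doesn't help;
someone has to give something up, which is what the options below are
about.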

(To give some background on this, we were considering a preface to
header blocks that identified the header table state necessary to
process the header block.  This would allow the header table to be
populated concurrently with sending messages that depend on the table
state still under construction.  A receiver would read the identifier
and then leave the remainder of the header block in the receive buffer
until the header table was ready.)
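
For what it's worth, the receive side of that idea is easy to sketch.
Everything below is invented for illustration (the preface is just a
varint count of table inserts); it isn't a proposal for the actual
wire format:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// tryDecode inspects the preface of a header block: the number of
// header table inserts the block depends on.  If the local table
// hasn't caught up, the caller leaves the whole block in the receive
// buffer, and that parked, unconsumed data is what can wedge flow
// control as described above.
func tryDecode(block []byte, tableInserts uint64) (payload []byte, ok bool) {
	required, n := binary.Uvarint(block)
	if n <= 0 || tableInserts < required {
		return nil, false
	}
	return block[n:], true
}

func main() {
	// A toy header block that depends on 5 table inserts.
	block := append(binary.AppendUvarint(nil, 5), []byte("header block")...)

	if _, ok := tryDecode(block, 3); !ok {
		fmt.Println("table at 3 inserts, block needs 5: parked in recv buffer")
	}
	if payload, ok := tryDecode(block, 5); ok {
		fmt.Printf("table caught up: decode %q\n", payload)
	}
}
```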


## Options

It seems like there are a few decent options for managing this.  These
are what occurred to me (there are almost certainly more options):

1. Don't do that.  We might concede in this case that seeking the
incremental improvement to compression efficiency isn't worth the
risk.  That is, we might make a general statement that this sort of
inter-stream blocking is a bad idea.

2. Force receivers to consume data or reset streams in the case of
unfulfilled dependencies.  The former seems like it might be too much
like magical thinking, in the sense that it requires that receivers
conjure up more memory, but if the receiver were required to read Y
and release the flow control credit, then all would be fine.  The
latter would mean requiring that the receiver reset a stream if it
couldn't read and handle its data.  Either way it seems like a bad
arrangement: you either have to allocate more memory than you would
like or suffer the time and opportunity cost of having to do Y over.

3. Create an exception for flow control.  This is what Google QUIC
does for its headers stream.  Roberto observed that we could
alternatively create a frame type that was excluded from flow control.
If this were used for data that had dependencies, then it would be
impossible to deadlock.  It would be similarly difficult to account
for memory allocation, though if it were possible to process on
receipt, then this *might* work.  We'd have to do something to address
out-of-order delivery though.  It's possible that the stream
abstraction is not appropriate in this case.

4. Block the problem at the source.  It was suggested that, in cases
where there is a potential dependency, the problem can't arise if the
transport refuses to accept data that it doesn't have flow control
credit for.  Writes to the transport would consume flow control credit
immediately.  That way applications would only be able to write X if
there was a chance that it would be delivered.  Applications that have
ordering requirements can ensure that Y is written only after X is
accepted by the transport and thereby avoid the deadlock.  Writes
might block rather than fail if the API wasn't into the whole
non-blocking I/O thing.  The transport might still have to buffer X
for other reasons, like congestion control, but it can guarantee that
flow control isn't going to block delivery.
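
To show what I mean, here is a rough sketch of the kind of write API
that option 4 implies, in its blocking form.  The names and structure
are mine and not a proposal for any particular stack:

```go
package main

import (
	"fmt"
	"sync"
)

// transport accepts a write only when it holds flow control credit
// for it, consuming the credit at write time.  Congestion control
// might still delay the bytes, but flow control can no longer prevent
// their delivery once Write returns.
type transport struct {
	mu     sync.Mutex
	cond   *sync.Cond
	credit int
}

func newTransport(credit int) *transport {
	t := &transport{credit: credit}
	t.cond = sync.NewCond(&t.mu)
	return t
}

// Write blocks until enough credit is available, then consumes it.
func (t *transport) Write(p []byte) {
	t.mu.Lock()
	defer t.mu.Unlock()
	for t.credit < len(p) {
		t.cond.Wait()
	}
	t.credit -= len(p)
}

// AddCredit is called when the peer extends the flow control window.
func (t *transport) AddCredit(n int) {
	t.mu.Lock()
	t.credit += n
	t.mu.Unlock()
	t.cond.Broadcast()
}

func main() {
	t := newTransport(4)
	go t.AddCredit(8) // the peer opens the window a little later

	t.Write([]byte("XXXXXXXX")) // blocks until there is credit for X
	t.Write([]byte("YYYY"))     // only issued once X has been accepted
	fmt.Println("X accepted before Y was written; no deadlock possible")
}
```

An application with the X-before-Y dependency then writes X and only
writes Y after the write of X has returned, so flow control can never
strand Y behind an unsendable X.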


## My Preference

Right now, I'm inclined toward option 4. Option 1 seems a little too
much of a constraint.  Protocols create this sort of inter-dependency
naturally.

There's a certain purity in having flow control exert back pressure
all the way to the next layer up.  Not being able to build a transport
with unconstrained writes potentially creates undesirable
externalities for transport users: now they have to worry about flow
control as well.  Personally, I'm inclined to say that this is
something that application protocols and their users should be
exposed to.  We've seen with the JS streams API that it's valuable to
have back pressure available at the application layer, and also that
it's possible to do that relatively elegantly.

I'm almost certain that I haven't thought about all the potential
alternatives.  I wonder if there isn't some experience with this
problem in SCTP that might lend some insights.