Re: Deadlocking in the transport

Ted Hardie <ted.ietf@gmail.com> Wed, 10 January 2018 18:41 UTC

From: Ted Hardie <ted.ietf@gmail.com>
Date: Wed, 10 Jan 2018 10:40:48 -0800
Subject: Re: Deadlocking in the transport
To: Martin Thomson <martin.thomson@gmail.com>, IETF QUIC WG <quic@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/NBOjopKoMJM3OiHgOU-a2A_DcsY>

Howdy,

I think there are two cases being intermingled here, and it would be
useful to pull them back apart.  Comments to that end are in-line.

On Tue, Jan 9, 2018 at 10:17 PM, Martin Thomson <martin.thomson@gmail.com>
wrote:

> Building a complex application protocol on top of QUIC continues to
> produce surprises.
>
> Today in the header compression design team meeting we discussed a
> deadlocking issue that I think warrants sharing with the larger group.
> This has implications for how people build a QUIC transport layer.  It
> might need changes to the API that is exposed by that layer.
>
> This isn't really that new, but I don't think we've properly addressed
> the problem.
>
>
> ## The Basic Problem
>
> If a protocol creates a dependency between streams, there is a
> potential for flow control to deadlock.
>
> Say that I send X on stream 3 and Y on stream 7.  Processing Y
> requires that X is processed first.
>
> X cannot be sent due to flow control but Y is sent.  This is always
> possible even if X is appropriately prioritized.  The receiver then
> leaves Y in its receive buffer until X is received.
>
> The receiver cannot give flow control credit for consuming Y because
> it can't consume Y until X is sent.  But the sender needs flow control
> credit to send X.  We are deadlocked.
>
>
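To make the failure state concrete, here's a toy model of that exchange
(all names are invented for illustration; this is not any real QUIC API):

    class Receiver:
        def __init__(self, credit):
            self.credit = credit     # connection credit already granted
            self.received = set()    # streams whose data has been consumed
            self.parked = {}         # stream -> data waiting on a dependency

        def on_data(self, stream, data, depends_on=None):
            self.credit -= len(data)   # arriving data occupies granted credit
            if depends_on is not None and depends_on not in self.received:
                self.parked[stream] = data   # park Y until X shows up
                return 0                     # no credit released to the sender
            self.received.add(stream)
            return len(data)                 # consumed; credit can be re-granted

    recv = Receiver(credit=100)
    # Y (stream 7, depends on stream 3) arrives first and fills the window:
    assert recv.on_data(7, b"y" * 100, depends_on=3) == 0
    assert recv.credit == 0
    # The sender now has no credit left to send X on stream 3, and the
    # receiver releases none until Y is consumed, which needs X: deadlock.
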
I think there are two cases here.  In case one, the transport itself cannot
consume Y until X is sent, because X is needed to make Y's content
semantically available to the transport.  There, the deadlock you give is a
truly fatal error, and building a transport that allowed it to occur would
be a very bad notion.

The second case is one in which the protocol using the transport cannot
access Y's semantic content until X is received.  That protocol has three
potential strategies.  One is not to permit the dependency to occur at all;
as you note, though, lots of applications have inherent dependencies, and
common optimizations (e.g. FEC) introduce them as well, so requiring this
strategy limits the scope of what can use QUIC.

Two is for the using application/protocol to set up and manage its own
buffers, so that it can take Y out of the receive buffer and wait for X to
arrive without blocking the transport.  While that might be magical
thinking for some protocols using QUIC, it's not clear that it always would
be, as an application protocol built with QUIC in mind as its transport
could do it.  There are video cases I've worked on in the past where this
buffer-above-the-network-layer approach was the default, because multiple
networks were being used simultaneously to build the video playout buffer.
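
Roughly, that strategy has this shape (a sketch with hypothetical names,
not any real API):

    class App:
        def __init__(self, process):
            self.process = process   # actual application handling
            self.satisfied = set()   # dependencies that have been met
            self.waiting = []        # (needed_dep, data) held in app memory

        def on_stream_data(self, data, needs=None):
            # Always accept from the transport, so flow control credit is
            # released even when the data can't be processed yet.
            if needs is not None and needs not in self.satisfied:
                self.waiting.append((needs, data))
                return
            self.process(data)

        def on_dependency_met(self, dep):
            self.satisfied.add(dep)
            pending, self.waiting = self.waiting, []
            for needs, data in pending:
                if needs in self.satisfied:
                    self.process(data)
                else:
                    self.waiting.append((needs, data))

The cost is that the application's own buffer is unbounded unless it
imposes a limit of its own, which is the same memory concern that comes up
under option 2 below.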

Three is to manage the writes to the network so the deadlock is unlikely to
occur (your option 4, or something that manages priorities with sufficient
strictness that it guarantees X is always available when Y is).

For the case you're describing, I think strategy three/option 4 is right.
But I don't think we ought to rule out strategies one and two for all using
protocols.  If you have a protocol that doesn't require these dependencies,
or can easily avoid splitting them across streams, you get strategy one for
cheap.  If you have an application/protocol built with the idea of the
application managing its own buffers to resolve these dependencies,
strategy two is practical rather than magical.

Just my two cents,

Ted

> It doesn't matter whether the stream or connection flow control is
> causing the problem, either produces the same result.
>
> (To give some background on this, we were considering a preface to
> header blocks that identified the header table state that was
> necessary to process the header block.  This would allow for
> concurrent population of the header table and sending messages that
> depend on the header table state that is under construction.  A
> receiver would read the identifier and then leave the remainder of the
> header block in the receive buffer until the header table was ready.)
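
A sketch of the receiver side of that preface idea (the framing and names
are invented; a one-byte preface stands in for a real encoding):

    class HeaderTable:
        def __init__(self):
            self.insert_count = 0
            self.waiters = []   # (required_insert_count, callback)

        def insert(self, entry):
            self.insert_count += 1   # entry handling elided
            due = [cb for req, cb in self.waiters if req <= self.insert_count]
            self.waiters = [(r, cb) for r, cb in self.waiters
                            if r > self.insert_count]
            for cb in due:
                cb()

    def on_header_block(block, table, deliver):
        required, rest = block[0], block[1:]   # preface: needed table state
        if table.insert_count < required:
            # Parking the block in the *transport* receive buffer means no
            # credit is released, which is exactly the deadlock risk above.
            table.waiters.append((required, lambda: deliver(rest)))
            return
        deliver(rest)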
>
>
> ## Options
>
> It seems like there are a few decent options for managing this.  These
> are what occurred to me (there are almost certainly more options):
>
> 1. Don't do that.  We might concede in this case that seeking the
> incremental improvement to compression efficiency isn't worth the
> risk.  That is, we might make a general statement that this sort of
> inter-stream blocking is a bad idea.
>
> 2. Force receivers to consume data or reset streams in the case of
> unfulfilled dependencies.  The former seems like it might be too much
> like magical thinking, in the sense that it requires that receivers
> conjure more memory up, but if the receiver were required to read Y
> and release the flow control credit, then all would be fine.  For
> instance, we could require that the receiver reset a stream if it
> couldn't read and handle data.  It seems like a bad arrangement
> though: you either have to allocate more memory than you would like or
> suffer the time and opportunity cost of having to do Y over.
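
The two halves of that option could look roughly like this (the names and
error code are hypothetical):

    RESET_DEPENDENCY_UNMET = 0x42   # hypothetical application error code

    def handle_blocked(data_len, app_budget, consume, reset):
        # Either consume Y into extra application memory so the transport
        # can release credit, or reset the stream and redo Y later.
        if app_budget >= data_len:
            consume()                         # costs memory now
        else:
            reset(RESET_DEPENDENCY_UNMET)     # costs doing Y over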
>
> 3. Create an exception for flow control.  This is what Google QUIC
> does for its headers stream.  Roberto observed that we could
> alternatively create a frame type that was excluded from flow control.
> If this were used for data that had dependencies, then it would be
> impossible to deadlock.  It would be similarly difficult to account
> for memory allocation, though if it were possible to process on
> receipt, then this *might* work.  We'd have to do something to address
> out-of-order delivery though.  It's possible that the stream
> abstraction is not appropriate in this case.
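
A sketch of how an exempt frame type might interact with the accounting
(the frame type and names are invented):

    class Blocked(Exception):
        pass

    EXEMPT = {"UNFLOWED"}   # hypothetical frame type outside flow control

    def charge(frame_type, length, window):
        if frame_type in EXEMPT:
            # No credit consumed, so no deadlock is possible, but the
            # memory this data occupies is now unaccounted for.
            return window
        if length > window:
            raise Blocked()   # must wait for a window update
        return window - length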
>
> 4. Block the problem at the source.  It was suggested that in cases
> where there is a potential dependency, then it can't be a problem if
> the transport refused to accept data that it didn't have flow control
> credit for.  Writes to the transport would consume flow control credit
> immediately.  That way applications would only be able to write X if
> there was a chance that it would be delivered.  Applications that have
> ordering requirements can ensure that Y is written after X is accepted
> by the transport and thereby avoid the deadlock.  Writes might block
> rather than fail, if the API wasn't into the whole non-blocking I/O
> thing.  The transport might still have to buffer X for other reasons,
> like congestion control, but it can guarantee that flow control isn't
> going to block delivery.
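
A sketch of the write path this option implies (hypothetical API; a real
transport would also segment data and track per-stream credit):

    import threading

    class Sender:
        def __init__(self, credit):
            self.credit = credit
            self.cv = threading.Condition()

        def write(self, data):
            # Accept the write only once credit covers all of it; back
            # pressure reaches the application here, as a blocked call.
            with self.cv:
                while self.credit < len(data):
                    self.cv.wait()
                self.credit -= len(data)
            self.queue_for_wire(data)   # still subject to congestion control

        def on_max_data(self, increment):
            with self.cv:
                self.credit += increment
                self.cv.notify_all()

        def queue_for_wire(self, data):
            pass   # hand off for transmission

With this shape, an application that writes X and lets write() return
before writing Y can never end up with Y accepted while X is unfundable.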
>
>
> ## My Preference
>
> Right now, I'm inclined toward option 4. Option 1 seems a little too
> much of a constraint.  Protocols create this sort of inter-dependency
> naturally.
>
> There's a certain purity in having the flow control exert back
> pressure all the way to the next layer up.  Not being able to build a
> transport with unconstrained writes is potentially creating
> undesirable externalities on transport users.  Now they have to worry
> about flow control as well.  Personally, I'm inclined to say that this
> is something that application protocols and their users should be
> exposed to.  We've seen with the JS streams API that it's valuable to
> have back pressure available at the application layer and also how it
> is possible to do that relatively elegantly.
>
> I'm almost certain that I haven't thought about all the potential
> alternatives.  I wonder if there isn't some experience with this
> problem in SCTP that might lend some insights.
>
>