Re: Deadlocking in the transport

Martin Thomson <> Wed, 10 January 2018 22:00 UTC

From: Martin Thomson <>
Date: Thu, 11 Jan 2018 09:00:35 +1100
Subject: Re: Deadlocking in the transport
To: Ted Hardie <>
List-Id: Main mailing list of the IETF QUIC working group <>

On Thu, Jan 11, 2018 at 5:40 AM, Ted Hardie <> wrote:
> The second case is one in which the protocol using the transport cannot
> access Y's semantic content until X is received.  In that case, it has three
> potential strategies.  One is not to permit the dependency to occur at all;
> as you note, lots of applications have inherent dependencies or common
> optimizations (e.g. FEC) that use this strategy.  This turns into a limit in
> the scope of what can use QUIC if this is the required strategy.
> Two is for the using application/protocol to set up and manage its own
> buffers so that it can take Y out of the receive buffer and wait for X to
> arrive without blocking the transport.  While that might be magical thinking
> for some using protocols of QUIC, it's not clear that it always would be, as
> an application protocol built with QUIC as a transport in mind could do it.
> There are video cases I've worked on in the past where this
> buffer-at-layer-above-the-network was the default, because multiple networks
> were simultaneously being used to build the video playout buffer.
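
[Strategy two, the application draining data it cannot yet process so that the transport keeps its window open, can be sketched with a toy model. This is illustration only; the names and numbers below are invented and are not part of QUIC or any implementation.]

```python
class StreamBuffer:
    """Toy model of a flow-controlled stream receive buffer.

    The sender may only transmit while credit remains; reading data out
    of the transport buffer hands credit back (much as a QUIC receiver
    would by extending the window with MAX_STREAM_DATA).
    """

    def __init__(self, window):
        self.window = window    # receiver's flow-control window
        self.buffered = 0       # bytes received but not yet read by the app

    @property
    def credit(self):
        return self.window - self.buffered

    def deliver(self, n):
        """Peer sends n bytes; fails if the window is exhausted."""
        if n > self.credit:
            raise RuntimeError("sender blocked: no flow-control credit")
        self.buffered += n

    def read(self, n):
        """Application drains up to n bytes into its own memory."""
        n = min(n, self.buffered)
        self.buffered -= n
        return n


# Y (100 bytes) arrives first and fills the whole window; nothing more can
# follow.  If the application refuses to read Y until X arrives, that is
# the deadlock under discussion.
stream = StreamBuffer(window=100)
stream.deliver(100)
assert stream.credit == 0

# Strategy two: drain Y into an application buffer even though it cannot be
# processed yet; that releases credit, so X (10 bytes) can still arrive.
drained = stream.read(100)
stream.deliver(10)
```

[The point of the sketch is that the memory for Y does not disappear; it moves above the transport, which is exactly the commitment the using protocol has to be able to afford.]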

Yes, there will be cases where the content can be read AND where that
content won't impose a resource penalty on the receiver, so that the
receiver is still able to release flow control credit for that read.
Those applications can likely manage.  As you say, a video application
has to maintain memory for decoding N frames, and it could take data
from frame X+1 and run it partway through the decoder pipeline, up to
the point that X arrives.  That's just an exception to the general
rule, though.

> For the case you're describing, I think strategy 3/option 4 is right.  But I
> don't think we ought to rule out strategies one and two for all using
> protocols.  If you have a protocol that doesn't require, or can easily
> avoid, these dependencies being split across streams, you get option one
> for cheap.  If you have an application/protocol that's built with the idea
> of the application managing its own buffers for resolving these
> dependencies, two is practical rather than magical.

Sure.  I'd expect to have that protocol opt in to that sort of
exceptional handling, because having it as a default is hazardous.
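
[A minimal sketch of what such opt-in handling might look like.  The API here is hypothetical, invented purely for illustration; QUIC and its implementations define no such names.]

```python
class Receiver:
    """Sketch of per-stream opt-in for exceptional buffering.

    By default, data that cannot be processed yet stays in the transport,
    so flow control pushes back on the sender.  A protocol that knows it
    can afford the memory opts in, and such data is moved into an
    application-owned buffer instead, keeping the transport window open.
    """

    def __init__(self, drain_out_of_order=False):
        self.drain_out_of_order = drain_out_of_order
        self.parked = []    # application-owned buffer, used only if opted in

    def on_data(self, chunk, processable):
        if processable:
            return "processed"
        if not self.drain_out_of_order:
            # Default: safe for memory, but risks deadlock if the data
            # that would unblock us is stuck behind the closed window.
            return "left in transport"
        self.parked.append(chunk)
        return "parked"


default_rx = Receiver()
opted_rx = Receiver(drain_out_of_order=True)
default_rx.on_data(b"Y", processable=False)   # stays in the transport
opted_rx.on_data(b"Y", processable=False)     # moved above the transport
```

[Making the draining behaviour something a protocol must explicitly request keeps the safe, memory-bounded behaviour as the default.]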