Re: flow control and DATAGRAM

Ian Swett <> Wed, 31 October 2018 16:01 UTC

From: Ian Swett <>
Date: Wed, 31 Oct 2018 12:00:53 -0400
Subject: Re: flow control and DATAGRAM
To: Jana Iyengar <>
Cc: "Lubashev, Igor" <>, Tommy Pauly <>, IETF QUIC WG <>, Martin Thomson <>

If you add flow control, there's little benefit over a unidirectional
stream, so adding flow control fundamentally breaks this design.

My goal is to use this for cases when I want to use a single QUIC
connection to replace DTLS or RTP + something reliable.  WebRTC is the
obvious use case, but not the only one.  DTLS and RTP don't have flow
control, so adding it just makes the application mapping a lot more complex.

On Tue, Oct 30, 2018 at 12:23 AM Jana Iyengar <> wrote:

> Igor,
> I had thought that this was similar to canceling streams, but you're
> right. Canceling a stream gets rid of stream flow control state entirely,
> whereas in this case, the receiver cannot discard flow control state.
> I don't have any great ideas right now, but I fear that not having flow
> control simply punts the problem down the road. FWIW, this may be quite
> fine for this extension, and we can surely evolve it to have flow control
> when we have a way to do it.
> - jana
> On Mon, Oct 29, 2018 at 8:54 PM Lubashev, Igor <>
> wrote:
>> Jana,
>> > The simplest scheme would be to reserve a stream ID for DATAGRAM data
>> -- [...]
>> > -- and then MAX_STREAM_DATA uses this
>> > stream ID for flow control of DATAGRAM octets.
>> There are many dragons that way. Flow control indicates commitment to
>> buffer data. When the sender does not retransmit, when does the receiver give up
>> its commitment to wait for (and buffer) earlier data so it can advance
>> MAX_STREAM_DATA to allow the sender to send more data? Can the signal from the
>> sender tolerate loss/reordering the way STREAM frames can (assuming a datagram
>> can be split across multiple QUIC packets)? These dragons can be
>> conquered (see my partially reliable streams draft, v2 or v3), but this is
>> not simple. :(
>> - Igor
>> -----Original Message-----
>> *From:* Jana Iyengar []
>> *Received:* Monday, 29 Oct 2018, 10:24PM
>> *To:* []
>> *CC:* Ian Swett []; QUIC WG []; Martin
>> Thomson []
>> *Subject:* Re: flow control and DATAGRAM
>> Hey Tommy,
>>> I'm not suggesting any changes to when the packet gets
>>> ack'ed—specifically, I was responding to Martin's hypothetical of having an
>>> ACK to a DATAGRAM frame meaning that the application had processed the
>>> frame. My impression is that the ACK to a packet containing a DATAGRAM
>>> frame means:
>>> - The packet made it across the network to the QUIC endpoint on the
>>> other side
>> ... and was processed by the QUIC receiver (not necessarily by the
>> application). This is the same semantic as the ack of a regular frame.
>>> - The QUIC implementation will deliver the DATAGRAM frame to application
>>> (it won't drop it locally)
>> In which case you will need flow control. If you agree, then we're on the
>> same page so far.
>>> Having a separate outstanding data limit for the DATAGRAM "stream" is an
>>> interesting solution to the space. It would then have the nice property of
>>> not looking like traditional flow control. It could even be measured in
>>> number of frames, rather than bytes (depending on what the limiting factors
>>> are).
>> In terms of flow control, I don't think DATAGRAM flow control is any
>> different than sending stream data on any stream and then canceling it --
>> there are no retransmissions, but the sender still accounts for it in flow
>> control. My thought here was basically to have exactly the same flow
>> control as stream-level flow control, and allow for DATAGRAM bytes to fall
>> within connection-level flow control as normal.
>> The simplest scheme would be to reserve a stream ID for DATAGRAM data --
>> this could be 0 or 2^62-1 -- and then MAX_STREAM_DATA uses this stream ID
>> for flow control of DATAGRAM octets. Alternatively, define a new frame
>> called MAX_DATAGRAM_DATA that carries the largest allowed offset that
>> can be sent as a DATAGRAM, and introduce an offset field in the DATAGRAM
>> frame.
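A rough Python sketch of the MAX_DATAGRAM_DATA alternative described above, on the sender side. The class and field names are illustrative only; no draft defines this frame, and this is a sketch of the idea rather than an implementation:

```python
# Sender-side accounting for a hypothetical MAX_DATAGRAM_DATA scheme:
# each DATAGRAM frame carries a running byte offset, and the sender may
# only send while the frame's end offset stays within the advertised limit.

class DatagramSender:
    def __init__(self, max_datagram_data):
        self.max_datagram_data = max_datagram_data  # largest allowed offset
        self.next_offset = 0                        # running DATAGRAM byte offset

    def try_send(self, payload):
        """Return an (offset, payload) frame, or None if flow-control blocked."""
        end = self.next_offset + len(payload)
        if end > self.max_datagram_data:
            return None  # would exceed the advertised limit; sender is blocked
        frame = (self.next_offset, payload)
        self.next_offset = end
        return frame

    def on_max_datagram_data(self, new_limit):
        # Like MAX_STREAM_DATA, the limit only ever moves forward.
        self.max_datagram_data = max(self.max_datagram_data, new_limit)
```

Reserving a stream ID instead would behave the same way, with MAX_STREAM_DATA for that ID playing the role of `on_max_datagram_data`.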
>> - jana
>>> Thanks,
>>> Tommy
>>> On Oct 29, 2018, at 12:37 PM, Jana Iyengar <> wrote:
>>> Tommy,
>>> Changing the semantics of an acknowledgment to include delivery up to
>>> the application is a fundamental change to the QUIC machinery, and it
>>> doesn't work. First, an ACK frame acknowledges packets, and you can't have
>>> different semantics of an acknowledgment for different frames that are
>>> carried in the same packet. Second, it interferes with RTT measurement, and
>>> it conflates flow control with congestion control, which gets messy. (This
>>> conflation is an interesting problem to consider theoretically, but not one
>>> for us at this time IMO.)
>>> I am wondering if applying a stream-level flow control for DATAGRAMs
>>> makes sense instead. Meaning that you treat DATAGRAMs as a separate stream
>>> for flow control purposes. You might benefit from having an offset in the
>>> DATAGRAM frame for this purpose.
>>> - jana
>>> On Mon, Oct 29, 2018 at 8:21 AM Tommy Pauly <> wrote:
>>>> Hi Martin, Ian,
>>>> Yes, very good points!
>>>> My tendency would be to prefer what Ian's implementation does of
>>>> passing these DATAGRAM frames up immediately to the application. I don't
>>>> think that the acknowledgment needs to indicate that the frame was
>>>> processed by the application, but merely that it has been delivered to the
>>>> application (that is, the application doesn't get to do anything with the
>>>> frame that can influence the acknowledgment).
>>>> The current draft indicates that the content of the DATAGRAM frames
>>>> contributes to the limit used for MAX_DATA, and that if that amount is
>>>> reached, the frames are blocked along with STREAM data. I think this works
>>>> fine for the sender, while the receiver gets into the discussion you
>>>> present. On the sender side, reaching MAX_DATA could mean dropping the
>>>> DATAGRAM frames when unable to send more (and sending BLOCKED instead).
>>>> Since the frames are unreliable, they can be dropped in this situation
>>>> without violating the API contract.
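The sender-side behaviour described above might look roughly like this in Python. This is a hedged sketch, not draft text; the names are invented for illustration:

```python
# Sketch: DATAGRAM bytes count against the connection-level MAX_DATA credit,
# and a datagram that would exceed it is dropped (with a BLOCKED signal)
# rather than queued -- which the unreliable API contract permits.

class ConnectionSender:
    def __init__(self, max_data):
        self.max_data = max_data   # connection-level flow-control limit
        self.bytes_sent = 0        # STREAM + DATAGRAM bytes charged so far
        self.blocked_signals = 0   # BLOCKED frames we would have sent

    def send_datagram(self, payload):
        """Send if credit remains; otherwise drop the datagram and note BLOCKED."""
        if self.bytes_sent + len(payload) > self.max_data:
            self.blocked_signals += 1  # tell the peer we are flow-control blocked
            return False               # dropped locally, never queued
        self.bytes_sent += len(payload)
        return True
```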
>>>> On the receiver side, I agree that queuing the DATAGRAM frames to let
>>>> the application drive flow control in the way it does for STREAM frames
>>>> adds complexity and diminishes the utility of the frame and ACKs. However,
>>>> I can imagine taking a fairly simplistic approach in which the data limit
>>>> is automatically increased upon reception of the frame (and the frame is
>>>> immediately passed to the application). This allows the initial_max_data to
>>>> put a cap on the amount of data in a given flight of DATAGRAMs, and allows
>>>> the size of a flight of DATAGRAM frames to be limited by the amount of room
>>>> left over from STREAM data that may be consuming the connection-wide flow
>>>> control.
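Tommy's "auto-credit" idea can be sketched as follows. This is an assumption-laden illustration (invented names, offset-based accounting borrowed from Jana's suggestion), not text from any draft:

```python
# Sketch: initial_max_data caps how many DATAGRAM bytes can be in flight at
# once, and the receiver extends the limit by each frame's size as soon as
# it delivers the frame to the application (frames are never queued).

class AutoCreditReceiver:
    def __init__(self, initial_max_data):
        self.max_data = initial_max_data  # advertised limit (largest allowed offset)
        self.delivered = []               # frames passed straight to the application

    def on_datagram(self, offset, payload):
        end = offset + len(payload)
        if end > self.max_data:
            return False  # sender violated flow control; an error in practice
        self.delivered.append(payload)  # delivered immediately
        self.max_data += len(payload)   # auto-credit: window slides by frame size
        return True
```

With this scheme the window never admits more than `initial_max_data` bytes beyond what has already been delivered, which is the cap on a flight of DATAGRAMs described above.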
>>>> Perhaps this approach needs a clearer name other than "flow control",
>>>> since it has a somewhat different meaning in effect.
>>>> As for ACKs, if we never discard on the receiver side, the ACK is
>>>> pretty useful for detecting if there was network-based packet loss.
>>>> Thanks,
>>>> Tommy
>>>> On Oct 29, 2018, at 5:32 AM, Ian Swett <> wrote:
>>>> Good catch Martin, I missed that in the draft as well, and I also think
>>>> it's impossible with the proposed design.
>>>> And yes, I think Martin's proposed solution is likely the only
>>>> practical one.  In my implementation, the frame is passed up to the
>>>> application immediately, so technically QUIC processed it, and it's the
>>>> application's job to decide what to do with it.
>>>> On Mon, Oct 29, 2018 at 1:16 AM Martin Thomson <
>>>>> wrote:
>>>>> Hi Tommy,
>>>>> Your slides say that DATAGRAM frames respect connection-level flow control.  I
>>>>> missed that in the draft, and I don't know how they can do that in the
>>>>> face of packet loss, especially when you don't necessarily retransmit
>>>>> lost DATAGRAM frames.
>>>>> For that to work, you would need a bunch more machinery to make the
>>>>> connection-level flow control sync between endpoints in the case that
>>>>> packets are lost.  A disagreement about how much flow control is used
>>>>> causes things to break down badly.  Ian and I discussed this point at
>>>>> the last meeting and quickly agreed that while it might be nice to
>>>>> have flow control for this stuff, the increase in complexity is
>>>>> considerable and (at the time) we thought it wouldn't be worth it.
>>>>> The problem that this introduces is that you could end up having too many
>>>>> DATAGRAM frames arrive.  The receiver has to drop something at the
>>>>> point that it can't handle them.  And we say that when you acknowledge
>>>>> something, you processed it.  That's tricky.
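A toy illustration of the desynchronisation Martin describes: the sender charges every DATAGRAM against flow control at send time, but a lost frame is never retransmitted, so the receiver's count of consumed bytes permanently lags the sender's. Purely illustrative; no draft specifies this behaviour:

```python
# Simulate sending DATAGRAM frames over a lossy path with no retransmission.
# Any window the receiver derives from its own tally will disagree with the
# sender's accounting by the total size of the lost frames, forever.

def simulate(frame_sizes, lost_indices):
    sender_consumed = 0    # bytes the sender has charged against flow control
    receiver_consumed = 0  # bytes the receiver has actually seen
    for i, size in enumerate(frame_sizes):
        sender_consumed += size        # charged at send time, loss or not
        if i not in lost_indices:
            receiver_consumed += size  # only delivered frames are counted
    return sender_consumed, receiver_consumed
```

For example, one lost 500-byte datagram out of three frames leaves the two tallies 500 bytes apart, so extra machinery would be needed to re-synchronise them.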
>>>>> It might be easier to say that a QUIC acknowledgment for a DATAGRAM
>>>>> frame doesn't mean that it was received and processed by an
>>>>> application.  An endpoint might discard these frames before passing
>>>>> them on to applications if it doesn't have space.  In other words,
>>>>> acknowledgment of DATAGRAM means that QUIC got it, not that the
>>>>> application got it.  Sadly, that means that the QUIC acknowledgment
>>>>> machinery doesn't help the application that uses DATAGRAM all that
>>>>> much.  Also, the lower bound on reliability is 0, which isn't the best
>>>>> thing ever.
>>>>> Hard choices, I know.  I don't have a good design for maintaining
>>>>> connection-level flow control (or any back pressure mechanism with
>>>>> equivalent properties) that doesn't add both complexity and overhead.
>>>>> Cheers,
>>>>> Martin