Re: [quicwg/base-drafts] H3 GOAWAY should be symmetric and cover bidi and uni streams (#2632)

Martin Thomson <> Tue, 22 October 2019 01:24 UTC


The semantics of an identifier in the server-to-client GOAWAY are twofold.  The identifier clearly identifies requests that were not "processed", which allows the client to retry those requests.  The identifier also identifies which of those requests can be cleaned up expeditiously.  My view is that the former is significantly more important.  Enabling clean retry is one of the best features of h2.
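The retry semantics can be sketched with a toy helper (names are illustrative, not from any draft): the server-to-client GOAWAY carries a request stream ID, and requests on streams at or above it were definitely not processed, so the client can retry them safely — even non-idempotent ones.

```python
def retryable_requests(in_flight_stream_ids, goaway_id):
    """Requests on streams at or above the GOAWAY ID were not
    "processed" by the server, so the client may retry them safely."""
    return sorted(s for s in in_flight_stream_ids if s >= goaway_id)

# GOAWAY carries stream ID 8: streams 0 and 4 may have been processed,
# streams 8 and 12 were definitely not and can be retried.
print(retryable_requests([0, 4, 8, 12], 8))  # [8, 12]
```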

For pushes, the need to signal whether the data was "processed" is moot, because only safe requests can be pushed.  So most of this comes down to what optimizations are enabled by the explicit Push ID.

In h2, requests after the GOAWAY point are implicitly "cancelled", though any bytes already sent are still retransmitted by TCP as long as the connection is kept alive.  A client isn't required to reset or finalize the associated streams if the requests were incomplete, and the transport will automatically repair lost segments regardless.

In h3, the same applies, though there is an optimization that might be enabled: the transport can be told not to retransmit data for "cancelled" streams if packets are lost.  This doesn't strictly require explicit RESET_STREAM or STOP_SENDING frames; reprioritization might be enough.  Depending on the transport implementation, though, endpoints might need to use RESET_STREAM to ensure that the transport stack knows to suppress retransmissions.
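A minimal sketch of that suppression, against a purely hypothetical transport API (the `Stream` class and method names are invented for illustration; only the frame names and the H3_REQUEST_CANCELLED error code come from the specs):

```python
H3_REQUEST_CANCELLED = 0x010c  # HTTP/3 error code for a cancelled request

class Stream:
    """Hypothetical handle onto one QUIC stream (not a real library)."""
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.reset_sent = False
        self.stop_sending_sent = False

    def reset(self, error_code):
        # RESET_STREAM abandons our sending side; the transport stops
        # retransmitting any lost data on this stream.
        self.reset_sent = True

    def stop_sending(self, error_code):
        # STOP_SENDING asks the peer to stop (re)transmitting toward us.
        self.stop_sending_sent = True

def abandon(stream):
    """Make the transport aware, in both directions, that retransmission
    for this cancelled stream is unnecessary."""
    stream.reset(H3_REQUEST_CANCELLED)
    stream.stop_sending(H3_REQUEST_CANCELLED)
    return stream
```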

From that perspective, an explicit Push ID would enable a new capability: suppressing unnecessary retransmissions.

For example, say that there is an open request that is receiving associated pushes for data that arrives on a fixed cadence, and that the client wants to move to a different connection (because it was told to with Alt-Svc, for instance).  The client might initiate a request starting at a particular time on the new connection.  When it abandons the old connection, there might be a number of pushes in flight.  If the client can precisely identify the push that immediately precedes the request it initiates on the new connection, the server can determine which of those pushes need to be fully delivered, and which can be abandoned.

However, this optimization really only works if the client can make this identification precisely.  If there are multiple requests in flight on the connection, the client might not be able to identify or predict the Push ID that will be used for the next push in the sequence.  The server might avoid this by providing promises well ahead (>1RTT) of the actual pushes so that the client doesn't need to guess, but that might not be possible in the general case.
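When the identification does succeed, the server-side bookkeeping is trivial; a sketch with illustrative names (the explicit Push ID in GOAWAY is the capability under debate, not an existing mechanism):

```python
def partition_pushes(in_flight_push_ids, last_needed_id):
    """Split in-flight pushes around the client's cut point: pushes up to
    and including last_needed_id are delivered, the rest are abandoned."""
    deliver = [p for p in in_flight_push_ids if p <= last_needed_id]
    abandon = [p for p in in_flight_push_ids if p > last_needed_id]
    return deliver, abandon

# Pushes 5..8 in flight; the client moved away after push 6.
print(partition_pushes([5, 6, 7, 8], 6))  # ([5, 6], [7, 8])
```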

This also requires that Push IDs are assigned strictly sequentially with respect to the global clock.  Even if Push IDs monotonically increase within each resource, that doesn't guarantee clean separation across resources.  For instance, if two resources were providing the same sort of pushes, response 0 might promise Push ID 10 for time=44, while response 4 promises Push ID 9 corresponding to time=45.  That makes it difficult for a client to make a clean cut with a single value.
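That interleaving can be made concrete.  A clean cut by Push ID exists only if ordering by Push ID also orders the pushes by time, which fails for exactly the pair of promises in the example:

```python
# Promises as (push_id, time) pairs, taken from the example above.
promises = [(10, 44),  # promised on response 0
            (9, 45)]   # promised on response 4

def clean_cut_exists(promises):
    """A single Push ID threshold separates earlier from later pushes
    iff sorting by Push ID also sorts the promises by time."""
    times_in_id_order = [t for _, t in sorted(promises)]
    return times_in_id_order == sorted(times_in_id_order)

print(clean_cut_exists(promises))  # False: Push ID 9 is for the later time
```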

This also interacts poorly with a model where answering a request depends on receiving a set of pushes.  If a resource that produces a continuous stream of pushes shares a connection with a resource that (ideally) produces a fixed number of pushes (see also Prefer-Push), then the cut for the continuous resource might be clean, but it might split the pushes for the resource with a fixed set of associated pushes.

Of course, all of these arguments could easily be countered by saying that this is just an optimization, but given that it's an optimization we don't have for h2 (because TCP), I think that I'd prefer not to provide the capability.

Finer-grained signals might be valuable, such as giving individual requests separate budgets for push that clients can progressively expand as they are ready for more.  That seems like something worth considering for future versions of this protocol.
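A sketch of what such a budget might look like — purely hypothetical, since no per-request push credit mechanism exists in h3; the class and method names are invented:

```python
class PushBudget:
    """Hypothetical per-request push credit, modeled on flow control:
    the client grants credit as it becomes ready for more pushes."""
    def __init__(self, initial=0):
        self.limit = initial
        self.used = 0

    def grant(self, n):
        # Client signals readiness to accept n more pushes.
        self.limit += n

    def try_push(self):
        # Server side: only push while credit remains.
        if self.used < self.limit:
            self.used += 1
            return True
        return False

b = PushBudget(initial=1)
print(b.try_push())  # True: first push fits the initial budget
print(b.try_push())  # False: budget exhausted until the client grants more
b.grant(2)
print(b.try_push())  # True
```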
