Re: QUIC Connection Close Reliability

Ian Swett <ianswett@google.com> Wed, 25 April 2018 13:05 UTC

From: Ian Swett <ianswett@google.com>
Date: Wed, 25 Apr 2018 13:05:15 +0000
Subject: Re: QUIC Connection Close Reliability
To: Martin Thomson <martin.thomson@gmail.com>
Cc: nibanks=40microsoft.com@dmarc.ietf.org, IETF QUIC WG <quic@ietf.org>
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/JZcd31v17N-gqmr_zQ37YftOGb0>

Given that the APP_CLOSE is closing the connection, I don't understand why the
ACK matters. Once the connection is closed, the state is torn down and any
unacknowledged data is unnecessary. I assume Martin's interpretation is
correct and this is a case where A wants to know the data was delivered. If
you're immediately closing on FIN receipt in your application, Martin's
suggestion that the APP_CLOSE implicitly acknowledges the data is the obvious
way to go.
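
For concreteness, here's a rough sketch in Go of what that implicit
acknowledgment could look like on A's side. All type, field, and function
names are invented for illustration; this isn't any particular stack's API.

// Hypothetical handler: treat a received APPLICATION_CLOSE as an implicit
// acknowledgment of every stream whose FIN we sent but never saw acked.
// All names here are invented for illustration, not from any real stack.
package main

import "fmt"

type sentStream struct {
    id       uint64
    finSent  bool
    finAcked bool
}

type connection struct {
    streams map[uint64]*sentStream
}

// onApplicationClose runs when the peer's APPLICATION_CLOSE frame arrives.
// In an application protocol where the peer only closes after consuming our
// streams, the close itself tells us the FINs were delivered.
func (c *connection) onApplicationClose(errorCode uint64) {
    for _, s := range c.streams {
        if s.finSent && !s.finAcked {
            s.finAcked = true // implicit ack via the close
            fmt.Printf("stream %d: FIN treated as delivered\n", s.id)
        }
    }
    fmt.Printf("peer closed the connection (error %d)\n", errorCode)
}

func main() {
    c := &connection{streams: map[uint64]*sentStream{
        4: {id: 4, finSent: true},
    }}
    c.onApplicationClose(0)
}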

However, I would expect most applications to have an idle period between
the last closed stream and when they close the connection, which ensures
the majority of connections are closed without unacknowledged data.
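
A similarly hypothetical sketch of that idle period: hold off on the
APPLICATION_CLOSE until everything sent has been acked or a short linger
timeout expires. The two hooks are made up, and a real stack would drive this
from ACK processing rather than polling.

// Hypothetical "linger before close": wait until all sent data is acked, or
// until a short idle period expires, before emitting APPLICATION_CLOSE.
package main

import (
    "fmt"
    "time"
)

type closer struct {
    allDataAcked func() bool       // does loss recovery still track unacked data?
    sendAppClose func(code uint64) // emit the APPLICATION_CLOSE frame
}

func (c *closer) gracefulClose(code uint64, idle time.Duration) {
    deadline := time.Now().Add(idle)
    for time.Now().Before(deadline) && !c.allDataAcked() {
        time.Sleep(10 * time.Millisecond) // give the last ACKs a chance to arrive
    }
    c.sendAppClose(code)
}

func main() {
    checks := 0
    c := &closer{
        // pretend the ACK for our FIN shows up on the third check
        allDataAcked: func() bool { checks++; return checks >= 3 },
        sendAppClose: func(code uint64) { fmt.Printf("APPLICATION_CLOSE(%d) sent\n", code) },
    }
    c.gracefulClose(0, 200*time.Millisecond)
}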

On Wed, Apr 25, 2018 at 6:14 AM Martin Thomson <martin.thomson@gmail.com>
wrote:

> Hi Nick,
>
> It seems that you are having a problem with getting agreement between
> peers about what the close state is.  In your example, if A cares that
> the stream was completely received by B, then this is not an
> appropriate way to negotiate a close in that application protocol.
>
> You might interpret the receipt of APPLICATION_CLOSE as implicit
> acknowledgment of all streams to avoid this particular problem, or to
> create additional signals.
>
> HTTP over QUIC avoids this largely by driving closure from the client end,
> which simplifies things considerably.
>
> On Tue, Apr 24, 2018 at 12:06 AM, Nick Banks
> <nibanks=40microsoft.com@dmarc.ietf.org> wrote:
> > Hey Folks,
> >
> > I have been trying to work through an issue with reliability related to
> > connection close, and I am trying to understand if this is a protocol
> > issue or if I need to change my implementation.
> >
> > Suppose you have an application protocol on top of QUIC where each side
> > of the connection may use one or more streams and then immediately, but
> > gracefully, close the connection when they are done with all the
> > streams. In that kind of application protocol, one side of the
> > connection generally ends up closing immediately in response to
> > receiving a FIN on a stream. Then you get into a state where the peer
> > won’t necessarily get that FIN acknowledged. Even if the endpoint sends
> > an acknowledgment for the FIN (either as a separate packet or with the
> > close), it’s possible that the ACK is lost. Ideally (I think), the ACK
> > is sent in the same packet as the close, so you can control the atomic
> > state change on the peer. If the ACK was separate and lost, but the
> > close wasn’t lost, then the peer would treat the stream as abortively
> > closed.
> >
> > A -------- STREAM (w/FIN) ------> B
> >
> > A <------- ACK, APP_CLOSE ------- B
> >
> > But now the issue comes if the ACK/APP_CLOSE packet is lost. Per spec,
> > the endpoint (B) that received the FIN and then closed the connection
> > enters the closing period. While in this state, the spec states that
> > the endpoint SHOULD respond to any packet it receives [without a
> > corresponding close frame] with another packet containing a closing
> > frame. If the endpoint does this, and includes the ACK again as well, I
> > believe everything would work. The peer (A) would retransmit the STREAM
> > packet, which would elicit a new ACK/APP_CLOSE packet from (B). My fear
> > though (from current interop experience) is that everyone will try to
> > short-circuit connection closure and just immediately go away. If this
> > behavior isn’t completely standard across all implementations, then the
> > application protocol would end up having different results on different
> > implementations. Some implementations might not send the ACK at all
> > with the initial close, causing the peer (A) to interpret the stream as
> > aborted most of the time. Other times, the endpoint (B) might not
> > continue to send the ACK frame with the close, or just not send
> > anything at all any more, resulting in a timeout on the peer side, also
> > resulting in an abortive closure of the stream. This would probably
> > cause the application protocol to end up creating its own close
> > semantics on top of QUIC.
> >
> > Now, folks might not think this is a very important scenario, but I
> > feel we should aim for consistency here. Perhaps I have missed a
> > different solution to the problem. I considered a design that always
> > sends that last ACK with a PING frame, and waits for that to get
> > acknowledged before closing, but I felt it was way too ugly.
> >
> > Thanks,
> >
> > - Nick
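
To make the closing-period behaviour Nick describes concrete, here's one more
hypothetical sketch (invented frame and packet types; rate limiting and the
draining state are omitted): while in the closing period, any incoming packet
that doesn't itself carry a close frame elicits one more packet bundling the
final ACK with the APPLICATION_CLOSE, so a retransmitted FIN still ends up
acknowledged.

// Hypothetical closing-period handler, not any real stack's API. The point:
// while closing, keep pairing the final ACK with the close, so a peer
// retransmitting its FIN learns both that the FIN arrived and that the
// connection is closed.
package main

import "fmt"

type frame string

type packet struct {
    frames []frame
}

type closingState struct {
    finalAck frame // ACK covering the packet that carried the peer's FIN
    appClose frame // the APPLICATION_CLOSE we already sent once
}

// onPacketDuringClosing follows the SHOULD Nick quotes: respond to a packet
// received in the closing period with another packet containing the close.
func (s *closingState) onPacketDuringClosing(p packet) *packet {
    for _, f := range p.frames {
        if f == "CONNECTION_CLOSE" || f == "APPLICATION_CLOSE" {
            return nil // the peer has seen a close; stay quiet
        }
    }
    return &packet{frames: []frame{s.finalAck, s.appClose}}
}

func main() {
    s := &closingState{finalAck: "ACK", appClose: "APPLICATION_CLOSE"}
    retransmittedFin := packet{frames: []frame{"STREAM(fin)"}}
    if resp := s.onPacketDuringClosing(retransmittedFin); resp != nil {
        fmt.Println("responding with:", resp.frames)
    }
}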