RE: Transport in 0-RTT connections for high BDP

Kuhn Nicolas <Nicolas.Kuhn@cnes.fr> Mon, 11 March 2019 15:56 UTC

From: Kuhn Nicolas <Nicolas.Kuhn@cnes.fr>
To: 'Praveen Balasubramanian' <pravb=40microsoft.com@dmarc.ietf.org>, Roberto Peon <fenix@fb.com>, Patrick McManus <mcmanus@ducksong.com>, Martin Thomson <mt@lowentropy.net>
CC: IETF QUIC WG <quic@ietf.org>
Subject: RE: Transport in 0-RTT connections for high BDP
Date: Mon, 11 Mar 2019 15:54:44 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/mdCMIDM2MQoIHktrKs3YRNToaxE>

While I agree that knobs can be used to increase the value on servers, having these knobs specifically for the SATCOM use case is not realistic.
That being said, the proposal is not only about increasing the IW: knowing the path MTU, or the fact that the BDP is high, would open the door to client-side improvements.
We have illustrated some further gains in an updated version of the draft: https://datatracker.ietf.org/doc/draft-kuhn-quic-0rtt-bdp/
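
As a purely illustrative sketch (the field names below are invented for this email, not the draft’s wire format), the per-path state learned during the 1-RTT phase and reused in 0-RTT boils down to something like:

    package bdp

    import "time"

    // PathParams is a hypothetical container for the path characteristics
    // learned during the 1-RTT phase of a previous connection. Reusing it
    // lets a 0-RTT connection start closer to the path's actual capacity.
    type PathParams struct {
            SavedRTT time.Duration // smoothed RTT measured on the path
            SavedBDP uint64        // bandwidth-delay product estimate, in bytes
            SavedMTU uint32        // discovered path MTU, in bytes
    }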

Cheers,

NK

From: QUIC <quic-bounces@ietf.org> On Behalf Of Praveen Balasubramanian
Sent: Friday, March 8, 2019 10:34 PM
To: Roberto Peon <fenix@fb.com>; Patrick McManus <mcmanus@ducksong.com>; Martin Thomson <mt@lowentropy.net>
Cc: IETF QUIC WG <quic@ietf.org>
Subject: RE: Transport in 0-RTT connections for high BDP

For better or worse, the de facto starting point today is 10 packets’ worth in TCP, and the same value is in the QUIC recovery draft. For TCP there are knobs to increase the value on servers. Agreed that this discussion is more suited to another forum.
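
For example, on Linux the knob is a per-route iproute2 setting (the gateway and the value 20 below are arbitrary illustrations, not recommendations):

    # Raise TCP's initial congestion window on the default route.
    ip route change default via 192.0.2.1 dev eth0 initcwnd 20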

From: Roberto Peon <fenix@fb.com>
Sent: Friday, March 8, 2019 10:52 AM
To: Praveen Balasubramanian <pravb@microsoft.com>; Patrick McManus <mcmanus@ducksong.com>; Martin Thomson <mt@lowentropy.net>
Cc: IETF QUIC WG <quic@ietf.org>
Subject: Re: Transport in 0-RTT connections for high BDP

We’ve had this conversation in the past, in the H2 context.
I had evidence that such caching is no worse, in terms of erroneous behavior, than not caching.
In any case, all you’re doing is picking the starting condition; you’re not talking about a new algorithm or any other fanciness.

Tongue in cheek: Historically, we’ve determined congestion control starting points by looking at historical data… Predicting the future based on incomplete past information is always erroneous, but there isn’t any other data by which you can do so…

While I’m happy to continue to discuss this topic here, I suspect that it might be better suited to some other forum as this doesn’t really affect the standard in any interesting way.

-=R

From: QUIC <quic-bounces@ietf.org> on behalf of Praveen Balasubramanian <pravb=40microsoft.com@dmarc.ietf.org>
Date: Friday, March 8, 2019 at 9:27 AM
To: Patrick McManus <mcmanus@ducksong.com>, Martin Thomson <mt@lowentropy.net>
Cc: IETF QUIC WG <quic@ietf.org>
Subject: RE: Transport in 0-RTT connections for high BDP

Past performance is no guarantee of future results. There is no QoS or minimum-bandwidth guarantee on the Internet, and connections between the same IP pair do not take the same paths, due to ECMP and LAG. Unless you are on a constrained network with minimum bandwidth guarantees, each connection will have to probe for its fair share. Network conditions change drastically over even short intervals of time. There has been lots of discussion in tcpm on what information should be cached and reused for future connections. Today’s TCP implementations cache and reuse the bare minimum of information, and certainly not the old window information. Please see https://www.ietf.org/id/draft-touch-tcpm-2140bis-06.txt.
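
(As a rough sketch, with illustrative names only, the "bare minimum" caching described above amounts to per-destination metrics rather than any window state:)

    package tcpcache

    import (
            "net/netip"
            "time"
    )

    // Metrics is roughly the per-destination state implementations cache
    // and reuse across connections; note there is deliberately no old
    // congestion window here.
    type Metrics struct {
            SRTT   time.Duration // smoothed round-trip time
            RTTVar time.Duration // round-trip time variance
            MSS    uint16        // maximum segment size, in bytes
    }

    // cache maps a destination address to its last observed metrics.
    var cache = map[netip.Addr]Metrics{}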

+1 to what Patrick says; lots of experiments and data are needed for this.

From: QUIC <quic-bounces@ietf.org> On Behalf Of Patrick McManus
Sent: Friday, March 8, 2019 7:44 AM
To: Martin Thomson <mt@lowentropy.net>
Cc: IETF QUIC WG <quic@ietf.org>
Subject: Re: Transport in 0-RTT connections for high BDP



On Thu, Mar 7, 2019 at 4:25 PM Martin Thomson <mt@lowentropy.net> wrote:
As you say, this is a congestion control topic, not as much a QUIC-specific one.  And you are equally correct that this does not require standardization of any signaling to use it.

However, we have typically attempted to document congestion control strategies.  So in that spirit, it's a fine thing to do.

Folks interested in this thread might find a recent iccrg comment from Keith Winstein thought-provoking (in full: https://mailarchive.ietf.org/arch/msg/iccrg/LHRAWIjbFktaqLC5hFqgG37M9zk):

> "My honest view is that congestion control is not a natural subject for the IETF process -- it doesn't really affect interoperability, practice diverged from the RFCs relatively early in the Internet's history and hasn't returned, we lack a real understanding and common language about some of the fundamental concepts and hard parts, and  there is an economic incentive for the traffic sources to cheat a bit (perhaps in increasing the IW, or perhaps in rolling out BBR, or CUBIC, or more connections-per-application) and not try *too* hard to discover and be concerned about how that might be harming their competitors. If there is a silver lining, maybe it's one of Mutually Assured Destruction -- in that Google, Netflix, and other major traffic sources do have a substantial economic inventive in the Internet's continued health."

my own view is somewhere in the middle - that this kind of approach needs a pile of data and experience first, and then is pretty valuable as a best-practice contribution after the fact. OTOH, the IETF/IRTF provide good forums for the development of these ideas and for collaboration; we just need to separate these functions from standards-track work at times.



All that said, we explicitly decided not to address this particular question early in our process.  Saving congestion window information from one connection so that it could be used to jump-start the next was a feature cut from HTTP/2 as well.
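
(For concreteness, the cut feature amounted to seeding the new connection's window from the previous one's, with some safety clamp; a minimal sketch, not anyone's actual implementation:)

    package jumpstart

    // initialCwnd seeds a new connection's congestion window from a cached
    // value, clamped so that a stale estimate cannot start arbitrarily
    // aggressively. All values are in bytes.
    func initialCwnd(cached, rfcDefault, safetyCap uint64) uint64 {
            if cached == 0 {
                    return rfcDefault // nothing cached: the ~10-packet default
            }
            if cached > safetyCap {
                    return safetyCap // distrust very large stale values
            }
            return cached
    }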

On Fri, Mar 8, 2019, at 08:04, Kazuho Oku wrote:
> Thank you for writing the draft.
>
> I think the approach is fine, though I wonder if we need a
> specification for this.
>
> The server has the freedom to store whatever it wants in a session
> ticket. Therefore, I do not see why we need to define an extension.
>
> Also, it has been my understanding that the token (sent in a NEW_TOKEN
> frame) is a better place for storing the RTT observed in the last
> connection. This is because a token is per-path information that is
> updated for each path, whereas session tickets are expected to rarely
> be updated mid-connection.
>
> On Thu, Mar 7, 2019 at 19:36 Kuhn Nicolas <Nicolas.Kuhn@cnes.fr> wrote:
> >
> > Hi,
> >
> > We have uploaded a draft that describes a solution to improve QUIC’s performance during 0-RTT establishment in a high BDP context.
> >
> > https://tools.ietf.org/html/draft-kuhn-quic-0rtt-bdp-00
> >
> > Abstract
> >
> >    0-RTT is designed to accelerate the throughput at the establishment
> >    of a connection.  There are cases where 0-RTT alone does not improve
> >    the time-to-service.
> >
> >    This memo discusses a solution where a fundamental characteristic of
> >    the path is learned during the 1-RTT phase and shared with the 0-RTT
> >    phase to accelerate the initial throughput during subsequent 0-RTT
> >    connections.
> >
> > Would it be possible to have a small slot to present it during the next IETF meeting?
> >
> > Kind regards,
> >
> > Nico & Emile
>
> --
> Kazuho Oku