RE: Transport in 0-RTT connections for high BDP: opt-in

Kuhn Nicolas <Nicolas.Kuhn@cnes.fr> Wed, 13 March 2019 18:43 UTC

From: Kuhn Nicolas <Nicolas.Kuhn@cnes.fr>
To: 'Roberto Peon' <fenix@fb.com>, 'Kazuho Oku' <kazuhooku@gmail.com>, Lars Eggert <lars@eggert.org>
CC: Jana Iyengar <jri.ietf@gmail.com>, Mikkel Fahnøe Jørgensen <mikkelfj@gmail.com>, "quic@ietf.org" <quic@ietf.org>, "emile.stephan@orange.com" <emile.stephan@orange.com>, Martin Thomson <mt@lowentropy.net>
Subject: RE: Transport in 0-RTT connections for high BDP: opt-in
Date: Wed, 13 Mar 2019 18:42:15 +0000
Message-ID: <F3B0A07CFD358240926B78A680E166FF1EBB0159@TW-MBX-P03.cnesnet.ad.cnes.fr>
References: <28624_1552035060_5C822CF4_28624_403_15_5AE9CCAA1B4A2248AB61B4C7F0AD5FB931E01F59@OPEXCAUBM44.corporate.adroot.infra.ftgroup> <CACpbDcdqfu7f=2BpsiuqzQ1gD6ovd3xNnTSnjc57t4sudq2VEg@mail.gmail.com> <DB6PR10MB176605C582ECF4B187DEC2BDAC4A0@DB6PR10MB1766.EURPRD10.PROD.OUTLOOK.COM> <CANatvzw6qL=Y6dqyVw4XA4VguFVctNV9v6BvN8praxMToYpAhw@mail.gmail.com> <8625C58B-3A59-4D88-91A9-52E619FF24C0@eggert.org> <CANatvzx4gaD+Qk_myoLRFe_UewGy+3Vi6KxGvS45+no59egfLQ@mail.gmail.com> <F3B0A07CFD358240926B78A680E166FF1EBAFFC0@TW-MBX-P03.cnesnet.ad.cnes.fr> <912BD9BE-694B-4C3B-9544-B18A13EE8226@fb.com>
In-Reply-To: <912BD9BE-694B-4C3B-9544-B18A13EE8226@fb.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/quic/B28xNuu3fqYiZgJ1RIOxf4gLpMc>

Thanks for your comment and for sharing the issues around sizing the initial CWND for H1/H2.

The core element of the draft is: if the client knows that the BDP of the network is high, client-level adaptations can be envisioned.
The draft also presents server-level adaptations that could further improve the time to service (an increased IW and MTU are given as examples, following comments we received on the list).

So:
1- How do we let the client know that the BDP is high?
The actual BDP may not be relevant and is difficult to measure, as you mentioned. Whether to use the actual value or a binary flag based on simple heuristics (e.g. RTT > 500 ms) is still under discussion.

2- Is increasing the IW safe?
As others have mentioned in other emails, server implementations decide how to restart connections. The draft does point out the gains we could get from increasing the initCWND, but combined with pacing this increase *should* not hurt you. When you send one packet every five milliseconds, a 50 ms RTT gives you an initCWND of 10, while a 500 ms RTT allows an initCWND of 100, without burst-related losses.
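
To illustrate the arithmetic, here is a minimal sketch (Python; the helper and its parameter names are mine, not from the draft) of how a fixed pacing interval bounds the number of packets released within one RTT:

    # Minimal sketch: with a fixed pacing interval, the RTT bounds how many
    # packets can be paced out before the first ACK returns.
    def paced_packets_per_rtt(rtt_ms: float, pacing_interval_ms: float = 5.0) -> int:
        return int(rtt_ms / pacing_interval_ms)

    assert paced_packets_per_rtt(50) == 10    # 50 ms RTT  -> initCWND of 10
    assert paced_packets_per_rtt(500) == 100  # 500 ms RTT -> initCWND of 100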

Cheers, 

Nico


-----Original Message-----
From: Roberto Peon <fenix@fb.com> 
Sent: Wednesday, 13 March 2019 18:07
To: Kuhn Nicolas <Nicolas.Kuhn@cnes.fr>; 'Kazuho Oku' <kazuhooku@gmail.com>; Lars Eggert <lars@eggert.org>
Cc: Jana Iyengar <jri.ietf@gmail.com>; Mikkel Fahnøe Jørgensen <mikkelfj@gmail.com>; quic@ietf.org; emile.stephan@orange.com; Martin Thomson <mt@lowentropy.net>
Subject: Re: Transport in 0-RTT connections for high BDP: opt-in

This may be obvious, but I'll lay it out anyway:

The 1st-order benefit of predicting the BW to the client/server is bounded by the error between the under-predicted BW and the actual BW at the start of a connection. We know that this error lasts longer with larger RTTs.

The 1st-order effect on the network is either better utilization or over-utilization. Both are corrected in some amount of time: depending on the algorithm, in a single RTT, or in something like RTT * lg(error) time.
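
(As a rough illustration of the RTT * lg(error) case, my own sketch rather than anything from the draft: slow start doubles the window each RTT, so the number of RTTs needed to close an under-prediction grows with the log of the error.)

    import math

    # Rough sketch: RTTs of slow start needed to close an under-prediction,
    # assuming the window doubles every RTT (illustrative only).
    def rtts_to_correct(actual_bdp_pkts: int, initial_cwnd_pkts: int) -> int:
        error = actual_bdp_pkts / initial_cwnd_pkts
        return math.ceil(math.log2(error)) if error > 1 else 0

    print(rtts_to_correct(1000, 10))  # -> 7 RTTs to grow from 10 to ~1000 packets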

The 2nd-order effects aren't new-- folks may make more connections because the opportunity cost of spinning up a new connection is lower. We've seen from H1->H2 that applications do this more often anyway when congestion control has non-error-proportional feedback loops (e.g. a direct function of packet loss, rather than of the ratio of packet loss).

From what we saw in the SPDY-experiment era, caching the prior state worked and showed improvements. The experiments we did then were imperfect-- we recorded the BDP (as a CWND). IIRC most clients had surprisingly moderate CWNDs (< 40) and saw benefit from an increased INITCWND. The benefit at the time was not enough to overcome fears in the H2 era, and so it was dropped.

Since the SPDY-era experiment failed to record CWND+RTT (or bandwidth directly), there were hypotheses that this could have been much better and that larger cwnds were probably the cause of burst losses that immediately reduced the effectiveness (at those larger cwnd start sizes).
There are a bunch of heuristics that attempt to deal with this in various transport stacks (how much should you transmit after you have been idle for X seconds? Hint: if you just reuse the last parameter you had, you will potentially initiate a large burst and hurt yourself). In other words, this problem is endemic to long-lived bursty traffic (i.e. web and web-like traffic, *including* most video right now) within any congestion control loop, and not just at connection initiation.
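
(For illustration only, a heuristic in the spirit of RFC 2861 congestion window validation, with made-up names: decay the cached window for each RTO the connection sat idle instead of reusing it as-is.)

    # Sketch of an idle-restart heuristic: halve the cached cwnd once per RTO
    # of idle time rather than bursting with the last value (illustrative only).
    def restart_cwnd(cached_cwnd_pkts: int, idle_s: float, rto_s: float,
                     initial_window_pkts: int = 10) -> int:
        halvings = int(idle_s / rto_s)
        return max(initial_window_pkts, cached_cwnd_pkts >> halvings)

    print(restart_cwnd(100, idle_s=2.0, rto_s=0.5))  # -> 10, not a 100-packet burst
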
-=R



On 3/13/19, 9:27 AM, "QUIC on behalf of Kuhn Nicolas" <quic-bounces@ietf.org on behalf of Nicolas.Kuhn@cnes.fr> wrote:

    Hi, 
    
    This email is just an attempt to clarify the potential gain of the proposed solution.
    In short, the client also being aware of the high-BDP connection can help further improve the time to service (and does not cost much).
    
    ---
    Potential classic operation with the proposed BCP 
    ---
    During the 1-RTT connection, the server measures the MTU, RTT, BDP, etc.
    This information is sent to the client in the NewSessionTicket.
    The reduced ticket_lifetime guarantees that the client has not moved (the server is also free to discard this information if the client IP address has changed).
    -> The server can adapt the CC to the measured characteristics of the network to improve the time to service
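    
    To make this concrete, here is a purely illustrative sketch (field names are hypothetical, not those of the draft) of the path parameters the server could remember and echo back:
    
        # Illustrative only: shape of the parameters measured during the 1-RTT
        # connection that the server could carry alongside the NewSessionTicket.
        from dataclasses import dataclass
    
        @dataclass
        class CachedPathParams:
            rtt_ms: float      # smoothed RTT measured by the server
            bdp_bytes: int     # estimated bandwidth-delay product
            mtu_bytes: int     # validated path MTU
            client_ip: str     # used to discard the hint if the client moved
    
        # On 0-RTT resumption from the same address, the server would seed its
        # congestion controller from these cached values.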
    
    ---
    Added signal with the proposed BCP 
    ---
    In addition to the classic operation, with our proposal the client is aware of being on a high-BDP path.
    -> The client can adapt the CC to the measured characteristics of the network to improve the time to service (e.g. ACK management (ratio, delayed, etc.), better memory management, reestablishment, etc.)
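    
    For instance (sketch only, example values, not from the draft), a client that knows it is on a high-BDP path could stretch its ACK ratio to reduce load on the return link:
    
        # Illustrative client-side adaptation: acknowledge less frequently when
        # the path is known to have a high BDP (values are examples only).
        def ack_every_n_packets(high_bdp_path: bool, default_ratio: int = 2) -> int:
            return 8 if high_bdp_path else default_ratio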
    
    While I agree that more data may be required, the purpose is mainly to leave the door open for further concrete proposals initiated by a client that knows it is on a high-BDP path.
    
    Cheers,
    
    Nico
    
    -----Original Message-----
    From: Kazuho Oku <kazuhooku@gmail.com> 
    Sent: Wednesday, 13 March 2019 14:23
    To: Lars Eggert <lars@eggert.org>
    Cc: Mikkel Fahnøe Jørgensen <mikkelfj@gmail.com>; Jana Iyengar <jri.ietf@gmail.com>; Kuhn Nicolas <Nicolas.Kuhn@cnes.fr>; quic@ietf.org; Martin Thomson <mt@lowentropy.net>; emile.stephan@orange.com
    Subject: Re: Transport in 0-RTT connections for high BDP: opt-in
    
    On Wed, 13 Mar 2019 at 21:00, Lars Eggert <lars@eggert.org> wrote:
    >
    > Hi,
    >
    > On 2019-3-13, at 0:09, Kazuho Oku <kazuhooku@gmail.com> wrote:
    >
    > To me it seems that the question is whether having a client-sent flag
    > that indicates whether the client is on the same network as the one
    > where it obtained the token helps the server determine the correct send
    > window, either before path validation or right after path validation.
    >
    >
    > sure, but the server already knows this based on whether the source IP address has changed?
    
    Exactly. That's why I stated in my previous mail that the benefit of the proposed approach depends on the improvement we gain from the added signal (compared to just relying on the client's address).
    
    > Lars
    
    
    
    --
    Kazuho Oku