Re: [multipathtcp] MPTCP carrying UDP

<mohamed.boucadair@orange.com> Wed, 23 November 2016 13:07 UTC

From: mohamed.boucadair@orange.com
To: Sébastien Noel <noel@multitel.be>, "philip.eardley@bt.com" <philip.eardley@bt.com>
Date: Wed, 23 Nov 2016 13:07:13 +0000
Cc: "multipathtcp@ietf.org" <multipathtcp@ietf.org>
Subject: Re: [multipathtcp] MPTCP carrying UDP

Dear Sébastien,

Thank you for sharing these details. 

Please see inline.

Cheers,
Med

> -----Original Message-----
> From: multipathtcp [mailto:multipathtcp-bounces@ietf.org] On Behalf Of
> Sébastien Noel
> Sent: Wednesday, 23 November 2016 12:09
> To: philip.eardley@bt.com
> Cc: multipathtcp@ietf.org
> Subject: Re: [multipathtcp] MPTCP carrying UDP
> 
> Phil,
> 
> > Do people have any experimental results / experiences they could share
> > of running UDP applications over MPTCP sub-flows? Would be interested
> > to hear about the issues.
> > I guess VoIP and QUIC would be the most interesting ones.
> 
> To understand the interactions between QUIC and an underlying MPTCP
> transport, we performed some experiments by running QUIC over OpenVPN,
> which itself runs over an MPTCP connection. This is the closest scenario
> to the one you are discussing that can be built from existing
> open-source software.

[Med] The scheme we are investigating does not include these cascaded layers: it is only UDP payloads transported over plain MPTCP connections.

> 
> OpenVPN adds some framing to carry UDP, plus encryption and
> authentication. These mechanisms add CPU and byte overhead compared to
> transporting QUIC over a plain MPTCP connection, but this does not
> change the results of the experiments.
> 

[Med] I wouldn't draw that conclusion, as the overall performance also depends on the overhead prepended to the packets injected into the tunnel.

Putting that aside, can you please indicate how the traffic is distributed among the available subflows? FWIW, the target traffic distribution policy for hybrid access is to use the fixed line first, and then to grab some resources from the cellular link if needed. So the target objective is not 1+1!
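
To make the intended policy concrete, here is a minimal sketch of such an "overflow" scheduler. The Subflow type, the capacity test, and all names below are assumptions for the sake of discussion, not an actual MPTCP scheduler API:

   from dataclasses import dataclass

   @dataclass
   class Subflow:
       name: str
       cwnd_bytes: int       # congestion window size
       inflight_bytes: int   # bytes sent but not yet acknowledged

       def has_room(self, size: int) -> bool:
           return self.inflight_bytes + size <= self.cwnd_bytes

   def pick_subflow(pkt_len: int, fixed: Subflow, cellular: Subflow) -> Subflow:
       # Prefer the fixed line; spill onto cellular only when it is full.
       return fixed if fixed.has_room(pkt_len) else cellular

   dsl = Subflow("dsl", cwnd_bytes=20000, inflight_bytes=19500)
   lte = Subflow("lte", cwnd_bytes=60000, inflight_bytes=0)
   print(pick_subflow(1400, dsl, lte).name)  # -> "lte": the DSL window is full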


> Our measurement setup was the following.
> 
>                        /-----\
> [client] --- [router1]         [router2] --- [server]
>                        \-----/
> 
> An OpenVPN tunnel in MPTCP mode was set up between routers 1 and 2.
> 
> The client & server were not running an MPTCP kernel.
> 
> We simply recorded the time needed to transfer a random 10 MB file
> from the server to the client while tweaking the speed, latency and
> reliability of each link on routers 1 & 2 with netem.
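
For reference, shaping of this kind can be driven along the following lines. The interface name and the exact tc/netem invocation are assumptions; the post only says netem was used to tweak rate, latency and loss:

   import subprocess

   def shape(iface: str, rate_mbit: float, delay_ms: int = 0,
             loss_pct: float = 0.0) -> None:
       # Build one "tc qdisc replace ... netem" command for this interface.
       cmd = ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
              "rate", f"{rate_mbit}mbit"]
       if delay_ms:
           cmd += ["delay", f"{delay_ms}ms"]
       if loss_pct:
           cmd += ["loss", f"{loss_pct}%"]
       subprocess.run(cmd, check=True)

   shape("eth1", rate_mbit=10, delay_ms=20, loss_pct=1.0)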
> 
> We used quic_server & quic_client
> from https://github.com/google/proto-quic
> 
> 
> In the following tables, the Down/Up columns are link bandwidths in
> Mbps, time is in seconds, and perf is the percentage of a perfect
> theoretical result at maximum speed (without any IP or TCP overhead).
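
The exact definition of perf is not spelled out in the post; the following sketch reproduces the column under the assumption that perf is the ratio of the ideal transfer time (10 MB over the sum of both downlink rates, no overhead) to the measured time:

   FILE_MBIT = 10 * 8  # the 10 MB test file, in megabits

   def perf_pct(down1_mbps: float, down2_mbps: float, measured_s: float) -> float:
       # Ideal time assumes both downlinks are fully aggregated, no overhead.
       ideal_s = FILE_MBIT / (down1_mbps + down2_mbps)
       return 100.0 * ideal_s / measured_s

   print(f"{perf_pct(1, 1, 58.1):.2f} %")   # first row of table 1  -> 68.85 %
   print(f"{perf_pct(2, 10, 8.2):.2f} %")   # first row of table 2b -> 81.30 %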
> 
> 
> 1) "no" latency (< 1ms) & no packet loss; different link bandwidths
> 
> Link 1        | Link 2        |
> -----------------------------------------------
> Down   Up     | Down   Up    | time  | perf (%)
> -----------------------------------------------
> 1      1      | 1      1     | 58.1  | 68.85 %
> 1      1      | 2      1     | 48.4  | 55.10 %
> 1      1      | 4      1     | 24.3  | 65.84 %
> 1      1      | 8      1     | 12.3  | 72.27 %
> 1      1      | 10     1     | 10.3  | 70.61 %
> 1      1      | 15     5     | 7.2   | 69.44 %
> 1      1      | 20     5     | 6.5   | 58.61 %
> 1      1      | 25     5     | 18.2  | 16.91 %
> 1      1      | 30     10    | 20.1  | 12.84 %
> 1      1      | 50     10    | 15.5  | 10.12 %
> 2      1      | 2      1     | 31.7  | 63.09 %
> 4      1      | 4      1     | 17.1  | 58.48 %
> 8      1      | 8      1     | 10    | 50.00 %
> 10     1      | 10     1     | 7.3   | 54.79 %
> 15     5      | 15     5     | 6.4   | 41.67 %
> 20     5      | 20     5     | 3.6   | 55.56 %
> 25     5      | 25     5     | 3.3   | 48.48 %
> 30     10     | 30     10    | 2.9   | 45.98 %
> 
> 2) Fixed bandwidth ; no latency ; different packet loss on Link2
> 
> Link 1        | Link 2               |
> ------------------------------------------------------
> Down   Up     | Down   Up    P-L(%)  | time  | perf (%)
> ------------------------------------------------------
> 2      1      | 10     2     0       | 9.9   | 67.34 %
> 2      1      | 10     2     1       | 11.2  | 59.52 %
> 2      1      | 10     2     2       | 11.3  | 59.00 %
> 2      1      | 10     2     3       | 11.7  | 56.98 %
> 2      1      | 10     2     4       | 12.3  | 54.20 %
> 2      1      | 10     2     5       | 15.9  | 41.93 %
> 2      1      | 10     2     6       | 16.6  | 40.16 %
> 2      1      | 10     2     7       | 18.7  | 35.65 %
> 2      1      | 10     2     8       | 20.1  | 33.17 %
> 2      1      | 10     2     9       | 23.5  | 28.37 %
> 2      1      | 10     2     10      | 27.4  | 24.33 %
> 
> 3) Fixed bandwidth ; no packet loss ; variable latency (in ms) on Link2
> 
> Link 1        | Link 2               |
> -------------------------------------------------------
> Down   Up     | Down   Up    Latency | time  | perf (%)
> -------------------------------------------------------
> 2      1      | 10     2     0       | 9.9   | 67.34 %
> 2      1      | 10     2     5       | 10    | 66.67 %
> 2      1      | 10     2     10      | 10.1  | 66.01 %
> 2      1      | 10     2     15      | 10.2  | 65.36 %
> 2      1      | 10     2     20      | 10.6  | 62.89 %
> 2      1      | 10     2     30      | 12.2  | 54.64 %
> 2      1      | 10     2     40      | 14.3  | 46.62 %
> 2      1      | 10     2     50      | 16.2  | 41.15 %
> 2      1      | 10     2     60      | 18.2  | 36.63 %
> 
> 
> IMHO those results confirm the intuition that running a protocol like
> QUIC, which includes its own congestion control and retransmission
> mechanisms, over a reliable bytestream protocol like MPTCP is not a
> good idea.
> 
> As you can see, with QUIC over MPTCP sub-flows, performance drops as
> soon as the medium is unreliable or adds latency.
> 
> The same tests were performed again, but this time with HTTP over
> end-to-end MPTCP, to have a point of comparison.

[Med] When you say "end-to-end MPTCP", do you mean that MPTCP is enabled on both the client and the server?

> 
> 1b) "no" latency (< 1ms) & no packet loss; variable bandwidth
> 
> Link 1        | Link 2        |
> -----------------------------------------------
> Down   Up     | Down   Up    | time  | perf (%)
> -----------------------------------------------
> 2      1      | 2      1     | 23.5  | 85.11 %
> 4      1      | 4      1     | 11.2  | 89.29 %
> 8      1      | 8      1     | 5.7   | 87.72 %
> 10     1      | 10     1     | 4.9   | 81.63 %
> 15     5      | 15     5     | 3     | 88.89 %
> 20     5      | 20     5     | 2.5   | 80.00 %
> 25     5      | 25     5     | 1.8   | 88.89 %
> 30     10     | 30     10    | 1.5   | 88.89 %
> 
> This should not surprise the readers of this list, but confirms that
> MPTCP works well in this environment.
> 
> 2b) Fixed bandwidth ; no latency ; variable packet loss (P-L) on Link2
> 
> Link 1        | Link 2               |
> ------------------------------------------------------
> Down   Up     | Down   Up    P-L(%)  | time  | perf (%)
> ------------------------------------------------------
> 2      1      | 10     2     0       | 8.2   | 81.30 %
> 2      1      | 10     2     1       | 7.8   | 85.47 %
> 2      1      | 10     2     2       | 7.7   | 86.58 %
> 2      1      | 10     2     3       | 7.8   | 85.47 %
> 2      1      | 10     2     4       | 7.7   | 86.58 %
> 2      1      | 10     2     5       | 8.8   | 75.76 %
> 2      1      | 10     2     6       | 8     | 83.33 %
> 2      1      | 10     2     7       | 7.8   | 85.47 %
> 2      1      | 10     2     8       | 7.9   | 84.39 %
> 2      1      | 10     2     9       | 8     | 83.33 %
> 
> Again, MPTCP adapts correctly to packet losses in the environment.
> 
> Given the bad results of running QUIC over MPTCP, we don't plan to
> analyse this in more detail.
> 

[Med] What if you had an option in the MPTCP implementation to relax TCP reliability checks on QUIC-triggered MPTCP connections? 
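
Purely as an illustration of what such a knob could look like from an application (no such option exists; both constants below are invented for the sake of discussion):

   import socket

   SOL_MPTCP = 262               # hypothetical option level, illustration only
   MPTCP_RELAX_RELIABILITY = 42  # hypothetical option name, does not exist

   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   try:
       s.setsockopt(SOL_MPTCP, MPTCP_RELAX_RELIABILITY, 1)
   except OSError:
       pass  # expected today: no kernel implements this option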

==
   The CPE and the Concentrator MUST establish a set of subflows that
   are maintained alive.  These subflows are used to transport UDP
   datagrams that are distributed among existent subflows.  TCP session
   tracking is not enabled for the set of subflows that are dedicated to
   transport UDP traffic.
==
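
Since the UDP datagrams would be carried inside an MPTCP bytestream, the CPE and the Concentrator need some framing to recover message boundaries. The excerpt above does not specify an encoding; a 2-byte length prefix, as sketched below, is one assumption for illustration:

   import struct

   def frame(datagram: bytes) -> bytes:
       # Prefix each UDP datagram with its length (network byte order).
       return struct.pack("!H", len(datagram)) + datagram

   def deframe(stream: bytes):
       # Recover datagram boundaries from the received bytestream.
       off = 0
       while off + 2 <= len(stream):
           (length,) = struct.unpack_from("!H", stream, off)
           yield stream[off + 2 : off + 2 + length]
           off += 2 + length

   wire = frame(b"hello") + frame(b"world")
   print(list(deframe(wire)))  # [b'hello', b'world']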

> Best regards,
> 
> Sébastien
> 