Re: [multipathtcp] MPTCP carrying UDP

<philip.eardley@bt.com> Wed, 23 November 2016 17:53 UTC

From: philip.eardley@bt.com
To: noel@multitel.be
Date: Wed, 23 Nov 2016 17:52:52 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/multipathtcp/PqV-QKMA0XDjzBJ0aPqFxA6fzwM>
Cc: multipathtcp@ietf.org
Subject: Re: [multipathtcp] MPTCP carrying UDP

Sebastien, Olivier,
Thanks for doing these experiments - and so quickly - and for sharing the results - very interesting.

phil


-----Original Message-----
From: Sébastien Noel [mailto:noel@multitel.be] 
Sent: 23 November 2016 11:09
To: Eardley,PL,Philip,TUB8 R <philip.eardley@bt.com>
Cc: multipathtcp@ietf.org
Subject: Re: [multipathtcp] MPTCP carrying UDP

Phil,

> Do people have any experimental results /experiences they could share 
> of running UDP applications over MPTCP sub-flows?  Would be interested 
> to hear about the issues.
> I guess VoIP and Quic would be the most interesting ones.

To understand the interactions between QUIC and an underlying MPTCP transport, we performed some experiments running QUIC over OpenVPN, which itself runs over an MPTCP connection. This is the closest scenario to the one you are discussing that can be built from existing open-source software.

OpenVPN adds some framing to carry UDP, plus encryption/authentication. These mechanisms add CPU and byte overhead compared to transporting QUIC over a plain MPTCP connection, but this does not change the outcome of the experiments.

Our measurement setup was the following.

                       /-----\
[client] --- [router1]         [router2] --- [server]
                       \-----/

An OpenVPN tunnel running over MPTCP was set up between router1 and router2.

The client & server were not running an MPTCP kernel.

We simply recorded the time needed to transfer a random 10 MB file from the server to the client while tweaking the speed, latency and reliability of each link on router1 and router2 with netem (a sketch of the shaping follows).
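
For illustration only, the per-link shaping amounts to something like the Python sketch below; the device name, the helper function and the exact tc/netem invocation are assumptions made for this example, not our literal configuration:

# Illustrative only: rate-limit and add delay/loss on the egress of `dev`.
import subprocess

def shape_link(dev, rate_mbit, delay_ms=0, loss_pct=0.0):
    cmd = ["tc", "qdisc", "replace", "dev", dev, "root", "netem",
           "rate", "%dmbit" % rate_mbit]
    if delay_ms:
        cmd += ["delay", "%dms" % delay_ms]
    if loss_pct:
        cmd += ["loss", "%g%%" % loss_pct]
    subprocess.run(cmd, check=True)

# e.g. the download direction of Link 2 at 10 Mbps with 2 % loss (table 2):
shape_link("eth1", rate_mbit=10, loss_pct=2.0)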

We used the quic_server & quic_client binaries from https://github.com/google/proto-quic, driven roughly as shown below.
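
A minimal sketch of how the binaries are invoked, assuming the flags documented in the proto-quic README of that time; the build paths, server address and test file name here are illustrative assumptions:

# Sketch of driving the proto-quic binaries; paths/addresses illustrative.
import subprocess, time

# On [server]: serve pre-generated content from the response cache.
server = subprocess.Popen([
    "./out/Release/quic_server",
    "--quic_response_cache_dir=/tmp/quic-data/www.example.org",
    "--certificate_file=net/tools/quic/certs/out/leaf_cert.pem",
    "--key_file=net/tools/quic/certs/out/leaf_cert.pkcs8",
])

# On [client]: fetch the test file and record the transfer time.
start = time.time()
subprocess.run(["./out/Release/quic_client",
                "--host=192.0.2.2", "--port=6121",
                "https://www.example.org/10M.bin"], check=True)
print("transfer took %.1f s" % (time.time() - start))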


In the following tables, download/upload rates are in Mbps and times in seconds; perf compares the measured time against the perfect theoretical transfer time at the aggregated link speed (without any IP or TCP overhead), expressed as a percentage.
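
To make the perf column concrete, here is a small sanity check; it assumes the 10 MB file is 10 * 10^6 bytes (80 Mbit) and reproduces the first two rows of table 1:

# perf = ideal transfer time / measured time, in percent.
FILE_MBIT = 10 * 8  # 10 MB file = 80 Mbit (assuming M = 10^6 bytes)

def perf(down1_mbps, down2_mbps, time_s):
    ideal_s = FILE_MBIT / float(down1_mbps + down2_mbps)
    return 100.0 * ideal_s / time_s

print("%.2f %%" % perf(1, 1, 58.1))  # table 1, row 1 -> 68.85 %
print("%.2f %%" % perf(1, 2, 48.4))  # table 1, row 2 -> 55.10 %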


1) "no" latency (< 1ms) & no packet loss; different link bandwidths
		
Link 1		| Link2		|
--------------------------------|----------------
Down	Upload	| Down	Upload	| time	| perf (%)
-------------------------------------------------
1	1	| 1	1	| 58,1	| 68,85 %
1	1	| 2	1	| 48,4	| 55,10 %
1	1	| 4	1	| 24,3	| 65,84 %
1	1	| 8	1	| 12,3	| 72,27 %
1	1	| 10	1	| 10,3	| 70,61 %
1	1	| 15	5	| 7,2	| 69,44 %
1	1	| 20	5	| 6,5	| 58,61 %
1	1	| 25	5	| 18,2	| 16,91 %
1	1	| 30	10	| 20,1	| 12,84 %
1	1	| 50	10	| 15,5	| 10,12 %
2	1	| 2	1	| 31,7	| 63,09 %
4	1	| 4	1	| 17,1	| 58,48 %
8	1	| 8	1	| 10	| 50,00 %
10	1	| 10	1	| 7,3	| 54,79 %
15	5	| 15	5	| 6,4	| 41,67 %
20	5	| 20	5	| 3,6	| 55,56 %
25	5	| 25	5	| 3,3	| 48,48 %
30	10	| 30	10	| 2,9	| 45,98 %

2) Fixed bandwidth; no latency; variable packet loss (P-L) on Link 2

Link 1		| Link 2		|
----------------------------------------|----------------
Down	Up	| Down	Up	P-L(%)	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 9.9	| 67.34 %
2	1	| 10	2	1	| 11.2	| 59.52 %
2	1	| 10	2	2	| 11.3	| 59.00 %
2	1	| 10	2	3	| 11.7	| 56.98 %
2	1	| 10	2	4	| 12.3	| 54.20 %
2	1	| 10	2	5	| 15.9	| 41.93 %
2	1	| 10	2	6	| 16.6	| 40.16 %
2	1	| 10	2	7	| 18.7	| 35.65 %
2	1	| 10	2	8	| 20.1	| 33.17 %
2	1	| 10	2	9	| 23.5	| 28.37 %
2	1	| 10	2	10	| 27.4	| 24.33 %

3) Fixed bandwidth; no packet loss; variable latency (in ms) on Link 2

Link 1		| Link 2		|
----------------------------------------|----------------
Down	Up	| Down	Up	Latency	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 9.9	| 67.34 %
2	1	| 10	2	5	| 10	| 66.67 %
2	1	| 10	2	10	| 10.1	| 66.01 %
2	1	| 10	2	15	| 10.2	| 65.36 %
2	1	| 10	2	20	| 10.6	| 62.89 %
2	1	| 10	2	30	| 12.2	| 54.64 %
2	1	| 10	2	40	| 14.3	| 46.62 %
2	1	| 10	2	50	| 16.2	| 41.15 %
2	1	| 10	2	60	| 18.2	| 36.63 %


IMHO those results confirm the intuition that running a protocol like QUIC, which includes its own congestion control and retransmission mechanisms, over a reliable bytestream protocol like MPTCP is not a good idea.

As you can see, with QUIC over MPTCP sub-flows, performance drops as soon as the medium becomes lossy or latency increases.

The same tests were performed again, but this time with HTTP over end-to-end MPTCP, to have a point of comparison.

1b) "no" latency (< 1ms) & no packet loss; variable bandwidth
		
Link 1		| Link2		|
--------------------------------|----------------
Down	Upload	| Down	Upload	| time	| perf (%)
-------------------------------------------------
2	1	| 2	1	| 23,5	| 85,11 %
4	1	| 4	1	| 11,2	| 89,29 %
8	1	| 8	1	| 5,7	| 87,72 %
10	1	| 10	1	| 4,9	| 81,63 %
15	5	| 15	5	| 3	| 88,89 %
20	5	| 20	5	| 2,5	| 80,00 %
25	5	| 25	5	| 1,8	| 88,89 %
30	10	| 30	10	| 1,5	| 88,89 %

This should not surprise the readers of this list, but it confirms that MPTCP works well in this environment.

2b) Fixed bandwidth; no latency; variable packet loss (P-L) on Link 2

Link 1		| Link 2		|
----------------------------------------|----------------
Down	Up	| Down	Up	P-L(%)	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 8.2	| 81.30 %
2	1	| 10	2	1	| 7.8	| 85.47 %
2	1	| 10	2	2	| 7.7	| 86.58 %
2	1	| 10	2	3	| 7.8	| 85.47 %
2	1	| 10	2	4	| 7.7	| 86.58 %
2	1	| 10	2	5	| 8.8	| 75.76 %
2	1	| 10	2	6	| 8	| 83.33 %
2	1	| 10	2	7	| 7.8	| 85.47 %
2	1	| 10	2	8	| 7.9	| 84.39 %
2	1	| 10	2	9	| 8	| 83.33 %

Again, MPTCP adapts correctly to packet losses in the environment.

Given the poor results of running QUIC over MPTCP, we don't plan to analyse this in more detail.

Best regards,

Sébastien