Re: [multipathtcp] MPTCP carrying UDP

Sébastien Noel <noel@multitel.be> Wed, 23 November 2016 11:09 UTC

Return-Path: <noel@multitel.be>
X-Original-To: multipathtcp@ietfa.amsl.com
Delivered-To: multipathtcp@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id A0E6C129CBE for <multipathtcp@ietfa.amsl.com>; Wed, 23 Nov 2016 03:09:33 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.397
X-Spam-Level:
X-Spam-Status: No, score=-3.397 tagged_above=-999 required=5 tests=[BAYES_00=-1.9, RP_MATCHES_RCVD=-1.497] autolearn=ham autolearn_force=no
Received: from mail.ietf.org ([4.31.198.44]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id aEmfQpJF00Dr for <multipathtcp@ietfa.amsl.com>; Wed, 23 Nov 2016 03:09:30 -0800 (PST)
Received: from smtp.multitel.be (smtp.multitel.be [IPv6:2001:6a8:3500:b17e::3]) by ietfa.amsl.com (Postfix) with ESMTP id 12D8D129CBF for <multipathtcp@ietf.org>; Wed, 23 Nov 2016 03:09:28 -0800 (PST)
Received: from smtp.multitel.be (localhost [127.0.0.1]) by smtp.multitel.be (Postfix) with ESMTP id C97ECB20038; Wed, 23 Nov 2016 12:09:26 +0100 (CET)
Received: from sne-UX31E (unknown [IPv6:fddd:3138:5d15:1:ccc3:898f:8f06:c841]) by smtp.multitel.be (Postfix) with ESMTPS id BC35DB20036; Wed, 23 Nov 2016 12:09:26 +0100 (CET)
Date: Wed, 23 Nov 2016 12:09:26 +0100
From: Sébastien Noel <noel@multitel.be>
To: "philip.eardley@bt.com" <philip.eardley@bt.com>
Message-ID: <20161123120926.7ed52bd4@sne-UX31E>
Organization: Multitel
X-Mailer: Claws Mail 3.14.0 (GTK+ 2.24.30; x86_64-pc-linux-gnu)
In-Reply-To: <4d11a19b2b6644848ce79f55cdbd6ab5@rew09926dag03b.domain1.systemhost.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Virus-Scanned: ClamAV using ClamSMTP
Archived-At: <https://mailarchive.ietf.org/arch/msg/multipathtcp/sHqxHFlQKsf3KRjlmC8l8EasMZE>
Cc: "multipathtcp@ietf.org" <multipathtcp@ietf.org>
Subject: Re: [multipathtcp] MPTCP carrying UDP
X-BeenThere: multipathtcp@ietf.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Multi-path extensions for TCP <multipathtcp.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/multipathtcp>, <mailto:multipathtcp-request@ietf.org?subject=unsubscribe>
List-Archive: <https://mailarchive.ietf.org/arch/browse/multipathtcp/>
List-Post: <mailto:multipathtcp@ietf.org>
List-Help: <mailto:multipathtcp-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/multipathtcp>, <mailto:multipathtcp-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 23 Nov 2016 11:09:33 -0000

Phil,

> Do people have any experimental results /experiences they could share of
> running UDP applications over MPTCP sub-flows?  Would be interested to hear
> about the issues.
> I guess VoIP and Quic would be the most interesting ones.

To understand the interactions between QUIC and an underlying MPTCP
transport, we performed some experiments by running QUIC over an
OpenVPN tunnel that itself runs over an MPTCP connection. Based on
existing open-source software, this is the closest we could get to the
scenario you are discussing.

OpenVPN adds its own framing to carry the UDP packets, plus
encryption/authentication. These mechanisms add CPU and byte overhead
compared to transporting QUIC over a plain MPTCP connection, but they
do not change the conclusions of the experiments.

Our measurement setup was the following.

                       /-----\
[client] --- [router1]         [router2] --- [server]
                       \-----/

An OpenVPN tunnel running over MPTCP was set up between routers 1 and
2 (both routers run an MPTCP-enabled kernel).

The client & server were not running an MPTCP kernel.
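
For reference, the trick is simply to run OpenVPN over its TCP
transport between the two routers: since both routers run an
MPTCP-enabled kernel, that TCP connection is transparently turned into
an MPTCP connection. A minimal sketch of the router2 side is below
(addresses, port and key path are placeholders, not our exact
configuration):

    # router2 endpoint: OpenVPN over TCP; the MPTCP-enabled kernels on
    # both routers upgrade this TCP connection to MPTCP transparently.
    # (All values below are placeholders.)
    import subprocess

    subprocess.run([
        "openvpn",
        "--dev", "tun0",
        "--proto", "tcp-server",        # OpenVPN itself only sees TCP
        "--port", "1194",
        "--ifconfig", "10.9.0.1", "10.9.0.2",
        "--secret", "/etc/openvpn/static.key",
    ], check=True)

(router1 uses the same options with --proto tcp-client and a --remote
pointing to router2.)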

We simply recorded the time needed to transfer a random 10 MB file
from the server to the client while tweaking the speed, latency and
reliability of each link on routers 1 and 2 with netem.
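
In case it is useful to others, the impairments are applied with tc /
netem roughly along these lines (the interface name and values below
are just an example; the exact invocations are not reproduced here):

    # Replace the root qdisc of an interface with a netem qdisc that
    # rate-limits, delays and drops packets as requested.
    import subprocess

    def shape(iface, rate_mbit, delay_ms=0, loss_pct=0):
        cmd = ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
               "rate", "%dmbit" % rate_mbit]
        if delay_ms:
            cmd += ["delay", "%dms" % delay_ms]
        if loss_pct:
            cmd += ["loss", "%d%%" % loss_pct]
        subprocess.run(cmd, check=True)

    # e.g. a 10 Mbps downstream with 3% loss on one of router2's links
    # (cf. test 2 below); "eth1" is a placeholder interface name.
    shape("eth1", 10, loss_pct=3)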

We used quic_server & quic_client
from https://github.com/google/proto-quic


In the following tables, download/upload speeds are in Mbps, times are
in seconds, and perf is the achieved transfer rate as a percentage of
the theoretical maximum (the aggregated downstream bandwidth, without
any IP or TCP overhead).
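
In Python terms, the perf column is computed as something like this
(the 10 MB file counts as 80 Mbit, and the downstream rates of both
links are summed):

    # perf = ideal transfer time / measured time, in percent.
    FILE_MBIT = 80.0  # random 10 MB file, IP/TCP overhead ignored

    def perf(down1_mbps, down2_mbps, time_s):
        ideal_s = FILE_MBIT / (down1_mbps + down2_mbps)
        return 100.0 * ideal_s / time_s

    print(round(perf(1, 1, 58.1), 2))   # 68.85, first row of table 1
    print(round(perf(2, 10, 8.2), 2))   # 81.3, first row of table 2b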


1) "no" latency (< 1ms) & no packet loss; different link bandwidths
		
Link 1		| Link2		|
--------------------------------|----------------
Down	Upload	| Down	Upload	| time	| perf (%)
-------------------------------------------------
1	1	| 1	1	| 58,1	| 68,85 %
1	1	| 2	1	| 48,4	| 55,10 %
1	1	| 4	1	| 24,3	| 65,84 %
1	1	| 8	1	| 12,3	| 72,27 %
1	1	| 10	1	| 10,3	| 70,61 %
1	1	| 15	5	| 7,2	| 69,44 %
1	1	| 20	5	| 6,5	| 58,61 %
1	1	| 25	5	| 18,2	| 16,91 %
1	1	| 30	10	| 20,1	| 12,84 %
1	1	| 50	10	| 15,5	| 10,12 %
2	1	| 2	1	| 31,7	| 63,09 %
4	1	| 4	1	| 17,1	| 58,48 %
8	1	| 8	1	| 10	| 50,00 %
10	1	| 10	1	| 7,3	| 54,79 %
15	5	| 15	5	| 6,4	| 41,67 %
20	5	| 20	5	| 3,6	| 55,56 %
25	5	| 25	5	| 3,3	| 48,48 %
30	10	| 30	10	| 2,9	| 45,98 %

2) Fixed bandwidth ; no latency ; variable packet loss (P-L) on Link2

Link 1		| Link2			|
----------------------------------------|----------------
Down	Upload	| Down	Upload	P-L(%)	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 9,9	| 67,34 %
2	1	| 10	2	1	| 11,2	| 59,52 %
2	1	| 10	2	2	| 11,3	| 59,00 %
2	1	| 10	2	3	| 11,7	| 56,98 %
2	1	| 10	2	4	| 12,3	| 54,20 %
2	1	| 10	2	5	| 15,9	| 41,93 %
2	1	| 10	2	6	| 16,6	| 40,16 %
2	1	| 10	2	7	| 18,7	| 35,65 %
2	1	| 10	2	8	| 20,1	| 33,17 %
2	1	| 10	2	9	| 23,5	| 28,37 %
2	1	| 10	2	10	| 27,4	| 24,33 %

3) Fixed bandwidth ; no packet loss ; variable latency (in ms) on Link2

Link 1		| Link2			|
----------------------------------------|----------------
Down	Upload	| Down	Upload	Latency	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 9,9	| 67,34 %
2	1	| 10	2	5	| 10	| 66,67 %
2	1	| 10	2	10	| 10,1	| 66,01 %
2	1	| 10	2	15	| 10,2	| 65,36 %
2	1	| 10	2	20	| 10,6	| 62,89 %
2	1	| 10	2	30	| 12,2	| 54,64 %
2	1	| 10	2	40	| 14,3	| 46,62 %
2	1	| 10	2	50	| 16,2	| 41,15 %
2	1	| 10	2	60	| 18,2	| 36,63 %


IMHO those results confirm the intuition that running a protocol like
QUIC, which includes its own congestion control and retransmission
mechanisms, over a reliable bytestream protocol like MPTCP is not a
good idea: losses on the path are repaired by MPTCP itself, so QUIC
only sees them as sudden extra delay and jitter, which triggers its
own loss-recovery timers and congestion back-off (the well-known
problem of tunnelling a reliable transport over another reliable
transport).

As you can see, with QUIC over MPTCP sub-flows, performance drops as
soon as the path becomes lossy or the latency increases.

The same tests were then performed with HTTP over end-to-end MPTCP, to
have a point of comparison.
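
The timing itself is nothing fancy; something along these lines, with
the URL being a placeholder for wherever the 10 MB test file is
served:

    # Time a plain HTTP download of the test file.
    import time
    import urllib.request

    start = time.monotonic()
    with urllib.request.urlopen("http://server.example/10M.bin") as r:
        data = r.read()
    elapsed = time.monotonic() - start
    print("downloaded %d bytes in %.1f s" % (len(data), elapsed))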

1b) "no" latency (< 1ms) & no packet loss; variable bandwidth
		
Link 1		| Link2		|
--------------------------------|----------------
Down	Upload	| Down	Upload	| time	| perf (%)
-------------------------------------------------
2	1	| 2	1	| 23,5	| 85,11 %
4	1	| 4	1	| 11,2	| 89,29 %
8	1	| 8	1	| 5,7	| 87,72 %
10	1	| 10	1	| 4,9	| 81,63 %
15	5	| 15	5	| 3	| 88,89 %
20	5	| 20	5	| 2,5	| 80,00 %
25	5	| 25	5	| 1,8	| 88,89 %
30	10	| 30	10	| 1,5	| 88,89 %

This should not surprise the readers of this list, but confirms that 
MPTCP works well in this environment.

2b) Fixed bandwidth ; no latency ; variable packet loss (P-L) on Link2

Link 1		| Link2			|
----------------------------------------|----------------
Down	Upload	| Down	Upload	P-L(%)	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 8,2	| 81,30 %
2	1	| 10	2	1	| 7,8	| 85,47 %
2	1	| 10	2	2	| 7,7	| 86,58 %
2	1	| 10	2	3	| 7,8	| 85,47 %
2	1	| 10	2	4	| 7,7	| 86,58 %
2	1	| 10	2	5	| 8,8	| 75,76 %
2	1	| 10	2	6	| 8	| 83,33 %
2	1	| 10	2	7	| 7,8	| 85,47 %
2	1	| 10	2	8	| 7,9	| 84,39 %
2	1	| 10	2	9	| 8	| 83,33 %

Again, MPTCP adapts correctly to packet losses in the environment.

Given the poor results of running QUIC over MPTCP, we do not plan to
analyse this in more detail.

Best regards,

Sébastien