Re: [multipathtcp] MPTCP carrying UDP

<N.Leymann@telekom.de> Fri, 25 November 2016 08:46 UTC

From: N.Leymann@telekom.de
To: noel@multitel.be, philip.eardley@bt.com
Thread-Topic: [multipathtcp] MPTCP carrying UDP
Date: Fri, 25 Nov 2016 08:46:47 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/multipathtcp/k-HXECI64ohWlP5l-9hXCL5wYxk>
Cc: multipathtcp@ietf.org
Subject: Re: [multipathtcp] MPTCP carrying UDP

Hi Sebastien,

Thanks for sharing the results! I guess this shows quite well that layering protocols which both use congestion control has a significant impact on the overall throughput.

regards

Nic

-----Original Message-----
From: multipathtcp [mailto:multipathtcp-bounces@ietf.org] On Behalf Of Sébastien Noel
Sent: Wednesday, 23 November 2016 12:09
To: philip.eardley@bt.com
Cc: multipathtcp@ietf.org
Subject: Re: [multipathtcp] MPTCP carrying UDP

Phil,

> Do people have any experimental results /experiences they could share 
> of running UDP applications over MPTCP sub-flows?  Would be interested 
> to hear about the issues.
> I guess VoIP and Quic would be the most interesting ones.

To understand the interactions between QUIC and an underlying MPTCP transport, we performed some experiments running QUIC over an OpenVPN tunnel which itself runs over an MPTCP connection. This is the closest we could get to the scenario you are discussing with existing open-source software.

OpenVPN adds its own framing to carry the UDP packets, plus encryption/authentication. These mechanisms add CPU and byte overhead compared to transporting QUIC over a plain MPTCP connection, but they do not change the results of the experiments.

Our measurement setup was the following.

                       /-----\
[client] --- [router1]         [router2] --- [server]
                       \-----/

An OpenVPN tunnel over MPTCP was set up between routers 1 and 2.

The client & server were not running an MPTCP kernel.

We simply recorded the time needed to transfer a random 10 MB file from server to client while tweaking the speed, latency and reliability of each link on routers 1 and 2 with netem.

We used quic_server & quic_client
from https://github.com/google/proto-quic


In the following tables, Download/Upload are in Mbps, time is in seconds, and performance is the percentage of a perfect theoretical result, i.e. the maximum achievable speed without any IP or TCP overhead.


1) "no" latency (< 1ms) & no packet loss; different link bandwidths
		
Link 1		| Link2		|
--------------------------------|----------------
Down	Upload	| Down	Upload	| time	| perf (%)
-------------------------------------------------
1	1	| 1	1	| 58,1	| 68,85 %
1	1	| 2	1	| 48,4	| 55,10 %
1	1	| 4	1	| 24,3	| 65,84 %
1	1	| 8	1	| 12,3	| 72,27 %
1	1	| 10	1	| 10,3	| 70,61 %
1	1	| 15	5	| 7,2	| 69,44 %
1	1	| 20	5	| 6,5	| 58,61 %
1	1	| 25	5	| 18,2	| 16,91 %
1	1	| 30	10	| 20,1	| 12,84 %
1	1	| 50	10	| 15,5	| 10,12 %
2	1	| 2	1	| 31,7	| 63,09 %
4	1	| 4	1	| 17,1	| 58,48 %
8	1	| 8	1	| 10	| 50,00 %
10	1	| 10	1	| 7,3	| 54,79 %
15	5	| 15	5	| 6,4	| 41,67 %
20	5	| 20	5	| 3,6	| 55,56 %
25	5	| 25	5	| 3,3	| 48,48 %
30	10	| 30	10	| 2,9	| 45,98 %

2) Fixed bandwidth; no latency; variable packet loss (P-L, in %) on Link 2

Link 1		| Link2			|
----------------------------------------|----------------
Down	Upload	| Down	Upload	P-L(%)	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 9,9	| 67,34 %
2	1	| 10	2	1	| 11,2	| 59,52 %
2	1	| 10	2	2	| 11,3	| 59,00 %
2	1	| 10	2	3	| 11,7	| 56,98 %
2	1	| 10	2	4	| 12,3	| 54,20 %
2	1	| 10	2	5	| 15,9	| 41,93 %
2	1	| 10	2	6	| 16,6	| 40,16 %
2	1	| 10	2	7	| 18,7	| 35,65 %
2	1	| 10	2	8	| 20,1	| 33,17 %
2	1	| 10	2	9	| 23,5	| 28,37 %
2	1	| 10	2	10	| 27,4	| 24,33 %

3) Fixed bandwidth; no packet loss; variable latency (in ms) on Link 2

Link 1		| Link2			|
----------------------------------------|----------------
Down	Upload	| Down	Upload	Latency	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 9,9	| 67,34 %
2	1	| 10	2	5	| 10	| 66,67 %
2	1	| 10	2	10	| 10,1	| 66,01 %
2	1	| 10	2	15	| 10,2	| 65,36 %
2	1	| 10	2	20	| 10,6	| 62,89 %
2	1	| 10	2	30	| 12,2	| 54,64 %
2	1	| 10	2	40	| 14,3	| 46,62 %
2	1	| 10	2	50	| 16,2	| 41,15 %
2	1	| 10	2	60	| 18,2	| 36,63 %


IMHO those results confirm the intuition that running a protocol like QUIC, which includes its own congestion control and retransmission mechanisms, over a reliable bytestream protocol like MPTCP is not a good idea.

As you can see, with QUIC over MPTCP sub-flows, performance drops as soon as the medium becomes unreliable or latency appears.

The same tests were performed again, but this time with HTTP over end-to-end MPTCP, to have a point of comparison.

1b) "no" latency (< 1ms) & no packet loss; variable bandwidth
		
Link 1		| Link2		|
--------------------------------|----------------
Down	Upload	| Down	Upload	| time	| perf (%)
-------------------------------------------------
2	1	| 2	1	| 23,5	| 85,11 %
4	1	| 4	1	| 11,2	| 89,29 %
8	1	| 8	1	| 5,7	| 87,72 %
10	1	| 10	1	| 4,9	| 81,63 %
15	5	| 15	5	| 3	| 88,89 %
20	5	| 20	5	| 2,5	| 80,00 %
25	5	| 25	5	| 1,8	| 88,89 %
30	10	| 30	10	| 1,5	| 88,89 %

This should not surprise the readers of this list, but confirms that MPTCP works well in this environment.

2b) Fixed bandwidth; no latency; variable packet loss (P-L, in %) on Link 2

Link 1		| Link2			|
----------------------------------------|----------------
Down	Upload	| Down	Upload	P-L(%)	| time	| perf (%)
---------------------------------------------------------
2	1	| 10	2	0	| 8,2	| 81,30 %
2	1	| 10	2	1	| 7,8	| 85,47 %
2	1	| 10	2	2	| 7,7	| 86,58 %
2	1	| 10	2	3	| 7,8	| 85,47 %
2	1	| 10	2	4	| 7,7	| 86,58 %
2	1	| 10	2	5	| 8,8	| 75,76 %
2	1	| 10	2	6	| 8	| 83,33 %
2	1	| 10	2	7	| 7,8	| 85,47 %
2	1	| 10	2	8	| 7,9	| 84,39 %
2	1	| 10	2	9	| 8	| 83,33 %

Again, MPTCP adapts correctly to packet losses in the environment.

Given the bad results of running QUIC over MPTCP, we don't plan to analyse this in more detail.

Best regards,

Sébastien
