Re: [multipathtcp] MPTCP carrying UDP

Olivier Bonaventure <Olivier.Bonaventure@uclouvain.be> Wed, 23 November 2016 14:07 UTC

To: mohamed.boucadair@orange.com, Sébastien Noel <noel@multitel.be>, "philip.eardley@bt.com" <philip.eardley@bt.com>
Cc: "multipathtcp@ietf.org" <multipathtcp@ietf.org>

Med,
>
>> -----Original Message-----
>> From: multipathtcp [mailto:multipathtcp-bounces@ietf.org] On behalf of
>> Sébastien Noel
>> Sent: Wednesday, 23 November 2016 12:09
>> To: philip.eardley@bt.com
>> Cc: multipathtcp@ietf.org
>> Subject: Re: [multipathtcp] MPTCP carrying UDP
>>
>> Phil,
>>
>>> Do people have any experimental results / experiences they could share
>>> of running UDP applications over MPTCP sub-flows? I would be interested
>>> to hear about the issues.
>>> I guess VoIP and QUIC would be the most interesting ones.
>>
>> To understand the interactions between QUIC and an underlying MPTCP
>> transport, we performed some experiments by running QUIC over OpenVPN,
>> which itself runs over an MPTCP connection. This is the closest
>> scenario to what you are discussing that can be built from existing
>> open-source software.
>
> [Med] The scheme we are investigating does not include these cascaded layers. It is only the UDP payload transported in plain MPTCP connections.


The cascaded layers increase the CPU and byte overhead, but the
interactions between the congestion control schemes and the reliability
mechanisms remain. In the experiment, CPU load was not a concern given
that PCs were used as routers. The byte overhead influences the maximum
efficiency of the solution, but not how it degrades when there is
latency or loss. The degradation comes from the coupling between the
congestion control and the reliability mechanisms of both QUIC and
MPTCP: for example, a loss on one subflow stalls the reliable MPTCP
bytestream until the retransmission arrives, and QUIC interprets that
stall as a latency spike to which its own congestion controller and
retransmission timers react.

>> OpenVPN includes some framing to carry UDP and
>> encryption/authentication. These mechanisms add CPU overhead and byte
>> overhead compared to transporting QUIC over a plain MPTCP connection,
>> but this does not change the results of the experiments.
>>
>
> [Med] I wouldn't draw that conclusion, as the overall performance also depends on the overhead prepended to the packets injected over the tunnel.
>
> Putting that aside, can you please indicate how the traffic is distributed among the available subflows? FWIW, the target traffic distribution policy for hybrid access is to use the fixed line first, and then grab some resources from the cellular link if needed. So the target objective is not 1+1!
>

That's a policy issue. The experiment tested whether both links can be
used efficiently when running QUIC over MPTCP to transfer a large file.
If your policy delays the utilisation of the second link until the
first is full, then you will get even lower performance, since MPTCP
will delay the utilisation of the subflows.

>> Our measurement setup was the following.
>>
>>                        /-----\
>> [client] --- [router1]         [router2] --- [server]
>>                        \-----/
>>
>> An OpenVPN tunnel in MPTCP mode was set up between routers 1 & 2.
>>
>> The client & server were not running an MPTCP kernel.
>>
>> We simply recorded the time needed to transfer a random 10 MB file
>> from the server to the client while tweaking the speed, latency and
>> reliability of each link on routers 1 & 2 with netem.
>>
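
For reference, the per-link shaping can be reproduced along these lines
(a minimal sketch assuming Linux with iproute2/netem; the interface
name and the Python wrapper are illustrative, not the exact scripts
used in the experiment):

    import subprocess

    def shape_link(dev, rate_mbit, delay_ms=0, loss_pct=0.0):
        # Replace any existing root qdisc with a netem qdisc applying
        # the requested rate limit, extra delay and random loss.
        subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"],
                       stderr=subprocess.DEVNULL)
        cmd = ["tc", "qdisc", "add", "dev", dev, "root", "netem",
               "rate", "%dmbit" % rate_mbit]
        if delay_ms:
            cmd += ["delay", "%dms" % delay_ms]
        if loss_pct:
            cmd += ["loss", "%g%%" % loss_pct]
        subprocess.run(cmd, check=True)

    # e.g. the "latency 30" row of scenario 3: 10 Mbps down with 30 ms
    # of added delay on Link 2's downstream interface (name assumed).
    shape_link("eth1", 10, delay_ms=30)
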
>> We used quic_server & quic_client
>> from https://github.com/google/proto-quic
>>
>>
>> In the following tables, Download/Upload rates are in Mbps, time is
>> in seconds, and perf is the achieved transfer expressed as a
>> percentage of the perfect theoretical result at max link speed
>> (without any IP or TCP overhead).
>>
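
For concreteness, the perf column can be reproduced as follows (a
minimal sketch in Python; the formula is inferred from the rows and
simply compares the ideal transfer time over the two downstream links
combined against the measured time):

    def perf_pct(file_mbytes, down1_mbps, down2_mbps, time_s):
        # Ideal time: file size in megabits divided by the aggregate
        # downstream bandwidth, ignoring all protocol overhead.
        ideal_s = file_mbytes * 8 / (down1_mbps + down2_mbps)
        return 100 * ideal_s / time_s

    # First row of table 1: 1+1 Mbps down, 58.1 s for the 10 MB file.
    print(round(perf_pct(10, 1, 1, 58.1), 2))   # -> 68.85
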
>>
>> 1) "no" latency (< 1ms) & no packet loss; different link bandwidths
>>
>> Link 1        | Link 2        |
>> --------------|---------------|---------------------
>> Down   Upload | Down   Upload | time (s) | perf (%)
>> ----------------------------------------------------
>>  1      1     |  1      1     |   58.1   |  68.85
>>  1      1     |  2      1     |   48.4   |  55.10
>>  1      1     |  4      1     |   24.3   |  65.84
>>  1      1     |  8      1     |   12.3   |  72.27
>>  1      1     | 10      1     |   10.3   |  70.61
>>  1      1     | 15      5     |    7.2   |  69.44
>>  1      1     | 20      5     |    6.5   |  58.61
>>  1      1     | 25      5     |   18.2   |  16.91
>>  1      1     | 30     10     |   20.1   |  12.84
>>  1      1     | 50     10     |   15.5   |  10.12
>>  2      1     |  2      1     |   31.7   |  63.09
>>  4      1     |  4      1     |   17.1   |  58.48
>>  8      1     |  8      1     |   10     |  50.00
>> 10      1     | 10      1     |    7.3   |  54.79
>> 15      5     | 15      5     |    6.4   |  41.67
>> 20      5     | 20      5     |    3.6   |  55.56
>> 25      5     | 25      5     |    3.3   |  48.48
>> 30     10     | 30     10     |    2.9   |  45.98
>>
>> 2) Fixed bandwidth ; no latency ; different packet loss on Link2
>>
>> Link 1        | Link 2                |
>> --------------|-----------------------|---------------------
>> Down   Upload | Down   Upload  P-L(%) | time (s) | perf (%)
>> ------------------------------------------------------------
>>  2      1     | 10      2       0     |    9.9   |  67.34
>>  2      1     | 10      2       1     |   11.2   |  59.52
>>  2      1     | 10      2       2     |   11.3   |  59.00
>>  2      1     | 10      2       3     |   11.7   |  56.98
>>  2      1     | 10      2       4     |   12.3   |  54.20
>>  2      1     | 10      2       5     |   15.9   |  41.93
>>  2      1     | 10      2       6     |   16.6   |  40.16
>>  2      1     | 10      2       7     |   18.7   |  35.65
>>  2      1     | 10      2       8     |   20.1   |  33.17
>>  2      1     | 10      2       9     |   23.5   |  28.37
>>  2      1     | 10      2      10     |   27.4   |  24.33
>>
>> 3) Fixed bandwidth ; no packet loss ; variable latency (in ms) on Link2
>>
>> Link 1        | Link 2                      |
>> --------------|-----------------------------|---------------------
>> Down   Upload | Down   Upload  Latency (ms) | time (s) | perf (%)
>> ------------------------------------------------------------------
>>  2      1     | 10      2        0          |    9.9   |  67.34
>>  2      1     | 10      2        5          |   10     |  66.67
>>  2      1     | 10      2       10          |   10.1   |  66.01
>>  2      1     | 10      2       15          |   10.2   |  65.36
>>  2      1     | 10      2       20          |   10.6   |  62.89
>>  2      1     | 10      2       30          |   12.2   |  54.64
>>  2      1     | 10      2       40          |   14.3   |  46.62
>>  2      1     | 10      2       50          |   16.2   |  41.15
>>  2      1     | 10      2       60          |   18.2   |  36.63
>>
>>
>> IMHO those results confirm the intuition that running a protocol like
>> QUIC, which includes its own congestion control and retransmission
>> mechanisms, over a reliable bytestream protocol like MPTCP is not a
>> good idea.
>>
>> As you can see, with QUIC over MPTCP sub-flows, performance drops as
>> soon as the medium is unreliable or as soon as there is latency.
>>
>> The same tests were performed again, but this time with HTTP over
>> end-to-end MPTCP, to have a point of comparison.
>
> [Med] When you say "end-to-end MPTCP", do you mean that MPTCP is enabled by the client and the server?
>

Yes. Given our experience with TCP/MPTCP proxies, the result would have
been the same with TCP/MPTCP proxies running on the two routers, but
these proxies could not be integrated into this setup for practical
reasons.

>> 1b) "no" latency (< 1ms) & no packet loss; variable bandwidth
>>
>> Link 1        | Link 2        |
>> --------------|---------------|---------------------
>> Down   Upload | Down   Upload | time (s) | perf (%)
>> ----------------------------------------------------
>>  2      1     |  2      1     |   23.5   |  85.11
>>  4      1     |  4      1     |   11.2   |  89.29
>>  8      1     |  8      1     |    5.7   |  87.72
>> 10      1     | 10      1     |    4.9   |  81.63
>> 15      5     | 15      5     |    3     |  88.89
>> 20      5     | 20      5     |    2.5   |  80.00
>> 25      5     | 25      5     |    1.8   |  88.89
>> 30     10     | 30     10     |    1.5   |  88.89
>>
>> This should not surprise the readers of this list, but it confirms
>> that MPTCP works well in this environment.
>>
>> 2b) Fixed bandwidth ; no latency ; variable packet loss (P-L) on Link2
>>
>> Link 1        | Link 2                |
>> --------------|-----------------------|---------------------
>> Down   Upload | Down   Upload  P-L(%) | time (s) | perf (%)
>> ------------------------------------------------------------
>>  2      1     | 10      2       0     |    8.2   |  81.30
>>  2      1     | 10      2       1     |    7.8   |  85.47
>>  2      1     | 10      2       2     |    7.7   |  86.58
>>  2      1     | 10      2       3     |    7.8   |  85.47
>>  2      1     | 10      2       4     |    7.7   |  86.58
>>  2      1     | 10      2       5     |    8.8   |  75.76
>>  2      1     | 10      2       6     |    8     |  83.33
>>  2      1     | 10      2       7     |    7.8   |  85.47
>>  2      1     | 10      2       8     |    7.9   |  84.39
>>  2      1     | 10      2       9     |    8     |  83.33
>>
>> Again, MPTCP adapts correctly to packet losses in the environment.
>>
>> Given the bad results of running QUIC over MPTCP, we don't plan to
>> analyse this in more detail.
>>
>
> [Med] What if you had an option in the MPTCP implementation to relax TCP reliability checks on QUIC-triggered MPTCP connections?

If you disable reliability in MPTCP, you end up with a protocol that is
very different from MPTCP. In TCP, congestion control and flow control
are closely coupled with reliable delivery, and removing this coupling
would result in a protocol that would not be TCP anymore.



Olivier