[multipathtcp] High-level design decisions /architecture

<philip.eardley@bt.com> Mon, 02 November 2009 17:47 UTC

From: philip.eardley@bt.com
To: multipathtcp@ietf.org
Date: Mon, 02 Nov 2009 17:47:54 -0000

Hi

 

In order to help the process of reaching consensus on the high-level
design decisions /architecture, we have gone through the current set of
I-Ds to identify items that seem like they may fall into the category of
high-level design decisions - see below. Note: this list is not meant
to imply our support or non-support of these items (except where our
charter requires our support!); the purpose is to identify what needs to
be discussed.

 

Please shout if there's something we misunderstood or missed.

 

Please let the WG know your opinions on these items - especially if you
disagree with any, have a potential alternative, or think something is
part of 'detailed design' and not 'high-level design'.

 

Thanks

Phil & Yoshifumi

 

 

Protocol-related

============

*	IPv4 & IPv6 will have the same high-level design 

	*	[Comment: the Charter implies both v4 & v6 are in scope]


*	The MPTCP connection is identified, from an app's perspective, by
the addresses associated with the original path (even if that subflow
/path closes) 
*	A SYN/ACK exchange (on a single path) checks that both ends
support MPTCP, with automatic fall-back to TCP if not 
*	each MPTCP subflow looks exactly like an ordinary, semantically
independent TCP flow. MPTCP effectively runs on top. So re-transmission,
FINs etc are independent on each subflow 
*	control of the MPTCP connection (as opposed to the subflows) is
carried in TCP options, eg DATA FIN for closing an MPTCP connection 
*	the protocol involves an MPTCP stack at both ends of the MPTCP
connection 

	*	[Comment: this is in our charter] 

*	Either end of the MPTCP connection can add or remove paths
to/from the MPTCP connection 

	*	[Comment / Question: this seemed a bit unclear in the
protocol i-d, but is presumably required?] 

*	Subflow end point definition

	*	[Question: Should we allow subflows in a single MPTCP
connection to have different ports?] 
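The SYN/ACK capability check with automatic fall-back can be sketched as follows. This is an illustrative model, not a real socket API: the Host class, the option name MP_CAPABLE and the negotiate() helper are all invented for the example; only the behaviour (echo the option if both ends support MPTCP, otherwise silently degrade to plain TCP) comes from the bullet above.

```python
# Sketch (hypothetical API) of the SYN/ACK capability check with
# automatic fall-back to ordinary TCP.

class Host:
    def __init__(self, supports_mptcp):
        self.supports_mptcp = supports_mptcp

    def syn(self):
        # The initiator advertises MPTCP support as a TCP option in the SYN.
        return ["MP_CAPABLE"] if self.supports_mptcp else []

    def synack(self, syn_options):
        # The listener echoes the option only if the SYN carried it
        # and the listener itself supports MPTCP.
        if "MP_CAPABLE" in syn_options and self.supports_mptcp:
            return ["MP_CAPABLE"]
        return []

def negotiate(initiator, listener):
    """Return "mptcp" when both ends support it, else fall back to "tcp"."""
    synack_options = listener.synack(initiator.syn())
    return "mptcp" if "MP_CAPABLE" in synack_options else "tcp"
```

Because the check rides on a normal SYN/SYN-ACK exchange, a legacy peer that ignores unknown options automatically produces the "tcp" outcome.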

 

 

Congestion algorithm

===============

*	the goals are: [1] MPTCP is at least as good as TCP would be on
the best subflow path; [2] on any path, an MPTCP connection uses less
(or the same) capacity as a TCP connection would; [3] MPTCP moves
traffic off more congested paths and onto less congested ones 
*	increases in the congestion windows of the subflows are coupled;
for decreases in the congestion windows, each subflow does NewReno
independently (i.e. they are not coupled) 
*	slow start, fast re-transmit and fast recovery are the same as
RFC5681 
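The coupled-increase / independent-decrease rule can be sketched as below. The exact increase formula is a simplification for illustration (the draft's formula is more involved): per ACK on a subflow, the window grows by 1/total (in packets) rather than 1/cwnd_i, so the connection as a whole increases at most one packet per RTT, like a single TCP; losses are handled per subflow in the usual NewReno way.

```python
# Illustrative sketch of coupled increase, uncoupled decrease.
# Windows are in packets; the increase formula is a simplification.

def on_ack(cwnds, i):
    # Coupled increase: grow subflow i by 1/(sum of all windows),
    # so the connection's aggregate growth matches one TCP flow.
    cwnds[i] += 1.0 / sum(cwnds)

def on_loss(cwnds, i):
    # Independent decrease: NewReno-style halving of the losing
    # subflow only; the other subflows are untouched.
    cwnds[i] = max(cwnds[i] / 2.0, 1.0)
```

Coupling only the increases is what lets goal [3] emerge: less-congested subflows see more ACKs and fewer losses, so traffic drifts toward them.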

 

 

API

===

*	no changes to existing apps are needed to use MPTCP; MPTCP uses
the unaltered TCP socket API 
*	There is an optional, extended API 

	*	[Question: would both ends need the extended API, with a
fall-back if not?] 
	*	[Comment: presumably the actual features of the extended
API are outside the initial high-level design decisions] 

*	congestion control state is shared among application-visible
transport instances (e.g. multiple HTTP connections between the same
pair of hosts) 

*	3 application profiles are mentioned (Bulk data transport,
Latency-sensitive transport with overflow, Latency-sensitive transport
with hot standby) 

	*	[Question: how are these supported? Is a negotiation
mechanism needed?]            
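The congestion-state sharing bullet could look something like the sketch below: one record per host pair, consulted by every connection between those hosts. The table and its fields are invented for illustration (in the spirit of TCB sharing), not a structure defined in the I-Ds.

```python
# Hypothetical per-host-pair table of shared congestion state.

shared_state = {}  # (local_host, remote_host) -> shared record

def get_state(local_host, remote_host):
    # Every connection between the same pair of hosts sees the same
    # record, so a new connection (e.g. a second HTTP connection)
    # can start from learned values instead of cold defaults.
    key = (local_host, remote_host)
    if key not in shared_state:
        shared_state[key] = {"cwnd": 10, "ssthresh": None, "connections": 0}
    record = shared_state[key]
    record["connections"] += 1
    return record
```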

 

 

Security

======

*	exchange a token in the initial SYN for the MPTCP connection,
and include the token when subflows /paths are added to the connection 

	*	[Question: Is the same token used whether the sender or
receiver (of the MPTCP connection) adds the subflow? Is the same
technique used for removing subflows? And closing connections, ie DATA
FIN?] 

*	security mechanisms will not interfere with end-to-end
authentication etc and will be compatible with legacy middleboxes 
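The token scheme in the first bullet can be sketched as follows: a token agreed during the initial SYN exchange must be presented when a subflow is added, so an off-path attacker cannot splice a subflow into someone else's connection. The class and its methods are illustrative; the drafts' actual token format and exchange are not reproduced here.

```python
# Sketch (hypothetical structure) of token-checked subflow addition.
import secrets

class MptcpConnection:
    def __init__(self):
        # Token exchanged in the initial SYN for the MPTCP connection.
        self.token = secrets.token_hex(8)
        self.subflows = []

    def add_subflow(self, addr_pair, presented_token):
        # A join is accepted only if it carries the connection's token.
        if presented_token != self.token:
            return False
        self.subflows.append(addr_pair)
        return True
```

Whether the same token also covers subflow removal and DATA FIN is exactly the open question raised above.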

 

 

Modularity

========

*	the architecture has 2 components, multipath scheduler & path
manager - meaning that improved /different modules of each can be
slotted in 

	*	[Comment: it is unclear from the Architecture i-d how the
congestion control & protocol i-ds map to these components; some of the
text (strangely?) appears in a section about implementation] 
	*	[Question: what are the mechanisms for evolvability? How is
it negotiated which new module (for congestion control or path manager)
to use?] 
	*	[Question: this would seem to imply we need to define an
interface between the modules?] 

*	there is a default path manager module, meaning it is the
mandatory (MUST) fall-back option 

	*	[Comment: the charter restricts us - the path manager we
work on will distinguish paths by multiple addresses]
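One way to picture the two-component split (and the interface question it raises) is the sketch below: a path manager that owns the set of available paths, and a scheduler that decides which path carries the next segment. The class names and the get_paths()/pick_path() boundary are guesses at what such a module interface could look like, not text from the Architecture i-d.

```python
# Hypothetical sketch of the path-manager / scheduler split.

class AddressPathManager:
    """Default path manager: per the charter, paths are distinguished
    by multiple addresses, so each (local, remote) address pair is a path."""
    def __init__(self, local_addrs, remote_addrs):
        self.paths = [(l, r) for l in local_addrs for r in remote_addrs]

    def get_paths(self):
        return list(self.paths)

class RoundRobinScheduler:
    """One interchangeable scheduler; a different module could be
    slotted in behind the same get_paths() interface."""
    def __init__(self, path_manager):
        self.pm = path_manager
        self.next = 0

    def pick_path(self):
        # Choose the path for the next segment; here, simple rotation.
        paths = self.pm.get_paths()
        path = paths[self.next % len(paths)]
        self.next += 1
        return path
```

A defined interface like this is what would let "improved / different modules" be swapped in without touching the other component - which is precisely the negotiation question flagged above.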