Re: [tcpm] a question for TCPLS

"Scharf, Michael" <> Fri, 25 March 2022 14:47 UTC

From: "Scharf, Michael" <>
To: Maxime Piraux <>, Yoshifumi Nishida <>
CC: "" <>
Thread-Topic: [tcpm] a question for TCPLS
Date: Fri, 25 Mar 2022 14:47:33 +0000

For what it is worth, my MCTCP prototype in 2010, which was similar to the TCPLS design, used such a user-space scheduler based on “tcp_info” data. Obviously, MCTCP scheduled its own (TLS-like) records and relied on the error recovery of each TCP connection, with all the pros and cons of that approach.

It was a KISS design: my MCTCP implementation ran entirely in user space on an unmodified Linux kernel TCP/IP stack. Only coupled congestion control (RFC 6356) required a small kernel patch of a few lines of code, so that the user-space scheduler could modify the congestion window of each coupled connection.
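To illustrate the kind of user-space scheduler input this relies on, the sketch below polls per-connection statistics with getsockopt(TCP_INFO) on Linux. The field offsets follow my reading of the Linux "struct tcp_info" layout and are an assumption, not a definitive decoding; check <linux/tcp.h> for your kernel.

```python
import socket
import struct

# Assumed Linux "struct tcp_info" ABI: eight one-byte fields followed by
# 32-bit counters. Verify against <linux/tcp.h> before relying on it.
_TCP_INFO_FMT = "8B19I"  # first 84 bytes of struct tcp_info

def parse_tcp_info(raw: bytes) -> dict:
    """Decode the subset of tcp_info fields a scheduler cares about."""
    f = struct.unpack(_TCP_INFO_FMT, raw[:struct.calcsize(_TCP_INFO_FMT)])
    return {
        "state":       f[0],   # TCP state (1 = ESTABLISHED)
        "retransmits": f[2],   # consecutive RTO-based retransmits
        "rto_us":      f[8],   # current retransmission timeout
        "snd_mss":     f[10],
        "lost":        f[14],  # segments currently considered lost
        "retrans":     f[15],  # segments currently in retransmission
        "rtt_us":      f[23],  # smoothed RTT, microseconds
        "snd_cwnd":    f[26],  # congestion window, in segments
    }

def read_tcp_info(sock: socket.socket) -> dict:
    """Linux-only: fetch per-connection statistics for a TCP socket."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    return parse_tcp_info(raw)
```

A user-space scheduler can poll this per connection and steer records toward the connection with the lowest smoothed RTT or fewest outstanding retransmissions.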


From: tcpm <> On Behalf Of Maxime Piraux
Sent: Friday, March 25, 2022 11:51 AM
To: Yoshifumi Nishida <>
Subject: Re: [tcpm] a question for TCPLS

Hi Yoshi,
Thank you for this question. It is very interesting as it touches on a fundamental choice of TCPLS.

Indeed, TCPLS does not act at the TCP segment level; it makes its decisions at the TLS record level. This choice is driven by several factors. First, when TLS is used atop TCP, the TCP segments carrying a TLS record become bound together: the receiver cannot decrypt any of their data before the whole record has been received. The loss of a single segment, whether it arrived in order or not, can therefore block the entire record from being decrypted and processed; the record remains in the receiver's buffers until it is complete.
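This record-level head-of-line blocking can be seen in a toy model of a TLS-style record layer: records carry a 5-byte header (type, version, 16-bit length), and no byte of a record is delivered until the whole body has arrived. This is a simplified sketch, not real TLS parsing.

```python
import struct

class RecordReassembler:
    """Toy TLS-style record layer: a record is only delivered once
    every byte of its body has arrived from the TCP bytestream."""

    def __init__(self):
        self.buf = b""

    def feed(self, data: bytes):
        """Append bytes from the stream; return the completed records."""
        self.buf += data
        records = []
        while len(self.buf) >= 5:
            _type, _version, length = struct.unpack("!BHH", self.buf[:5])
            if len(self.buf) < 5 + length:
                break  # record incomplete: everything waits in the buffer
            records.append(self.buf[5:5 + length])
            self.buf = self.buf[5 + length:]
        return records
```

Feeding only part of a record yields nothing; the data is released to the application only when the last segment of the record arrives.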

When TLS is used atop MPTCP, this effect can be further amplified by scheduling. Spreading the segments of a single TLS record over two paths can increase the transmission time, as segments arriving on the fast path are of no use until the segments sent on the slow path have been received. In this case, splitting the application data into two independent records can make more sense for the application. In TLS+MPTCP one could split the data into two TLS records, but because MPTCP maintains a single bytestream, they would still be reordered before decryption at the receiver. So to really benefit from this finer-grained scheduling, MPTCP would have to be aware of the TLS record boundaries inside the TCP bytestream, with the limitation that the receiver's reordering can work against the scheduling decisions.
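A record-level scheduler along these lines splits the application data into independent records and pins each record wholly to one TCP connection, so that a record never straddles two paths. A minimal sketch, where the path names and the RTT-sorted round-robin policy are my own illustration rather than the TCPLS scheduler:

```python
def schedule_records(data, paths, record_size=1200):
    """Split `data` into independent records and pin each record to a
    single path, preferring the lowest-RTT path first.

    `paths` is a list of (name, rtt_us) tuples -- a stand-in for
    whatever per-connection metrics (e.g. tcp_info) the scheduler uses.
    """
    ordered = sorted(paths, key=lambda p: p[1])  # fastest path first
    chunks = [data[i:i + record_size]
              for i in range(0, len(data), record_size)]
    # Round-robin over the RTT-ordered paths, one whole record per path.
    return [(ordered[i % len(ordered)][0], chunk)
            for i, chunk in enumerate(chunks)]
```

Because each record maps to exactly one connection, a loss on the slow path delays only the records scheduled there, not the records already delivered on the fast path.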

As far as I'm aware, there is currently no easy way to implement this. So far, most MPTCP schedulers have made their decisions based on inputs from the network rather than from the application. Giving up segment-level granularity enables TCPLS to be implemented purely atop TCP, and we believe this tradeoff is worthwhile.

Packet losses, dupacks, RTOs and RTT variations can still be observed, for instance through tcp_info metrics, and then be used to drive the TLS record scheduling.

At any point in a TCPLS session, the sender knows, thanks to the ACK frame, which TLS records have been received, decrypted and processed by the receiver over each TCP connection. These ACKs can be sent on any TCP connection of the session. TCPLS can decide at any point to reinject the content of a TLS record on another TCP connection, but it delegates the handling of TCP segment loss to each TCP connection. The decision to reinject a record can be based on many inputs, for instance a timer on the receipt of the ACK frame, tcp_info metrics indicating that the connection is performing badly, or of course a timeout of the TCP connection.
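The reinjection logic described above can be sketched as a small tracker of outstanding records. This is an illustration, not the TCPLS implementation: record-level ACKs are simulated by a call to `acked()`, and "the connection looks bad" is abstracted into a caller-supplied predicate (which could, for instance, inspect tcp_info).

```python
import time

class RecordTracker:
    """Track outstanding TLS records per TCP connection and decide
    which ones to reinject on another connection."""

    def __init__(self, reinject_after=0.2, now=time.monotonic):
        self.reinject_after = reinject_after  # seconds without an ACK
        self.now = now                        # injectable clock for tests
        self.outstanding = {}                 # record_id -> (conn, sent_at)

    def sent(self, record_id, conn):
        self.outstanding[record_id] = (conn, self.now())

    def acked(self, record_id):
        # ACK frames may arrive over any TCP connection of the session.
        self.outstanding.pop(record_id, None)

    def records_to_reinject(self, conn_is_suspect=lambda conn: False):
        """Records whose ACK is overdue, or whose connection looks bad
        (e.g. rising `retrans` in tcp_info, or a received reset)."""
        t = self.now()
        return [rid
                for rid, (conn, sent_at) in self.outstanding.items()
                if t - sent_at > self.reinject_after
                or conn_is_suspect(conn)]
```

The key point is that TCP still repairs segment loss inside each connection; the tracker only decides when a whole record should additionally be resent over a different connection.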

We also defined a Connection Reset frame with which the receiver notifies the sender that it received a TCP RST on one of the TCP connections. This can shorten the time TCPLS needs to detect that a TCP connection was disrupted by a middlebox and to reinject the lost TLS records. Note that adding this kind of control message to MPTCP would require a new TCP option, which travels in cleartext and in much more limited space.
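To make the contrast with TCP options concrete, here is a hypothetical encoding of such a frame: one type byte plus the 32-bit id of the reset connection. The type value and layout are placeholders of my own, not the draft's wire format; the point is that the frame travels encrypted inside a TLS record, with no 40-byte option-space limit.

```python
import struct

FRAME_CONN_RESET = 0x1E  # placeholder type value, not from the draft

def encode_conn_reset(conn_id: int) -> bytes:
    """Build a Connection Reset frame: type byte + 32-bit connection id."""
    return struct.pack("!BI", FRAME_CONN_RESET, conn_id)

def decode_conn_reset(frame: bytes) -> int:
    """Return the id of the TCP connection that received the RST."""
    ftype, conn_id = struct.unpack("!BI", frame)
    if ftype != FRAME_CONN_RESET:
        raise ValueError("not a Connection Reset frame")
    return conn_id
```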


On 24/03/22 at 04:34, Yoshifumi Nishida wrote:

I tried to ask a question for TCPLS during the presentation, but gave up due to time constraints. So, I just sent it here before my memory expires...

In my understanding, the advantage of MPTCP is that it's in the TCP stack, so it can have fine-grained packet scheduling and path management. But in the case of TCPLS, I'm not sure how it can control multiple TCP connections effectively. For example, I am wondering how TCPLS finds out about the failure of connections, or which packets have been considered lost at the TCP layer, without delay. Can you provide some more info about this point?