Re: [iccrg] Disadvantages of TCP connection splitters

Toerless Eckert <> Fri, 10 January 2020 10:53 UTC

Date: Fri, 10 Jan 2020 11:52:53 +0100
From: Toerless Eckert <>
To: Michael Welzl <>
Cc: iccrg IRTF list <>

On Fri, Jan 10, 2020 at 11:21:28AM +0100, Michael Welzl wrote:
> > On Jan 10, 2020, at 11:14 AM, Toerless Eckert <> wrote:
> > 
> > Main downside is that it's limited to TCP, and I think it might
> > be easier nowadays to implement split/merge generically for
> > 5-tuple flows and not for TCP alone, and the customer gets
> > the benefits for TCP, DCTCP, SCTP, QUIC, RTP and any other flow.
> Sure, you're obviously right, but that's also potentially solvable with app-level proxying (see draft-kuehlewind-quic-substrate, for example) - I'm more interested in the general congestion control & transport aspects of this.

Today's transport protocols expect a tuple-transparent end-to-end
connection; you cannot cleanly design solutions with application-level
proxies, so we're always talking about network/transport-layer proxies
or encaps. At least when you are talking about "application" in
the sense of the TCP/IP layering. If you are just talking about
"userland"-implementable proxies, that's a different issue. But given
good enough userland APIs, you can build any proxy there.
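For illustration, a protocol-agnostic split keyed on the 5-tuple could be sketched roughly as below (a minimal, hypothetical sketch: the dict-based packet representation and the hash-based path choice are my assumptions, not anything from a real product):

```python
import hashlib

def five_tuple(pkt):
    """Extract the flow key shared by TCP, UDP (and thus QUIC), SCTP, etc."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"])

def pick_path(pkt, paths):
    """Stable per-flow path choice: hash the 5-tuple so every packet of a
    flow takes the same path, regardless of the transport protocol."""
    key = "|".join(str(f) for f in five_tuple(pkt)).encode()
    digest = hashlib.sha256(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

# Example: a QUIC (UDP) flow gets a path without any TCP-specific state.
pkt = {"src_ip": "10.0.0.1", "dst_ip": "192.0.2.7",
       "proto": 17, "src_port": 4433, "dst_port": 443}
path = pick_path(pkt, ["dsl", "lte"])
```

The point of the sketch is only that nothing here inspects TCP state, which is what makes the customer-visible benefit apply to any 5-tuple flow.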

> > In most situations you can't simply split the packets without
> > encap or NAT anyhow because the alternative paths may likely
> > require different source or dest addresses (dual SP attachment
> > via broadband/DSL or the like). Once you do have such encap
> > it's typically easier to run your own congestion/retransmission
> > between merge/split instead of just trying to passively
> > observe and analyze the end-to-end TCP signals and figure
> > out how to deduce the appropriate load-split from them.
> "run your own congestion/retransmission" - yes, the added flexibility is a big plus here, that's the plus that I'm trying to weigh against a minus...

I can only see theoretical minuses that, in today's real world, would
never be important enough to be worth solving.

Let's say you have differently CC-aggressive TCP stacks on your
hosts. That is by itself usually more of a problem than a benefit,
which is why you wouldn't necessarily invest big development costs
in supporting split/merge for this situation.

But let's say you somehow do, and now you want to maintain this difference in
aggressiveness across the split paths. If you do your own CC on
split/merge, you would need to explicitly recognize and emulate
those differences in CC-aggressiveness. That would be hard given the
likely insufficient exposure of information in TCP, and less so in e.g. QUIC.

If you were to try to make a very passive TCP split/merge (maybe just a
dual-leaky-bucket shaper per flow/path), you could probably maintain
the difference in CC-aggressiveness of different flows more easily, but
you would likely have a bigger problem in correctly predicting the
bandwidth (serving rate) and capacity (bucket size) of each split/merge path.
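A per-flow/path shaper of that kind could look roughly like this token-bucket sketch (the rate/capacity numbers are invented; estimating them per path is exactly the prediction problem just described):

```python
class TokenBucket:
    """Simple token bucket: `rate` models the path's serving rate
    (bytes/s), `capacity` models its burst tolerance (bytes)."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, nbytes, now):
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        # Packet must wait or queue, which is what exposes the path's
        # limit to the end-to-end CC instead of hiding it behind encap.
        return False

# One bucket per (flow, path); guessed numbers for a DSL and an LTE leg.
shapers = {"dsl": TokenBucket(rate=2e6, capacity=30_000),
           "lte": TokenBucket(rate=5e6, capacity=60_000)}
```

Note that such a shaper never terminates or re-ACKs the connection; it only delays packets, so the end hosts' own CC (however aggressive) keeps doing the actual rate adaptation.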

Aka: in the encap case you will likely take the CC feedback out for all
flows, resulting in overall lower but equal throughput for all flows
(a more perfect split/merge path, but lower bandwidth and maybe higher
latency); in the passive split you can likely expose more of the
split/merge path issues to the end-to-end CC, but you get situations
that are a lot more difficult to predict and manage well.

Given how widely TCP is used, I am pretty sure commercial vendors have
explored this space as far as it is commercially beneficial, and it's likely all very hacky.

IMHO the better research question is how you would design a future
transport protocol (sublayer) that explicitly supports coupled on-path
proxies with their various functions, especially in a way where the
endpoints are in control of them.


> Cheers,
> Michael
> > 
> > Toerless
> > 
> > On Fri, Jan 10, 2020 at 09:54:07AM +0100, Michael Welzl wrote:
> >> Hi,
> >> 
> >> I've been thinking a lot about TCP connection splitters lately ( ).
> >> 
> >> I'm curious: what are the real practical disadvantages of this type of PEP that people have seen?
> >> I'll appreciate any kind of feedback, also anecdotes, but pointers to citable papers would be best.
> >> 
> >> BTW, let's keep multi-path apart from this discussion please. My question is about single-path TCP.
> >> 
> >> Cheers,
> >> Michael
> >> 
> >> PS: I'm not trying to indirectly hint that such devices would be *always good*. However, the scenarios where they are not strike me as surprisingly narrow, so I wonder if I'm missing more.
> >> 
> >> _______________________________________________
> >> iccrg mailing list
> >>
> >>