Re: [arch-d] A Public Option for the Core

Toerless Eckert <> Wed, 19 August 2020 03:10 UTC

Date: Wed, 19 Aug 2020 05:02:39 +0200
From: Toerless Eckert <>
To: Brian E Carpenter <>

On Wed, Aug 19, 2020 at 09:23:31AM +1200, Brian E Carpenter wrote:
> On 18-Aug-20 17:26, Toerless Eckert wrote:
> > 
> > Multiple parallel TCP connections to overcome TCP issues with high
> > capacity high loss paths even in the absence of congestion was always
> > a bad workaround. 
> It's certainly a workaround, but why was it particularly bad?

Ships in the night (no coupled CC) between the multiple parallel TCP sessions
across the same path. Given how DoE HEP money was responsible for a lot
of US/EU transcontinental capacity even in the '80s/'90s, there must have
been enough money around to look at this problem before 2000, but alas,
progress was limited by having to muck around in the BSD kernel for the most part.
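That "ships in the night" effect is easy to see in a toy fluid AIMD model. The sketch below is illustrative only (it assumes one shared bottleneck and synchronized loss, and the capacity/round numbers are made up): with no coupling, a host that opens n parallel flows converges to roughly n times the share of a single-flow competitor.

```python
def aimd_share(n_a, n_b, capacity=100.0, rounds=10000):
    """Fluid-model sketch of uncoupled AIMD flows on one bottleneck.

    Subscriber A opens n_a parallel TCP flows, subscriber B opens n_b.
    Every flow additively increases each round, and all flows halve
    together when the bottleneck overflows (synchronized loss).
    Returns A's fraction of the total rate after the sawtooth settles.
    """
    rates = [float(i + 1) for i in range(n_a + n_b)]  # arbitrary start rates
    for _ in range(rounds):
        rates = [r + 1.0 for r in rates]              # additive increase
        if sum(rates) > capacity:                     # bottleneck is full
            rates = [r / 2.0 for r in rates]          # multiplicative decrease
    return sum(rates[:n_a]) / sum(rates)
```

With four uncoupled flows against one, A converges to about 4/5 of the link: the flows share no state, so the split simply follows the flow count.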

> The context where GridFTP was developed and used was the world of very
> high capacity links between major Big Data sites; it's actually a
> good example for RFC8799 that we overlooked.

No worries. Everything except the paths used to grow Amazon, Facebook and
Google was and is limited domains ;-))

> If you really want maximum throughput on such links, which also have
> very low bit error rates, you can use a non-windowing, rate-controlled
> transport protocol with simplified error handling.

I think one of the issues was that in the '90s a lot of the paths used
by researchers were still controlled/limited domains, and when they
started to buy more into general-purpose Internet capacity, the available
paths became even worse due to bufferbloat, which may explain the
later focus on those really bad high-capacity, high-loss paths.
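The bufferbloat penalty is just buffer size over link rate; a back-of-envelope calculation (numbers below are illustrative, not from any measurement) shows why an oversized drop-tail buffer wrecks a path for interactive and loss-based transports alike:

```python
def bufferbloat_delay_ms(buffer_bytes: int, link_bps: float) -> float:
    """Worst-case queueing delay a full drop-tail buffer adds to a path:
    the time it takes the link to drain the entire buffer."""
    return buffer_bytes * 8 / link_bps * 1e3

# e.g. 1 MiB of buffer in front of a 10 Mb/s link:
# bufferbloat_delay_ms(1 << 20, 10e6) adds roughly 839 ms of latency
```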

> But that's not an IETF problem, really.

Why not? AFAIK it was, is and should be. The IETF even had a FECFRAME WG for a while.

> > Digital Fountain was already selling software 15 years
> > ago with scatter storage and network coding gather retrieval. Still
> > network coding researchers  seem to claim this stuff is new today.
> The network coding literature goes back to at least 2000**.

Yes, but at that time the more exciting new use of it was multicast;
only a few years later did the technology become a major player in
solving the supposedly easier problem of bad high-capacity unicast paths
(at least from the one-off stories I was told).

> The network coding people I know are aiming at a different problem
> area today: poor quality wireless and/or overloaded satellite links.

Sure. Research-wise they probably think the other areas are done, but
that still doesn't mean we have a good, ubiquitously available NC-based
unicast reliable transport standardized. IPR played a big role IMHO
in throttling wide-ranging adoption/standardization.
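For the record, the fountain-coding idea itself fits in a few lines: the sender emits random XOR combinations of source blocks, and the receiver peels them apart, so *any* sufficiently large subset of packets reconstructs the data regardless of which ones were lost. The sketch below is a toy LT-style code with a made-up degree distribution (not the patented Digital Fountain codes), over small integers rather than real packet payloads:

```python
import random

def fountain_encode(blocks, n_packets, seed=0):
    """Toy LT-style encoder: each output packet is the XOR of a random
    subset of source blocks, tagged with the subset's indices."""
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        degree = min(k, rng.choice([1, 2, 2, 3, 4]))  # crude degree distribution
        idxs = set(rng.sample(range(k), degree))
        payload = 0
        for i in idxs:
            payload ^= blocks[i]
        packets.append((idxs, payload))
    return packets

def fountain_decode(packets, k):
    """Peeling decoder: substitute already-recovered blocks into the
    remaining packets; any packet reduced to degree 1 reveals a block."""
    recovered = {}
    pending = [[set(idxs), payload] for idxs, payload in packets]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for pkt in pending:
            idxs, payload = pkt[0], pkt[1]
            for i in list(idxs):
                if i in recovered:        # peel out known blocks
                    payload ^= recovered[i]
                    idxs.discard(i)
            pkt[1] = payload
            if len(idxs) == 1:
                (i,) = idxs
                if i not in recovered:
                    recovered[i] = payload
                    progress = True
    return [recovered[i] for i in range(k)] if len(recovered) == k else None
```

With roughly 1.5-2x as many received packets as source blocks the peeling decoder usually completes; only *how many* packets arrive matters, not *which* — exactly the property that makes this attractive on lossy high-capacity paths.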


> Exactly the opposite of the Big Data scenario. (However, I am not
> tracking the NWCRG. I am rather amazed by the lack of references
> to the extensive literature in RFC8406, but draft-irtf-nwcrg-nwc-ccn-reqs
> does better.)
> > A lot of of the network side problems of this are the result of the
> > traditional, uncontrolled transit SP paths, aka: the classical Internet
> > model. 
> Indeed. 

> Regards
>     Brian
> ** R. Ahlswede, N. Cai, S.-Y.R. Li and R.W. Yeung, "Network
> Information Flow", IEEE-IT, vol. 46, pp. 1204-1216, 2000.
> > 
> > Cheers
> >     Toerless
> > 
> > On Tue, Aug 18, 2020 at 11:54:34AM +1200, Brian E Carpenter wrote:
> >> On 18-Aug-20 04:42, Toerless Eckert wrote:
> >> ...
> >>  
> >>> -> I would like for traffic to get bandwidth share independent of the
> >>>    number of 5 tuple flows it utilizes (no gaming the system). But
> >>>    rather fair per subscriber (weighted by how much the subscriber pays).
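The per-subscriber idea in that bullet can be made concrete; in this sketch the subscriber names, weights and capacity are purely illustrative:

```python
def per_flow_shares(capacity, flow_counts):
    """Per-5-tuple fairness: opening more flows buys a bigger slice."""
    total = sum(flow_counts.values())
    return {s: capacity * n / total for s, n in flow_counts.items()}

def per_subscriber_shares(capacity, weights):
    """Weighted per-subscriber fairness: the split follows the weight
    (e.g. what the subscriber pays), not the number of flows opened."""
    total = sum(weights.values())
    return {s: capacity * w / total for s, w in weights.items()}

# per_flow_shares(90, {"a": 8, "b": 1})        -> {"a": 80.0, "b": 10.0}
# per_subscriber_shares(90, {"a": 1, "b": 1})  -> {"a": 45.0, "b": 45.0}
```

Under per-flow fairness subscriber "a" gains 8x by opening eight flows (the gaming objected to above); under weighted per-subscriber fairness the split is unchanged no matter how many flows each one opens.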
> >>
> >> You just broke GridFTP, used in Big Science to move terabyte datasets
> >> around the world efficiently. It's not gaming, it's achieving throughput
> >> despite defects in TCP. But the topic is very much alive there, since
> >> GridFTP is now unsupported.
> >>
> >> For further reading:
> >>
> >>
> >>     Brian
> >