Re: [arch-d] possible new IAB programme on Internet resilience

John C Klensin <> Wed, 01 January 2020 21:49 UTC

Date: Wed, 01 Jan 2020 16:48:50 -0500
From: John C Klensin <>
To: Brian E Carpenter <>, Dan York <>, Lucy Lynch <>
Subject: Re: [arch-d] possible new IAB programme on Internet resilience

Brian, Dan, and Lucy,

I agree with Lucy's point and Dan's elaboration.  Yes, it seems
to me that some of the private cloud issues that Dan is talking
about are a reversion to where we were in the 1990s, with a few
large differences beyond the shift from public to private money. One is
that the public network, such as it was at the beginning of the
1990s, was, in many respects, a connectivity workaround to the
AUP-restricted research and education networks Brian refers to.
At this point, there is far more dependence on the
existence of the so-called public Internet.  And, to the extent
that those proprietary clouds reduce the incentives to provide
high-quality services available to the public at a reasonable
price, resilience suffers as do several other things.

Two other aspects of the situation Lucy and Dan point out
concern me even more.   One is that the design of the network
for robustness in the light of disaster (natural, technological,
or political) depends heavily on assumptions about diversity of
paths and resources.  Where once we had a great deal of
low-bandwidth copper and relatively low-density fiber, diverse
physical paths were easy to find for many transmissions.  Now,
as bandwidth of fiber has risen to levels most of us could not
imagine in the 1980s, physical path diversity opportunities have
decreased and the inherent robustness of "if a packet doesn't go
one way, it can go another" has decreased with it. For, e.g.,
undersea cables, there may still be an increasing number of
physical cables, but the number of landing points is not
increasing as rapidly.   There is another aspect of that change
which is also connected to the growth of large clouds.  Unless
we are very careful, encrypting everything and organizing things
so that an individual's traffic gets lost in the crowd also
decreases robustness and resiliency, perhaps not against attacks
on privacy but against attacks on (or unplanned problems with)
the ability to transmit and deliver packets and the data they carry.

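The "if a packet doesn't go one way, it can go another" intuition can be made concrete by counting edge-disjoint paths between two endpoints, which by Menger's theorem equals the max flow with unit link capacities. The sketch below is a toy illustration with an invented topology (the node names and link counts are not from any network discussed in this thread); it shows how consolidating many low-capacity links onto one high-capacity path through a single landing point collapses the disjoint-path count, even if total bandwidth goes up:

```python
# Toy illustration: path diversity measured as edge-disjoint paths,
# computed as unit-capacity max flow (Edmonds-Karp, BFS augmentation).
from collections import defaultdict, deque

def edge_disjoint_paths(edges, src, dst):
    """Count edge-disjoint undirected paths from src to dst."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:            # each undirected link: capacity 1 per direction
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {src: None}
        q = deque([src])
        while q and dst not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if dst not in parent:
            return flow
        # push one unit of flow back along the path found
        v = dst
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Many low-capacity links: three disjoint routes from A to D
diverse = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D"), ("A", "D")]
# Consolidated onto one path through a single landing point L
consolidated = [("A", "L"), ("L", "D")]
print(edge_disjoint_paths(diverse, "A", "D"))        # 3
print(edge_disjoint_paths(consolidated, "A", "D"))   # 1
```

A single cut of the A-L link (or the L landing point) severs the consolidated topology entirely, while the diverse one survives any two link failures.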
The third issue is one we are seeing, IMO, more and more at the
applications layer.  In the networks of the early 1990s,
especially those not restricted by AUPs, most applications
services were provided on a single system into which one logged
in.  Outside the research and education networks, network
transmission, when it occurred at all, was a matter of gateways
(some legal, some less so) among those essentially-proprietary
systems.  And many of those arrangements involved bilateral
agreements among the proprietors.  If standards were needed, the
proprietary system operators made agreements among themselves,
with anyone who wanted to interoperate with them at their mercy.
As concentration increases among a relatively small number of
operators or software providers, even if their systems are
distributed across proprietary clouds rather than being all in
one place, I wonder if we are seeing that again with de facto
standards for web browsers being made by a handful of web
browser producers, de facto standards for email being made by a
handful of very large providers, etc., as examples.  There may be
advantages to such arrangements -- certainly they represent
running code -- but they put robustness and resiliency almost
entirely in the hands of those operators and vendors.

I hope the IAB intends to address those issues.  If it does not,
I fear that the proposed programme will not be useful to the
Internet community.


--On Thursday, 02 January, 2020 08:12 +1300 Brian E Carpenter
<> wrote:

> Hi Dan,
> Cherry-picking from your interesting message:
>> What if there winds up being a lack of diversity of paths
>> through the "open" and "public" Internet? What if
>> increasingly traffic winds up traveling through these
>> proprietary global networks (to which you need to pay to
>> connect and through that gain permission to send traffic -
>> and only to that company's platforms)?
> Is this really new, from a technical viewpoint? It reminds me
> very much of the early 1990s, when policy based BGP4 routing
> first became a thing, and acceptable use policies were applied
> by NSFNET, ESNET, and their equivalents in Europe and Asia.
> That was all about money, of course, except that it was public
> money.
> Regards
>    Brian
> On 02-Jan-20 07:44, Dan York wrote:
>> Stephen,
>> Lucy very nicely captured a concern I'd had… upon which I
>> expanded a bit below.
>>> On Dec 31, 2019, at 2:27 PM, Lucy Lynch
>>> < <>>
>>> wrote:
>>> On Fri, 20 Dec 2019, Stephen Farrell wrote:
>>>> [1]
>>> Circling back to the top here.
>>> I think this is a fine topic for an IAB program and I took
>>> the draft charter to encompass resilience as both a
>>> technical and a design problem.
>> I also agree that this seems to be a great topic for an IAB
>> program.
>>> I am particularly interested in this statement:
>>> ----
>>> Definition of resilience:
>>> 1] the capability of a strained body to recover its size and shape after deformation caused especially by compressive stress
>>> 2] an ability to recover from or adjust easily to misfortune or change
>>> This program is mostly interested in definition #2.
>>> ----
>>> I actually have my own concerns related to #1 as well and
>>> would hope that this program might consider the warping of
>>> the overall Internet model to accommodate current trends or
>>> business practices.
>>> As an example - an Internet optimized for the web may not be
>>> the same internet that supports real time data collection
>>> and shared computation in the context of big science. How do
>>> we avoid closing out capabilities as we optimize for others?
>>> Narrowing of choices looks like a path to a limited and more
>>> brittle model to me.
>> I agree with Lucy's example… and would also note that
>> other sources of "compressive stress" could be the
>> increasing movement of almost all real-time communication via
>> voice and video to be over the Internet, and most recently
>> the very large movement of streaming online gaming, which has
>> very different characteristics and stress factors (as noted
>> in recent discussions in the new MOPS working group and the
>> BOF before it).
>> I like Lucy's phrase "the warping of the overall Internet
>> model to accommodate current trends or business practices,"
>> particularly when some of those current business practices
>> may involve connecting networks not only to the public
>> Internet, but also to private, proprietary, globe-spanning
>> networks. 
>> For example, at Amazon's recent re:Invent conference they
>> promoted a way to directly connect enterprise data centers to
>> Amazon's global AWS network ("Outposts") and also a way
>> to connect telco points-of-presence to Amazon's network
>> ("Wavelength"). My understanding is that Microsoft has
>> similar functionality for Azure ("Stack", I believe) and
>> Google either has or is working on something similar for
>> their Google Cloud Platform. Similarly, large entities such
>> as Facebook and Netflix have built their own global, private
>> networks that interconnect to local data centers (where those
>> data centers are also connected to the public Internet).
>> All of these separate, private, global networks are designed
>> to help speed access to content, applications, etc., through
>> caching, "edge" computing, and other technologies.
>> Thinking as a network engineer about running applications in
>> various cloud providers, I can see the value that could be
>> obtained by these connections. And some of these providers
>> are typically promoting these services as providing a
>> low-latency alternative to sending traffic across the public
>> Internet.
>> Going back to the draft charter text
>> ( ), I note
>> this:
>>>  One fundamental pattern contributing to Internet
>>> resilience is diversity: for example, diversity of physical
>>> links, of peer networks, of paths through the network. Lack
>>> of diversity is a key challenge for Internet resilience.
>> What if there winds up being a lack of diversity of paths
>> through the "open" and "public" Internet? What if
>> increasingly traffic winds up traveling through these
>> proprietary global networks (to which you need to pay to
>> connect and through that gain permission to send traffic -
>> and only to that company's platforms)?
>> Given that the Internet has always been a "network of
>> networks", there have been (and still are) multiple large,
>> global networks to which you could connect your network and
>> data centers. You may, in fact, connect to multiple of those
>> large, global ISPs to have a higher degree of
>> "resilience" for your network. The difference to me is
>> that in connecting to those network providers (and paying to
>> do so), you are connecting to the open, public Internet. 
>> In contrast, connecting to these newer private networks gives
>> you only access to the company's cloud or content platform.
>> It's not the whole Internet.
>> So regarding definition #2, how does this evolution in
>> network connectivity impact the overall resilience of the
>> Internet to recover from issues?
>> I think it's a very interesting question to consider as
>> part of this program.
>> My 2 cents, 
>> Dan
>> P.S. And yes, I'm well aware that some large enterprises
>> also operate their own private, global networks / WANs to
>> interconnect their own data centers - and have done so for
>> many years. I think the difference is one of scale. The new
>> "cloud" providers are operating networks significantly
>> larger than any individual enterprise - and are also
>> encouraging enterprises to move away from operating their own
>> networks and to instead move their networking to these newer
>> networks.
>> _______________________________________________
>> Architecture-discuss mailing list