Re: [rrg] Critique - ILNP and the other 5 core-edge elimination proposals

Shane Amante <shane@castlepoint.net> Wed, 06 January 2010 08:10 UTC

From: Shane Amante <shane@castlepoint.net>
In-Reply-To: <4B44140B.7060504@firstpr.com.au>
Date: Wed, 06 Jan 2010 01:10:37 -0700
Message-Id: <50C8D2A8-524F-41F4-A936-77079097371E@castlepoint.net>
References: <4B4348D5.70201@joelhalpern.com> <4B44140B.7060504@firstpr.com.au>
To: Robin Whittle <rw@firstpr.com.au>
Cc: RRG <rrg@irtf.org>
Subject: Re: [rrg] Critique - ILNP and the other 5 core-edge elimination proposals

Robin,

On Jan 5, 2010, at 21:39 MST, Robin Whittle wrote:
> 1 - Core-edge elimination schemes are impossible to introduce widely
>    enough on a voluntary basis to solve the routing scaling problem.

I take issue with your assertion that it's "impossible" to upgrade hosts/servers, particularly on a voluntary basis, to support (say) an ID-Loc split to solve the routing scaling problem.

First, I think this completely ignores the emerging trend of mobile devices -- not [just] cell phones, but smartphones, netbooks and (possibly) "tablets".  When I mention these devices, I'm not referring solely to their portability/mobility, but also (perhaps more importantly) to their **disposability**.  Due to breakage, wear & tear, slow CPUs, too little memory, missing features, etc., people often throw these types of devices out after a couple of years and buy new ones, usually/generally subsidized by the carriers.  Even aside from disposability, the more recent generations of smartphones (e.g.: iPhone, Android phones, etc.) see major releases roughly once per year, mostly to deliver new features and, more importantly, to entice end-users to upgrade (by themselves) for those features.  Contrast that with the traditional cell phones of just a few years ago, which were never upgraded, largely because they only had one or two applications/uses: Voice & TXT messaging.

Assuming one believes this is true, how many of these types of devices are out there?  Unfortunately, from what I can discern, the IETF stopped measuring the size and, more importantly, the composition of devices on the Internet back in the early '90s.  (I would welcome being corrected, of course.)  Although I've spent far too much time looking for *any* good data at all, the "best" data I came across appears to come from Internet advertising firms[1].  Specifically, look here:
http://www.phonecount.com/pc/count.jsp
Take a look at the "sources" links on that Web page; I believe they mostly come from here:
http://www.internetworldstats.com/stats.htm
... quite frankly, if these numbers are true (and I'd like to believe they're at least directionally correct), they are quite shocking.  Of course, if there's better or more reliable data that I haven't seen, please do share.  Regardless, the larger trends I gather from that data are: a) mobile/disposable devices are, or will be, growing at an unprecedented rate; and b) we still have a lot of Internet growth ahead of us, given how low Internet penetration remains in parts of the developing world.  Ultimately, because these are disposable devices, 'natural' breakage and wear & tear should ensure fairly healthy turnover of the devices and, more importantly, of the O/S'es that drive them.

Next, let's take a look at the release cycles of major O/S'es (an arbitrary pick on my part, but hopefully it illustrates the point):
1)  http://en.wikipedia.org/wiki/Microsoft_windows#Timeline_of_releases
At a quick glance, from Windows 95 onward, it appears Microsoft averaged approximately 2 - 3 years between O/S releases, modulo XP to Vista, which took about double that.
2)  http://en.wikipedia.org/wiki/Mac_OS_X#Versions
Mac OS X, in recent years, has averaged around 2 years per major O/S release, and much more consistently.
3)  http://en.wikipedia.org/wiki/Fedora_linux#Version_history
4)  http://en.wikipedia.org/wiki/RHEL#Version_history
It seems as if Fedora is on a ~6 month release schedule, whereas RHEL (more Enterprise-focused) averages around 7 - 9 months.  Although I didn't include other mainstream Linux distributions, from what I understand they typically release 1 - 2 times per year.
... The larger point of citing these release cycles is that, in my belief, what gates a change getting into these (and other) O/S'es is the _size_ of the change.  IOW, if the changes are viewed as more incremental in nature (perhaps similar to ILNP and Name-Based Sockets, just to mention two host-based proposals I'm more familiar with), then it will be significantly easier for vendors to code the changes, test them, release the code and start to transition their developer base onto them.  Related to that last point, take a look at articles on Mac OS X Snow Leopard and how Apple is transitioning its developer base toward 64-bit API's -- it's tricky, but they appear to be doing it quite gracefully.
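To make the "incremental host change" idea concrete, here's a toy sketch of the kind of Identifier-Locator split ILNP proposes: the 128-bit IPv6 address treated as a 64-bit Locator (where you are, topologically) plus a 64-bit Node Identifier (who you are), so a host can take a new Locator without disturbing Identifier-bound transport state.  (The addresses and helper functions below are illustrative only, not ILNP's actual wire format or API.)

```python
import ipaddress

LOC_BITS = 64  # ILNP splits the 128-bit IPv6 address at the /64 boundary


def split(addr):
    """Return the (locator, identifier) halves of an IPv6 address as ints."""
    v = int(ipaddress.IPv6Address(addr))
    return v >> LOC_BITS, v & ((1 << LOC_BITS) - 1)


def relocate(addr, new_prefix):
    """Re-home a node: swap in the Locator of new_prefix, keep the Identifier."""
    locator, _ = split(new_prefix)
    _, ident = split(addr)
    return str(ipaddress.IPv6Address((locator << LOC_BITS) | ident))


# A host moves from one provider's /64 to another's; its identifier
# (and hence any identifier-bound session state) is unchanged.
home = "2001:db8:aaaa:1::1234"
away = relocate(home, "2001:db8:bbbb:2::")
assert split(home)[1] == split(away)[1]   # identifier survives the move
assert split(home)[0] != split(away)[0]   # only the locator changed
```

The only routing-visible object is the Locator half, which is exactly the aggregatable, provider-assigned part -- the host-side change is small, which is the point about incremental O/S releases above.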

The point of mentioning all of the above is that you appear to be looking mostly/solely in the rearview mirror when thinking about a future Internet Architecture: specifically, designing a solution around traditional fixed/wired devices that use traditional multihoming techniques, while potentially placing significant amounts of complexity in the network to do so.  While we certainly can't forget about the embedded base that's out there today, it's false to believe that host O/S'es are completely static and never get upgraded.  Finally, I would assert that we are potentially at a crossroads where the composition of the Internet may be fundamentally changing, as we speak, from predominantly wired hosts to mobile, disposable devices (if it hasn't already).  It would be very unfortunate if we didn't provide a well-designed, host-based ID-Loc solution out of the gate -- perhaps/likely not as the only solution, but certainly as a key part of the overall recommended solution -- to get us on a better trajectory for scaling.  It would also put more intelligence in the hosts, letting them decide/control their own applications' fate, while keeping the network dumb, inexpensive and [relatively] easy to run.
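As a toy illustration of "intelligence in the hosts": in a name-based-sockets-style world the application names its peer and the host stack walks the candidate locators itself, failing over with zero network-side state.  The helper below is a hypothetical sketch of that idea (not the actual Name-Based Sockets API), using plain TCP sockets:

```python
import socket


def connect_by_candidates(candidates, timeout=1.0):
    """Hypothetical sketch: try each (host, port) locator in order and
    return the first live connection.  The host, not the network, handles
    failover; in practice 'candidates' would come from name resolution."""
    last_err = None
    for host, port in candidates:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_err = err  # dead locator; fall through to the next one
    raise last_err
```

A caller could feed this the address list getaddrinfo() returns for a name and get crude host-driven multihoming/failover today, without any new router state -- which is exactly the dumb-network trade-off argued for above.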

My $0.02,

-shane

[1] I would assume major content houses like Google, Yahoo, etc. probably have some great data on browser types and O/S'es, over time, which would be wonderful to see and help guide us; however, I'm not aware of anyone of that size making said data publicly available.  I've looked at publicly released reports/presos from Akamai, Arbor Networks, Renesys & other vendors who would seem to potentially have interesting data in this regard, however they don't seem to look at Internet device composition either, unfortunately, from what I can tell.  Paging kc @ CAIDA.  :-)