Re: Autonomous System Sanity Protocol

Noel Chiappa <jnc@ginger.lcs.mit.edu> Sat, 26 April 1997 22:45 UTC

Date: Sat, 26 Apr 1997 18:32:21 -0400
From: Noel Chiappa <jnc@ginger.lcs.mit.edu>
Message-Id: <9704262232.AA21401@ginger.lcs.mit.edu>
To: big-internet@munnari.oz.au, michael@memra.com
Subject: Re: Autonomous System Sanity Protocol
Cc: jnc@ginger.lcs.mit.edu
Precedence: bulk

<I'll keep this thread on Big-I alone, to save the IETF list...>

    From: Michael Dillon <michael@memra.com>

    Using maps implies that there is a central database that records the
    current state of this fairly static connectivity information and that
    routing announcements would be verified against this database to ensure
    that the announced routes correspond to edges on the map.

Nope. Why should there be? A couple of different things catch my eye here.

First, I'm not sure what you mean by "routing announcements". Remember,
nobody sends out their routing table any more. The *only* thing you ever say
is "I'm X, and I'm attached to A, B, and C". (Major glossing over to handle
issues of abstraction, but in essence this is true.)
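
(To make the shape of that concrete, here is a rough sketch in Python - purely
illustrative, the names are mine and not from any actual protocol spec - of
what such an announcement amounts to:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConnectivityAnnouncement:
        entity: str              # "I'm X ..."
        neighbors: frozenset     # "... and I'm attached to A, B, and C"

    ann = ConnectivityAnnouncement("X", frozenset({"A", "B", "C"}))

Note what is *not* in it: no routes and no table, just the announcer's own
attachments.)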

Second, why do you need a configured database to check any dynamic
announcements against? To me, this is like saying you need a configured table
to check DNS lookups against. Just as the collection of DNS zone files on disk
*is* the (distributed) master database, from which entries get sent out to
caches around the net, the set of information *in routers X1...Xn* that X is
connected to A, B and C *is* the (distributed) master database, copies of
which again get sent out to all the entities which need that information.

The maps that people create (there is no central map; it would be an
impossibly big database anyway) are built by putting together the
connectivity announcements that people make. (Think of people's maps as sets
of cached connectivity announcements...) Since the sources of those
announcements are absolutely authoritative for *their own* connectivity, there
is no reason not to believe them. (You can decide to ignore them, but let's
leave out that detail for now.)
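
(In the same purely-illustrative Python as above, a "map" is then nothing but
a cache of those announcements, keyed by their source, and merging in a new
one is trivial precisely because each source is authoritative for its own
connectivity - the latest word from X about X simply replaces the old one:

    topology = {}    # entity -> set of neighbors; i.e. cached announcements

    def update_map(topology, entity, neighbors):
        # No check against any configured database; the announcement just wins.
        topology[entity] = set(neighbors)

    update_map(topology, "X", {"A", "B", "C"})
    update_map(topology, "A", {"X"})
    # topology == {"X": {"A", "B", "C"}, "A": {"X"}}

Nobody holds a central copy of that dictionary; whoever wants a map builds
their own from the announcements they have cached.)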


    So, who will manage the database and how will it be distributed so that
    this database does not become a single point of failure?

It's the fate-sharing design principle. X knows it is connected to A, B and C
- so the "master" copy of the information *about X* is stored *in X*. If X
goes away, who cares - the *need* for the information *about* X went away
*with* X.  There is no central database - the authoritative version of each
piece of data is stored with the entity to which it applies.
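
(One way to picture that - a sketch only, and the hold timer is a number I
made up, not anything from a spec - is that everyone other than X holds merely
a soft-state copy of X's data, refreshed by X's own announcements; when X goes
away, its entry ages out by itself and there is no central database to repair:

    import time

    HOLD_TIME = 90.0          # assumed lifetime of a cached entry, in seconds
    cache = {}                # entity -> (set of neighbors, expiry time)

    def refresh(entity, neighbors, now=None):
        now = time.time() if now is None else now
        cache[entity] = (set(neighbors), now + HOLD_TIME)

    def expire(now=None):
        now = time.time() if now is None else now
        for entity in [e for e, (_, t) in cache.items() if t < now]:
            del cache[entity]   # X is gone, and so is the need for data about X

The information about X lives as long as X keeps announcing it, and no
longer.)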

Things get a little trickier when X is not a single physical entity, but a
collection of entities acting as a logical entity. What happens there is that
the virtual representation of X has to be (recursively) based on the real
data about the physical things which make up X. So, at base, the initial,
bootstrap data is still "fate-shared".

You do have to have a protocol by which the entities who make up X, and are
responsible for announcing it to the rest of the net, "filter" this data up
and agree on what representation of X they will present to people, etc. This
is not trivial, but it does rest on the bootstrap of fate-shared data.
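
To give a feel for that "filter up" step, here is a strawman aggregation rule,
picked purely for illustration and not taken from any spec: the members of X
present, as X's connectivity, the union of their own attachments minus
anything internal to X, so the inputs are still the members' own fate-shared
announcements.

    def aggregate(logical_name, member_announcements):
        # member_announcements: dict of member -> that member's own neighbors
        members = set(member_announcements)
        external = set().union(*member_announcements.values()) - members
        return logical_name, external

    aggregate("X", {"x1": {"x2", "A"}, "x2": {"x1", "B", "C"}})
    # -> ("X", {"A", "B", "C"})

Agreeing on which members speak for X, and when, is the non-trivial part.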

	Noel