Re: Autonomous System Sanity Protocol

Noel Chiappa <> Sun, 27 April 1997 05:48 UTC

Date: Sun, 27 Apr 97 01:27:05 -0400
From: Noel Chiappa <>
Message-Id: <>
Subject: Re: Autonomous System Sanity Protocol
Precedence: bulk

    From: Tony Li <>

    > use of public key cryptography can prevent anyone else from originating
    > bad information about connectivity inside or to X - their map updates
    > will not be correctly signed with X's private key

    If you posit the use of PKC for map distribution it seems only fair to
    posit its use in a prefix distribution system.

However, the fundamental difference between *connectivity* information and
*reachability* information comes into play - the latter is not useful to you
*until* it has been processed (i.e. not just replicated) through someone else
(i.e. path selection is *inherently* a distributed computation, in which you
use someone else's partial computation results), whereas connectivity
information is only used *without* being touched in any way by the people who
pass it on - so there is no way any errors/malice on their part can mess it
up. Either you have it as the source sent it - or you don't.
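To make that concrete, here's a toy sketch (Python; HMAC stands in for real
public-key signing so it runs with just the stdlib, and all the names are
made up): a transit node can replicate X's update verbatim, but any change
on the way breaks the signature check.

```python
import hashlib
import hmac

# Toy stand-in for PKC: in the real scheme X signs with its private key
# and everyone verifies with X's public key. HMAC is used here only so
# the sketch is self-contained; the names are hypothetical.

def sign_update(origin_key: bytes, update: bytes) -> bytes:
    return hmac.new(origin_key, update, hashlib.sha256).digest()

def verify_update(origin_key: bytes, update: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_update(origin_key, update), signature)

x_key = b"X-private-key"
update = b"X: connected to A, B, C"
sig = sign_update(x_key, update)

# Transit nodes copy the update untouched - verification still passes:
assert verify_update(x_key, update, sig)
# ...but any tampering en route is detected:
assert not verify_update(x_key, b"X: connected to EVIL", sig)
```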

    this also presumes the existence of a mechanism to derive prefix
    authority. This is thought to be non-trivial.

How you find out, authoritatively, the public key for X is definitely a hard
one. (It's even harder in bottom-up addressing systems instead of top-down,
sigh...) But it isn't impossible - in fact (admittedly, without working out
the details) in a top-down system, the same framework that works for DNS
should work for addresses too, no?

    > It can certainly prevent all unilateral bad information, i.e. based on
    > someone incorrectly configuring their routers (or software/hardware
    > bugs).


Because routing data is never modified/updated/added-to on its way to you,
merely copied. You can be *guaranteed* that what you get is what the original
sender sent. So there's no way anyone between you and the source can mess the
data up, right?

All they can do is try and stop the data from going through - but there are
limits on how much good this will do them. If they are your only path between
you and the source of the update - well, they can screw you anyway, by
passing the routing update, but not your data. (Think of it as another form
of fate-sharing - but this time, it's data and routing updates! :-)
If they aren't, and the routing update gets to you via another path, then,
if you have an explicit/unitary routing architecture (also my assumption,
surprise, surprise :-), your data can take the same path too.

The other thing a single site, whether confused or malicious, can try and do
is inject bogus data (i.e. saying "I'm X and I'm connected to A, B, C ...
<ad infinitum>"). But this doesn't work because the data will not match
up - i.e. A is not also reporting that they are connected to X. So you can
trivially filter that out.
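A minimal sketch of that filter (hypothetical report format, Python): a link
is kept only if both endpoints claim it, so X's unilateral announcements
simply drop out.

```python
def corroborated_links(reports):
    """reports: dict mapping each node to the set of neighbors it claims."""
    links = set()
    for node, neighbors in reports.items():
        for n in neighbors:
            # keep the link only if the far end reports it too
            if node in reports.get(n, set()):
                links.add(frozenset((node, n)))
    return links

reports = {
    "A": {"B"},
    "B": {"A"},
    "X": {"A", "B", "C"},   # bogus: "I'm X and I'm connected to A, B, C..."
}
# Nobody reports a link back to X, so only the real A-B link survives:
assert corroborated_links(reports) == {frozenset({"A", "B"})}
```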

So, there's no way for a single site to cause havoc. Pairs of sites can
get together and lie about connectivity, sure (and we can explore that
case at length, if people are interested), but that's unlikely to happen
except maliciously... (famous last words? :-)

    If we consider a typical link state protocol today (as perhaps a
    degenerate example of map distribution)

Well, not so much degenerate, as in a different fundamental quadrant of
design space, the two major axes (IMNSHO, anyway :-) being MD/DV, and

    it's trivial to inject bad information and have it propagate throughout
    the immediate map.

Well, it would depend on the protocol design. If you can't have a link
between X and Y unless each end reports connectivity to the other, you can't
have unilateral announcements. (I don't know enough details of, e.g. OSPF to
know if it would work to have only one end of a link announce it - anyone?)

Yes, a malicious site could try and fake both ends, but then you get into the
next problem, which is that when X saw what looked like a more recent version
of their data than their own last update (and again, this depends on the
protocol details), they might try and "correct" it by sending out a new,
"correct" update.
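Assuming OSPF-style sequence numbers (a guess at the protocol details, as
noted), the "fightback" step might look like:

```python
def next_originated_seq(own_seq: int, received_seq: int) -> int:
    """What sequence number X originates after seeing an update that claims
    to be from X. If someone injected a "newer" copy of X's data, X
    out-bids it; otherwise X's own data stands. (Hypothetical sketch.)"""
    if received_seq > own_seq:
        return received_seq + 1   # re-originate, overriding the fake
    return own_seq

# A forged update with seq 15 provokes a corrected update with seq 16:
assert next_originated_seq(own_seq=10, received_seq=15) == 16
# A stale copy changes nothing:
assert next_originated_seq(own_seq=10, received_seq=7) == 10
```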

    Further, unless there is intelligence (aka filtering) between the local
    map and the global map, it will tend to propagate globally.

I'm not quite sure what you had in mind here, but in an OSPF-like system
where addresses and topology are so completely disconnected that routers
and areas only report the address ranges they are connected to, it might look
like there's a lot of potential for abuse - unless you have a delegation
of authority along with PKC, so that router or area X can't announce that
it's connected to address range R unless it has R's private key...
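Very roughly (hypothetical key store, a single hard-wired parent, and HMAC
again standing in for real public-key signatures), that delegation check
might look like:

```python
import hashlib
import hmac

def sig(key: bytes, msg: bytes) -> bytes:
    # HMAC as a stand-in for a public-key signature (sketch only)
    return hmac.new(key, msg, hashlib.sha256).digest()

# Hypothetical key store plus one delegation: 10/8 endorses 10.1/16's key.
keys = {"10/8": b"key-10", "10.1/16": b"key-10-1"}
delegations = {"10.1/16": sig(keys["10/8"], b"10.1/16" + keys["10.1/16"])}

def valid_announcement(prefix: str, announcement: bytes, signature: bytes) -> bool:
    # 1) the announcement must be signed with the prefix's own key...
    if not hmac.compare_digest(sig(keys[prefix], announcement), signature):
        return False
    # 2) ...and that key must have been delegated by the parent range
    #    (parent hard-wired here; a real system would walk the hierarchy)
    expected = sig(keys["10/8"], prefix.encode() + keys[prefix])
    return hmac.compare_digest(delegations.get(prefix, b""), expected)

ann = b"area-X connected to 10.1/16"
assert valid_announcement("10.1/16", ann, sig(keys["10.1/16"], ann))
assert not valid_announcement("10.1/16", ann, sig(b"attacker-key", ann))
```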

    I would tend to agree with you more if we had a system where we had
    stronger abstraction boundaries and fewer abstraction violations.

Well, not to say we don't need that *anyway* (but for other reasons), but as
long as you have private keys associated with topology naming abstractions, I
think you're safe.

    It would leave us with less need to propagate information from the
    immediate map to the global map because we'd simply be propagating the
    static abstraction information. Unfortunately, our inability to do this
    is more a function of the address allocation than the information
    distribution mechanism.

Yes, but again, this is more an issue of routing efficiency than security;
just because A.1 is over here in B, detached from A, doesn't mean it needs
A's private key, it only needs the key for A.1 - which it both i) has to
have, and ii) is entitled to, anyway.

The need to propagate information about exceptions out to larger and larger
scopes until you find one that contains both the partitioned smaller entity
(this is basically the partitioning problem, right?) and the "parent"
entity is mostly just an efficiency one (i.e. the only cost is extra
overhead), as far as I can see. Am I missing something?

    As an example, consider a utopian world where we had an address
    assignment which maintained such strong abstraction boundaries. Using
    BGP4, one simply does proxy aggregation...

But even assuming 100% perfect address allocation, one *still* has the
problem in all DV systems that data about X is only useful to you *after*
it has been massaged/added-to by everyone in between you. Connectivity
data is fundamentally different from reachability data....