Re: [Idr] Fwd:I-D ACTION:draft-pmohapat-idr-acceptown-community-01.txt

Robert Raszuk <> Wed, 30 April 2008 22:05 UTC

Date: Wed, 30 Apr 2008 15:04:33 -0700
From: Robert Raszuk <>
To: Danny McPherson <>
Cc: idr idr <>

Hi Danny,

Your mail below is missing one fundamental practical point.

It is very cheap and easy to drop the unneeded route on inbound, while it 
is very CPU intensive to build separate updates for each peer.
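The asymmetry above can be sketched in a few lines. This is a minimal, hypothetical model (function and field names are mine, not from any implementation) of the RFC 4456 behavior where a route-reflector client simply ignores a route reflected back with its own ORIGINATOR_ID, rather than the RR building a distinct update set per peer:

```python
# Minimal sketch of the inbound check a route-reflector client performs:
# a route carrying the client's own ORIGINATOR_ID was reflected back to
# its originator, so it is dropped on inbound. All names are illustrative.

MY_ROUTER_ID = "192.0.2.1"  # assumed local BGP identifier

def accept_inbound(route):
    """Return False (drop) if this route was originated by us."""
    if route.get("originator_id") == MY_ROUTER_ID:
        return False  # reflected back to its originator: cheap inbound drop
    return True

routes = [
    {"prefix": "10.0.0.0/24", "originator_id": "192.0.2.1"},    # our own route
    {"prefix": "10.0.1.0/24", "originator_id": "198.51.100.7"}, # someone else's
]
kept = [r["prefix"] for r in routes if accept_inbound(r)]
```

The per-route cost is one comparison at parse time, versus the RR maintaining per-peer outbound state to suppress the route before sending.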

With current link bandwidths, the amount of traffic generated by this is 
just white noise.

IMHO, based on the most common implementations at the time, that was the 
main reason for the RR spec change.


Now, back to the accept-own community: I really see nothing wrong with it. 
If you are trying to say it is bad just because some implementations do 
not support RFC 2796, that is, I am afraid, the wrong avenue.

There are practical applications for this enhancement, one of which is 
described in the draft.

As a general rule of thumb, I think there is a need to support more 
automation in network provisioning across many providers. There is also 
market demand for more dynamic behavior from running applications. This 
draft is precisely an attempt to support one of those needs.

PS. As to Ilya's comment that this draft may break BGP 
implementations: one should realize that one more conditional check 
during inbound processing requires a negligible amount of new BGP code.
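To make the "one more conditional check" claim concrete, here is a hedged sketch (all names hypothetical, and the community value shown is illustrative) of how the accept-own behavior relaxes the usual drop-own-ORIGINATOR_ID rule only when the update carries the right community:

```python
# Hypothetical sketch of the extra conditional the draft asks for: when an
# update carries the ACCEPT_OWN community, a route whose ORIGINATOR_ID
# matches the local router ID is accepted anyway (e.g. for import into a
# different VRF) instead of being dropped per the RFC 4456 rule.

ACCEPT_OWN = 0xFFFF0001     # community value assumed for illustration
MY_ROUTER_ID = "192.0.2.1"  # assumed local BGP identifier

def accept_inbound_with_accept_own(route):
    """Decide whether to keep a received route."""
    if route["originator_id"] == MY_ROUTER_ID:
        # This single membership test is the only new code required.
        return ACCEPT_OWN in route.get("communities", ())
    return True
```

With the community absent the behavior is unchanged from today; with it present, one set-membership test flips the decision, which is why the incremental implementation cost is tiny.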

And I would like to point out that in most BGP implementations I am 
familiar with, the amount of code change that goes in every month, even 
in the fundamental parts of BGP, and even assuming one froze all IDR 
work, is much, much higher than such a draft would require.

> On Apr 30, 2008, at 2:56 PM, Enke Chen wrote:
>  >>
>  >> Sure..  I don't like the idea of changing specs to accommodate
>  >> implementation optimizations.
>  >
>  > welcome to the real world :-)
> I do understand the reasoning behind this, although I
> don't agree with it.
> In the real world, operators are complaining about DFZ
> sizes (unique routes), routing scalability, and churn;
> little implementation tweaks that surface as base spec
> changes like this have serious implications.
> Consider this 'optimization', for example.  Now, most
> clusters have at least 2-3 RRs, and many clients.  If you've
> got a single client that has 50k external paths that it sees
> as best, and that client advertises them to the RRs, and
> the RRs reflect them back, just so that the client can discard
> them, then unless I'm confused, that little optimization just cost
> you 150k (50k routes * 3 RRs, reflected back to client they
> were learned from) worth of update processing resources on
> the clients.  Not to mention implications on churn, or additional
> layers of RR hierarchy, or the dynamics this introduces in a
> REAL network.
>  From a network-level perspective I don't see much of an
> optimization at all, quite the contrary, actually.
> -danny
> _______________________________________________
> Idr mailing list