issue comes from colip-BoF (rsvp on atm)

Hiroshi Esaki <hiroshi@ctr.columbia.edu> Sun, 09 April 1995 19:55 UTC

Received: from ietf.nri.reston.va.us by IETF.CNRI.Reston.VA.US id aa03226; 9 Apr 95 15:55 EDT
Received: from acton.timeplex.com by IETF.CNRI.Reston.VA.US id aa03222; 9 Apr 95 15:55 EDT
Received: from sirius.ctr.columbia.edu (root@sirius.ctr.columbia.edu [128.59.64.60]) by maelstrom.acton.timeplex.com (8.6.9/ACTON-MAIN-1.2) with ESMTP id PAA16455 for <rolc@maelstrom.timeplex.com>; Sun, 9 Apr 1995 15:36:32 -0400
Received: from mimas.ctr.columbia.edu (mimas.ctr.columbia.edu [128.59.74.18]) by sirius.ctr.columbia.edu (8.6.11/8.6.4.287) with ESMTP id PAA06062; Sun, 9 Apr 1995 15:37:18 -0400
Sender: ietf-archive-request@IETF.CNRI.Reston.VA.US
From: Hiroshi Esaki <hiroshi@ctr.columbia.edu>
Received: (hiroshi@localhost) by mimas.ctr.columbia.edu (8.6.11/8.6.4.788743) id PAA07465; Sun, 9 Apr 1995 15:37:16 -0400
Date: Sun, 9 Apr 1995 15:37:16 -0400
Message-Id: <199504091937.PAA07465@mimas.ctr.columbia.edu>
To: rolc@maelstrom.timeplex.com, ip-atm@matmos.hpl.hp.com, rsvp@isi.edu, colip-atm@necom830.cc.titech.ac.jp
Subject: issue comes from colip-BoF (rsvp on atm)


Hi all, 

The following is a brief memo on an issue raised at the colip-atm BoF, 
concerning RSVP over large-cloud data-link networks. 


** If this issue is just noise for you, please ignore this mail; 
** my apologies in advance. 


  (1) How to provide large-scale soft-state (i.e. RSVP) multicast 
      over a large data-link (e.g. ATM) cloud. 
         --> related to rsvp, rolc and ipoveratm.  

The issue concerns data-link platforms that have a large-scale 
multicast capability; ATM is the typical example.  For a small 
data-link platform, or for a point-to-point based large-cloud 
platform (such as the telephone network), this is not an issue. 
However, for a large-scale data-link platform that provides 
large-scale multicast service without routers, there seems to be 
a scaling problem.  

Let's take ATM as a data-link platform that can provide large-scale 
multicast over a large data-link cloud. 
# I think large-cloud ATM networks will be developed, and a large 
  ATM cloud will provide large-scale multicast over it.  Actually, 
  the provision of scalable IP multicast is being discussed in the 
  IPoverATM WG. 
Since the data-link platform is very large, a large (or huge) number 
of downstream routers (routers at exit points from the ATM cloud) 
can be connected to one upstream router (the router at the entry 
point to the ATM cloud).  The ATM cloud can provide a multicast tree 
with a large fan-out as a single data-link connection (i.e. a 
point-to-multipoint cell-relaying pipe among the corresponding 
routers).  In this case, every downstream router sends a reservation 
(Resv) message toward the upstream router, according to the Path 
message.  As a result, a large (or huge) number of Resv messages 
will converge on the upstream router. 
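
As a rough back-of-the-envelope sketch of this implosion (the 
30-second refresh interval and the fan-out figures below are just 
assumptions for illustration, not from any spec): 

    # Toy estimate of Resv message load at the upstream router when a
    # point-to-multipoint VC fans out directly to N downstream routers.
    # The 30-second refresh interval is an assumed figure.

    REFRESH_PERIOD_S = 30.0   # assumed soft-state refresh interval

    def resv_rate_at_upstream(n_downstream, n_sessions):
        """Resv messages per second arriving at the upstream router.

        With a flat point-to-multipoint VC there is no router inside
        the cloud to merge reservations, so every downstream router
        refreshes its Resv state directly with the upstream router.
        """
        return n_downstream * n_sessions / REFRESH_PERIOD_S

    for n in (10, 1000, 100000):
        print("%7d downstream routers, 100 sessions: %10.0f Resv/sec"
              % (n, resv_rate_at_upstream(n, 100)))

Even with modest session counts, the upstream router's load grows 
linearly with the fan-out of the tree. 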

How to solve this issue seems important for large-cloud ATM 
networks.   
In my understanding, the simplest method would be to use RFC 1577's 
classical IP model over the large ATM cloud.  This scenario should 
scale well, since each LIS will not be very large and the classical 
model keeps the subnet model, so Resv messages are merged router by 
router.
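
To make the contrast concrete, a toy comparison (the LIS size and 
the receiver count are assumptions for illustration): 

    # Toy comparison: flat ATM point-to-multipoint vs. the classical
    # RFC 1577 model, where each LIS is a subnet and the routers
    # between LISs merge Resv state hop by hop.  Numbers are assumed.

    import math

    def max_resv_fanin(total_receivers, lis_size):
        """Worst-case number of Resv senders one node must hear.

        In the classical model a node only hears Resv messages from
        members of its own LIS, so fan-in is bounded by the LIS size
        regardless of the total receiver population.
        """
        return min(total_receivers, lis_size)

    total = 100000
    print("flat ATM multicast fan-in :", total)
    print("classical model fan-in    :", max_resv_fanin(total, 50))
    # The price is extra router hops: roughly log_{lis_size}(total).
    print("approx. router levels     :",
          int(math.ceil(math.log(total, 50))))

The classical model trades the huge fan-in for a few extra router 
hops inside the cloud. 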

Now, let's think about the NHRP model... here I am confused about 
what to do.  NHRP allows multiple LISs, and the existence of routers 
within an NBMA domain.  When the upstream router (ingress point to 
the ATM cloud) tries to forward a multicast packet, it will resolve 
the corresponding ATM address, which should be an NSAP group address. 
From the viewpoint of scalable soft-state multicast, however, the 
resolved NSAP address must correspond to the NSAP address associated 
with a sub-tree of the large-scale multicast tree over the large ATM 
cloud.    Maintaining the multicast tree this way seems to be 
different from the usual multicast over an ATM cloud. 
My confusion may simply be due to my own misunderstanding. 
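
For concreteness, here is a rough sketch of the resolution step as 
I imagine it; the table and function below are entirely hypothetical 
illustrations, not anything NHRP defines: 

    # Entirely hypothetical sketch of the resolution step described
    # above: instead of resolving the group to an NSAP group address,
    # the ingress resolves it to the NSAP of a sub-tree root and
    # grafts the new member under it.  Names are made up.

    subtree_root = {}   # multicast group -> NSAP of its sub-tree root

    def resolve_subtree(group, member_nsap):
        """Return the NSAP under which the ingress grafts this member.

        The first member seen for a group becomes the sub-tree root;
        a real scheme would also have to refresh this mapping as soft
        state, which is exactly the maintenance problem raised above.
        """
        return subtree_root.setdefault(group, member_nsap)

    print(resolve_subtree("224.1.2.3", "NSAP-A"))  # NSAP-A (new root)
    print(resolve_subtree("224.1.2.3", "NSAP-B"))  # NSAP-A (graft)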

Can someone clarify how to provide large-scale soft-state multicast 
over a large ATM cloud? 


Best Regards, 

Hiroshi Esaki 
c/o CTR, Columbia Univ.