Re: Purge packet - questions

Andrew Smith <asmith@baynetworks.com> Fri, 27 October 1995 00:28 UTC

Received: from ietf.nri.reston.va.us by IETF.CNRI.Reston.VA.US id aa27743; 26 Oct 95 20:28 EDT
Received: from guelah.nexen.com by IETF.CNRI.Reston.VA.US id aa27737; 26 Oct 95 20:28 EDT
Received: from maelstrom.nexen.com ([204.249.99.5]) by guelah.nexen.com (8.6.12/8.6.12) with ESMTP id TAA00247; Thu, 26 Oct 1995 19:58:50 -0400
Received: (from root@localhost) by maelstrom.nexen.com (8.6.12/8.6.12) id UAA16845 for rolc-out; Thu, 26 Oct 1995 20:09:00 -0400
Received: from nexen.nexen.com (nexen.nexen.com [204.249.96.18]) by maelstrom.nexen.com (8.6.12/8.6.12) with ESMTP id UAA16836 for <rolc@nexen.com>; Thu, 26 Oct 1995 20:08:57 -0400
Received: from lightning.synoptics.com (lightning.synoptics.com [134.177.3.18]) by nexen.nexen.com (8.6.12/8.6.12) with SMTP id UAA15161 for <rolc@nexen.com>; Thu, 26 Oct 1995 20:07:49 -0400
Received: from pobox.synoptics.com ([134.177.1.95]) by lightning.synoptics.com (4.1/SMI-4.1) id AA09718; Thu, 26 Oct 95 17:05:38 PDT
Received: from milliways-le0.engwest (milliways-le0.synoptics.com) by pobox.synoptics.com (4.1/SMI-4.1) id AA15826; Thu, 26 Oct 95 17:07:00 PDT
Received: by milliways-le0.engwest (4.1/SMI-4.1) id AA24664; Thu, 26 Oct 95 17:07:00 PDT
Date: Thu, 26 Oct 95 17:07:00 PDT
Sender: ietf-archive-request@IETF.CNRI.Reston.VA.US
From: Andrew Smith <asmith@baynetworks.com>
Message-Id: <9510270007.AA24664@milliways-le0.engwest>
To: gardo@vnet.ibm.com
Subject: Re: Purge packet - questions
Cc: rolc@nexen.com
X-Orig-Sender: owner-rolc@nexen.com
Precedence: bulk
X-Info: Submissions to rolc@nexen.com
X-Info: [Un]Subscribe requests to rolc-request@nexen.com
X-Info: Archives for rolc via ftp://ietf.cnri.reston.va.us/ietf-mail-archive/rolc/

> Date: Wed, 25 Oct 95 22:09:32 EDT
> From: gardo@VNET.IBM.COM
> To: asmith@BayNetworks.COM
> Cc: rolc@nexen.com, genecox@VNET.IBM.COM
> Subject: Purge packet - questions

Russell,

> Less control traffic, quantity of state data, more rapid convergence,
> a more flexible purge feature, etc.

Can you be a little more precise with your reasoning? The claims about the
quantity of state and about rapid convergence are assertions with no argument
to back them up. A "more flexible purge feature": what is that good for?

> >>>3. A client now *must* index by destination (match on wildcarded destination
> >>>to be more exact) and cannot use request-id for acting on the purge. Correct?
> >>
> >> True, if the request-id in the Purge is zero.
> >> The client must have a match on both the
> >> destination and request-id if the request-id is non-zero.
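
The matching rule quoted above can be sketched as follows (a rough
illustration only; the field names and integer address encoding are my own
invention, not taken from the NHRP drafts):

```python
def purge_matches(entry_dest, entry_req_id, purge_net, purge_mask, purge_req_id):
    """A Purge with request-id 0 matches on wildcarded destination alone;
    a non-zero request-id must also match the cached entry's request-id."""
    dest_ok = entry_dest & purge_mask == purge_net & purge_mask
    if purge_req_id == 0:
        return dest_ok
    return dest_ok and entry_req_id == purge_req_id

# 10.1.2.3 falls under 10.1.2.0/24; purge request-id 0: destination match suffices.
print(purge_matches(0x0A010203, 7, 0x0A010200, 0xFFFFFF00, 0))  # True
# With a non-zero request-id, the request-ids must also agree.
print(purge_matches(0x0A010203, 7, 0x0A010200, 0xFFFFFF00, 9))  # False
```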
> 
> >Will all wildcard Purges contain a zero request-id?  (I've still not
> >seen any text from you describing the semantics of wildcard purge).

??

> >Do you think that any reduction in control packets outweighs the
> >(potential) extra processing involved in doing a wildcard destination
> >lookup rather than a request-id lookup?  

??

> >Do you think that you will
> >often have one NHS being able to coalesce multiple purges together
> >back to the same requester - can you describe a scenario where that is
> >likely?
> 
> Yes. The preferred route to a particular network changes, requiring a
> refresh of the cached data.  The NHS can send multiple Purge packets for
> all destinations belonging to this network that are cached; or, with
> a mask, the NHS can send a single Purge packet with the associated
> network mask...

Not quite: even with a mask, the NHS sends a wildcard-purge back *to each
requester* that has current cached info: you can only wildcard/summarise
the targets, not the original requesters with your proposal. 
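
To make that point concrete, here is a minimal sketch (all names and the
integer address encoding are invented for illustration): even when one mask
covers every affected destination, the NHS still emits one purge per
requester holding matching cached state.

```python
from collections import defaultdict

def purges_to_send(cache, network, mask):
    """cache: (requester, destination) pairs cached by the NHS.
    Returns one (requester, network, mask) wildcard purge per affected
    requester; the mask summarises targets, not requesters."""
    affected = defaultdict(list)
    for requester, destination in cache:
        if destination & mask == network & mask:  # destination under the mask
            affected[requester].append(destination)
    return [(requester, network, mask) for requester in affected]

cache = [
    (0x0A000001, 0xC0A80105),  # requester 10.0.0.1 cached 192.168.1.5
    (0x0A000001, 0xC0A80107),  # same requester, second destination under the mask
    (0x0A000002, 0xC0A80109),  # a second requester
]
# Purging 192.168.1.0/24 still yields 2 purges (one per requester), not 1.
print(len(purges_to_send(cache, 0xC0A80100, 0xFFFFFF00)))  # 2
```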

If you assume a client-server sort of model with the clients being the 
first ones to do the requesting and there being many more clients than 
servers, then it is really the clients' addresses that need wildcarding: 
wildcarding the servers' purges does not buy you that much in this 
scenario as there will usually be a small number of servers represented
by any one NHS. Were you assuming much more of a peer-peer deployment?

I think that an analysis at this level of detail is needed in order to
judge the tradeoffs of protocol/implementation complexity vs. savings, and
I haven't yet seen that done for this feature.

> >> I would expect intermediate nodes that have cache entries that match
> >> the Purge destination/mask would purge those entries.
> 
> >...and remember which way(s) to forward the Purge:  it has to be
> >forwarded back along the same path that the original response messages
> >went, rather than the current routing path, doesn't it?  It is
> >precisely when routing is changing that the purges are most useful and
> >the caches in intermediate nodes most useless, right?
> 
> This intermediate node subject applies to both Purges with or without
> a mask.  Correct?

Yes, but I think the processing done by each caching NHS is different in
each case (is NHS caching still allowed in the current revision? I know MPOA
talked about asking to remove it for some cases) and would probably lead to
a different way of indexing the cache:

The state in question for each destination cached by an NHS is something like:
       { requester, responder, destination, hop-who-I-forwarded-response-to, 
            hold-down-time, timestamp }

When processing an incoming wildcard-purge, the NHS needs to lookup the
destination to find all matching entries and mark them all as purged.
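
A rough sketch of that lookup step, using the state tuple above (the field
names and mark-as-purged mechanics are my own invention for illustration,
not anything from the drafts):

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    requester: int
    responder: int
    destination: int
    next_hop: int            # hop-who-I-forwarded-response-to
    hold_down_time: int
    timestamp: int
    purged: bool = False

def apply_wildcard_purge(cache, network, mask):
    """Find every cached entry whose destination falls under (network, mask)
    and mark it purged; return the matched entries."""
    matched = [e for e in cache if e.destination & mask == network & mask]
    for e in matched:
        e.purged = True
    return matched

cache = [
    CacheEntry(1, 2, 0xC0A80105, 9, 300, 0),  # under 192.168.1.0/24
    CacheEntry(1, 2, 0x0A000001, 9, 300, 0),  # not under the mask
]
print(len(apply_wildcard_purge(cache, 0xC0A80100, 0xFFFFFF00)))  # 1
```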

Forwarding of the purge is not so simple: what if the
"hop-who-I-forwarded-response-to" fields are different for two destinations
covered by the purge? Do I try to split the purge into two or punch holes in
the mask? Do I just give up and forward 'n' individual purges back? Now add
QoS to the database destination
lookup too - this quickly gets out of hand. What if the purge matches
destinations for some QoSs but not others? Can you wildcard the QoS field 
too? Do I have to purge for *all* QoSs? There will be serious interactions
with QoS-based routing protocols here (sorry, I keep forgetting, there is
no such thing as QoS-based routing yet .... :-).
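
To make the fan-out problem concrete, here is a minimal sketch (names
invented purely for illustration) of the "give up and split" option:
group the matched entries by the hop the original response was forwarded
to, and emit one purge per distinct hop.

```python
from collections import defaultdict

def split_purge_by_next_hop(entries):
    """entries: (destination, next_hop) pairs matched by the incoming purge.
    Returns {next_hop: [destinations]}: one forwarded purge per key."""
    by_hop = defaultdict(list)
    for destination, next_hop in entries:
        by_hop[next_hop].append(destination)
    return dict(by_hop)

# Three destinations under one mask, but forwarded via two different hops:
# the single incoming wildcard purge fans out into two forwarded purges.
entries = [(0xC0A80105, 1), (0xC0A80107, 2), (0xC0A80109, 2)]
print(len(split_purge_by_next_hop(entries)))  # 2
```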

The issues are the same as for the "destination mask" in the Request/Response
packets. Some of the MPOA group locked itself in a room for 2 long days
last month to try to work this one out and generated a lot of heat on this
topic but very little light. One alternative would be to just put it in
and see if it works by trial and error ....

> Since the purge feature has been added, let's do it right.  Maybe
> it was a mistake to add the purge feature to NHRP ???

I thought purge was not a "feature" but a necessary part of the protocol
in order to make it work.


Andrew

********************************************************************************
Andrew Smith					TEL:	+1 408 764 1574
Technology Synergy Unit				FAX:	+1 408 988 5525
Bay Networks, Inc.				E-m:	asmith@baynetworks.com
Santa Clara, CA
********************************************************************************