Purge packet - questions
gardo@vnet.ibm.com Fri, 27 October 1995 01:47 UTC
From: gardo@vnet.ibm.com
Date: Thu, 26 Oct 95 20:42:45 EDT
To: asmith@baynetworks.com
cc: rolc@nexen.com, genecox@nexen.com
Subject: Purge packet - questions
X-Info: Submissions to rolc@nexen.com
X-Info: [Un]Subscribe requests to rolc-request@nexen.com
X-Info: Archives for rolc via
ftp://ietf.cnri.reston.va.us/ietf-mail-archive/rolc/
Ref: Your note of Thu, 26 Oct 95 17:07:00 PDT
Andrew,
>> Less control traffic, quantity of state data, more rapid convergence,
>> a more flexible purge feature, etc.
>
>Can you be a little more precise with your reasoning? The quantity of
>state and rapid convergence are assertions made with no argument to
>back them up. A "more flexible purge feature": what is that good
>for?
The preferred route that maps to an address/mask pair
can change. A single client can have multiple short-cut routes that
need to be purged as a result of this change. It is much simpler
(for some implementations) for the server to map this single client
to this single address/mask pair than to keep state for each of the
individual host addresses that were queried. It is also much simpler
and more efficient to send one purge packet instead of several.
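To make the saving concrete, here is a minimal sketch of how one address/mask Purge can invalidate several cached shortcuts at once. The cache layout and names are illustrative only, not taken from the NHRP draft:

```python
import ipaddress

# Hypothetical client-side shortcut cache: destination host -> next-hop NBMA
# address. Layout and names are illustrative, not from the NHRP spec.
cache = {
    "192.0.2.10": "nbma-A",
    "192.0.2.77": "nbma-A",
    "198.51.100.5": "nbma-B",
}

def purge_by_mask(cache, prefix):
    """Drop every cached shortcut whose destination falls inside prefix.

    A single wildcard Purge carrying 192.0.2.0/24 replaces the two
    per-host Purges the server would otherwise have to send.
    """
    net = ipaddress.ip_network(prefix)
    victims = [d for d in cache if ipaddress.ip_address(d) in net]
    for d in victims:
        del cache[d]
    return victims

purged = purge_by_mask(cache, "192.0.2.0/24")
```

The server keeps one (client, address/mask) association instead of one entry per queried host, which is the state reduction argued for above.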
Have you got an implementation that depends on the request-id,
such that this proposal is causing you heartburn? :-)
Are you implementing NHRP?
>
>> >>>3. A client now *must* index by destination (match on wildcarded
>> >>> destination to be more exact) and cannot use request-id for
>> >>> acting on the purge.
>Correct?
>> >>
>> >> True, if the request-id in the Purge is zero.
>> >> The client must have a match on both the
>> >> destination and request-id if the request-id is non-zero.
>>
>> >Will all wildcard Purges contain a zero request-id? (I've still not
>> >seen any text from you describing the semantics of wildcard purge).
>
>??
Jim put a copy of the text on the mailing list yesterday, an excerpt
showing the Purge packet...
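For what it's worth, the matching rule quoted above - match on destination, and on request-id only when the Purge carries a non-zero one - can be sketched like this (function and parameter names are illustrative, not spec text):

```python
import ipaddress

def purge_matches(entry_dest, entry_request_id, purge_prefix, purge_request_id):
    """True if a cached entry is hit by a Purge (illustrative sketch).

    The entry's destination must fall under the Purge's address/mask, and
    the request-ids must match unless the Purge carries a zero (wildcard) id.
    """
    in_prefix = ipaddress.ip_address(entry_dest) in ipaddress.ip_network(purge_prefix)
    id_ok = purge_request_id == 0 or purge_request_id == entry_request_id
    return in_prefix and id_ok
```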
>
>> >Do you think that any reduction in control packets outweighs the
>> >(potential) extra processing involved in doing a wildcard destination
>> >lookup rather than a request-id lookup?
>
>??
Yes. I'm sure that one could argue this point in certain
situations, but not in most.
>> >Do you think that you will
>> >often have one NHS being able to coalesce multiple purges together
>> >back to the same requester - can you describe a scenario where that is
>> >likely?
Yes, that's why I'm asking for this change. I explained a case earlier,
and I don't think any more details are required...
>> Yes, The preferred route to a particular network changes requiring a
>> refresh of the cached data. The NHS can send multiple Purge packets for
>> all destinations belonging to this network that are cached; Or with
>> a mask, the NHS can send a single Purge packet with the associated
>> network mask...
>
>Not quite: even with a mask, the NHS sends a wildcard-purge back *to each
>requester* that has current cached info: you can only wildcard/summarise
>the targets, not the original requesters with your proposal.
I agree. I should have made it clear that I was talking about a single
requester...
>If you assume a client-server sort of model with the clients being the
>first ones to do the requesting and there being many more clients than
>servers, then it is really the clients' addresses that need wildcarding:
>wildcarding the servers' purges does not buy you that much in this
>scenario as there will usually be a small number of servers represented
>by any one NHS. Were you assuming much more of a peer-peer deployment?
>
>I think that an analysis at this level of detail is needed in order to
>judge the tradeoffs of protocol/implementation complexity vs. savings and
>I haven't yet seen that done for this feature.
You have not shown any problems that a Purge with a mask will create.
I am really puzzled by your resistance to making the purge more
flexible...
>> >> I would expect intermediate nodes that have cache entries that match
>> >> the Purge destination/mask would purge those entries.
>>
>> >...and remember which way(s) to forward the Purge: it has to be
>> >forwarded back along the same path that the original response messages
>> >went, rather than the current routing path, doesn't it? It is
>> >precisely when routing is changing that the purges are most useful and
>the caches in intermediate nodes most useless, right?
>>
>> This intermediate node subject applies to Purges both with and without
>> a mask. Correct?
>
>Yes, but I think the processing done by each NHS doing caching is different in
>each case (is NHS caching still allowed in current revision? I know MPOA talked
>about asking to remove it for some cases) and would probably lead to a
>different way of indexing the cache:
>
>The state in question for each destination cached by an NHS is something like:
> { requester, responder, destination, hop-who-I-forwarded-response-to,
> hold-down-time, timestamp }
>
>When processing an incoming wildcard-purge, the NHS needs to lookup the
>destination to find all matching entries and mark them all as purged.
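[A sketch of that lookup over the state tuple above, with field names shortened and the hold-down time and timestamp omitted; this is an illustration, not an implementation:]

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class CacheEntry:
    # Abbreviated form of the state tuple discussed in the thread.
    requester: str
    responder: str
    destination: str
    next_hop: str        # the "hop-who-I-forwarded-response-to" field
    purged: bool = False

def apply_wildcard_purge(entries, prefix):
    """Mark every cached entry whose destination falls under prefix as
    purged, and return the entries that were hit."""
    net = ipaddress.ip_network(prefix)
    hit = [e for e in entries if ipaddress.ip_address(e.destination) in net]
    for e in hit:
        e.purged = True
    return hit
```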
>
>Forwarding of the purge is not so simple: what if the "hop-who-I-forwarded-
>response-to" fields are different for two destinations covered by the purge?
>Do I try to split the purge into 2 or punch holes in the mask? Do I just give
>up and forward 'n' individual purges back? Now add QoS to the database
>destination lookup too - this quickly gets out of hand. What if the purge matches
>destinations for some QoSs but not others? Can you wildcard the QoS field
>too? Do I have to purge for *all* QoSs? There will be serious interactions
>with QoS-based routing protocols here (sorry, I keep forgetting, there is
>no such thing as QoS-based routing yet .... :-).
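[One conservative answer to the splitting question, assuming each matched entry remembers the hop its response was forwarded to, is to give up summarising across hops and emit one Purge per next hop - a sketch with illustrative names:]

```python
from collections import defaultdict

def group_purges_by_hop(matched):
    """matched: (destination, next_hop) pairs for cache entries hit by a
    wildcard Purge. Returns one destination list per next hop, i.e. one
    outgoing Purge per hop rather than one per destination."""
    by_hop = defaultdict(list)
    for dest, hop in matched:
        by_hop[hop].append(dest)
    return {hop: sorted(dests) for hop, dests in by_hop.items()}
```

This avoids punching holes in the mask at the cost of losing summarisation whenever the matched entries were forwarded along different paths.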
>
>The issues are the same as for the "destination mask" in the Request/Response
>packets. Some of the MPOA group locked itself in a room for 2 long days
>last month to try to work this one out and generated a lot of heat on this
>topic but very little light. One alternative would be to just put it in
>and see if it works by trial and error ....
>
>> Since the purge feature has been added, let's do it right. Maybe
>> it was a mistake to add the purge feature to NHRP ???
>
>I thought purge was not a "feature" but a necessary part of the protocol
>in order to make it work.
>
>
>Andrew
>
Have a nice day!
-- Russell