filters and channels

Robin Iddon <robini@spider.co.uk> Tue, 10 December 1991 11:13 UTC

Received: from mtigate.mti.com by NRI.NRI.Reston.VA.US id aa02110; 10 Dec 91 6:13 EST
Received: by mtigate.mti.com id AA03326 (5.65+/IDA-1.3.5); Tue, 10 Dec 91 02:39:07 -0800
Received: from eros.uknet.ac.uk by mtigate.mti.com with SMTP id AA03322 (5.65+/IDA-1.3.5); Tue, 10 Dec 91 02:38:59 -0800
Received: from castle.ed.ac.uk by eros.uknet.ac.uk via JANET with NIFTP (PP) id <5677-0@eros.uknet.ac.uk>; Tue, 10 Dec 1991 10:38:27 +0000
Received: from spider.co.uk by castle.ed.ac.uk id aa18018; 10 Dec 91 10:36 WET
Received: by widow.spider.co.uk; Tue, 10 Dec 91 10:39:24 GMT
From: Robin Iddon <robini@spider.co.uk>
Date: Tue, 10 Dec 1991 10:35:15 +0000
Message-Id: <24759.9112101035@orbweb.spider.co.uk>
Received: by orbweb.spider.co.uk; Tue, 10 Dec 91 10:35:15 GMT
To: Stephen Grau <steveg@com.novell.na>
In-Reply-To: Stephen Grau's message of Mon, 09 Dec 91 18:37:45 GMT
Subject: filters and channels
Cc: rmonmib@lexcel.com

Hi,

[My message deleted]

[Steve shows how filters can end up being used by other clients accidentally]

Good point - I hadn't realized that.

   I believe this is an issue in several places in the MIB (channels, filters,
   and alarms).  One way to solve it is to not allow a valid row to refer to
   a non-existent object and also disallow setting a variable to refer to a
   row that does not exist.  

I can see that forcing variables to reference existing rows (or 0 in some
cases) is useful.  I'm not in favor of invalidating existing rows that
reference recently deleted rows.  I believe that this restricts the
functionality available to the client.
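As a sketch only (none of these names come from the MIB itself), the semantics I'm arguing for look roughly like this: the agent refuses any SET that would point a variable at a non-existent row, but deleting a row leaves its referrers untouched for the client to repair:

```python
# Illustrative sketch of the reference semantics discussed above: an agent
# rejects a SET that points a variable at a row that does not exist, but
# deleting a row does NOT cascade-invalidate the rows that reference it.
# Table and variable names here are hypothetical, not taken from the MIB.

class Agent:
    def __init__(self):
        # table name -> {index: row dict}
        self.tables = {"channel": {}, "filter": {}}

    def set_reference(self, table, index, var, target_table, target_index):
        """Reject references to rows that do not exist (index 0 = 'none')."""
        if target_index != 0 and target_index not in self.tables[target_table]:
            raise ValueError("referenced row does not exist")
        self.tables[table][index][var] = target_index

    def delete_row(self, table, index):
        # Referencing rows are left alone; the client is trusted to
        # repair or invalidate them itself.
        del self.tables[table][index]

agent = Agent()
agent.tables["filter"][1] = {}
agent.tables["channel"][1] = {}
agent.set_reference("channel", 1, "channelFilterIndex", "filter", 1)  # accepted
agent.delete_row("filter", 1)   # channel row 1 is left as-is, not invalidated
try:
    agent.set_reference("channel", 1, "channelFilterIndex", "filter", 99)
except ValueError:
    pass  # a dangling reference can never be created by a SET
```

The point of the sketch is that the dangling reference after the delete is the client's problem, not the agent's, which keeps the agent-side rules simple.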

   This would force management consoles to allocate
   rows prior to setting references to them, forcing resolution of
   resource contention by using the row creation mechanism
   already in the MIB.

The first constraint is sufficient to enforce this behaviour.  Indeed it is
possible for a client to create all the rows it needs before setting any of
them to valid.  Once all the rows have been grabbed, the client can go back
and patch the references in the same PDU used to validate the rows.  If
clients follow this approach, no new constraints are required.  I feel some
text explaining how to safely configure multiple tables would be just as
useful.  The multiple-manager model already relies on cooperating clients -
this is no different.
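The two-phase sequence above can be sketched as follows (an illustration under assumed names, not MIB text): phase one claims every index, so resource contention is resolved by the existing row-creation mechanism; phase two patches the cross-table references and validates all the rows in a single final PDU:

```python
# Hypothetical sketch of the create-then-patch sequence described above.
# "valid"/"underCreation" stand in for the MIB's row-status values; the
# atomic final PDU is modelled as one function applying all its bindings.

UNDER_CREATION, VALID = "underCreation", "valid"

table = {}  # agent-side table: index -> {"status": ..., "ref": ...}

def create_row(index):
    """Phase 1: claim the index; contention is resolved here, not later."""
    if index in table:
        raise ValueError("index already claimed by another manager")
    table[index] = {"status": UNDER_CREATION, "ref": 0}

def final_pdu(bindings):
    """Phase 2: one PDU patches the references and validates every row."""
    for index, ref in bindings:
        table[index]["ref"] = ref
    for index, _ in bindings:
        table[index]["status"] = VALID

create_row(1)                 # e.g. a filter row
create_row(2)                 # e.g. a channel row that will reference row 1
final_pdu([(2, 1), (1, 0)])   # patch reference and validate in one PDU
```

Because no row becomes valid until the final PDU, no other client ever observes a valid row with an unresolved reference.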

   This implies that invalidating a row in one table may also invalidate
   rows in other tables.  I'm not sure what the best thing to do with the
   invalid rows is - leave them hanging around on a timeout or just delete
   them.

   Any other ideas?

   Steve Grau
   Network Management Products Division
   Novell, Inc.
   steveg@novell.com

Robin
Robin Iddon (robini@spider.co.uk)
Spider Systems Ltd
Edinburgh
Scotland