Re: OSI-DS 33 and 34 - DUA and DSA Metrics

Andrew Waugh <> Thu, 18 June 1992 06:59 UTC

Received: from by IETF.NRI.Reston.VA.US id aa03038; 18 Jun 92 2:59 EDT
Received: from by NRI.Reston.VA.US id aa00646; 18 Jun 92 3:00 EDT
Received: from by NRI.Reston.VA.US id aa00628; 18 Jun 92 3:00 EDT
Received: from shark.mel.dit.CSIRO.AU by with Internet SMTP id <>; Thu, 18 Jun 1992 06:29:50 +0100
Received: from squid.mel.dit.CSIRO.AU by with SMTP id AA26694 (5.65c/IDA-1.4.4/DIT-1.3 for <>); Thu, 18 Jun 1992 15:28:27 +1000
Received: by squid.mel.dit.CSIRO.AU (4.1/SMI-4.0) id AA15126; Thu, 18 Jun 92 15:24:51 EST
Message-Id: <9206180524.AA15126@squid.mel.dit.CSIRO.AU>
Subject: Re: OSI-DS 33 and 34 - DUA and DSA Metrics
In-Reply-To: Your message of "Wed, 17 Jun 92 13:09:11 +0100." <>
Date: Thu, 18 Jun 92 15:24:50 +1000
From: Andrew Waugh <>

>1. I don't think that the metric used on page 15 for the SearchStone
>definition is really appropriate
>   If, as I understood it, the numeric values associated to
>	respectively read, list, search 1-level, search whole-subtree
>	are supposed to reflect the relative cost of the operations,
>	I would suggest
>	     . to raise a lot the "search subtree" to a much higher
>		value (say 10)
>	     . to raise also the "search one level" but here
>		the value depends in fact on the DSA actually
>		performing the operation (this operation,
>		especially at the higher levels of the DIT
>		where there is a lot of distribution, might turn
>		out to be very costly for NON QUIPU DSAs)
>		[[ For example at the C=FR master level a
>		search-one-level cost of 100 would be appropriate
>		whereas at <C=FR; O=CNRS> a cost 3 seems appropriate ]]
>		As I understand that a DUA metric is not a DSA metric,
>		we have to choose a mean value and I would propose the
>		following ratings
>		 . read [1]
>		 . list [2]
>		 . search-1-level [5]
>		 . search subtree [10]
>	Another point is that no distinction is made between
>	plain searches, substring searches, and approximate match
>	searches, which certainly have very different costs....

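For concreteness, the weighting proposed above amounts to a simple weighted sum over the operations a DUA issues while resolving a query. A minimal sketch (the function name and interface are my own illustration; only the weights come from the message above):

```python
# Proposed per-operation weights from the message above:
# read=1, list=2, search-1-level=5, search-subtree=10.
WEIGHTS = {
    "read": 1,
    "list": 2,
    "search-1-level": 5,
    "search-subtree": 10,
}

def searchstone_score(operations):
    """Return the weighted cost of a sequence of DUA operations."""
    return sum(WEIGHTS[op] for op in operations)

# Example: a DUA that issued two reads, one list and one subtree search.
print(searchstone_score(["read", "read", "list", "search-subtree"]))  # 14
```

Note this flat scheme is exactly what the quoted objection is about: it assigns one mean cost per operation type, regardless of which DSA performs it or whether a search uses exact, substring, or approximate matching.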
I, too, was worried about the weightings to be given for the various
operations. If we are going to set weights for the different
operations I would like to see:

	1) Explicit assumptions about the distribution of the DIT
	amongst DSAs. For example, that an entire organisation is
	contained within a single DSA.

	2) An agreement on what cost we are actually measuring.
	Are we trying to minimise the elapsed time in finding a
	specific entry? Or are we attempting to minimise the
	CPU cycles consumed in the DSA?

	3) Actual costs from real live DSAs (as many as possible).
	Note that our decision for 2) will have a great impact here.
	If we are interested in elapsed user time, then network
	and protocol delays may mean that a small number of complicated
	searches may be more 'economic' than a larger number of
	reads; particularly if we are finding a user in UCL from

My fear is that without making explicit assumptions, DUAs will be
judged 'good' or 'bad' with little basis as to whether they are,
in fact, good or bad. Worse still, this might start to drive DUA
development so that DUAs are constructed in particular ways to get
'low' scores.

andrew waugh