Re: [ietf-outcomes] [OPS-AREA] IETF Outcomes wiki

Dave CROCKER <> Sun, 07 February 2010 17:59 UTC

Date: Sun, 07 Feb 2010 10:00:17 -0800
From: Dave CROCKER <>
Organization: Brandenburg InternetWorking
To: David Harrington <>
References: <> <048101caa80f$db2ee5a0$>
In-Reply-To: <048101caa80f$db2ee5a0$>


Thoughtful posting.  Thanks!

Responding only to the larger issues you raise (but hoping others will also 
discuss the detail)...

On 2/7/2010 8:08 AM, David Harrington wrote:
> Hi,
> I think this results table is an interesting idea, but I have some
> concerns.
> 1) My first concern is about the clarity of adoption ratings.
> SNMPv1 was obviously widely adopted.
> But a recent survey shared by Bert showed 14% using SNMPv1, 65% using
> SNMPv2, and 12% using SNMPv3. Yet SNMPv1 is listed as massive
> adoption, SNMPv2 is listed as some adoption, and SNMPv3 as complete
> failure.

The rating is like a 'best achieved' meter.  It pegs at the best value /ever/ 
achieved.  It's not designed to show reduced use over time.

Perhaps another column should be added, to distinguish between "best" and 
"current"?  (Although that creates a serious challenge, for keeping the 
'current' value, ummmmm... current.)
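To make the idea concrete, here is a rough sketch of an entry carrying both 
ratings.  (Everything here, including the field names and rating symbols, is 
invented purely for illustration; the wiki defines no such schema.)

```python
# Hypothetical sketch of an outcomes entry with both a "best ever"
# and a "current" adoption rating; names and symbols are invented.
from dataclasses import dataclass

@dataclass
class OutcomeEntry:
    protocol: str
    best_adoption: str      # pegged at the best level ever achieved
    current_adoption: str   # today's level; must be kept up to date

# SNMPv1 was massively adopted at its peak, less so today.
snmpv1 = OutcomeEntry("SNMPv1", best_adoption="++", current_adoption="+")
print(snmpv1.best_adoption, snmpv1.current_adoption)
```

The maintenance burden lands entirely on the second field, which is exactly 
the "keeping it current" problem noted above.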

SNMPv1 was certainly massively successful... in its day.

> ipfix did not start out as an IETF design; I think an analysis of IPR
> disclosures would certainly reflect that; the ipfix WG just developed
> existing work into a standard.
> What criteria should be used for this binary decision?

Kernighan or Ritchie defined "portable code" as needing less than 10% re-writing.
Perhaps some similarly pragmatic definition should be developed for
determining whether something started "within" the IETF?

Since everything we define incorporates some details from outside, the right 
choice here is probably something along the lines of "the ietf assembled or 
invented all the pieces" vs. "the ietf made some changes to a pre-existing 
spec".  But I'm just guessing.
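As a purely mechanical illustration of that kind of threshold (the function, 
the sample data, and the way the 10% figure is applied are all invented here, 
not anything the IETF actually uses):

```python
# Illustrative only: estimate what fraction of a document's sections were
# rewritten, in the spirit of the "less than 10% re-writing" rule.
import difflib

def fraction_rewritten(original, revised):
    """Fraction of `original` items that do not survive into `revised`."""
    sm = difflib.SequenceMatcher(None, original, revised)
    unchanged = sum(size for _, _, size in sm.get_matching_blocks())
    return 1.0 - unchanged / max(len(original), 1)

pre_ietf = ["intro", "framing", "records", "transport", "security",
            "templates", "timing", "export", "options", "iana"]
ietf_spec = pre_ietf[:-1] + ["iana-considerations"]  # one section rewritten

# 1 of 10 sections changed, i.e. about 10%: right at the boundary.
print(round(fraction_rewritten(pre_ietf, ietf_spec), 2))  # 0.1
```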

> 3) I have concerns about whether "adoption" is historical adoption or
> current adoption.
> At what point in time is adoption measured? Is netconf a complete
> failure because it has not been widely adopted yet? Was SNMPv1
> massively adopted in 1990? Maybe the chart should have adoption
> ratings at 2 years, 5 years, 10 years, and 15+ to get a better idea of
> adoption trends.

You are raising a clearly reasonable question.  I don't yet know my own 
opinion about the particulars of the answer.

However, the one caution I will suggest is against trying to make this wiki 
assert much "precision".  The method for developing entries is highly 
subjective, which means precision is extremely coarse.  It would be misleading 
to represent more precision about community rough consensus than is warranted.

There is a natural tendency to want finer-grained resolution in the information, 
but we typically are not going to have assessment techniques that justify it, 
nor will we find a compelling community /need/ for the effort to develop them.

To be pedantic about the precision limitations when reporting assessments:  for 
empirical efforts like this, the answer to 1.5 * 1.7 is not 2.55.  It's 2.6.
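That significant-figure convention, sketched with Python's decimal module (a 
minimal illustration, not part of the original argument):

```python
# Two significant figures in, two significant figures out.
from decimal import Decimal, ROUND_HALF_UP

raw = Decimal("1.5") * Decimal("1.7")
print(raw)        # 2.55 (the exact arithmetic)

# Report only the precision the inputs support: one decimal place here.
reported = raw.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
print(reported)   # 2.6
```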

>       We certainly do not want to discourage
> people from adopting Netconf because so far it has not achieved
> massive adoption. Yet this chart could do that if the timeline of
> availability versus adoption isn't represented.

This is an interesting point:  To what extent can entries in this table 
influence the use of the things being listed?

I had assumed that an assessment should only be listed after the value to use 
(degree of success or failure) was quite clear to the community and that there 
therefore was/is consensus about it.  Other than the "0" value (still pending), 
of course.  I guess I could also imagine going from + to ++, or ++ to +++, over 
time.

> 4) I have concerns about "adoption" when it relates to extensions
> versus the base technology. SNMPv2 and SNMPv3 build upon the base SNMP
> design. What has been adopted widely in SNMPv2? Is it the operations
> (getBulk), or is it SMIv2 extensions (Counter64)?
> SNMPv2 and SNMPv3 still use the base protocol features, such as GETs
> and SETs, traps and request/responses, and so on. That doesn't really
> get reflected in this table.

Not sure what sort of "reflected" you are looking for.

Please note that the table is attempting to report on the utility of a 
particular standards effort, not necessarily the functional relationship among 
different efforts.

That is, the bald form of the question is:  Was the incremental work to 
produce SNMPv2 worth the considerable effort?  Every standards effort is 
enormously expensive.  Try calculating the aggregate cost of participant time 
for even the smallest effort; it's intimidating.
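A back-of-envelope version of that calculation (every number below is invented 
purely for illustration):

```python
# Hypothetical aggregate cost of participant time for a small WG effort.
participants = 20        # active contributors (invented)
hours_per_week = 4       # reading, writing, meetings (invented)
weeks = 2 * 52           # a two-year effort (invented)
hourly_rate = 100.0      # loaded cost in USD (invented)

total = participants * hours_per_week * weeks * hourly_rate
print(f"${total:,.0f}")  # $832,000
```

Even with these modest assumptions, the total lands in the high six figures.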

(However Jon Callas commented to me that some efforts that are overtly a failure 
do provide valuable learning for later efforts that succeed; and indeed, 
capturing this is part of the goal in the "phases" idea, for efforts that have 
to change direction.)

> I notice Diameter is listed as a separate protocol from RADIUS, but
> isn't Diameter really RADIUS v2?

This gets back to the notation question I raised separately, for such things as 
phases.  It would be great if we could resolve this to cover such cases.

And this particular example can't reasonably be supported by the choice I had 
stated a preference for, which is a version number in column 1, since the basic 
name is different.

>  Diameter has been adopted by the
> telecom/service provider segment, but it has been largely rejected by
> the enterprise segment who prefer the simplicity of RADIUS. But
> adoption by some market segments and not others certainly doesn't get
> reflected in this table.

That's where the Target/Segment column helps.

> I would be interested in seeing the adoption trends for other
> protocols and later versions or their extensions, like DHCP and TLS
> and RADIUS and ipfix and IPv4/IPv6.


> 5) I am concerned about incomplete information, especially concerning
> area production.

The good news is that you can directly remedy that deficiency....

> 6) I am concerned about statistics without analysis. The info
> presented has no analysis with it.

The right-most column is conveniently available for pointing to in-depth 
information...  Discussion of history, core issues, and the like, is not the 
goal of this wiki, but yes, it could be extremely helpful to develop that 
material.

> I am especially concerned because I see many efforts that try to build
> the pieces separately. For example, netconf was published two years
> ago, but the data modeling language is still in development, and there
> are no manageable data models available. Sip-clf is trying to build
> just a data model without considering how protocols will need to work
> to achieve the targeted use cases (and what effect that will have on
> the information and data models).

Is there a way to depict this sort of issue in a table?  Can the wiki be 
modified to support it?

> --Summary--
> I think this results chart is way too simplistic, and it can be very
> misleading, and it could lead to bad decisions about how we should
> develop protocols.

Your premise is that greater complexity will produce greater insight.

The cognitive reality is that there needs to be a balance between simplicity and 
complexity.  Go too far in either direction and utility is lost.

Where the balance needs to be largely depends on the details of the target 
population for consuming the wiki data.


   Dave Crocker
   Brandenburg InternetWorking