[Icar] Re: [mpowr] Re: [Solutions] Summary of Discussion on Reforming IETF Quality Control Process

Margaret Wasserman <margaret@thingmagic.com> Sat, 10 January 2004 17:20 UTC

Message-Id: <5.1.0.14.2.20040110075158.0385bbd0@ms101.mail1.com>
Date: Sat, 10 Jan 2004 12:09:52 -0500
To: "James Kempf" <kempf@docomolabs-usa.com>
From: Margaret Wasserman <margaret@thingmagic.com>
Cc: icar@ietf.org, solutions@alvestrand.no
In-Reply-To: <003701c3d737$5b361530$386015ac@dclkempt40>
References: <5.1.0.14.2.20040109203410.04552a28@ms101.mail1.com>
Subject: [Icar] Re: [mpowr] Re: [Solutions] Summary of Discussion on Reforming IETF Quality Control Process

Hi James,

[I'm only responding on solutions and icar (not mpowr).  Most
of this discussion happened on solutions, and your proposal is
largely about review/approval structures, so I think it is
more applicable to the proposed ICAR effort than to the proposed
MPOWR effort.]

I've had some time to think about (and re-read) your draft, so
now I will provide my feedback, rather than just clarifying
questions.  I do support the idea of a tiered review/approval
structure within the IETF, but I have many concerns with the
specifics of your proposal, several of which I also expressed
regarding the SIRS proposal.

IMO, any proposal in this area needs to meet certain criteria
for scalability, consistency, cross-area coverage, efficiency,
manageability and accountability.  I have some concerns with
your proposal in each of these areas, and I have explained them
below.  I also hope that these criteria and my explanations may
be useful in assessing other review/approval proposals, as well.

These concerns are what led me originally to prefer the type of
per-area review board that Alex Zinin has suggested, rather than
a single large (SIRS-like) review board as you suggest.  I
believe that it would be possible to give single ADs the ability
to approve non-critical documents (criteria TBD) based on
reviews by all of the per-area boards -- in other words, I think
that the structure/management of the review board(s) can be
considered somewhat orthogonal to whether some documents can be
approved without full IESG review.

So, on to the criteria...

SCALABILITY
===========

I believe that this proposal, like the SIRS proposal, is quite
naive about the scale of this particular problem.  Let's run
a few numbers:

We currently approve about 200 RFCs per year.  Each of these
RFCs receives (on average) ~2-1/2 review cycles from the IESG
plus a full AD Review.  So, let's assume that we will continue to
produce 200 documents per year, and that each will be subjected
to 3 cross-area review cycles.

BTW, if the same group of people reviews the document all three
times (also see section on consistency below), the later reviews
will take much less time than the earlier reviews.

The IESG consists of 13 people, not all of whom carefully review
each document during IESG review (for various reasons).  So, let's
assume that we can get adequate cross-area coverage (see section
on cross-area coverage) by having each document reviewed by
8 properly selected members of the Quality Review Board.  That's
the number of ADs that it currently takes (today, with one slot
empty) to approve publication of a document.

So, the number of individual document reviews required would be
200 * 3 * 8 == 4800.

Let's assume that we can find 200 people who are willing and
qualified to serve on the Quality Review Board.  I don't know
that this is possible, and it means that someone will have to
manage a function that involves 200 people (see manageability
section below), but let's assume...  In that case, each member
of the review board would need to do an average of 24 reviews
per year -- so the proposal's suggested minimum of 3 is
misleadingly low.  Ideally,
this means that each member of the review board would do a
full 3-cycle review for 8 documents per year.

If we expect this system to result in a 50% improvement in
document throughput, we will need to handle 300 documents per
year, which requires 36 reviews/board member/year (or 12
documents).
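The arithmetic above can be sketched in a few lines of Python (a
minimal illustration; the function name is mine, and the figures are
the estimates assumed in this message, not measured data):

```python
# Back-of-the-envelope reviewer-load estimate, using the figures
# assumed in this message: documents per year, review cycles per
# document, reviewers per document, and the size of the reviewer pool.

def reviews_per_member(docs_per_year, cycles_per_doc,
                       reviewers_per_doc, pool_size):
    """Average individual reviews each board member must do per year."""
    total_reviews = docs_per_year * cycles_per_doc * reviewers_per_doc
    return total_reviews / pool_size

# Current throughput: 200 RFCs/year, 3 cycles, 8 reviewers each,
# spread over a pool of 200 volunteers.
print(reviews_per_member(200, 3, 8, 200))  # 24.0 reviews/member/year

# With a 50% throughput improvement (300 documents/year):
print(reviews_per_member(300, 3, 8, 200))  # 36.0 reviews/member/year
```

Dividing the 24 reviews by the 3 cycles per document gives the 8
full 3-cycle document reviews per member mentioned above.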

Doable?  Maybe.  If we can find 200 people willing to do this,
figure out how to train and organize them (see sections on
preparation/training and management below), and sustain that
number over time.  The sign-up rate for SIRS does not make me
confident that this is possible, but we could try....

CONSISTENCY
===========

There are two types of consistency that are important for IETF
document reviews:

     (1) Consistency across the different reviews done for the
         same document (to avoid thrash).
     (2) Using a consistent set of acceptance criteria for each
         document that is reviewed/approved.

Although we don't always achieve both types of consistency today
(due to personnel changes on the IESG, for example), we usually
do quite well in this area.  Having the same group of people
review/approve every document is heavily optimized for consistency
(over efficiency, scalability, etc.), and because we don't suffer
too much in this area today, it is easy to underrate the benefits
and importance of consistency.

Type (1) consistency could be achieved by having the same set of
reviewers (to whatever extent possible) perform all of the
reviews for a given document.  In other words, each document
would have its own mini-board of reviewers (8 reviewers, in my
previous example) who would do the initial review and perform
whatever subsequent reviews are needed to determine that new
versions of the document correct problems found in earlier
reviews and do not introduce new problems.  So, this is fairly
easily handled, assuming that we have a manageable method for
assigning reviewers to a particular document and replacing them
as needed, etc. (see manageability section below).

Type (2) consistency is _much_ harder to achieve across a large
board (~200 people in my example above) than it is with a group
of 13 people who spend several hours per week on the phone with
each other, hold retreats and communicate daily...

In order to achieve even a reasonable level of consistency across
a large group (200 people?), I believe that we would need most or
all of the following things:

     - Written review criteria for each type of document.
     - Documented per-area review criteria (like the MIB nits) for
       every area or technical sub-area.
     - A record of architectural/policy decisions that were made
       that can be searched and used by subsequent reviewers.
     - A mandatory preparation/training program (in-person or
       on-line) for all new members of the review board.
     - Some type of mentoring or monitoring program for new
       members during the first NN reviews (most easily
       achieved, perhaps, by having some sort of structure
       within the board?).

Without a plan (including committed resources) to achieve these
things, I believe that a large review board would devolve into
chaos.

There may also be an issue with documents that are passed to the
IESG for final approval.  The IESG's review may not be consistent
with earlier reviews, perhaps resulting in "late surprises" or
demotivation of the WG.  This problem is most likely in a situation
where the IESG has no influence over what reviewers are chosen to
do the initial review, and where the IESG does not feel accountable
for the quality/correctness of the initial review (see section on
accountability below).

CROSS-AREA COVERAGE
===================

The IESG believes that Cross-Area Review is a core value of the
IETF, as well as one of the key properties that differentiates us
from other standards bodies.  Obviously the folks involved in
developing this plan think so, too, because otherwise we would
simply ship documents based on WG technical review.

However, this proposal doesn't seem to do much to support or
facilitate cross-area review.

In order for a document to receive sufficient cross-area review,
it is not sufficient to establish a pool of (perhaps 200)
reviewers and ensure that the document is reviewed by a certain
number (perhaps 8) of those reviewers.  We would need to make
sure that the document is reviewed by a range of people who,
together, have an appropriate breadth of expertise to provide
adequate cross-area coverage.

It isn't easy to put together a small group of people who can
cover all of the technology areas represented in the IETF (as
I'm sure any nomcom member could tell you), and it certainly
isn't possible to populate the Quality Review Team with people
who each have knowledge of the whole set of IETF technologies
and their possible interactions.  As far as I know, there isn't
anyone who could provide a full cross-area review alone (although
Allison can do a passable imitation) -- that's why we have documents
reviewed by IESG members from all areas before they are approved.

A lot of details are swept under the rug of the statement "The
Working Group Chair is responsible for ensuring that all work
done by the Working Group receives adequate and timely review".
How are you picturing that this would work?  How would the
WG chair identify the areas of expertise represented by each
reviewer and gauge their level of expertise in each area?
How would s/he make sure that the appropriate inter-area
issues are adequately covered by the chosen reviewers?

This proposal is largely predicated on the belief that WG chairs
should not be responsible for document quality because their
involvement in the technology may blind them to its technical
flaws.  However, it seems to assume that WG chairs will be
aware of all possible interactions that this technology may
have with other IETF areas and be able (and willing) to
solicit review feedback from the right sort of people to
uncover all of those problems, which IMO won't work.  For
example, if the WG is unaware that their work has major impact
on the application layer, the WG chair may not think it is
important that the work be reviewed by an applications-
knowledgeable reviewer! This seems, to me, to be a pretty big
flaw.

Per-area review boards have a major advantage in this area.  If
a document is passed by every per-area review board (particularly
if those boards are carefully chosen to cover their area well
and provide coverage for inter-area gaps), with the reviewers
chosen by the ADs or other area review board leaders who aren't
active in the WG, then it is more likely that a document will
have received adequate cross-area review than if it is reviewed
by a set of reviewers chosen from a large pool by the WG chair.

EFFICIENCY
==========

One goal of any new review/approval process should be to increase
our document throughput -- publishing documents faster than we
do today, without a reduction in quality.  In general, we should
try to avoid a situation where a document needs to go through
two (possibly inconsistent, see above) review/approval processes.

This proposal might achieve this goal for some documents,
particularly those that do not require IESG review.  However,
the most important documents might be subjected to two rounds
of review/approval -- one by the Quality Review Board and
another by the IESG.  This issue might be resolved by better
defining how/when a decision is made regarding whether a
document will be reviewed/approved by the Quality Review Board
or the IESG.

MANAGEABILITY
=============

In order to run an orderly review process, to assign reviewers
with appropriate (combinations of) expertise, to adequately
train reviewers and to maintain the quality of the review
process, the Quality Review Board would need to be managed.
This proposal does not indicate how the board would be managed,
who would manage it, how they would be chosen, etc.  This is
a major omission that makes it difficult to evaluate many
aspects of this proposal.

Passive voice is used throughout this document, and that tends
to obscure the fact that there is no management structure
defined.  For example, it says "Reviewers can be assigned to
particular Working Groups and have their names listed on the
Working Group Web page as technical advisors".  By whom?  By
the WG chair?

This document does not mention any preparation or training
for members of the Quality Review Board, nor does it
specify any period during which a member's initial
reviews will be monitored by experienced members, etc.
I think that some type of preparation/training is essential
to asking people to assume any role in the IETF process.

The per-area review board proposal relies on the ADs to select
reviewers and manage the process as appropriate to each area.
The AD would be responsible for ensuring that members of the
area boards are adequately trained and prepared, and would
be held accountable for the quality of their work.

ACCOUNTABILITY
==============

This document seems to be based on the concept that quality
review is performed for the WGs, and that the review process
should be accountable to the WGs.  Ultimately, reviewers report
to WG chairs, because if they are not asked (by WG chairs)
to review at least 3 documents per year, they are removed.
I believe that is wrong...

The current AD Review/IETF Last Call/IESG Review system is
not intended to be accountable to a WG.  The IESG (as a review
board) exists outside of the WG structure, as a group that is
accountable to the community.  AD/IESG reviews and IETF
Last Calls serve as a mechanism for the interests of the
IETF community (and perhaps the Internet as a whole) to be
balanced against the interests of a particular WG, and I don't
see this balance adequately represented in this proposal, at
least not for documents that don't receive full IESG review.

I realize that some members of this discussion would like
to move away from the hierarchical structure represented by
the current IESG.  I'm not sure I agree with that approach,
but I'm also not sure that I'm against it.  So far, though,
I have not seen any proposal that gives the IETF/Internet
community a strong enough voice that doesn't involve
giving some authority to some selected representatives of
that community...

This doesn't mean that the IESG is the only group that can
review/approve documents, and I do strongly believe that we should
build a more scalable review/approval structure that does not
rely solely on the IESG.  What it does mean (to me) is that
reviewers should be chosen, managed, evaluated and held
accountable by someone other than WG chairs.  This leads to
two choices:

     - Have the IESG manage the review board(s) and the
       review/approval process.
     - Build a different group to manage this process that
       is selected by and accountable to the community.

The second choice has some known weaknesses.  In the past, we
have found that separating review/approval from the group that
was ultimately responsible for WG management led to serious
problems with the review/approval group becoming out of touch
with the community, etc.  (at least that's a very short summary
of what I've been told about the pre-Kobe IESG/IAB split).
Perhaps we could do this in a way that wouldn't have those
flaws, but most people I've spoken to seem to be against the
idea of splitting review/approval and WG management at the highest
level, as the review/approval group has no motivation to help the
WGs to achieve their chartered goals...

So, perhaps the first choice makes more sense?  That is the
choice that is represented in the per-area review board proposal
that Alex has submitted.

In addition to properly balancing the interests of the WG
and the interests of the IETF/Internet community, the per-area
review board approach makes the IESG directly accountable to
the community for the results of the initial review/approval
process, making it more likely that those reviews will be
consistent with a later IESG review, if one is deemed necessary.

---

Another advantage of the per-area review board proposal over
this proposal is that it builds upon, rather than replaces, the
work that has already been done to build per-area review teams
such as the MIB doctors, the ops-dir, Transport doctors, the
rtg-dir, etc.

At this point, I think that my comments on the proposal may be
almost as long as the proposal itself, so I'll stop now.

I hope that these comments will be taken as constructive input
to the effort of building a scalable IETF review/approval process,
even though I don't favor the specific type of review mechanism
proposed in this document.

Margaret

_______________________________________________
Icar mailing list
Icar@ietf.org
https://www1.ietf.org/mailman/listinfo/icar