Review Board Scalability (was: Re: [Icar] Input based on SIRS experience)
"James Kempf" <kempf@docomolabs-usa.com> Mon, 12 January 2004 18:08 UTC
Received: from optimus.ietf.org ([132.151.1.19])
by ietf.org (8.9.1a/8.9.1a) with ESMTP id NAA08935
for <icar-archive@odin.ietf.org>; Mon, 12 Jan 2004 13:08:24 -0500 (EST)
Received: from localhost.localdomain ([127.0.0.1] helo=www1.ietf.org)
by optimus.ietf.org with esmtp (Exim 4.20) id 1Ag6Tj-0007Oj-RM
for icar-archive@odin.ietf.org; Mon, 12 Jan 2004 13:07:56 -0500
Received: (from exim@localhost)
by www1.ietf.org (8.12.8/8.12.8/Submit) id i0CI7sNg028430
for icar-archive@odin.ietf.org; Mon, 12 Jan 2004 13:07:54 -0500
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
by optimus.ietf.org with esmtp (Exim 4.20) id 1Ag6Th-0007OT-VY
for icar-web-archive@optimus.ietf.org; Mon, 12 Jan 2004 13:07:54 -0500
Received: from ietf-mx (ietf-mx.ietf.org [132.151.6.1])
by ietf.org (8.9.1a/8.9.1a) with ESMTP id NAA08869
for <icar-web-archive@ietf.org>; Mon, 12 Jan 2004 13:07:50 -0500 (EST)
Received: from ietf-mx ([132.151.6.1]) by ietf-mx with esmtp (Exim 4.12)
id 1Ag6Tg-0006jr-00
for icar-web-archive@ietf.org; Mon, 12 Jan 2004 13:07:52 -0500
Received: from exim by ietf-mx with spam-scanned (Exim 4.12)
id 1Ag6Rl-0006ej-00
for icar-web-archive@ietf.org; Mon, 12 Jan 2004 13:05:54 -0500
Received: from [132.151.1.19] (helo=optimus.ietf.org)
by ietf-mx with esmtp (Exim 4.12) id 1Ag6Pv-0006bt-00
for icar-web-archive@ietf.org; Mon, 12 Jan 2004 13:03:59 -0500
Received: from localhost.localdomain ([127.0.0.1] helo=www1.ietf.org)
by optimus.ietf.org with esmtp (Exim 4.20)
id 1Ag6Pw-0006sq-NY; Mon, 12 Jan 2004 13:04:00 -0500
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
by optimus.ietf.org with esmtp (Exim 4.20) id 1Ag6PO-0006rv-Qx
for icar@optimus.ietf.org; Mon, 12 Jan 2004 13:03:26 -0500
Received: from ietf-mx (ietf-mx.ietf.org [132.151.6.1])
by ietf.org (8.9.1a/8.9.1a) with ESMTP id NAA08558
for <icar@ietf.org>; Mon, 12 Jan 2004 13:03:23 -0500 (EST)
Received: from ietf-mx ([132.151.6.1]) by ietf-mx with esmtp (Exim 4.12)
id 1Ag6PN-0006VL-00
for icar@ietf.org; Mon, 12 Jan 2004 13:03:25 -0500
Received: from exim by ietf-mx with spam-scanned (Exim 4.12)
id 1Ag6OF-0006NT-00
for icar@ietf.org; Mon, 12 Jan 2004 13:02:17 -0500
Received: from key1.docomolabs-usa.com
([216.98.102.225] helo=fridge.docomolabs-usa.com ident=fwuser)
by ietf-mx with esmtp (Exim 4.12) id 1Ag6N2-0006IQ-00
for icar@ietf.org; Mon, 12 Jan 2004 13:01:01 -0500
Message-ID: <01c401c3d936$19675690$606015ac@dclkempt40>
From: "James Kempf" <kempf@docomolabs-usa.com>
To: <icar@ietf.org>
References: <2004111185020.818584@bbprime>
Subject: Review Board Scalability (was: Re: [Icar] Input based on SIRS
experience)
Date: Mon, 12 Jan 2004 10:01:27 -0800
MIME-Version: 1.0
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
Sender: icar-admin@ietf.org
Errors-To: icar-admin@ietf.org
X-BeenThere: icar@ietf.org
X-Mailman-Version: 2.0.12
Precedence: bulk
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/icar>,
<mailto:icar-request@ietf.org?subject=unsubscribe>
List-Id: Improved Cross-Area Review <icar.ietf.org>
List-Post: <mailto:icar@ietf.org>
List-Help: <mailto:icar-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/icar>,
<mailto:icar-request@ietf.org?subject=subscribe>
X-Spam-Checker-Version: SpamAssassin 2.60 (1.212-2003-09-23-exp) on
ietf-mx.ietf.org
X-Spam-Status: No, hits=0.0 required=5.0 tests=AWL autolearn=no version=2.60
Margaret Wasserman said:
>Like the SIRS proposal, I believe that this proposal is quite
>naive about the scale of this particular problem. Let's take
>a few numbers:
>
>We currently approve about 200 RFCs per year. Each of these
>RFCs receives (on average) ~2-1/2 review cycles from the IESG
>plus a full AD Review. So, let's assume that we will continue to
>produce 200 documents per year, and that each will be subjected
>to 3 cross-area review cycles.
>
>BTW, if the same group of people reviews the document all three
>times (also see section on consistency below), the later reviews
>will take much less time than the earlier reviews.
>
>The IESG consists of 13 people, not all of whom carefully review
>each document during IESG review (for various reasons). So, let's
>assume that we can get adequate cross-area coverage (see section
>on cross-area coverage) by having each document reviewed by
>8 properly selected members of the Quality Review Board. That's
>the number of ADs that it currently takes (today, with one slot
>empty) to approve publication of a document.
>
>So, the number of individual document reviews required would be
>200 * 3 * 8 == 4800.
>
>Let's assume that we can find 200 people who are willing and
>qualified to serve on the Quality Review Board. I don't know
>that this is possible, and it means that someone will have to
>manage a function that involves 200 people (see manageability
>section below), but let's assume... In that case, each member
>of the review board would need to do an average of 24 reviews
>per year -- so a minimum of 3 is misleadingly low. Ideally,
>this means that each member of the review board would do a
>full 3-cycle review for 8 documents per year.
>
>If we expect this system to result in a 50% improvement in
>document throughput, we will need to handle 300 documents per
>year, which requires 36 reviews/board member/year (or 12
>documents).
>
>Doable? Maybe. If we can find 200 people willing to do this,
>figure out how to train and organize them (see sections on
>preparation/training and management below), and sustain that
>number over time. The sign-up rate for SIRS does not make me
>confident that this is possible, but we could try....
and Spencer Dawkins said:
> Yeah, Margaret is also coming up with 200-300 reviewers needed, and
> we still have no idea how many we can get. :-{
It seems to me that, in principle, the only differences between an IETF
review board and the program committee for a conference are that the members
of an IETF review board would be expected to stick with a draft for more
than one review, and possibly the number of reviewers per document. So
it seems like we might be able to look at existing experience with
conference program committees for some idea about how to make this work.
As an example, I'm on the program committee for Mobihoc this year. There are
42 program committee members and 230 papers, which works out to a ratio of
about 6 papers per reviewer. I was assigned 11 papers, from which I conclude
that the program chairs have decided to have about 2 reviews per paper.
If one accepts Margaret's figure of about 24 reviews per year, that would
imply that the number of reviews for IETF drafts would be about twice what
Mobihoc members are expected to do. This doesn't seem all that bad to me
considering that I'll probably do at least 11 additional reviews this year,
of IETF drafts and papers for other conferences, etc. Working backwards from
the Mobihoc program committee, that would reduce the number of review board
members to maybe 90 (twice the Mobihoc number) rather than 200.
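The load arithmetic in the two scenarios (Margaret's 8-reviewer board versus a Mobihoc-style 2-reviewer board) can be sketched as a quick calculation. The numbers are the ones quoted in this thread; the helper function itself is just illustrative:

```python
# Back-of-the-envelope check of the reviewer-load arithmetic in this thread.
# The inputs (document counts, review cycles, board sizes) are the figures
# quoted above; the helper function is illustrative only.

def reviews_per_member(docs_per_year, cycles, reviewers_per_doc, board_size):
    """Average number of reviews each board member does per year."""
    total_reviews = docs_per_year * cycles * reviewers_per_doc
    return total_reviews / board_size

# Margaret's scenario: 200 docs/year, 3 cycles, 8 reviewers, 200-member board.
print(reviews_per_member(200, 3, 8, 200))  # 24.0

# Mobihoc-style scenario: 2 solicited reviewers per doc, ~90-member board.
print(reviews_per_member(200, 3, 2, 90))   # roughly 13.3
```

With 2 reviewers per document the total review count drops to a quarter of Margaret's 4800, which is why a board of roughly 90 (about twice Mobihoc's 42) keeps the per-member load near what a program committee member already carries.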
To do this, however, we would have to reduce the number of independent
_solicited_ reviewers from Margaret's suggested 8 to 2. Would this weaken
our ability to detect quality problems? Unclear. Many drafts
now receive unsolicited review as part of WG last call and IETF last call
and presumably that would not change. Margaret's number of 8 reviews was
based on an approximation of how many ADs review drafts, so YMMV. And, even
with 8 people in the IESG reviewing, there have been occasions when problems
have been caught later (MIPv6 binding update security comes to mind). Also,
with an issue tracking system in place, reviewers may be able to determine
more easily whether issues have been resolved, without having to wade
through the text of the spec, so reviews might become less time consuming.
I think there is a different issue that may be more of a barrier:
providing the right incentive to get people to volunteer. People
volunteer for a conference program committee because they want to see what
other people in the area are up to from a research standpoint, and they want
to make sure that published results are of high quality. The program
committee's comments are used to determine what papers are ultimately
published, that is, their reviews make a real difference in the conference
content.
If the review board's work is purely advisory, as the ART and SIRS proposals
envision, then I think we will have a hard time getting people to volunteer.
Imagine a program committee for a conference in which the reviews produced
by the program committee could be overridden by the program's organizing
committee, or even by the authors (this would be the equivalent of the IESG
or the WGs themselves, respectively, being allowed to override a review
board opinion). Who would volunteer?
There is another kind of conference program committee in which the authors
themselves are required to do reviews. This could be another way to solicit
reviews: require people who edit drafts to provide reviews as well.
Obviously, since draft editors are generally less experienced, their reviews
could not be weighted as highly as the review board's reviews. In such
conferences, I presume the organizing committee has the final say, and
typically these kinds of conferences are not first tier.
jak
_______________________________________________
Icar mailing list
Icar@ietf.org
https://www1.ietf.org/mailman/listinfo/icar
- [Icar] Input based on SIRS experience Wijnen, Bert (Bert)
- Re: [Icar] Input based on SIRS experience Dave Crocker
- Re: [Icar] Input based on SIRS experience Dave Crocker
- Re: [Icar] Input based on SIRS experience Spencer Dawkins
- Re: Re: [Icar] Input based on SIRS experience Dave Crocker
- Review Board Scalability (was: Re: [Icar] Input b… James Kempf
- Re: Re: [Icar] Input based on SIRS experience Alex Rousskov
- Re: Review Board Scalability (was: Re: [Icar] Inp… Pekka Savola