Re: [newtrk] [Fwd: I-D ACTION:draft-carpenter-newtrk-twostep-00.txt]

Brian E Carpenter <brc@zurich.ibm.com> Thu, 16 June 2005 19:07 UTC

Received: from darkwing.uoregon.edu (root@darkwing.uoregon.edu [128.223.142.13]) by ietf.org (8.9.1a/8.9.1a) with ESMTP id PAA26475 for <newtrk-archive@lists.ietf.org>; Thu, 16 Jun 2005 15:07:17 -0400 (EDT)
Received: from darkwing.uoregon.edu (majordom@localhost [127.0.0.1]) by darkwing.uoregon.edu (8.13.4/8.13.4) with ESMTP id j5GJ6UN7004355; Thu, 16 Jun 2005 12:06:30 -0700 (PDT)
Received: (from majordom@localhost) by darkwing.uoregon.edu (8.13.4/8.13.4/Submit) id j5GJ6UPU004352; Thu, 16 Jun 2005 12:06:30 -0700 (PDT)
Received: from mtagate1.uk.ibm.com (mtagate1.uk.ibm.com [195.212.29.134]) by darkwing.uoregon.edu (8.13.4/8.13.4) with ESMTP id j5GJ6Rqm004037 (version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NOT) for <newtrk@lists.uoregon.edu>; Thu, 16 Jun 2005 12:06:28 -0700 (PDT)
Received: from d06nrmr1407.portsmouth.uk.ibm.com (d06nrmr1407.portsmouth.uk.ibm.com [9.149.38.185]) by mtagate1.uk.ibm.com (8.12.10/8.12.10) with ESMTP id j5GJ6LrU151378 for <newtrk@lists.uoregon.edu>; Thu, 16 Jun 2005 19:06:21 GMT
Received: from d06av02.portsmouth.uk.ibm.com (d06av02.portsmouth.uk.ibm.com [9.149.37.228]) by d06nrmr1407.portsmouth.uk.ibm.com (8.12.10/NCO/VER6.6) with ESMTP id j5GJ6LRP283964 for <newtrk@lists.uoregon.edu>; Thu, 16 Jun 2005 20:06:21 +0100
Received: from d06av02.portsmouth.uk.ibm.com (loopback [127.0.0.1]) by d06av02.portsmouth.uk.ibm.com (8.12.11/8.13.3) with ESMTP id j5GJ6KIR032536 for <newtrk@lists.uoregon.edu>; Thu, 16 Jun 2005 20:06:21 +0100
Received: from sihl.zurich.ibm.com (sihl.zurich.ibm.com [9.4.16.232]) by d06av02.portsmouth.uk.ibm.com (8.12.11/8.12.11) with ESMTP id j5GJ6KhO032531; Thu, 16 Jun 2005 20:06:20 +0100
Received: from zurich.ibm.com (sig-9-145-131-80.de.ibm.com [9.145.131.80]) by sihl.zurich.ibm.com (AIX4.3/8.9.3p2/8.9.3) with ESMTP id VAA84106; Thu, 16 Jun 2005 21:06:13 +0200
Message-ID: <42B1BBBD.20200@zurich.ibm.com>
Date: Thu, 16 Jun 2005 19:49:49 +0200
From: Brian E Carpenter <brc@zurich.ibm.com>
Organization: IBM
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.6) Gecko/20040113
X-Accept-Language: en, fr, de
MIME-Version: 1.0
To: Bruce Lilly <blilly@erols.com>
CC: NEWTRK <newtrk@lists.uoregon.edu>
Subject: Re: [newtrk] [Fwd: I-D ACTION:draft-carpenter-newtrk-twostep-00.txt]
References: <200569145723.330045@bbfujip7> <200506130850.42677.blilly@erols.com> <42AEE9BE.8080000@zurich.ibm.com> <200506151149.21768.blilly@erols.com>
In-Reply-To: <200506151149.21768.blilly@erols.com>
Content-Type: text/plain; charset="us-ascii"; format="flowed"
Content-Transfer-Encoding: 7bit
Sender: owner-newtrk@lists.uoregon.edu
Precedence: bulk
Reply-To: Brian E Carpenter <brc@zurich.ibm.com>

Bruce Lilly wrote:
> [trimming to address the most important issues]
> 
> On Tue June 14 2005 10:29, Brian E Carpenter wrote:
> 
>>>>-- clarifies that moving up from PS is explicitly tied
>>>>to interoperability
> 
> 
>>>2. a list of partially- or non-interoperating features (see below) is
>>>   not the same as actual interoperability.  The relevant "problem"
>>>   is that some people don't believe that mere markings on virtual
>>>   paper -- moreover with no regard for the factual accuracy of what
>>>   those markings represent -- constitute interoperability, and that
>>>   "problem" is "solved" by explicitly *removing* ties to actual
>>>   interoperability. 
>>
>>Nothing we write is the same as actual interoperability. The proposal
>>doesn't remove the ties between what we write and interoperability;
>>it just moves them around a bit.
> 
> 
> I believe that the change from BCP 9 (a.k.a RFC 2026) "'interoperable'
> means to be functionally equivalent or interchangeable components of
> the system or process in which they are used" and "The requirement
> for at least two independent and interoperable implementations applies
> to all of the options and features of the specification" to the draft
> under discussion "applies to each of the options and features of the
> specification considered individually" is much more than "moves them
> around a bit"; it changes the character of the requirement from assessing
> interoperability in terms of functional components to something which
> is considerably less meaningful w.r.t. interoperability as currently
> defined in BCP 9.

Bruce, I think we've had this conversation, but iirc our rules on
what interoperability means were intended very precisely to ensure
that the *specification* was interoperable with itself and not that
implementations were conformant. If someone wants to check and
certify conformance, that's fine but it's not IETF business.

My changes are specifically aimed at the target of checking
that all the features in the spec are interoperable (or not), not
at whether the implementations are complete.

The other change I suggested would allow us to state that a spec
has been proved interoperable except for certain specified features,
in order to avoid the need to update an RFC just to remove
those features. That's a point we can certainly argue, and I'd like
to hear other opinions.

> 
> 
>>>>-- removes the sense of struggling towards an unattainable
>>>>goal (yet another bureaucratic step to finally reach STD status)
>>>
>>>
>>>"unattainable" is too strong; as there are full standards, achieving
>>>that status clearly is attainable.
>>
>>So rarely that nobody cares, IMHO.
> 
> 
> If nobody cares, why are we here?  Let sleeping dogs lie. [I believe
> that "nobody" is too strong an assertion also]

Because I believe that by making the end of the standards track more
attainable, we reduce the cost/benefit ratio of investing effort
after reaching PS. And that will cause more specs to be advanced,
which is presumably an implied goal of this WG.
> 
> 
>>>The third bullet point addresses lack of enforcement of implementation
>>>or deployment experience, and further weakening the "requirement" for
>>>interoperable implementations doesn't address the lack of enforcement,
>>>it codifies it -- the problem is exacerbated.
>>
>>But my proposal doesn't weaken (or strengthen) that requirement.
>>It just moves it around a bit.
> 
> 
> See above.
> 
> 
>>>The penultimate bullet point discusses failure to carry out periodic
>>>reviews, and that appears to be "addressed" by the draft proposal in
>>>the same manner as the standards track process itself (throwing out
>>>the baby by eliminating the requirement).  As the IESG has been tasked
>>>with carrying out such reviews (BCP 9), it is difficult to see this
>>>as anything other than "appoints itself as both judge and jury on
>>>process changes".
>>
>>This isn't the IESG, it's newtrk, and newtrk is here to propose
>>changes. But, with my limited experience of the IESG workload, I
>>want to state as my opinion that the probability of any IESG operating
>>under current conditions actually implementing such a periodic
>>review is zero. What use is a provision that, as a matter of running
>>code, is certain not to be executed?
> 
> 
> Presumably the provision is there because the WG, the IETF (via Last
> Call review), and the IESG felt that it was an important step in
> managing the collection of specifications that constitute Standards
> Track RFCs.  IESG has apparently decided to second-guess that
> consensus decision by not carrying out the process specified in BCP 9.
> Not merely judge and jury, but (non-)executioner as well.

That was certainly the view at the time RFC 2026 was approved.
But we do have running code proof that the IESG doesn't have
cycles. If you want the job done, you have to provide the cycles -
which would imply a new committee, or offloading some other duties
from the IESG.
> 
> 
>>>The last bullet point discusses lack of maintenance, and the first
>>>bullet point of the section notes that specifications rarely progress;
>>>the practice of closing down WGs noted in the last bullet point has
>>>been recognized as a major contributing factor in the lack of
>>>progression, as discussed here and on the IETF discussion list. The
>>>draft proposal doesn't address that issue.
>>
>>It's somewhat decoupled, but I think the point is valid. On the one hand
>>we don't want self-perpetuating WGs, but on the other hand if we close
>>them down ASAP we lose any chance of a reasonably prompt interoperability
>>review. I agree something needs to change in this area.
> 
> 
> This is also related to the periodic reviews, since per BCP 9 the WG
> Chair is responsible for documenting interoperability in a report for
> advancement to DS.  No WG, no Chair, no report, no advancement.

Indeed. And no maintenance of ISDs, either.

>  
> 
>>>i.e. "lowering the quality bar"
>>
>>No, not in reality. Abolishing the bar that almost nobody tries to jump over
>>has no particular effect at the entry level, and most documents stay at the
>>entry level.
> 
> 
> There are other factors in lack of advancement (see above).  Changing the
> requirement for interoperability to something (difficult to concisely
> characterize exactly what that is) considerably less related to true
> interoperability certainly lowers the quality bar.

I dispute this, see above.

> Whether or not that's
> a de-facto process change vs. a de-jure process change is another matter,
> outlined in detail in my June 8 "Interoperability" message, and related
> to the "judge and jury" comments.

De facto, we run an approximation to RFC 2026 that matches available
human resources.
> 
> 
>>>>-- moving to Interoperable Standard specifically may *not* require a
>>>>   new RFC even if some features are listed as not shown to interoperate
>>>
>>>
>>>Apparently already the de-facto situation.  Usually there is a new RFC
>>>because of correction of errata, boilerplate changes, etc., but non-
>>>interoperating features has not prevented progression on the current
>>>Standards Track, at least recently.
>>
>>Hmm. Examples?
> 
> 
> We discussed (in the sense of BCP 9 section 6.5.2 second paragraph) the
> case of advancement of the 2476bis draft to DS despite lack of
> demonstrated functional component interoperability (See
> http://www.ietf.org/IESG/Implementations/rfc-2476-and-draft-gellens-submit-bis-implementation.txt
> and the spreadsheet matrix derived from it at
> http://users.erols.com/blilly/rfc-2476-gellens-implementation.sxc
> ).  There is also the advancement of a 2234 successor draft to DS
> despite the fact that *both* cited implementations (
> http://www.ietf.org/IESG/Implementations/RFC2234-implementation-report.txt
> ) explicitly fail to
> use the 2234 and draft provisions specifying CRLF line endings, one fails
> to accept binary, decimal, and hexadecimal literals using %B, %D, %X in
> spite of the fact that 2234 and the draft explicitly state case
> independence of the 'b', 'd', and 'x' (it could be argued that the
> specification is not sufficiently clear, since the case-independence part
> is in a separate section), one accepts (illegal) empty rules, one fails
> to accept a specified string literal of the form %x0D.0A.0B.  And there
> are specification features known to cause confusion which have not been
> used, and which could and should be removed as part of the fine-tuning
> of the specification.
> 

Interesting. So if we are operating that way, doesn't it tell us
something about what's realistically possible? It seems to me that
unless we can find a few million to fund an IETF Interoperability
Institute, it's very hard to do better. I really don't see an easy
answer here, other than dealing with what we know to be pragmatically
possible today.
> 
>>>Simply listing non-interoperating features does not constitute actual
>>>interoperability (see above and my June 8 list message).
>>
>>No, of course not - it would however avoid the need to republish an RFC
>>if only minor features had not been demonstrated.
> 
> 
> I believe that the provision for removal of features not demonstrated as
> interoperable removes features which may be used to exploit security
> vulnerabilities and enhances interoperability (in the BCP 9 functional
> component sense).

It doesn't follow that such features imply vulnerabilities (they might,
of course). But I really don't see a logical difference between
deleting text and re-issuing the RFC compared with an external
statement that the feature is removed.

> I further believe that that is a valuable step in
> refining a specification as it progresses from entry level through
> fine-tuned to cast in concrete.

And that is where we differ. I think it's a clerical matter without
deep significance.

> 
> 
>>>>(Remember that my starting position was to move to a one level
>>>>standards track.
>>>
>>>
>>>Effectively, we already have that via BCP RFCS, which (per BCP 9) are
>>>supposed to meet the requirements for a full Standard.  If -- as sometimes
>>>claimed -- the quality bar has been raised to require interoperating
>>>implementations at Standards Track entry, and if there is some
>>>hypothetical specification which meets the requirements for full Standard
>>>when the specification is prepared, there is a mechanism to give the
>>>specification full Standard status in one step via BCP.
>>
>>Well, BCPs are not meant for that as I read 2026.
> 
> 
> I agree that BCP 9 specifies other uses.  However, in the hypothetical
> case of a fully-fledged, truly interoperable specification with wide
> deployment experience in mission-critical applications, BCP *could* be
> used.  Indeed there is a proposal currently in Last Call for consideration
> as BCP which may well be more suitable as a Standards Track specification
> precisely because of fine-tuning and deployment issues.
>  
> 
>>>There are more than 2 questions of significance.  Obviously, one could
>>>ask:
>>>3. What significant problems would the proposed changes introduce?
>>
>>I haven't spotted any, but maybe I have rose-tinted glasses.
> 
> 
> I have mentioned the functional component interoperability issues, and
> the current provision for removal of unimplemented potential security
> loopholes/interoperability sandbars.

See above

> 
> 
>>>4. Do we really care "to make the Internet work better", i.e.
>>>   interoperability, or is the Mission Statement a hollow one?
>>>   Indeed, that question is fundamental; if we do care about quality --
>>>   making the Internet work better in reality -- then anything that
>>>   lowers the quality bar is counterproductive.
>>
>>I think that's arguable. You may make it work better by shipping something
>>in a big hurry despite imperfections.
> 
> 
> Well, I'm not sure that "a big hurry" is possible with current procedures
> unrelated to the Standards Track (e.g. IESG review *after* IETF Last Call
> rather than *concurrent* review, RFC Editor backlog), and the nature of
> the beast (getting e.g. security considerations right takes time and
> effort).  And if the "imperfections" impair interoperabilty or cause
> havoc, one may well make the Internet work much worse.

Indeed. Circumstances vary and in many cases you're quite correct.

> 
> 
>>But in any case, I don't agree 
>>that reducing the number of standards levels need have any adverse effect
>>on the quality of our specs.
> 
> 
> Agreed that all else being equal (particularly the quality-assurance
> provisions w.r.t. interoperability and large-scale deployment), the number
> of steps need not be related to quality.  However, if weakening or removing
> those QA provisions is specified as (part of) the reduction of steps, then
> clearly quality may suffer.

Yes, but what we are seeing I think is that BCP 9 sets a standard that
we can't meet in practice, and that tends to make people give up.

    Brian (thinking it's time I put my AD hat on...)


newtrk resources:_____________________________________________________
web user interface: http://darkwing.uoregon.edu/~llynch/newtrk.html
mhonarc archive: http://darkwing.uoregon.edu/~llynch/newtrk/index.html