Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

mrex@sap.com (Martin Rex) Wed, 09 January 2013 00:36 UTC

Subject: Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]
In-Reply-To: <CAF4+nEE9uq_811ZMX4ShCqoprL_FZFhY4XW9W2U0TZmpf86uTQ@mail.gmail.com>
To: Donald Eastlake <d3e3e3@gmail.com>
Date: Wed, 09 Jan 2013 01:36:09 +0100
Message-Id: <20130109003609.2BFF41A445@ld9781.wdf.sap.corp>
From: mrex@sap.com
Cc: IETF Discussion <ietf@ietf.org>

 
>John Day <jeanjour@comcast.net> wrote:
>>
>> The reasons have been discussed or at least alluded to previously in this
>> thread.  The short answer is we have been there and done that: 30 years ago.
>>
>> All those tools were developed and used successfully in the 80s.
>> I know of cases where doing the formal specification alongside the
>> design phase caught lots of problems.   However, there are two
>> central problems:  First, in general, programmers are lazy and just
>> want to code. ;-)  Using a formal method is a lot more work.
>> Second, the complexity of the formal statements that must be written
>> down is greater than the code.  So there is a higher probability
>> of mistakes in the formal description than in the code.
>> Admittedly, if those statements are made, one has a far greater
>> understanding of what you are doing.

I believe the problems with formal logic are that it is difficult both
to write and to read/understand, that it is hard to verify that the
chosen "axioms" actually reflect (and lead to) the desired
behaviour/outcome, and that this is compounded by the relative scarcity
of suitable tools, the complexity of the underlying theory and tools,
and the tools' resulting lack of "intuitive usability".


>>
>> Once you have both, there is still the problem that if a bug or ambiguity
>> shows up,

For any formal proof system worth its salt, this can be 100% ruled out,
since the proof system will emit a 100% bug-free implementation of the
spec in the programming language of your choice as a result/by-product
of the formal proving process.
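
To illustrate the idea (this is of course nowhere near a real theorem
prover; the toy protocol, state and event names below are invented for
illustration): if the spec is written as machine-readable data, it can
be mechanically checked for conflicting rules and obvious gaps, and
then executed directly, so the running code is a by-product of the
checked spec rather than a separate human re-interpretation of it:

  # Toy sketch only -- nowhere near a real proof system.  Protocol,
  # state and event names are invented for illustration.
  STATES = {"START", "HELLO_SENT", "ESTABLISHED", "CLOSED"}
  EVENTS = {"send_hello", "recv_hello_ack", "recv_close"}

  # The "spec": a flat list of rules (state, event, next_state).  A list
  # can contain conflicting rules, which is exactly what the checker
  # rejects.
  RULES = [
      ("START",       "send_hello",     "HELLO_SENT"),
      ("HELLO_SENT",  "recv_hello_ack", "ESTABLISHED"),
      ("HELLO_SENT",  "recv_close",     "CLOSED"),
      ("ESTABLISHED", "recv_close",     "CLOSED"),
  ]

  def check_spec(rules):
      """Reject unknown names and conflicting rules; report possible gaps."""
      table = {}
      for state, event, nxt in rules:
          if state not in STATES or nxt not in STATES or event not in EVENTS:
              raise ValueError("unknown name in rule %r" % ((state, event, nxt),))
          if (state, event) in table and table[(state, event)] != nxt:
              raise ValueError("conflicting rules for %r" % ((state, event),))
          table[(state, event)] = nxt
      for state in STATES - {"CLOSED"}:          # crude "gap" report
          for event in EVENTS:
              if (state, event) not in table:
                  print("note: %r does not specify %r" % (state, event))
      return table

  def run(table, events, state="START"):
      """Execute the checked spec directly; the code cannot diverge from it."""
      for event in events:
          if (state, event) not in table:
              raise ValueError("protocol error: %r unexpected in %r" % (event, state))
          state = table[(state, event)]
      return state

  TABLE = check_spec(RULES)
  print(run(TABLE, ["send_hello", "recv_hello_ack", "recv_close"]))  # -> CLOSED

Real proof systems are vastly more capable than this table trick, but
the principle is the same: the artifact that gets checked is the
artifact the implementation is mechanically derived from.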


>>
>>  neither the code nor the formal spec nor a prose spec can be taken
>> be the right one.  What is right is still in the heads of the authors.

But maybe you're not really thinking of a defect in the implementation
(commonly called a "bug"), but rather of a "gap" in the specification
that leads to unintended or undesired behaviour of a 100% bug-free
implementation of your spec.

(see Section 4 of this paper for an example:
  http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1827
)


Donald Eastlake wrote:
>
> Another problem is maintenance. Protocols change. Having to maintain a
> formal specification is commonly at least an order of magnitude more
> effort than maintaining a prose description.

Definitely.  Discussing protocols at the level of a formal
specification language would be quite challenging (typically impossible
in an open forum; I believe it would be difficult even in a small and
well-trained design team).


> 
> And, as has been mentioned before, I'd like to emphasize that the IETF
> experience and principal is that, if you want interoperation,
> compliance testing is useless.

Ouch.  I believe this statement is misleading or wrong.

Compliance testing is VERY important, not useless.

Compliance testing would actually be perfectly sufficient iff the spec
were formally proven to be free of conflicts and ambiguities among
specified protocol elements -- plus a significant effort spent on
ensuring there were no "gaps" in the specification.
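
To make the distinction concrete, here is a minimal sketch of what I
mean by compliance testing (the implementation, the extension names and
the spec rules are all invented for illustration): the tests are
derived from the spec's normative statements and run against a single
implementation, which is why they can also cover corners that no
particular interop partner happens to exercise:

  # Hypothetical compliance tests, illustrative only: each test encodes
  # a MUST taken from an imagined spec and runs against one
  # implementation in isolation -- no second implementation needed, so
  # corners that interop testing never reaches are still exercised.
  import unittest

  class ToyImplementation(object):
      """Stand-in for the implementation under test (invented for this sketch)."""

      KNOWN_EXTENSIONS = {"sni", "heartbeat"}

      def parse_hello(self, extensions):
          # Imagined spec rule: unknown extensions MUST be ignored, so
          # that future extensions do not break existing deployments.
          if not isinstance(extensions, dict):
              raise ValueError("malformed extension block")
          return dict((k, v) for k, v in extensions.items()
                      if k in self.KNOWN_EXTENSIONS)

  class ComplianceTests(unittest.TestCase):
      def setUp(self):
          self.impl = ToyImplementation()

      def test_unknown_extension_is_ignored_not_rejected(self):
          parsed = self.impl.parse_hello({"sni": "example.org",
                                          "shiny_new_ext": "x"})
          self.assertNotIn("shiny_new_ext", parsed)  # ignored ...
          self.assertIn("sni", parsed)               # ... known ones kept

      def test_malformed_input_is_rejected(self):
          with self.assertRaises(ValueError):
              self.impl.parse_hello("garbage")       # not an extension block

  if __name__ == "__main__":
      unittest.main()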

As it turns out, however, a significant number of implementations will
be created by humans interpreting natural-language specifications,
rather than as 100% bug-free by-products of a formal proof tool, and
often, interop with such buggy, and often spec-incompliant,
implementations is desirable and necessary, since they make up a
significant part of the installed base.


>
> The way to interoperation is interoperation testing between
> implementations and, to a first approximation, the more and the
> earlier you do interoperation testing, the better.


The #1 reason for interop problems, and the biggest road-block to the
evolution of protocols, is the wide-spread lack of compliance testing
(on the flawed assumption that it is useless), combined with a focus on
black-box interop testing of the intersection of two subsets of
protocol features.

The problem with interop testing is that it really doesn't provide
much of a clue, and the results can _not_ be extrapolated to features
and areas that were not interop tested (typically the other 90% of the
specification, many optional features and most of the extensibility).


The near-complete breakage of TLS protocol version negotiation is a
result of exactly this narrow-minded interop testing combined with a
complete lack of compliance testing.
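
The negotiation rule itself is trivial: a server that receives a
ClientHello offering a protocol version higher than the highest version
it supports is supposed to reply with the highest version it does
support (see RFC 5246, Appendix E), not to abort the handshake.  A
small illustrative sketch (the constants and function names are
invented; this is not real TLS code) of the compliant rule next to the
widely deployed intolerant behaviour:

  # Illustrative sketch only, not real TLS code.  Versions use the TLS
  # wire encoding (major, minor), so (3, 1) is TLS 1.0 and (3, 3) is
  # TLS 1.2.
  SERVER_HIGHEST = (3, 1)   # imagine a server that only implements TLS 1.0

  def compliant_server_version(client_version):
      # Spec behaviour: a ClientHello offering a higher version than we
      # support is answered with the highest version we do support.
      return min(client_version, SERVER_HIGHEST)

  def intolerant_server_version(client_version):
      # Widely deployed broken behaviour: anything newer than what we
      # know is treated as fatal -- which is what pushed clients into
      # insecure "retry with a lower version" downgrade dances.
      if client_version > SERVER_HIGHEST:
          raise ConnectionError("handshake_failure: unrecognized version")
      return client_version

  print(compliant_server_version((3, 3)))    # -> (3, 1): interoperates fine
  print(intolerant_server_version((3, 3)))   # -> raises: negotiation broken

A compliance test would catch the second behaviour immediately; interop
testing between two implementations that both top out at the same
version never will.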


If done right, pure compliance testing can go a long way towards
providing interop.  The only areas where real interop testing is
necessary are those parts of the protocol where the spec contains
conflicts/inconsistencies that implementors didn't notice or ignored,
or where there are widespread inconsistencies between implementations
and the specification, often created by the careless, premature
shipping of a defective implementation and the de-facto creation of an
installed base that is too large to ignore.


-Martin