Re: [TLS] SCSV vs RI when both specified. Was: Updated draft

Martin Rex <> Mon, 21 December 2009 15:16 UTC

From: Martin Rex <>
Message-Id: <>
Date: Mon, 21 Dec 2009 16:16:01 +0100
In-Reply-To: <> from "Blumenthal, Uri - 0662 - MITLL" at Dec 20, 9 07:53:56 pm

Blumenthal, Uri - 0662 - MITLL wrote:
> Please indulge me: the one short compelling reason why we don't want
> to say "when two signals are present use this one and ignore the other"
> instead of "when two signals are present - abort connection" - is...?

I do not think such a reason exists.

The justification that has been given so far is
"because that is what the document says".

If you look at the document from the perspective of formal correctness
proving, then this justification is itself based on a defect in the document.
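To make the two options from the question above concrete, here is a minimal sketch of the two policies. The names (SCSV, renegotiation_info, handshake_failure) follow the draft's terminology, but the functions themselves are hypothetical illustrations, not code from any real TLS stack:

```python
# Hypothetical sketch contrasting the two policies for a ClientHello
# that carries both renegotiation signals. Not from any real TLS stack.

SCSV = "TLS_EMPTY_RENEGOTIATION_INFO_SCSV"
RI = "renegotiation_info"

def policy_prefer_one(signals):
    """'Use this one and ignore the other': either signal alone, or
    both together, simply means the peer supports secure renegotiation."""
    if RI in signals or SCSV in signals:
        return "secure_renegotiation"
    return "legacy"

def policy_abort_on_both(signals):
    """'When two signals are present - abort': treat the redundant
    pair as a protocol error, as the document currently requires."""
    if RI in signals and SCSV in signals:
        raise ValueError("handshake_failure: both signals present")
    return policy_prefer_one(signals)
```

The point of the thread is that both policies are implementable; only the document's text, not any security argument, forces the second one.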

> >
> OK. The Karlsruhe server times out on me, so no chance to get enlightened
> by checking that thread.

That URL refers to a publication called
  "Would you ever risk a non-monomorphic specification?"
It is about formal correctness proving of specifications
and implementations.

I am _not_ an expert in the field of formal correctness proving,
nor in theoretical math; I am more of a practical type of person.

But I have a good understanding of some of the principles and
methodologies involved.  In 1996 I shared an office for 5 months with
a student from the University of Karlsruhe who did his master's
thesis at our company, developing a formal specification for the
authorization model in our application software and doing a
formal correctness proof, using a tool called the
"Karlsruhe Interactive Verifier (KIV)", which was also used for
the above document.  This resulted in a number of fixes to our
design and implementation.

I found the underlying concepts and methodologies so interesting
and intriguing that I started implementing a "generic" gssapi-mechanism
in order to locate inconsistencies and ambiguities in GSS-API v2
during the following 2 years, and to get them fixed in the
IETF CAT WG before GSS-API v2 was published.

(The better part of this generic implementation was re-used to build
 my gssntlm.dll/gsskrb5.dll wrappers for the Microsoft NTLM and
 Kerberos 5 SSPs, to provide a GSS-API v2 compliant API.)

The methodology of the KIV tool is to create an implementation
of the given specification; it asks the operator to manually
("interactively") resolve all ambiguities and conflicts that prevent
a full implementation of the spec.  The KIV tool will actually
output a 100% bug-free implementation of the spec (back then you
could ask for C or C++ output).
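A toy illustration of the kind of ambiguity such a tool forces you to resolve, a non-monomorphic specification whose postcondition admits more than one conforming result (all names here are hypothetical, for illustration only):

```python
# A non-monomorphic specification: the postcondition below does not
# determine the result uniquely, so two correct implementations can
# disagree -- exactly the kind of ambiguity an interactive verifier
# would ask the operator to resolve.

def spec_holds(s, result):
    """Spec: pick(s) must return some element of the non-empty set s."""
    return result in s

def pick_min(s):
    return min(s)   # conforms to the spec

def pick_max(s):
    return max(s)   # also conforms, yet disagrees with pick_min
```

Both implementations satisfy the spec, yet an interoperating peer that assumed one behavior would break against the other; the "two signals present" question in this thread is the same kind of gap at the protocol level.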

Probably the biggest and most time-consuming part of tool-based
formal verification is the translation from a specification that
mere humans understand into predicate logic (and backwards for
all the resolved problems).

What formal proving does _not_ provide is coming up with
specifications that are actually useful and do what you want.

The most intriguing part is that the methodology not only
works for the spec -> code transformation; it can also
be used for the solution-architecture -> spec transformation.