Re: [arch-d] Centralization or diversity

John C Klensin <> Thu, 09 January 2020 01:23 UTC

Date: Wed, 08 Jan 2020 20:23:14 -0500
From: John C Klensin <>
To: John Day <>
cc: Andrew Campling <>, Toerless Eckert <>

Let me add three small things to John's and Tony's observations
and experience, partially from other areas of standards...

* When there is a test suite that is more or less official
(typically by designation by the SDO that created the standard,
by some related body, or simply because "everyone" is using
it), there is considerable risk of its becoming a substitute
for the standard.  In other words, "conform to the standard"
turns out, in practice, to mean "conform to the test suite"
even, or especially, if the two don't quite align.  John's
suggestion of having a collection of independently developed
test suites and picking 2 or 3 can mitigate that problem
significantly but, if the chosen test suites are completely
consistent with each other, then one is back to a de facto
official test suite.

* Empirical experience that parallels the above, and
observations others have made, are the reasons why the
Internet was built more on reference implementations (good
examples, but not intrinsically authoritative) and
interoperability testing (if problems show up, they are issues
to be investigated, understood, and repaired, not the source
of extended debates about who was most correct or most
conforming) than on test suites.

* John's observation about applying the robustness
("Postel's") principle to systematic testing has proven very
useful and is something we should keep in mind when people,
especially people with less experience, claim that principle
has outlived its usefulness.

* While the PICS effort, and "PICS proforma"s, are good
examples in this area for networking-related standards, they
differ from many other examples because they had the character
of what we would call Applicability Statements that were often
expressed in test suites, i.e., specifications of what
constituted conformance such that, if something was not in the
PICS, it didn't count.  In broader IT standards areas, the
all-time poster child was probably the Ada programming
language, and there is a case to be made that its longevity
and brilliant success when not mandated by law or contracts
were largely a result of implementations designed around
passing the test suite rather than, e.g., actually working.
Neither is an example I think the IETF should follow unless we
have a death (or at least obscurity) wish.
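The robustness principle discussed above can be illustrated
with a small sketch. The "Name: value" header format and the
function names below are invented for the example, not taken
from any protocol discussed in this thread:

```python
# A minimal sketch of the robustness ("Postel's") principle:
# be conservative in what you send, liberal in what you accept.
# The header format here is hypothetical, for illustration only.

def parse_header(line):
    """Liberal receiver: tolerate stray whitespace and odd case,
    but refuse to guess when a line is structurally unrecognizable."""
    if ":" not in line:
        return None  # not recoverable; flag it rather than invent a meaning
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

def emit_header(name, value):
    """Conservative sender: always emit one canonical form."""
    return "%s: %s\r\n" % (name.strip().title(), value.strip())

# A sloppy peer's output still parses cleanly...
assert parse_header("  Content-TYPE :text/plain  ") == ("content-type", "text/plain")
# ...while our own output is strictly canonical.
assert emit_header("content-type", " text/plain") == "Content-Type: text/plain\r\n"
# Unrecognizable input is rejected, not silently repaired.
assert parse_header("garbage line") is None
```

The asymmetry is the point: a tester applying the principle
treats a tolerant receiver as healthy, not as non-conforming.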


--On Wednesday, January 8, 2020 16:49 -0500 John Day
<> wrote:

> FWIW, there were also problems with keeping the test suites
> up-to-date with the errata.  Bugs could be fixed in the
> standards and incorporated into implementations faster than
> the people developing the test suites could keep up (or wanted
> to). Of course the fixes were in the implementations before
> they were in the standards. And what was worse, when pointed
> out to them that their test suite didn't conform to the
> spec, they didn't care. They still insisted they were right.
> For some standards, it was possible to develop utilities that
> did a specific function and used part of a standard to do that
> function but didn't need the rest of the standard. But the
> people running the test suites couldn't deal with it. If it
> didn't implement the entire protocol, it didn't pass. The
> utility couldn't cause any of the rest of the protocol to be
> generated. It conformed to the standard for everything it did.
> They were using the standard as it had been intended. 
> But the testers were effectively bean-counters and totally
> stupid.
> Conformance fell under the ANSI committee I chaired and I
> remember writing them (some military base in Arizona whose
> name I have thankfully forgotten) nasty letters telling them
> to get their act together or get out. They were doing more
> harm than good. Wish we had known about the IS-IS issues as
> well to add to the list.
> One needs some way to test implementations before trying them
> live.  But one has to treat them (and it has to be instilled
> in the testers) with the understanding that the purpose is to
> help: obey Postel's rule, find problems, and then determine
> whether it is the tests, the 'standard', or the new
> implementation that is at fault. And above all, if it isn't
> externally visible, it is none of the tester's business.  Even
> if there is a 'formal specification', one can't be sure that a
> bug is in the
> prose, the formal spec, the tests, or the implementation. They
> are all suspect. (Often formal specifications are more complex
> than the code. And we know what that means!)
> Years before the standards issues arose, I did a survey of
> testing methods for our own use. I expected to find that two
> or three were pretty effective, pick one or two.  What I found
> were 20 or 25 that were all pretty effective and the advice
> was pick 2 or 3 and it didn't really matter which ones as
> long as they were different. (Not the result I was looking
> for.)  ;-)
> Take care,
> John
>> On Jan 8, 2020, at 15:58, wrote:
>>> Yes, I was thinking of ISO PICS. But I lost track of those
>>> ISO standards since the '90s. I guess I would have to see a
>>> successful and still currently used standard and how its
>>> PICS does or does not help. There just aren't a lot of
>>> those ISO specs left in wide deployment, right? X.500?
>>> Maybe I can get more insight from the security community.
>> You need not go that far.  Remember IS-IS?  Still thriving.  
>> And yes, my experience with PICS and test suites comes from
>> that.
>>>> What it does do is to encourage people to write
>>>> "conformance test suites".  These then get sold to
>>>> unsuspecting customers and end-users.  Unfortunately, the
>>>> quality of such suites is so low that you spend way, way,
>>>> way more time debugging the test suite and you never find
>>>> bugs in your implementation.
>>> This is just the worst possible outcome, and I am sure one
>>> can learn from those bad experiences. Just think of taking
>>> a more organized step towards being able to collect
>>> protocol implementation information from the industry.
>>> Right now every WG that wants to raise the level of an IETF
>>> protocol, e.g. to(wards) full standard, is coming up with a
>>> questionnaire in an ad-hoc fashion (we're just doing this
>>> in Multicast for IGMP/MLD).
>> You can do that if you like, but the one proven, effective
>> method is interoperability testing.
>> Running code trumps questionnaires, conformance test suites,
>> theory, and mandate.
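The pairwise interoperability testing Tony describes can be
sketched in miniature. The two "implementations" and the toy
integer wire format below are invented stand-ins; in a real
matrix each entry would be an independently written protocol
stack:

```python
# Toy sketch of pairwise interoperability testing: every sender
# is exercised against every receiver, and a mismatch is an issue
# to investigate, not a verdict about who "conforms".
# Both stacks here encode an integer as decimal ASCII text.

def impl_a_encode(n):
    return str(n).encode("ascii")

def impl_a_decode(wire):
    return int(wire.decode("ascii"))

def impl_b_encode(n):
    return str(n).encode("ascii")

def impl_b_decode(wire):
    # Slightly more liberal receiver: tolerates surrounding whitespace.
    return int(wire.decode("ascii").strip())

IMPLEMENTATIONS = {
    "A": (impl_a_encode, impl_a_decode),
    "B": (impl_b_encode, impl_b_decode),
}

def interop_matrix(sample=42):
    """Run every sender against every receiver; return failing pairs."""
    issues = []
    for s_name, (encode, _) in IMPLEMENTATIONS.items():
        for r_name, (_, decode) in IMPLEMENTATIONS.items():
            if decode(encode(sample)) != sample:
                issues.append((s_name, r_name))
    return issues

assert interop_matrix() == []  # all sender/receiver pairs agree
```

Failures in the matrix point at a spec ambiguity or an
implementation bug, either of which is worth understanding.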
>>> My point was: if the only Murphy you have is competitive
>>> pressure, it does not work if there is only limited
>>> competition, such as a total of maybe no more than 3
>>> competing national infrastructures.
>> And countries that stifle competition inflict vendor lock on
>> themselves.
>>> See above. The best I think we (IETF) can do is to educate
>>> better about this with appropriate documents. Certainly
>>> some additional protocol work will result from that. I for
>>> one had simple slides 10 years ago on how to fix up
>>> non-dual-plane networks to be dual-plane. And maybe we can
>>> come up with questionnaires to get better numbers from
>>> actual deployments (the PICS discussion).
>> The IETF's job is not education. It does not work, as the
>> education is neither requested nor welcome. The IETF's job
>> is standards, and it needs to focus on that.
>>> Figuring out ideas for a complete ecosystem of monetization
>>> and deployment between competition and regulation is better
>>> left for an ongoing bar-bof with emphasis on bar.
>> I concur. Let's stop fooling ourselves into thinking that
>> anything that the IAB/IETF writes has any impact on
>> regulation or economics.
>>> Actually one of the interesting conclusions was that
>>> dual-plane in many cases is really free but deployments
>>> often just don't fully utilize it.
>> I'd agree that it's close to free.  If you accept that
>> you need a dual-plane approach for resiliency, then adding a
>> second vendor to the mix is relatively easy. There's
>> additional cost because of decreased purchasing volume.
>> There's additional costs in management and operations.
>> However, as compared to the costs of an unnecessary outage,
>> it's still trivial.
>> Tony
>> _______________________________________________
>> Architecture-discuss mailing list