Re: [arch-d] Centralization or diversity

Toerless Eckert <> Thu, 09 January 2020 09:45 UTC

Date: Thu, 9 Jan 2020 10:45:37 +0100
From: Toerless Eckert <>
To: John Day <>
Cc:, Andrew Campling <>,

Good insights about the outcome of PICS.

To clarify, I am primarily interested in seeing whether we can do something
to better support static network planning and protocol evolution.

Something that is more a self-declaration than a (broken) conformance
suite "verified" testing result. Something that must always be
followed by appropriate testing (interop, deployment) depending
on the goal, but that can help us get to that step faster.
Something to help poor customers avoid horrendously repetitive RFP
questions (does your IETF FOO protocol on your product support
the optional BAR feature?).

One could think of another axis of information in our YANG models:
a subset of "implementation defined" behavior whose values can be
statically exposed without access to a live system, and hence be
published and then used in network planning, for example
(or auto-collected by a curious WG trying to progress some
standard to full Internet Standard ;-)).
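To make the idea concrete, here is a purely hypothetical YANG sketch of such a static self-declaration; every name in it (module, container, leaves) is invented for illustration and is not an existing IETF model:

```yang
// Hypothetical sketch only: all identifiers here are invented to
// illustrate the idea of a statically publishable self-declaration
// of optional-feature support; this is not an existing IETF module.
module example-implementation-conformance {
  yang-version 1.1;
  namespace "urn:example:implementation-conformance";
  prefix exic;

  container protocol-support {
    config false;
    description
      "Per-feature support claims that a vendor could publish as a
       document, without access to a live system.";
    list feature {
      key "name";
      leaf name {
        type string;
        description "Name of the optional protocol feature.";
      }
      leaf supported {
        type boolean;
        description "Whether the implementation claims support.";
      }
      leaf reference {
        type string;
        description "Spec section defining the optional feature.";
      }
    }
  }
}
```

An instance document of such a module could then answer the repetitive RFP question ("does FOO support optional BAR?") once, in machine-readable form.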

I was interested in the type of information I remember seeing
in the '90s in PICS; I am horrified by the process I am learning
about here in the thread ;-)


On Wed, Jan 08, 2020 at 04:49:46PM -0500, John Day wrote:
> FWIW, there were also problems with keeping the test suites up-to-date with the errata.  Bugs could be fixed in the standards and incorporated into implementations faster than the people developing the test suites could keep up (or wanted to). Of course the fixes were in the implementations before they were in the standards. And what was worse, when pointed out to them that their test suites didn't conform to the spec, they didn't care. They still insisted they were right.
> For some standards, it was possible to develop utilities that did a specific function and used part of a standard to do that function but didn't need the rest of the standard. But the people running the test suites couldn't deal with it. If it didn't implement the entire protocol, it didn't pass. The utility couldn't cause any of the rest of the protocol to be generated. It conformed to the standard for everything it did. They were using the standard as it had been intended.
> But the testers were effectively bean-counters and totally stupid.
> Conformance fell under the ANSI committee I chaired and I remember writing them (some military base in Arizona whose name I have thankfully forgotten) nasty letters telling them to get their act together or get out. They were doing more harm than good. Wish we had known about the IS-IS issues as well to add to the list.
> One needs some way to test implementations before trying them live.  But one has to treat them (and it has to be instilled in the testers) that the purpose is to help:  obey Postel's rule, find problems, and then determine whether it is the tests, the "standard" or the new implementation that is at fault. And above all if it isn't externally visible, it is none of the tester's business.  Even if there is a "formal specification", one can't be sure that a bug is in the prose, the formal spec, the tests, or the implementation. They are all suspect. (Often formal specifications are more complex than the code. And we know what that means!)
> Years before the standards issues arose, I did a survey of testing methods for our own use. I expected to find that two or three were pretty effective, pick one or two.  What I found were 20 or 25 that were all pretty effective, and the advice was pick 2 or 3 and it didn't really matter which ones as long as they were different. (Not the result I was looking for.)  ;-)
> Take care,
> John
> > On Jan 8, 2020, at 15:58, wrote:
> > 
> > 
> >> Yes, I was thinking of ISO PICS. But I lost track of those ISO standards
> >> since the '90s. I guess I would have to see a successful and still
> >> currently used standard and how its PICS do or do not help. Just
> >> not a lot of those ISO specs left in wide deployments, right?
> >> X.500? Maybe I can get more insight from the security community.
> > 
> > 
> > You need not go that far.  Remember IS-IS?  Still thriving.  
> > 
> > And yes, my experience with PICs and test suites comes from that.
> > 
> > 
> >>> What it does do is to encourage people to write "conformance test suites".  These then get sold to unsuspecting customers and end-users.  Unfortunately, the quality of such suites is so low that you spend way, way, way more time debugging the test suite and you never find bugs in your implementation.
> >> 
> >> This is just the worst possible outcome, and I am sure one can learn
> >> from those bad experiences. Just think of taking a more organized
> >> step towards being able to collect protocol implementation information
> >> from the industry. Right now every WG that wants to raise the level of
> >> an IETF protocol, e.g. to(wards) full standard, is coming up with a
> >> questionnaire in an ad-hoc fashion (we're just doing this in Multicast
> >> for IGMP/MLD). 
> > 
> > 
> > You can do that if you like, but the one proven, effective method is interoperability testing.
> > 
> > Running code trumps questionnaires, conformance test suites, theory, and mandate.
> > 
> > 
> >> My point was: if the only Murphy you have is competitive pressure, it does not work
> >> if there is only limited competition, such as a total of maybe no
> >> more than 3 competing national infrastructures. 
> > 
> > 
> > And countries that stifle competition inflict vendor lock on themselves.
> > 
> > 
> >> See above. The best I think we (IETF) can do is to educate better about
> >> this with appropriate documents. Certainly some additional protocol work
> >> will result from that. I for one had simple slides 10 years ago on how to fix up
> >> non-dual-plane networks to be dual-plane. And maybe we can come up with
> >> questionnaires to get better numbers from actual deployments (the
> >> PICS discussion).
> > 
> > 
> > The IETF's job is not education. It does not work, as the education is not requested
> > and not welcome. The IETF's job is standards, and it needs to focus on that.
> > 
> > 
> >> Figuring out ideas for a complete ecosystem of monetization and
> >> deployment between competition and regulation is better left for
> >> an ongoing bar-bof with emphasis on bar.
> > 
> > 
> > I concur. Let's stop fooling ourselves into thinking that anything that the IAB/IETF writes has any impact
> > on regulation or economics.
> > 
> > 
> >> Actually one of the interesting conclusions was that dual-plane in
> >> many cases is really free but deployments often just don't fully
> >> utilize it.
> > 
> > 
> > I'd agree that it's close to free.  If you accept that you need a dual-plane approach for resiliency, then
> > adding a second vendor to the mix is relatively easy. There's additional cost because of decreased 
> > purchasing volume. There are additional costs in management and operations.
> > 
> > However, as compared to the costs of an unnecessary outage, it's still trivial.
> > 
> > Tony
> > 
> > _______________________________________________
> > Architecture-discuss mailing list
> >
> >