Re: [netmod] backward compatibility requirements in draft-verdt-netmod-yang-versioning-reqs-00

Robert Wilton <> Thu, 26 July 2018 10:19 UTC

To: Andy Bierman <>, Christian Hopps <>, NetMod WG <>
From: Robert Wilton <>
Date: Thu, 26 Jul 2018 11:19:07 +0100

On 25/07/2018 17:59, Juergen Schoenwaelder wrote:
> On Wed, Jul 25, 2018 at 05:25:32PM +0100, Robert Wilton wrote:
>> One alternative way to build a robust client would be to have an internally
>> defined schema by the client (perhaps based on open models, or perhaps a
>> particular version of vendor models, possibly with some deviations, or their
>> own model) which they can then map onto one or more per device schema.  So
>> within a particular schema (internal or device specific) then (module name,
>> path) uniquely resolves to a data node with particular properties; but
>> between different schema (e.g. for each different device) then the same
>> (module name, path) pair is allowed to resolve to a different uniquely
>> defined property.
>> A layer of mapping is performed between the internal schema and the device
>> schema for whichever devices/sw-versions it needs to interoperate with.  The
>> more closely the two schema align, the easier the mapping is to achieve.
>> This is also why, ultimately, that determining whether changes are backwards
>> compatible or not needs to be done at the schema level (which takes into
>> account deviations and features), and also the per data node level.
> This is what NMSes used to do and, well, it is still nasty work. But I
> assume people think about much simpler clients, the scripts that often
> glue things together.
>>> What really changes here is the adaptation process: Today, a client
>>> will not bother to use a new namespace (a new version) unless it was
>>> programmed to do so (opt-in). The proposed new versioning scheme
>>> effectively means that a client will automatically use new versions
>>> until the client got told to be careful where necessary (and I assume
>>> that in most cases this means until the client failed and then got
>>> fixed, an opt-out process).
>> My main concern with using module name for major version changes is that it
>> forces a name change for each data node in that module, regardless of
>> whether that node has actually changed.  This inherently feels like the
>> wrong thing to do.  If a data node hasn't changed then ideally you want it to
>> remain unchanged on the same path.  So the alternative way that RFC 7950
>> supports today is to keep the module name the same but introduce new names
>> for all identifiers that have been changed in a non backwards compatible
>> way, making use of status deprecated/obsolete.
> Yes, you can decide between the two options today. If only leaf foo
> needs a non-backwards-compatible update, you create a new leaf
> bar. If the majority of leafs in a module need a non-backwards
> compatible update, you create a new module.
But my main objection is that changing the name of the module forces a 
name change for all data nodes in the module, even for those that have 
not changed.  I'm just not convinced that this is a good thing to do.
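
To make the two options concrete, the RFC 7950 approach of fixing a
definition in place might look like the following sketch (the module,
leaf names, and types are hypothetical, purely for illustration):

```yang
module example-system {
  namespace "urn:example:system";
  prefix ex;

  // Original (broken) definition, retained so existing clients
  // keep working against the same (module, path).
  leaf session-timeout {
    type uint16;          // turned out to be too small in practice
    status deprecated;
    description "Superseded by session-timeout-msec.";
  }

  // Fixed definition introduced under a new identifier; the module
  // name, and hence the paths of all unchanged data nodes, stay stable.
  leaf session-timeout-msec {
    type uint32;
    units "milliseconds";
  }
}
```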

I think that there is an assumption here that if the module is published 
with a new name then servers can implement both old and new modules at 
the same time.  But I'm not sure that vendors will actually do this, nor 
will this necessarily work that well in practice.

E.g. consider a non-backwards-compatible change to ietf-interfaces 
that it is decided needs a new module name.  Not only do we need 
ietf-interfaces-v2, but we also need v2 versions of all 70 modules that 
augment ietf-interfaces, either directly or indirectly.  Now if 
someone makes a get request without a sufficient filter then they will 
receive twice as much data.
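
To sketch the knock-on effect (module and leaf names hypothetical):
every augmenting module has to be reissued just to retarget its
augment, even though its own definitions are unchanged:

```yang
// Before: augments the original module.
module example-if-extensions {
  namespace "urn:example:if-extensions";
  prefix ifx;
  import ietf-interfaces { prefix if; }

  augment "/if:interfaces/if:interface" {
    leaf widget-count { type uint32; }
  }
}

// After: an otherwise-identical -v2 module is needed purely so the
// augment can point at ietf-interfaces-v2 instead.
module example-if-extensions-v2 {
  namespace "urn:example:if-extensions-v2";
  prefix ifx2;
  import ietf-interfaces-v2 { prefix if; }

  augment "/if:interfaces/if:interface" {
    leaf widget-count { type uint32; }
  }
}
```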

>> However, it isn't clear to me that handling the old and new values within
>> the same schema is always a good thing to do.  I generally prefer the idea
>> of doing version selection (if anyone chooses to support this) so that a
>> given value is only reported once.
> I wonder how this will ever work with vendor modules, ietf modules,
> ieee modules, openconfig modules, etc. all overlapping, sometimes more
> and sometimes less. You would have to create distinct views onto the
> same instrumentation if you do not want to have overlapping values.
Yes, I think that they have to be kept as discrete views.  Or perhaps 
the server is configured as to which external management model it should 
use.  Even with discrete views there are problems due to different 
hierarchies or list keys.  There is no perfect technical solution here, 
although over time I think that things will get better as management 
models slowly converge.

But representing multiple models in one combined view doesn't really 
seem to be a good idea.

I think that some aspects of versioning pose the same problem.  E.g. I 
think that there is a difference between a major release, where 
non-backwards-compatible changes could be expected, vs maintenance 
releases, where one would expect backwards compatibility to be preserved.
>>> I can understand why some people believe a conservative opt-in
>>> approach is desirable for them and I can understand why some other
>>> people believe an optimistic opt-out approach is desirable
>>> for them. There are likely good arguments for both and this makes it
>>> difficult to pick one.
>> My feeling (and I don't have hard evidence to back this up) is that clients
>> are generally less impacted by keeping the name the same rather than
>> changing the module name for every major version change.
> Not sure I parse this correctly...
This was similar to a comment above.  I don't like the (module, path) of 
a data node having to change (due to a module name change for a major 
version) if its definition hasn't changed at all.

>>>> The YANG 1.1 way is to define a new definition and then deprecate
>>>> the broken one. But this has negative consequences as well,
>>>> e.g. does writing to the old leaf automatically also write to the
>>>> new leaf at the same time? Are both returned in a get request? What
>>>> if a different client only writes to the new leaf?
>>> Sure, duplicate or overlapping config objects is something we usually
>>> try to avoid. In general, I assume a server would try to keep such
>>> overlapping leafs in sync. But then, this is an issue that appears in
>>> any scheme that allows access to multiple versions of a leaf (or
>>> overlapping leafs in general, i.e. a standard object and a vendor
>>> version of it).
>> I still think that clients may struggle to process configuration in a get
>> request that they had not configured, and hence were not expecting.
> Maybe, maybe not. Perhaps things get simpler if the relationship of
> foo replacing bar is machine readable.
Yes, potentially that would help.
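
One way such a machine-readable relationship could be expressed is via
a YANG extension recording which node a new definition replaces.  The
extension module, its name, and the argument syntax below are purely
hypothetical, a sketch of the idea rather than any proposed mechanism:

```yang
module example-system {
  namespace "urn:example:system";
  prefix ex;
  // Hypothetical extension module defining ver:replaces.
  import example-versioning-ext { prefix ver; }

  leaf session-timeout {
    type uint16;
    status deprecated;
  }

  leaf session-timeout-msec {
    type uint32;
    units "milliseconds";
    // Declares that this leaf supersedes the old one, so generic
    // client tooling could map reads and writes between the two.
    ver:replaces "/ex:session-timeout";
  }
}
```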

>>> The point I was trying to make was a different one, namely that today
>>> we have a way to expose multiple versions of a leaf by using a new
>>> (module, path) name while with a (module, path, version) naming
>>> system, our protocols need extensions to expose multiple versions of a
>>> leaf (or we declare that servers will never expose multiple versions
>>> and that clients must always adapt to the version currently offered by
>>> a server).
>> So, I think that constraint should be that for a given schema (i.e. set of
>> implemented modules) there can be only a single definition for a (module,
>> path) pairing.  I see the version information, much like deviations, as just
>> a mechanism to help clients (and readers) to spot where the definition may
>> deviate from what they were previously anticipating.
>> In all cases, I think that minimizing backwards incompatible changes is the
>> right thing to do, and I suspect that as YANG gains more traction, more of
>> the latent bugs will get fixed, models will harden, and churn will
>> decrease.  But I think that vendors will always want an easy way to fix
>> bugs, and to change the model in the case that the implementation has
>> radically changed, or when the schema is being cleaned up.
> Well, part of the story line is that it is necessary to produce half
> baked modules faster and then to fix them iteratively. And there is
> likely some truth to it since implementation of models helps to
> understand how to do them better. But once you are in production, you
> hate moving APIs and stability is suddenly a big value.
Yes, both of these are right.  And partly it is a chicken-and-egg 
problem.  We would like clients to migrate from CLI to YANG, but that 
requires stable models; yet vendors don't want to invest heavily in 
creating stable models in YANG if customers are not using them.  Having 
two separate sets of standard models (e.g. IETF and OpenConfig) doesn't 
help either.

> Perhaps all we need is a marker that says "this module is still
> experimental and hence it does not provide any backwards compatibility
> promises". Clients using these modules then know that newer revisions
> can break things.
>   module ietf-foo {
>      stability alpha;
>   }
Possibly.  In SemVer, version numbers less than 1.0 would generally be 
regarded as experimental.
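
Under SemVer's rules the version number itself advertises the
compatibility promise (version numbers here are illustrative only):

```
0.3.0 -> 0.4.0   pre-1.0.0: experimental; any change, including NBC, allowed
1.2.3 -> 1.2.4   backwards-compatible bug fix (PATCH bump)
1.2.3 -> 1.3.0   new backwards-compatible functionality (MINOR bump)
1.2.3 -> 2.0.0   non-backwards-compatible change (MAJOR bump)
```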

I think that regardless of what IETF decides to standardize for YANG 
versioning, I can't see OpenConfig changing their direction for their 
own models, so some part of the industry at least will likely have to 
accept SemVer as part of the solution.


> /js