Re: [netmod] Y34 - root node

"Alexander Clemm (alex)" <alex@cisco.com> Wed, 26 August 2015 22:29 UTC

From: "Alexander Clemm (alex)" <alex@cisco.com>
To: Ladislav Lhotka <lhotka@nic.cz>, Martin Björklund <mbj@tail-f.com>
Thread-Topic: [netmod] Y34 - root node
Date: Wed, 26 Aug 2015 22:29:46 +0000
Message-ID: <DBC595ED2346914F9F81D17DD5C32B571DC8B7A6@xmb-rcd-x05.cisco.com>
References: <CABCOCHRgAHah6_f1qZkPs0_v8Cj6NA5TKokb_RtUv+XWNOocFA@mail.gmail.com> <20150820.101533.1535137181522006328.mbj@tail-f.com> <55D7148C.6090508@cisco.com> <20150821.150158.491063432174006492.mbj@tail-f.com> <BBD0133D-7EAE-4283-ACEF-358FF1D8600B@nic.cz>
In-Reply-To: <BBD0133D-7EAE-4283-ACEF-358FF1D8600B@nic.cz>
Archived-At: <http://mailarchive.ietf.org/arch/msg/netmod/9JLtIK51gTh9-ZQ3To4csbmAmpY>
Cc: "netmod@ietf.org" <netmod@ietf.org>
Subject: Re: [netmod] Y34 - root node

Coming late to the thread.

A couple of comments, adding on to and/or responding to others:

- Mounting is IMHO a good way to address the requirement of making the same data available under various paths.  As has been mentioned, this has been used successfully, e.g. in OpenDaylight, for the use case of mounting / referring to information from remote devices in the context of one consolidated network inventory and topology model that is accessed by clients and applications on top of ODL.

- The peer-mount draft was aimed at mounting subtrees from remote systems.  It could certainly also be generalized to aliasing within the same system (as discussed here).  However, as Eric points out, there is no reason to restrict such a capability to the boundaries of a single system.

- As Martin mentioned, allowing mounting clearly decouples schema information from instance population.  The issue of validation can be addressed in several ways.  In the mount draft, our answer has been that every piece of data has an authoritative owner - the server from which the data is mounted - and that is where validation and enforcement of integrity constraints need to occur.  It is possible for an integrity constraint to refer to data that has been mounted, but enforcement / validation always remains the responsibility of the "authoritative" server that owns the data.  In other words, mounted information provides an additional, alternative view of the data without replacing/substituting the original instance, possibly restricting the types of operations it can be subjected to (e.g. retrieval only).
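To make the last two points a bit more concrete, here is a minimal sketch.  It is purely illustrative: the module, container, and leaf names are made up, and the "mount" statement is the hypothetical syntax Martin sketches further down in this thread, not the actual peer-mount draft syntax.

   module example-controller {        // hypothetical module name
     namespace "urn:example:controller";
     prefix exc;

     import ietf-interfaces { prefix if; }

     container mounted-device-data {
       // Retrieval-only alias: clients can read the mounted data here,
       // but writes (and validation) stay with the authoritative owner.
       config false;
       mount if:interfaces;   // hypothetical 'mount' statement
     }

     leaf monitored-interface {
       type string;
       // A local constraint may refer to mounted data; this 'must' is
       // checked by this server, while the mounted /if:interfaces data
       // itself is validated by the server it is mounted from.
       must "/exc:mounted-device-data/if:interfaces/if:interface[if:name = current()]";
     }
   }

The point is simply that the mounted subtree shows up as an additional view; which operations it supports there (here: retrieval only, via config false) is a property of the view, not of the original data.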

--- Alex  

-----Original Message-----
From: netmod [mailto:netmod-bounces@ietf.org] On Behalf Of Ladislav Lhotka
Sent: Friday, August 21, 2015 6:45 AM
To: Martin Björklund <mbj@tail-f.com>
Cc: netmod@ietf.org
Subject: Re: [netmod] Y34 - root node


> On 21 Aug 2015, at 15:01, Martin Bjorklund <mbj@tail-f.com> wrote:
> 
> Robert Wilton <rwilton@cisco.com> wrote:
>> Hi Martin,
>> 
>> On 20/08/2015 09:15, Martin Bjorklund wrote:
>>> Andy Bierman <andy@yumaworks.com> wrote:
>>>> On Wed, Aug 19, 2015 at 4:25 AM, Martin Bjorklund <mbj@tail-f.com>
>>>> wrote:
>>>> 
>>>>> Robert Wilton <rwilton@cisco.com> wrote:
>>>>>> 
>>>>>> On 18/08/2015 18:22, Andy Bierman wrote:
>>>>>>> This is how languages like SMIv2 and YANG work.
>>>>>>> A conceptual object is given a permanent "home" within the tree 
>>>>>>> of object identifiers.
>>>>>>> Moving data is very expensive, since any clients working with 
>>>>>>> the old data will break as soon as the data is moved.
>>>>>>> 
>>>>>>>  I am not convinced the IETF can or should come up with a set of  
>>>>>>> containers that covers every possible topic that can be modeled 
>>>>>>> in YANG.
>>>>>> I mostly agree, but having some more structure/advice as to where 
>>>>>> to place YANG modules may be helpful.  I'm thinking more along 
>>>>>> the lines of broad categories rather than precise locations.
>>>>> +1
>>>>> 
>>>>>>>     If someone wants to build a YANG controller node that is managing
>>>>>>>     the configuration for a network of devices then wouldn't they want
>>>>>>>     a particular device's interface configuration to be located
>>>>>>>     somewhere like /network/device/<device-name>/interfaces/interface?
>>>>>>>     Ideally, they would be able to use the same YANG definitions that
>>>>>>>     are defined for /interfaces/ but root them relative to
>>>>>>>     /network/device/<device-name>/.
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Yes -- some of us (like Martin) have pointed this out many times.
>>>>>>> The "device" container on an NE does not help at all wrt/ 
>>>>>>> aggregation on a controller. "/device" or "/" work the same for 
>>>>>>> this purpose.
>>>>> Actually, I would argue that / works better.  On the controller, 
>>>>> you probably have a list of devices you control (this is how our 
>>>>> NCS works, and how ODL works (I have been told)):
>>>>> 
>>>>>   container devices {
>>>>>     list device {
>>>>>       key name;
>>>>>       // meta-info about the device goes here, things like
>>>>>       // ip-address, port, auth info...
>>>>>       container data {
>>>>>         // all models supported by the devices are "mounted" here
>>>>>       }
>>>>>     }
>>>>>   }
>>>>> 
>>>>> So on the controller, the path to interface "eth0" on device "foo"
>>>>> would be:
>>>>> 
>>>>>   /devices/device[name='foo']/data/interfaces/interface[name='eth0']
>>>>> 
>>>>> if we also have a top-level "/device" container we'd have:
>>>>> 
>>>>>   /devices/device[name='foo']/data/device/interfaces/interface[name='eth0']
>>>>> 
>>>>>> What would the real resource location for 
>>>>>> "/network/device/<device-name>/interfaces/interface" be?
>>>>> I don't think there is such a thing as a "real" location.  The 
>>>>> path is scoped in the system you work with; in the controller it 
>>>>> might be as I illustrated above, in the device it starts with 
>>>>> /interfaces, but in a controller-of-controllers it might be:
>>>>> 
>>>>>   /domains/domain[name='bar']/devices/device[name='foo']/data
>>>>>     /interfaces/interface[name='eth0']
>>>>> 
>>>>> Currently we have a proprietary way of "relocating" YANG modules, 
>>>>> and ODL has its "mount", and I think Andy has some other 
>>>>> mechanism.  Maybe the time has come to standardize how mount 
>>>>> works, and maybe then also standardize the list of devices in a controller model.
>>>>> 
>>>>> 
>>>> +1
>>>> 
>>>> We just need to standardize a "docroot within a docroot".
>>>> This is not relocation of subtrees within the datastore, this is 
>>>> just mounting a datastore somewhere within a parent datastore.
>>>> 
>>>> In YANG validation terms, you simply adjust the docroot to the 
>>>> nested mount point, and the replicated datastore can be used as if 
>>>> it were stand-alone.
>>>> This would allow any sort of encapsulation of datastores and not 
>>>> add any data model complexity to devices which do not have virtual 
>>>> servers (most of them).
>>> Compared to the mount draft, I would like to decouple the schema 
>>> information from the instance population mechanism.  I.e., I'd like 
>>> a mechanism that simply defines the schema, not necessarily how the 
>>> data is populated (in the mount draft data was fetched from a remote 
>>> server, but IMO that is just one of several use cases).
>> Yes, I agree that these could/should be decoupled.  I note, though,
>> that the mount draft does also allow for local mounts, although this
>> does not seem to be intended as the mainline case.
>> 
>>> 
>>> I can think of two ways to do this.
>>> 
>>> 1)  Your "ycx:root" statement.  This is open-ended, so we could do:
>>> 
>>>       list logical-element {
>>>         key name;
>>>         leaf name { ... }
>>>         yang-root true;
>>>       }
>>> 
>>>     From a schema perspective, any top-level node from any data model
>>>     could be used within the logical-element list.
>>> 
>>> 2)  Cherry-picking:
>>> 
>>>       list logical-element {
>>>         key name;
>>>         leaf name { ... }
>>>         mount if:interfaces;
>>>         mount sys:system;
>>>         ...
>>>       }
>> I think that it makes the overall schema more useful if it
>> explicitly states what schema is used for the mounted nodes, although 
>> possibly a wildcard mount could still be allowed.
>> 
>> I wasn't quite sure how it would work if you wanted to mount a schema 
>> that has augmentations.  Would you have to list all supported 
>> augmentations in the mount point as well?  Otherwise you wouldn't 
>> know what the full schema is.
> 
> My idea is that you mount the top-level node, and that means that 
> everything below it is "copied" into the new location.  I.e., 
> augmentations to the subtree are also copied.  So you would not mount 
> any augmentations (that's why the syntax is mount <top-level-node>).
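(So, for example, an augmentation - hypothetical module fragment, names made up - such as

   augment "/if:interfaces/if:interface" {
     leaf line-rate { type uint64; config false; }
   }

would automatically show up under /devices/device[name='foo']/data/interfaces/interface as well, without being mounted explicitly.)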

But what about normal modules that refer to nodes in the mounted module? Their XPath expressions and leafrefs would be incorrect.
I think they must be mounted to the same root, too.
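For instance, a (hypothetical) module that points into ietf-interfaces with an absolute path, e.g.

   leaf managed-interface {
     type leafref {
       path "/if:interfaces/if:interface/if:name";
     }
   }

assumes that if:interfaces sits at the document root.  On the controller the target instances live under /devices/device[name='foo']/data/interfaces instead, so the leafref (and any similar must/when expression) no longer resolves unless the referring module is mounted under the same root.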

Lada

> 
> 
> /martin
> 
> 
> 
>> 
>> Thanks,
>> Rob
>> 
>> 
>>> 
>>> Or maybe combine them into one "mount" statement:
>>> 
>>>    mount *;  // allow any top-level node
>>>    mount sys:system; // allow this specific top-level node
>>> 
>>> 
>>> 
>>> /martin
>>> 
>>> 
>> 
> 

--
Ladislav Lhotka, CZ.NIC Labs
PGP Key ID: E74E8C0C



