Re: [netmod] Y34 - root node

Ladislav Lhotka <lhotka@nic.cz> Fri, 21 August 2015 13:45 UTC

From: Ladislav Lhotka <lhotka@nic.cz>
In-Reply-To: <20150821.150158.491063432174006492.mbj@tail-f.com>
Date: Fri, 21 Aug 2015 15:45:07 +0200
Message-Id: <BBD0133D-7EAE-4283-ACEF-358FF1D8600B@nic.cz>
References: <CABCOCHRgAHah6_f1qZkPs0_v8Cj6NA5TKokb_RtUv+XWNOocFA@mail.gmail.com> <20150820.101533.1535137181522006328.mbj@tail-f.com> <55D7148C.6090508@cisco.com> <20150821.150158.491063432174006492.mbj@tail-f.com>
To: Martin Björklund <mbj@tail-f.com>
Archived-At: <http://mailarchive.ietf.org/arch/msg/netmod/AuhBQz52uf9uB4Qo5XOdzh19DNo>
Cc: netmod@ietf.org
Subject: Re: [netmod] Y34 - root node

> On 21 Aug 2015, at 15:01, Martin Bjorklund <mbj@tail-f.com> wrote:
> 
> Robert Wilton <rwilton@cisco.com> wrote:
>> Hi Martin,
>> 
>> On 20/08/2015 09:15, Martin Bjorklund wrote:
>>> Andy Bierman <andy@yumaworks.com> wrote:
>>>> On Wed, Aug 19, 2015 at 4:25 AM, Martin Bjorklund <mbj@tail-f.com>
>>>> wrote:
>>>> 
>>>>> Robert Wilton <rwilton@cisco.com> wrote:
>>>>>> 
>>>>>> On 18/08/2015 18:22, Andy Bierman wrote:
>>>>>>> This is how languages like SMIv2 and YANG work.
>>>>>>> A conceptual object is given a permanent "home" within the tree of
>>>>>>> object identifiers.
>>>>>>> Moving data is very expensive, since any clients working with the old
>>>>>>> data
>>>>>>> will break as soon as the data is moved.
>>>>>>> 
>>>>>>> I am not convinced the IETF can or should come up with a set
>>>>>>> of containers that covers every possible topic that can be
>>>>>>> modeled in YANG.
>>>>>> I mostly agree, but having some more structure/advice as to where to
>>>>>> place YANG modules may be helpful.  I'm thinking more along the lines
>>>>>> of broad categories rather than precise locations.
>>>>> +1
>>>>> 
>>>>>>>     If someone wants to build a YANG controller node that is managing
>>>>>>>     the configuration for a network of devices then wouldn't they want
>>>>>>>     a particular device's interface configuration to be located
>>>>>>>     somewhere like /network/device/<device-name>/interfaces/interface?
>>>>>>>     Ideally, they would be able to use the same YANG definitions that
>>>>>>>     are defined for /interfaces/ but root them relative to
>>>>>>>     /network/device/<device-name>/.
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Yes -- some of us (like Martin) have pointed this out many times.
>>>>>>> The "device" container on an NE does not help at all wrt/
>>>>>>> aggregation on a controller. "/device" or "/" work the same for this
>>>>>>> purpose.
>>>>> Actually, I would argue that / works better.  On the controller, you
>>>>> probably have a list of devices you control (this is how our NCS
>>>>> works, and how ODL works (I have been told)):
>>>>> 
>>>>>   container devices {
>>>>>     list device {
>>>>>       key name;
>>>>>       // meta-info about the device goes here, things like
>>>>>       // ip-address, port, auth info...
>>>>>       container data {
>>>>>         // all models supported by the devices are "mounted" here
>>>>>       }
>>>>>     }
>>>>>   }
>>>>> 
>>>>> So on the controller, the path to interface "eth0" on device "foo"
>>>>> would be:
>>>>> 
>>>>>   /devices/device[name='foo']/data/interfaces/interface[name='eth0']
>>>>> 
>>>>> if we also have a top-level "/device" container we'd have:
>>>>> 
>>>>>   /devices/device[name='foo']/data/device/interfaces/interface[name='eth0']
>>>>> 
>>>>>> What would the real resource location for
>>>>>> "/network/device/<device-name>/interfaces/interface" be?
>>>>> I don't think there is such a thing as a "real" location.  The path is
>>>>> scoped in the system you work with; in the controller it might be as I
>>>>> illustrated above, in the device it starts with /interfaces, but in a
>>>>> controller-of-controllers it might be:
>>>>> 
>>>>>   /domains/domain[name='bar']/devices/device[name='foo']/data
>>>>>     /interfaces/interface[name='eth0']
>>>>> 
>>>>> Currently we have a proprietary way of "relocating" YANG modules, and
>>>>> ODL has its "mount", and I think Andy has some other mechanism.  Maybe
>>>>> the time has come to standardize how mount works, and maybe then also
>>>>> standardize the list of devices in a controller model.
>>>>> 
>>>>> 
>>>> +1
>>>> 
>>>> We just need to standardize a "docroot within a docroot".
>>>> This is not relocation of subtrees within the datastore; this is
>>>> just mounting a datastore somewhere within a parent datastore.
>>>> 
>>>> In YANG validation terms, you simply adjust the docroot to the
>>>> nested mount point, and the replicated datastore can be used as
>>>> if it were stand-alone.  This would allow any sort of
>>>> encapsulation of datastores and not add any data model
>>>> complexity to devices which do not have virtual servers (most of
>>>> them).
>>> Compared to the mount draft, I would like to decouple the schema
>>> information from the instance population mechanism.  I.e., I'd like a
>>> mechanism that simply defines the schema, not necessarily how the data
>>> is populated (in the mount draft data was fetched from a remote
>>> server, but IMO that is just one of several use cases).
>> Yes, I agree that these could/should be decoupled.  Although I
>> note that the mount draft also allows local mounts, this does not
>> seem to be intended as the mainline case.
>> 
>>> 
>>> I can think of two ways to do this.
>>> 
>>> 1)  Your "ycx:root" statement.  This is open-ended, so we could do:
>>> 
>>>       list logical-element {
>>>         key name;
>>>         leaf name { ... }
>>>         yang-root true;
>>>       }
>>> 
>>>     From a schema perspective, any top-level node from any data model
>>>     could be used within the logical-element list.
>>> 
>>> 2)  Cherry-picking:
>>> 
>>>       list logical-element {
>>>         key name;
>>>         leaf name { ... }
>>>         mount if:interfaces;
>>>         mount sys:system;
>>>         ...
>>>       }
>> I think that it makes the overall schema more useful if it
>> explicitly states what schema is used for the mounted nodes,
>> although possibly a wildcard mount could still be allowed.
>> 
>> I wasn't quite sure how it would work if you wanted to mount a schema
>> that has augmentations.  Would you have to list all supported
>> augmentations in the mount point as well?  Otherwise you wouldn't know
>> what the full schema is.
> 
> My idea is that you mount the top-level node, and that means that
> everything below it is "copied" into the new location.  I.e.,
> augmentations to the subtree are also copied.  So you would not mount
> any augmentations (that's why the syntax is mount <top-level-node>).

But what about normal modules that refer to nodes in the mounted module? Their XPath expressions and leafrefs would no longer resolve correctly.
I think such modules must be mounted under the same root, too.
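As a hypothetical sketch (the module and node names are made up, not from any real module), consider a module with a leafref into ietf-interfaces; it only keeps working if it is relocated together with the tree it points into:

    module example-qos {
      namespace "urn:example:qos";
      prefix qos;

      import ietf-interfaces { prefix if; }

      leaf monitored-interface {
        type leafref {
          // This path resolves against the document root.  If
          // if:interfaces is mounted under /devices/device/data but
          // example-qos is not, the path no longer matches anything.
          path "/if:interfaces/if:interface/if:name";
        }
      }
    }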

Lada

> 
> 
> /martin
> 
> 
> 
>> 
>> Thanks,
>> Rob
>> 
>> 
>>> 
>>> Or maybe combine them into one "mount" statement:
>>> 
>>>    mount *;  // allow any top-level node
>>>    mount sys:system; // allow this specific top-level node
>>> 
>>> 
>>> 
>>> /martin
>>> 
>>> 
>>> 
>> 
> 
> _______________________________________________
> netmod mailing list
> netmod@ietf.org
> https://www.ietf.org/mailman/listinfo/netmod

--
Ladislav Lhotka, CZ.NIC Labs
PGP Key ID: E74E8C0C