Re: [Netconf] Comments on draft-lhotka-netconf-restconf-transactions-00

Ladislav Lhotka <lhotka@nic.cz> Fri, 13 July 2018 14:19 UTC

From: Ladislav Lhotka <lhotka@nic.cz>
To: Robert Wilton <rwilton@cisco.com>, "netconf@ietf.org" <netconf@ietf.org>
In-Reply-To: <7794cb8f-caba-c652-abfa-db754b509dd2@cisco.com>
References: <b26d88fe-2797-a8f8-a2e3-a5aed2fae6d7@cisco.com> <87sh4ofjyd.fsf@nic.cz> <7794cb8f-caba-c652-abfa-db754b509dd2@cisco.com>
Date: Fri, 13 Jul 2018 16:19:10 +0200
Message-ID: <87wotzmd5t.fsf@nic.cz>
Archived-At: <https://mailarchive.ietf.org/arch/msg/netconf/kFn936SCqSSHQa_NNg8lH7bBXVw>
Subject: Re: [Netconf] Comments on draft-lhotka-netconf-restconf-transactions-00

Robert Wilton <rwilton@cisco.com> writes:

> Hi Lada,
>
>
> On 12/07/2018 18:22, Ladislav Lhotka wrote:
>> Hi Rob,
>>
>> thanks for your comments, please see inline.
>>
>> Robert Wilton <rwilton@cisco.com> writes:
>>
>>> Hi Lada,
>>>
>>> I've had a read of this draft, and have provided some comments below.
>>>
>>> So, my top-level comment is that I don't know whether or not RESTCONF
>>> needs this functionality.  I've heard some operators state that they
>>> think that clients can just construct an "atomic" change, and hence
>>> don't need a server-side staging area.  Perhaps a good question to ask
>>> in Montreal?
>> I think what you mean is an analogy to git, where all changes are
>> applied on the client side and then new commits are pushed to the
>> server. However, git was designed for this mode of operation - I think
>> with RESTCONF it wouldn't be so efficient. Also, the client
>> functionality would probably be difficult to implement in a plain
>> browser, whereas browser-based clients can easily be used with RESTCONF
>> extended according to my draft.
> No, I wasn't thinking that they would be separate commits, but a single 
> client commit.
>
> I guess it may depend on whether it is a machine constructing the 
> configuration change (in which case merging it into a single request 
> should plausibly be straightforward) or humans doing the interaction, 
> although even then I still wonder whether creating an edit buffer on the 
> client side, and then pushing that to the server as a single update, 
> isn't a slightly cleaner paradigm.

I agree that a lot can be done on the client side, but eventually the
data has to be sent to the server, and it is possible that the target
config datastore has been changed in the meantime by another client - this
is a conflict that has to be resolved somehow.
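
Just to illustrate what the client-side variant looks like on the wire:
the whole edit buffer ends up as a single conditional request, and the
entity-tag / "If-Match" mechanism of RFC 8040 is what detects that the
datastore moved underneath you. The host name, resource path, file name
and etag below are made up; only the curl/HTTP mechanics are real:

   # push the whole client-side edit buffer as one request; If-Match
   # makes the server reject it if the datastore changed in the meantime
   curl -s -X PUT \
        -H 'Content-Type: application/yang-data+json' \
        -H 'If-Match: "a74eefc993a2b"' \
        -d @edit-buffer.json \
        'https://example.com/restconf/data/example-mod:top'

If the entity-tag no longer matches, the server answers 412 Precondition
Failed and the client has to re-fetch and merge - which is exactly the
conflict I mean.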

>
> Perhaps the draft could have a background section that explains some of 
> the expected usages of private candidate datastores.

The aim is to enable transactions and concurrent R/W access by multiple
clients. This draft attempts to solve it on the server side; somebody
else may want to propose a client-side solution.

I think both may be potentially useful - one can have capable
servers and restricted clients, or vice versa.

>
>>
>>> The rest of my comments below apply to the proposed technical solution,
>>> and obviously only apply if this is a needed enhancement. :-)
>>>
>>> 1) Generally, I definitely prefer the idea of per session staging areas
>>> (aka private candidates) described in this draft over a shared lockable
>>> candidate datastore.  This follows my belief that loosely coupled
>>> concurrent systems are more robust than tightly coupled ones (e.g. with
>>> shared locking).
>>>
>>> 2) I don't think that this draft needs to mention <intended> at all.
>>> Instead, everywhere you mention <intended> then you should be saying
>>> <running>.  I.e. your staging datastores should update <running> on a
>>> commit operation, just like a commit of <candidate> updates <running>.
>>> <intended> is always just updated as a side effect of a write to
>>> <running>, and as such is a tangential consideration.
>> The main reason for using <intended> is that the target datastore into
>> which staging datastores are merged has to be valid at all
>> times. <running> has somewhat fuzzy semantics both in NETCONF and under
>> NMDA. But yes, the text also says that, essentially, <running> and
>> <intended> are the same. NMDA explicitly permits this simplification.
>
> <running> has the configuration supplied by the user before any template 
> expansion, or inactive config removal.

> <intended> is the same configuration data, but after template expansion, 
> inactive config removal, and any other random config manipulations that 
> the server might do.
>
> If the device doesn't do "template expansion, inactive config removal, 
> and any other random config manipulations", then <intended> is trivially 
> the same as <running>.
>
> Whenever <running> is due to be changed, <intended> is also updated at 
> the same time, and validated.
>
> Hence <intended> is always valid, and by implication, so is <running>, 
> since you cannot make a change to <running> without also updating and 
> validating <intended> at the exact same time.  I.e. they succeed or fail 
> together.
>
> I think that your <staging> datastore design works much better with NMDA 
> if you update <running> instead of <intended>.
>

Yes, but <running> may or may not be writable, it may be locked, and it
may be invalid.

If RESTCONF is the only protocol, then it is perhaps just a matter of
naming, but if NETCONF is used along with RESTCONF on the same device, I
want to avoid their interference as much as possible. My idea is that
contributions from NETCONF and RESTCONF only meet at <intended>.
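
As an aside, with the NMDA RESTCONF extension
(draft-ietf-netconf-nmda-restconf) the two datastores Rob describes are
separate resources on servers that implement them, so the distinction is
directly visible; the host name below is made up:

   # what the clients wrote
   curl -s 'https://example.com/restconf/ds/ietf-datastores:running'
   # what the server actually applies, after template expansion etc.
   curl -s 'https://example.com/restconf/ds/ietf-datastores:intended'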

>>> 3) Rather than having clients interact via {+restconf}/data, I think
>>> that it would be much better to require NMDA and then have clients
>>> interact via {+restconf}/ds/ietf-restconf-transactions:staging, as per
>>> draft-ietf-netconf-nmda-restconf-04 section 3.1.  The new staging
>>> datastore identity should also be defined in your module to inherit from
>>> ietf-datastores:datastore identity.  I think that this probably also
>>> aligns more closely with RESTful principles.
>> Again, in RESTCONF it is unclear what the "unified" datastore really
>> is. We wanted to make the semantics clear and explicit and, in
>> particular, permit configuration edits only via the staging
>> datastore. With your suggestion, it is not clear to me whether the
>> client could also interact with {+restconf}/data.
> The problem with {+restconf}/data is that it combines the *desired* 
> configuration with the *actual* operational state.  This combination 
> cannot always be done in a sane way if the system isn't in a steady state.
>
> I think that we should be trying to deprecate {+restconf}/data; cleaner 
> and simpler semantics can be achieved by interacting via explicit 
> datastores.

If this is done, then it would make sense to do what you suggest. For
the time being, the advantage is that clients only supporting RFC 8040
can be used with my enhancements - the commit and reset operations can
be added separately, e.g. as simple curl scripts.
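
Something along these lines - only the /operations mechanics come from
RFC 8040; the module name and host are placeholders for whatever the
draft ends up defining:

   # merge the private staging datastore into <intended>
   curl -s -X POST \
        'https://example.com/restconf/operations/ietf-restconf-transactions:commit'

   # discard staged edits and re-initialize <staging> from <intended>
   curl -s -X POST \
        'https://example.com/restconf/operations/ietf-restconf-transactions:reset'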

>
> E.g.
> (1) If a RESTCONF client wants to make an atomic update to the 
> configuration, then it just writes to <running>.
> (2) If a RESTCONF client wants private staged configuration then it does 
> it via <staging> and a commit to <running>.  From a system perspective 
> this is pretty much the same as (1) anyway.
> (3) If a shared candidate datastore is required, then a client writes 
> to <candidate> and then commits configuration to <running>.
> (4) If <running> can be locked, then attempts by other clients to commit 
> to <running> when it is locked must fail.

This is all very complicated; I don't want to force RESTCONF users into
learning NETCONF first. Keep it simple, stupid.

>
>
>>
>>> 4) So, I think that the <staging> datastore itself only contains the
>>> proposed changes (additions, modifications, and deletions) that are
>>> applied to <running> when they are committed.  I think that clients
>>> may also want to see the combined configuration of the current
>>> contents of <running> with the delta held in <staging> applied.  This
>>> could be exposed as (i) a new RPC, (ii) an extra query parameter, or
>>> (iii) another read-only datastore.  A new RPC has the disadvantage
>>> that it probably wouldn't support all the query parameters, so my
>>> instinctive preference would be one of the latter two options.
>> Do you mean to be able to see the result of a "dry run" of a commit?
>> This would certainly be possible and, in fact, in our implementation it
>> is pretty trivial.
> Let me ask two different questions first:
>
> (1) If I call GET on <staging> then do I see just what I have changed 
> (and explicitly don't see anything that I haven't changed), or do I see 
> all of the base configuration with my private changes merged in?

After you do commit or reset, your <staging> datastore becomes
(conceptually) an exact, private and writable copy of <intended>. If you
make some changes, you see them along with the other config data (modulo
NACM).  However, you don't see any changes that have been made to
<intended> in the meantime.
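
In curl terms (the host, module and paths below are invented, just to
show the flow, and assuming that GET on {+restconf}/data reflects the
private staging content as described above):

   # edit a leaf in your private staging area ...
   curl -s -X PATCH \
        -H 'Content-Type: application/yang-data+json' \
        -d '{"example-mod:mtu": 1400}' \
        'https://example.com/restconf/data/example-mod:interface=eth0/mtu'

   # ... and you immediately see it merged with the rest of the config
   curl -s 'https://example.com/restconf/data/example-mod:interface=eth0'

   # edits committed by other sessions after your last commit/reset stay
   # invisible until you commit or reset again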

>
> (2) If the answer to Q1 is you see the base configuration + private 
> changes merged in, then is it the base configuration fixed from the 
> point in time that <staging> was initialized? Or does it float, i.e. does 
> it always update to the latest committed base configuration in <running>?

In our implementation, it is the data from the point in time when
<staging> was last initialized (after commit or reset). I think it would
be possible to let <staging> track the changes in <intended> as long as
the user doesn't start editing it.

>
>>
>>> 5) If private candidate datastores are being added to RESTCONF, then
>>> should they also be added to NETCONF?  If they are added to both then I
>>> think that they should be added in the same way, as much as possible,
>>> perhaps both could be updated in a single draft to save repetitive
>>> text?  In general, I like (Kent's?) idea of NETCONF WG writing a RFC
>>> that describes all the common parts of NETCONF and RESTCONF that the
>>> individual protocol docs can then reference rather than writing similar
>>> or equivalent text in two places.
>> But private candidates are already an option in NETCONF, right? One
>> possibility would be to make it the ONLY option, because shared candidates
>> have known problems.
> How do you do a private candidate in NETCONF?  I thought that only the 
> shared candidate had been standardized.

RFC 6241 says this in Section 8.3.1:

   The candidate configuration can be shared among multiple sessions.
   Unless a client has specific information that the candidate
   configuration is not shared, it MUST assume that other sessions are
   able to modify the candidate configuration at the same time.

Lada

>
> Thanks,
> Rob
>
>
>>
>> Thanks, Lada
>>
>>> But otherwise, I think that it is an interesting idea, and certainly
>>> warrants some WG discussion.
>>>
>>> Thanks,
>>> Rob
>>>
>

-- 
Ladislav Lhotka
Head, CZ.NIC Labs
PGP Key ID: 0xB8F92B08A9F76C67