[netmod] Raw notes from IETF105

Lou Berger <lberger@labn.net> Wed, 24 July 2019 15:11 UTC

To: NetMod WG <netmod@ietf.org>
From: Lou Berger <lberger@labn.net>
Message-ID: <867bb530-e32c-10d1-1939-4abd5241c83e@labn.net>
Date: Wed, 24 Jul 2019 11:10:57 -0400
Archived-At: <https://mailarchive.ietf.org/arch/msg/netmod/VNEXRqMZsBuQvPsiL0mjzjvZmxA>
Subject: [netmod] Raw notes from IETF105

Taken from Etherpad (very sparse!) 

Also from youtube:

     Session 1 https://www.youtube.com/watch?v=jf86dU5XHbI

     Session 2 https://www.youtube.com/watch?v=9k7qggWAS5o

Please find the related presentation slot below and add/correct notes there.
Please DO NOT add notes at the end or beginning.

> Agenda for the NETMOD WG Session in IETF 105
> --------------------------------------------
> Two Sessions:  (back to back, different rooms)
>    Monday Afternoon 
>    Session 1: 13:30-15:30        in Viger (2nd  floor)
>    Session 2: 15:50-17:50        in Duluth (2nd floor)
> WG Chairs: (sorted by last name)
>   Lou Berger   (lberger at labn dot net)
>   Joel Jaeggli (joelja at bogus dot com)
>   Kent Watsen  (kent plus ietf at watsen dot net)
> Available During Session:
>    Etherpad:     https://etherpad.ietf.org/p/notes-ietf-105-netmod
>    Slides:       https://datatracker.ietf.org/meeting/105/session/netmod
>    Meetecho:     http://www.meetecho.com/ietf105/netmod/
>    Jabber:        xmpp:netmod@jabber.ietf.org?join
>    Audio Stream:
>      - for Session 1: http://mp3.conf.meetecho.com/ietf/ietf1058.m3u
>      - for Session 2: http://mp3.conf.meetecho.com/ietf/ietf1052.m3u
> Available Post Session:
>    Recording:    https://www.ietf.org/audio/ietf105/
>    YouTube:      https://www.youtube.com/user/ietf/playlists
> Introduction
>   Chairs (10 minutes)
>   Session Intro & WG Status
  Action: chairs should lead discussions to close existing errata.

> Chartered Items:
>   Robert Wilton (0-20 minutes)
>   Resolve Potential Issues from Last Calls
>  https://tools.ietf.org/html/draft-ietf-netmod-intf-ext-yang-07
>  https://tools.ietf.org/html/draft-ietf-netmod-sub-intf-vlan-model-05

Rob Wilton: should we include histogram stats now or do it in a separate document?
Tim Carey: guidance-only bucket sizes are a concern; dependable (normative) ranges would be better.
Rob Wilton: Have standard sizes for non-jumbo frames.
Joel J: existing values have historical meanings but may not be tied to 
specific specifications.  May end up with lots of different values. 
Getting a good list beyond what is listed seems very risky. Better to 
stick with non-jumbo list.

Vladimir Vassilev: Would be good to have total input/output packet counters (some hardware cannot differentiate unicast/multicast).  Would prefer to not add Ethernet-specific counters to this draft.
Rob: (polling WG)

  * Who thinks we should include histogram statistic information in this
    document? (Very few)

  * Who thinks we should *not* include histogram statistic information
    in this document? (significantly more, but still not a big number)

  * Who thinks histogram statistic information should be added to a new
    document? (a few)

Rob: I'll coordinate with IEEE 802.3 on this topic and report back their 
view on IETF standardizing these counters.

>   Balazs Lengyel (10 minutes)
>   YANG Instance Data File Format
>  https://tools.ietf.org/html/draft-ietf-netmod-yang-instance-file-format-03
Lada: Would prefer metadata stored as module information.
<lots of discussion>
Lou: How many think it should be kept as is in the document? <very few>

  * How many think the information should be added to modules? <less>


Lou: Authors please raise this question on the list.
Balazs: Ready for last call.
Joel: Will move to last call once the metadata question is resolved.
Lada: Current approach of using the new YANG library seems very complex.
Rob Wilton: Could list modules more simply.
Vladimir: the new YANG library is very complex.
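To make the metadata question concrete, an instance-data file header carries the content schema alongside the data. A rough, hypothetical sketch of what such a file might look like (node names here are approximations for illustration, not verbatim from the draft):

```json
{
  "ietf-yang-instance-data:instance-data-set": {
    "name": "example-module-inventory",
    "description": "Illustrative sketch only; node names are approximate.",
    "content-schema": {
      "module": ["ietf-interfaces@2018-02-20"]
    },
    "content-data": {
      "ietf-interfaces:interfaces": {
        "interface": [
          { "name": "eth0", "type": "iana-if-type:ethernetCsmacd" }
        ]
      }
    }
  }
}
```

The debate above is whether this per-file metadata should instead live in (or reference) the modules themselves, and whether a simple module list suffices in place of the full new YANG library structure.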

>   Qin Wu (10 minutes)
>   Factory Default Setting
>  https://tools.ietf.org/html/draft-ietf-netmod-factory-default-02

<updates from discussion, then LC>

>   Alex Clemm or Yingzhen Qu (10 minutes, likely  remote)
>   Comparison of NMDA datastores
>  https://tools.ietf.org/html/draft-ietf-netmod-nmda-diff-02

> Design Team Items:
>   Robert Wilton and Joe Clark (50 minutes)
>   NETMOD Versioning Design Team Update
>  https://tools.ietf.org/html/draft-ietf-netmod-yang-versioning-reqs-01
>  https://tools.ietf.org/html/draft-verdt-netmod-yang-solutions-01
>  https://tools.ietf.org/html/draft-verdt-netmod-yang-module-versioning-00
> Non-Chartered Items:  (Some of these  may overflow to 2nd session)
>   Robert Wilton (15 minutes)
>   YANG Packages
>  https://tools.ietf.org/html/draft-rwilton-netmod-yang-packages-01
>   Reshad Rahman (15 minutes)
>   YANG Schema Version Selection
>  https://tools.ietf.org/html/draft-wilton-netmod-yang-ver-selection-00
>   Michael Wang (10 minutes)
>   A YANG Data model for Policy based  Event Management
>  https://tools.ietf.org/html/draft-wwx-netmod-event-yang-02
Joel: How many have read (a few)

  * How many interested in topic? (more)

  * How many interested in working on this topic? (enough)

Will take adoption to the list.
>   Ran Tao (10 minutes, remote or Qin)
>   NMDA Base Notification for Intent  based configuration update
>  https://tools.ietf.org/html/draft-wu-netmod-base-notification-nmda-03

> List discussion
>  Qin Wu
>  NMDA Protocol  Transition Issue Discussion
> https://mailarchive.ietf.org/arch/msg/netmod/CYMK1cdLp5byiAkwDjaBngcTDQo

From https://www.youtube.com/watch?v=jf86dU5XHbI
Lou Berger: People are slowly coming in from lunch; it is now 13:30, which is time to start.  Welcome to Montreal and to NETMOD.  I'm Lou Berger; Kent Watsen and Joel Jaeggli are with me as my co-chairs.  The meeting material is in the usual spots.  We are using Etherpad; the link somehow drifted to the bottom of the slide, and you can also find it off the tools page.  Please join in and help us capture what is said at the mic -- comments and responses; we don't need to capture everything that is in the slides.

We are at the IETF, which means we have rules that govern our participation; you can find them under the Note Well.  This slide is a summary of those rules; if you are unfamiliar with them, please go to https://www.ietf.org/about/note-well/, which also gives you a pointer to the governing documents.  Since it is Monday, it's worth taking a look to make sure you're familiar with it.  As usual we are doing video and audio streaming and recording, so please state your name when you come to the mic; that is very important for remote participants and for anyone taking notes.  If there are people willing to jump on Jabber and channel remote participants, we would appreciate it.

We have had an agenda change.  We asked for two and a half hours because we thought two hours was a little tight; that bumped us into a second slot, which means we have a lot more time -- in theory we are here until almost 18:00.  I doubt we'll use all of that time, but we have added a couple of topics that we think are a useful use of the working group's time.  One important note: there is no coffee break between the sessions, and we are moving rooms -- just one room over.  The agenda is laid out in terms of the anticipated amount of time for each topic.  Since we have extra time we will be flexible on how long each topic runs; if we're having a good conversation we won't break it, which means we're not exactly sure what will line up with the session break.  We'll simply go in the order of the published agenda, captured here.  The chartered items and design team items are unchanged; the non-chartered items are mostly as previously published, but we've added a topic that came up on the list (and in some private discussion) that we think is worth the working group talking about: how to deal with some of these NMDA transition issues.  We'll hit that at the end of our agenda.  If there are other topics that can be discussed publicly on the list, we're happy to take those too.

Can you say a few words about what's up with YANG Next?
There has been no update to YANG Next since the last IETF meeting.  The main issue is that we don't have someone to drive the discussion.  Folks are probably looking at me, not just because I'm at the front of the room but because I had coordinated several of those meetings before; at the moment I don't have the bandwidth to drive those discussions, much less coordinate the meetings for the discussions to be had.  I did ask someone to do it, but they haven't really signed up either, so right now it's in limbo, waiting for someone with the bandwidth to drive the discussions.  The bottom line is that there has been no progress since the last meeting.

I also think the design team activity we're going to hear about is probably taking a lot of attention from those who might otherwise participate in YANG Next.  My personal view -- not as someone contributing to that work -- is that once the current design team winds down, there may be more activity.
Lou: We've had some nice progress since the last meeting: we have a couple of new RFCs, and it's always good to see those -- producing these documents, not just having good discussions, is why we're here.  We have one document on its way to the IESG, and another for which I was hoping an update would be submitted; unless I missed it, it hasn't been published yet.  As soon as that happens, the artwork-folding document will be submitted; the remaining changes are some minor things related to the copyright text the Trust requests for code segments, and there is a code segment in this document.  We have a couple of documents that are not on the agenda: interface extensions is on, but geolocation is definitely not.  There was a bit of discussion on the list about geolocation, and it looks like it is getting pretty close to done, so if you have opinions on it, it would be really good to raise them.  Also the updated YANG types draft: at the last meeting we had an update on it, and we were hoping to move that document through really quickly, but we haven't seen an update.  The author is online if he wants to say something, but we're really gated on the authors pushing that; we're looking forward to seeing it in the queue.  Next slide.
Lou: There has been a recent discussion on the list about an errata, and then we realized there is actually a whole slew of erratas that we haven't verified as a working group.  Technically the AD is the one who verifies an errata, but generally the ADs take input from the working group, so we think it is worthwhile to go through each of these -- not now, but as individuals and as a working group -- and agree on what we think the right answer is, i.e., whether or not each should be verified.  We had identified a couple as ones to be rejected; only one is listed here, though I thought we had listed two.  One, lodged against RFC 7950, received significant discussion on the list fairly recently; we think that should be rejected.  The other one that should be rejected we took a look at, and it is actually a technical change.  One thing to keep in mind: if you see something in an RFC and would like to modify the behavior of that RFC such that others implement the changed behavior, the right way to do that is in a bis or a new RFC that updates the existing one -- you can't make a technical change through an errata.  The particular one being rejected asks for a substantive technical change that would impact all implementations of the document.  It is just adding a default, so it's not a controversial thing, but by our process you just can't make that type of technical change in an errata.  It is going to be important that we go through each of these and, as a working group, have a response; the chairs will probably push on that where there isn't a clear discussion.  For the one with the recent discussion, we think it is going to be rejected and will close the conversation if it isn't otherwise closed.  If you have opinions on these we would like to hear them -- right now if you came prepared, otherwise on the list.  Any comments?  Okay, that appears to be the last item in this package, so we're going to move on to Rob.
Rob Wilton: More reviews, please.  One of the issues has been raised by a reviewer; I had a verbal comment from him yesterday and I think he's happy, but we'll discuss it today.  A separate issue was raised that is not related to this document specifically but more generally; I'm proposing we discuss a potential change to this draft for that.  The first issue: RFC 3635 defined a load of Ethernet-like counters and statistics, and the suggestion was that maybe we should include a subset of those in this draft; in particular it was noted that this draft has one counter for reporting destination-MAC drops but none of the other ones.  There is some background history here: we looked at that RFC, at the current EtherLike MIB, and at the IEEE 802.3 manageability interface (Clause 30), and I had a spreadsheet that effectively correlated these different counters.  The upshot is that the vast majority of the counters being asked for are either included in IEEE 802.3.2 (the YANG model for 802.3 standardized by the IEEE) or are regarded as effectively obsolete counters that we don't want to carry forward.  But there were two exceptions worth discussing.  The first is whether we want to add a sub-interface demux drop counter.  This would be on the trunk/parent interface: if you have many sub-interfaces, each classifying traffic in different ways, and a frame comes in that can't be classified to any sub-interface and is dropped, we don't currently have a drop counter for that.  In my opinion that is worth adding, and it is relatively easy to add.  Does anyone have an opinion, or is anyone opposed?  Okay, if not, I'll add it in; that's fairly easy.
Rob: The second one is more interesting: the Ethernet histogram statistics.  This is what they look like -- defined in one of the existing MIBs -- a breakdown of the number of packets received based on the length of the frame that came in.  What's interesting is that this was set up to accommodate single-tagged frames (in fact it may include CRC bytes) or untagged frames; at the boundaries it gets trickier with Q-in-Q / double-VLAN-tagged packets.  Also, IEEE 802.3 has an interesting view of how to handle jumbo frames: I don't think they specifically disallow them, but the spec doesn't really want to talk about them or standardize them, whereas if you want to fill out this table properly you want to go up to about 9k.  There is a long and proud tradition of not standardizing the larger frame sizes.  In discussion with the IEEE, frames outside the standard sizes would be classified as drops; there is a 64-byte counter as well, but it's not included in this set because there is already a specific counter for frames of that size.  When we did the work with 802.3 -- I was involved in the early parts, though I didn't follow it through to the end -- they didn't really want to standardize these histograms, for various reasons.  The other issue is that, because there hasn't been a good standard that goes to these higher ranges, different vendors' hardware does different things: they choose different bucket sizes.  So you can't really usefully define bucket sizes and expect people to re-spin their hardware to support them; that doesn't make sense.  So there was a proposal that maybe we could standardize this in the IETF instead; the suggestion is to contact David Law, the chair of IEEE 802.3.  I'm not sure how they would take it, but it might be easiest for me to email him directly first.  This actually came out of the 802.3 YANG discussions, and the view at the time was that this might be a way of pragmatically getting around the rules.  The proposal here is that, rather than having strict bucket definitions, we could return a list of bucket entries, where each bucket entry defines the range it covers -- low and high end, inclusive -- and counts the packets that match that bucket range.  It is a bit more verbose in terms of the data model, but more flexible.  At the same time we would give recommendations in the description saying which bucket sizes we recommend, to encourage consistency.  So the question is: do I put these in now, or can it be deferred?  Do we try to do this during working group last call?  If we do want to try, I can speak to David Law and find out whether he's happy for us to do this, and I can mock up what it would look like.  But my question -- maybe a question to the chairs -- is whether this is the right time to even try.

Chairs: I don't know on this specific one; I wouldn't want it to delay this work from finishing.
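The flexible bucket-list proposal described above might be sketched in YANG roughly as follows.  This is a hypothetical illustration only -- the container and leaf names here are invented for this sketch and are not taken from the draft:

```yang
// Hypothetical sketch of the proposed flexible histogram buckets.
// All names below are illustrative, not from any published module.
container frame-size-histogram {
  config false;
  description
    "Received-frame size distribution, reported as a list of
     server-chosen buckets rather than fixed bucket boundaries.";
  list bucket {
    key "lower-bound";
    description
      "One histogram bucket.  The model's description could
       recommend bucket boundaries (e.g. the classic
       64/127/255/511/1023/1518 points) without mandating them.";
    leaf lower-bound {
      type uint32;
      units "octets";
      description "Inclusive low end of the frame-size range.";
    }
    leaf upper-bound {
      type uint32;
      units "octets";
      description "Inclusive high end of the frame-size range.";
    }
    leaf frames {
      type uint64;
      description "Count of frames whose size falls in this range.";
    }
  }
}
```

The trade-off debated below is exactly the one such a sketch exposes: the server gains flexibility to report whatever buckets its hardware counts, but a client loses any normative guarantee that a particular range (say 256-511) will be present.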
Tim Carey (Nokia): The question I'd really have is that those sizes were identified for various good reasons at the time, and there are tools and tests built around them.  What we're saying is that we'll provide some abstraction, the ability to do some meta-work so that the ranges define themselves, and then provide some guidelines.  The problem is that once you only have a guideline, you don't have anything standardized that I as a client and I as a server can really agree upon.  I can't guarantee that a given server implementing this would implement, say, packet sizes 256 through 511, so I can't rely on it; an operator may have to get involved.  That's my biggest concern with the abstract portion of this: you've lost the first-class-ness of the ranges that a client can depend upon.
Rob: That's a good point.  My aim is not to change these bucket sizes; I think everyone who implements these will implement at least these bucket sizes.
Tim: People implementing new things might not know that they need to implement those specific sizes.  It's the ones bigger than that that are the problem -- the ones that go up to 2k, 4k, and especially around the 1514/1518-byte bucket sizes, where you get more inconsistency.  So my question is: is there a place where we could put ranges such that, if implemented, they have normative standing?  Maybe it's an identity, maybe something of that nature.
Joel Jaeggli: I definitely see the existing values as a product of historical decisions made in the IEEE -- deciding you need something bigger than 1500 because you added a VLAN tag is a specific historical decision.  There are a lot of increments you could come up with above 1500 that have various kinds of historical meaning but are not specifically anchored to IEEE standards: 1519 to 1540, then 1540 to, say, 4470, then 9000, 9014, 9190, every Ethernet vendor's specification up to there, and then 16K because it's a round number.  So I see the likelihood of us getting a really good list above this -- one the IEEE would bless -- in a short period of time as pretty low.  From my vantage point, maybe we shouldn't do that between working group last call and, hopefully, IETF last call; that seems like a risky item to add while progressing this.
Rob: Just for clarity, you're saying it would be best to effectively shelve this?
Joel: Yes, I think this is good enough.  From my vantage point as a jumbo-frame user, either there are jumbos or there aren't -- but that's my network; I don't spend a lot of time distinguishing between, say, 9100-byte packets and 9005-byte packets, because I know what my MTU is set to.
Rob: For clarity, these counters -- the ones you see here -- are not defined in any YANG model today, so we don't have them at all.  The IEEE shied away from putting these in for historical reasons, because of where they would need to go up to.
Joel: But these sorts of boundaries exist; the boundaries we have here, particularly the last one, are very specific to historical precedent.  These seem uncontroversial because they're widely implemented and they're counters we already use, so doing them in YANG doesn't seem that weird.
Rob: Is that true?  I'd have to check whether the IEEE Ethernet management API (Clause 30) defines counters for these; I sort of suspect it doesn't, though it should have.  My point is that there's no reason for us to add these counters if we're going to stop at 1518 or 1519; that doesn't seem a good reason, because the IEEE chose not to standardize these counters -- they could have if they had wanted to.  If we want to do it, we probably want to at least fix it so it works for higher use cases too.  Even then, we could have hard-coded counters for these ranges plus the ability to define extra ones above, with a recommendation of just doubling the size each time.
<participant>: One thing that is different now is that people are starting to count jumbo frames more, and I don't see any jumbo frames in this whole discussion.  Between roughly 1500 and 1600 there are so many different numbers, sizes, and variations; and with jumbo frames -- the 9k ones, which is where people are really getting interested -- is it 9000, 9190, 9216, 16k?  I would say anything above 1600 counts as a jumbo frame.  But then what's the point of having histogram counters for everything below 1518/1519 and not above?  Is that information useful or not?  I can tell you from debugging that MTU sizes come in so many variants around 1500 that it is really annoying -- the thing doesn't work because the MTU is just a slightly different number -- but they are always below 1600, and then the next question is whether those are jumbo frames.
Vladimir Vassilev: To add to this discussion, maybe we should also add total packets-in and packets-out counters, because we don't have those: we have unicast and multicast packet counters, and for various reasons some hardware just doesn't implement the differentiation between unicast and multicast packets, so the IETF interfaces model is impossible to implement on those devices.  There are many such devices -- OpenFlow devices, for example, or wire-rate traffic generators -- which are supposed to be flexible devices that should have no problem implementing the IETF interfaces model.  And I am against adding Ethernet-specific counters in this draft; maybe another draft should add them, but this one should be kept as compact as possible.
Rob: On the question about total packet counts in and out: I could be mistaken, but I have a feeling that went into IEEE 802.3.2 -- I think they have totals for packets in and out.  So that counter should be either in ietf-interfaces or in 802.3.2; you're giving me a look that says that's not the case, but I'll double-check.  Any more questions or comments on that?
that I think we are at the end of these
time to ask the question so the
questions you want to ask effectively is
who do we try and do this now do we
defer is that the fair question you no
harm or Powhatan so who thinks we should
delay this and add these histogram
counters in now which could be
who thinks I should do this now adding
now adding the histogram counters that
we do that though I seen two hands go up
and down no one is holding their hands
up is so very very few for the minutes
whom how many think we should not now
it's not a big number but it's clearly
percentage-wise it's huge but so you
know I think the sentiment of the room
is to delay from my standpoint as chair
given the level of discussion we're
having what's that yeah that's the
suppose we you know as chair and
actually Sheppard's my view is the seems
like a little contentious that ad now if
it was important to add it we certainly
should but I don't think it's critical
yep specifically stating the opinion now
because I'd like the ear of anyone in
the working group disagrees with that so
I think we're going to leave this
meeting where we're not going to add the
histogram counters including any event
the ones each other so these are not
going to show up in the document so if
you disagree with that now is the time
All right. Can I ask one more question, which is: who thinks that we should try and standardize this — if we don't do it now — as a separate draft or separate work item? To try and standardize these counters.

Joe Clarke, Cisco: clarifying question — do you mean standardized at the upper levels, or just standardized on these buckets below 1520?

I mean these and the upper levels as well — the whole range.

OK. I think you should — sorry, I mean bigger frame sizes, higher numbers — yes, higher frame sizes. I think this should be standardized, because for performance measurement and troubleshooting across a service topology, having that information be uniform, and being able to calculate the histogram at the service level, is very helpful when you're trying to debug and troubleshoot for automation.

OK. So I propose that I try and email David Law and say we're thinking about doing this, what does he think — and then we'll go from there.
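For reference, the kind of frame-size histogram counters being deferred here could be modeled roughly as an augmentation of the ietf-interfaces statistics — a purely hypothetical sketch; the module name, leaf names, and bucket boundaries are all invented for illustration and are not text from any draft:

```yang
// Hypothetical sketch only -- not from the draft under discussion.
module example-frame-size-counters {
  yang-version 1.1;
  namespace "urn:example:frame-size-counters";
  prefix exfsc;

  import ietf-interfaces { prefix if; }
  import ietf-yang-types { prefix yang; }

  augment "/if:interfaces/if:interface/if:statistics" {
    container frame-size-histogram {
      config false;
      description
        "Received frames bucketed by size (illustrative buckets).";
      leaf in-frames-64        { type yang:counter64; }
      leaf in-frames-65-127    { type yang:counter64; }
      leaf in-frames-128-255   { type yang:counter64; }
      leaf in-frames-256-511   { type yang:counter64; }
      // ... further buckets up to 1518 octets and above ...
    }
  }
}
```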
OK — the MTU issue, which I hope will be easier, although MTU is always a contentious thing. There's a netmod thread titled "question regarding RFC 8344", and basically the premise of what came up there is that the Linux default loopback MTU is 65536 bytes, whereas all the MTUs in the IETF models are limited to uint16, i.e. 65535. Somebody pointed out that an IP layer can't have an MTU above 65535 anyway — but this model does define an l2-mtu, so it defines the MTU across those protocols, and these are currently defined as uint16. The question is: should we change this to uint32 instead, specifically to cover this Linux loopback MTU case? Any thoughts on that? Opinions?

Yeah, definitely — because it's not only the loopback interface; there are other interfaces that go above that. You can also set it to a megabyte, right? So this actually helps us solve that problem.

OK. So, assuming no one is against this, I'll just switch that to uint32. That's the last one.
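The change just agreed is trivial at the YANG level — roughly the following, where the leaf name and units are from my reading of the interface extensions draft and may differ in detail:

```yang
// Before: limited to 65535, too small for the Linux loopback
// default MTU of 65536 bytes.
leaf l2-mtu {
  type uint16;
  units "octets";
}

// After: the proposed type change discussed above.
leaf l2-mtu {
  type uint32;
  units "octets";
}
```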
So — any other comments on the interface extension model?

In that mail — it was actually my mail to the group — there was, in the end, another suggestion. In addition to the l2-mtu, I think it makes sense to have an mtu leaf which is the MTU definition actually used by Linux: when you look at the interface configuration on Linux, what you get is actually the MTU without the header altogether. That's very important — most people who use Linux are used to that value — so this has to be in the model, I think. And there is an advantage when you define protocol configuration based on ietf-interfaces: you want to know the packet payload you can use, so when you are configuring other protocols you can put a must statement there, limiting that protocol's MTU to this mtu. So it makes a lot of sense to have it there, and it's more elegant — just interfaces/interface/mtu, rather than l2-mtu. And especially the limitation of taking out the VLAN header in the description statement creates complications, because you have MACs that just don't care about it — whether there is a VLAN header or not, the MTU is the same register value. So if you standardize this in the YANG model it will be difficult to implement — you won't have the hardware refusing frames bigger than that size — and you have this difference: a VLAN header should not be a problem, it should go through, but you cannot do that with the existing hardware. That's another point I have about this. So: a general mtu that corresponds to the Linux definition of MTU is something that I propose.

Vladimir — can you remind me your last name? — Vassilev.
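The must-statement idea just described could be sketched roughly like this — hypothetical leaf names throughout, and the payload `mtu` leaf on the interface is itself the assumption being argued for, not an existing model:

```yang
// Hypothetical sketch: a protocol model constraining its configured
// MTU against an interface payload MTU.  All names are invented;
// "if-ext:mtu" stands in for the proposed payload-MTU leaf.
container example-protocol {
  leaf interface {
    type if:interface-ref;  // assumes: import ietf-interfaces { prefix if; }
  }
  leaf mtu {
    type uint32;
    units "octets";
    must '. <= /if:interfaces/if:interface'
       + '[if:name = current()/../interface]/if-ext:mtu' {
      error-message
        "Protocol MTU must not exceed the interface payload MTU.";
    }
  }
}
```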
I'm nervous about that, for several reasons. One is that I think losing the l2-mtu as a configuration value is probably a mistake — I think there are lots of systems that use that as the configurable value on an interface — so changing that to an L3 MTU, a payload MTU as you say, would probably be a poor choice. Then the question is: could you have both coexisting? You could probably allow either of the two to be configured, potentially, but I'm not sure how the constraints would work: if your constraint was against a payload-based MTU, but the user configured the l2-mtu, then it wouldn't necessarily work. I can certainly see how you can report both values in the operational state — to say this is the full-frame MTU, and this is what the payload MTU is. That would be more feasible.
Well, my argument is that the payload MTU is what all the RFCs in the IETF have been using up to now. So it is strange that we are not going to have that MTU as part of this; instead we are going to use l2-mtu, which you can actually derive from the type of the interface — if the interface is Ethernet, there is no doubt what the difference between the mtu and the l2-mtu is. So why are we going to bind ourselves to l2-mtu, when the interface type is not going to be used as an information source for that calculation? It is going to create confusion. On this mailing list there was a person trying to configure the MTU for the payload, and he was confused that he cannot do that. You are adding confusion — you are adding another MTU which means something different from everything else.
But again, it is back to what the hardware will police. Often the hardware framers and suchlike might police a single value, and that value for everything is the L2 frame size — not necessarily the IP/L3 size or the L2 payload. So that's one complexity. And in terms of the same discussions: historically, they did standardize the L3 MTU — they did that for L2VPN — and they've ended up in a world of pain because of it. In the L2VPN specs, the MTU negotiated across the wire is the payload MTU, but you don't know what size those headers are, so you can't easily agree on that value. It's very hard, when you have an L2 frame coming in, to say "well, this amount of it is headers" without having to analyze the packet — you don't know whether you have tags on there. So it's a very strange value to use. So I'm still nervous about moving something that's doing an L2 check to be using an L3 payload value as the configuration value. I do get the Linux thing — it's unusual; it might be that that's a mistake in Linux, for making that choice.
So I've always struggled with the term l2-mtu. I don't believe the term was used in the IEEE — the question is whether they use the same term. I believe IEEE says "maximum frame size", and distinguishes the maximum frame size from the IP MTU. Typically, the original definition of MTU — I know people are using it loosely, and I'm not convinced we should be perpetuating that — is the maximum size of the IP packet. That's the definition of MTU; that's my memory — it's the maximum IP size. And the use here in the document, saying "l2-mtu", is including the L2 headers, and I think going back to the IEEE terminology, "max frame size", may be helpful. I don't know — it seems it may be helpful to use that term; I can look at what's in 802.3. But I mean, MTU is very standardly used, and it doesn't mean different things in different places — it's pretty unambiguous what that is. I think you're moving away from that, and of course causing confusion for people. Whatever we decide, it should be either no worse than what we do now, or less confusing. I'd like to see if there is any RFC anywhere that says "l2-mtu" — the L2VPN ones might, but I doubt it. My observation,
as an operator — and maybe things have changed over the last couple of years where I haven't cared about interface configuration — is that the MTU thing is very confused. In the configuration languages it shows up, as far as I remember, and there are different interpretations between vendors; and there are vendors that have different interpretations regardless, depending on the line of operating system running on the hardware. And my observation is that the usual configuration, as I remember it, is kind of a rough estimate anyway, because I don't think I ever had an MTU configuration that was actually precise for limiting my MPLS label headers. So I would see two questions: which concept is the model to address; and — certainly tied to the previous question — should the concept being addressed here be a precise one, or more a mirroring of the existing situation, where we only give a rough number and let the ops people someday perhaps figure out that yes, there is one MPLS label too many.
There are different ways different people implement this. On some OSes, as you say, you give it an L3 or L2 payload MTU, and then they add on some slop and say anything between this is fine. There are others that have a value like this l2-calculated one, and they will check it strictly — any frames above that size, they'll drop at ingress. So there are different approaches, and as you say, different vendors do it different ways; different OSes from the same vendor can do it different ways.

Probably a precise layer-2 value is the one thing that is conceptually easy — and it's going to backfire when it runs out into the configuration languages and into ops.
OK. And it looks like there are two RFCs, both from basically the same set of authors, in the last year or so, in that space you talked about, that use the "layer 2 MTU" term. I think for something that's so general we should not introduce it here, but figure out whether the previous RFCs help — MTU or "maximum frame size"; I think that's probably the safer term. I'm not going to say that we're going to clean up the confusion, because I think it's there — but we won't make it worse. If nothing else, I think by using the term "layer 2 MTU" that's part of the problem. OK — so that was all the questions I had on that.

The sub-interfaces draft: this is also in working group last call. This one is much shorter in terms of what I've seen in review — support for publication, and no comments received. As I say, possibly that means it could be flawless; but otherwise, if you're interested, it'd be useful to have a review, even if you're happy with how it stands now.
And this parallels the working group last call process — I don't know when the last call was meant to finish, but we need more reviews. That said, as shepherd, I'm not going to be able to look at this until at least next week. Technically, all the comments are closed, but you still have some comments that need addressing before it will be ready to go, and there's going to be an IETF last call that people can submit comments to. So rather than be really strict about it, I'd say: if you have comments, it's not too late to send them. I will send out the formal "last call is closed" when I have time to start processing it, after the IETF. Hopefully you won't get comments after that, but whatever comment shows up, we should try to address it. And I'll say it again: there's an opportunity here, because we do expect at least one update before we move it forward. So we encourage anyone in the working group who has not recently read these documents to do so, and if you have comments, please send them. By the way, I did a grep on "max frame size" — it shows up in a lot of documents.
Next up: Balazs Lengyel from Ericsson, with a new edition of the YANG instance data draft.
Yes. So, the concept was that we have many use cases where we want to document instance data — so, not the models themselves, but the actual values, the integers and strings — and we want to document them offline, and potentially hand them out to customers or store them somewhere. These are just some of the use cases; I think there are seven or eight, some of them detailed in the document. It was decided that at least we need metadata about the instance data — when was it produced, what models is it documenting, some administrative data like a name, a description, and so on — and that we need at least two formats, XML and JSON.

So the draft was updated; it received quite a number of comments close to the last IETF. Earlier, the modules that define the content were called "target modules"; a number of people didn't like that, so now it's called the "content schema", and in the one or two cases where I have to refer to the individual modules, they are called "content defining YANG modules" — changed terminology, changed wherever it came up inside the draft. The YANG instance data set itself has a name, which most of the time is, I think, needed; some people insisted that they don't always want to have that, so it became optional. This draft is using yang-data — "YANG structure", as it's now called — and it was updated according to that draft, including YANG trees and all that. And there was a comment that entity tags and last-modified timestamps, which are quite useful in RESTCONF, are actually encoded in HTTP headers — and we can't use HTTP headers in these two formats — so now they are defined as metadata, and if they are used, they can be encoded as metadata.

Next: this is an example of an updated instance data set. A few things are missing, like the XML header, because they didn't fit on the slide; they are included in the examples in the draft, but not here. So here we have — yeah.
Concerning the previous slide, about this entity tag and last-modified: does it mean that if you have a content schema, this content schema has to have some modules that define these annotations?

No — the ietf-instance-data YANG module defines these annotations.

I can see, of course, a good use for many other annotations that are available, or that may be defined in the future, so I don't really see any need for this. Because, as you said, currently this entity tag and last-modified are used by the RESTCONF server as part of the HTTP headers. If somebody wants to define them as annotations, that's fine — but in that case the same could be done for any other annotation. So I would really suggest requiring, in this case, some module defining entity-tag and last-modified as annotations; then you can use them normally in the content schema and in the data.

I think they are quite useful bits of information, and just doing a separate module to define these two tags, when clearly we use them here — I don't see the reason to split them out. They won't harm anyone.
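The separate-module approach being suggested would look roughly like this, using the `md:annotation` extension from the YANG metadata mechanism — the module name and namespace here are invented for illustration, not an existing IETF module:

```yang
// Hypothetical sketch of a standalone annotation module.
module example-netconf-metadata {
  yang-version 1.1;
  namespace "urn:example:netconf-metadata";
  prefix exmd;

  import ietf-yang-metadata { prefix md; }
  import ietf-yang-types    { prefix yang; }

  md:annotation entity-tag {
    type string;
    description
      "RESTCONF-style entity tag for the annotated instance node.";
  }

  md:annotation last-modified {
    type yang:date-and-time;
    description
      "Time of last modification of the annotated instance node.";
  }
}
```

Such a module could then be listed in the content schema of any instance data set that wants to carry these two annotations.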
They are obviously useful, in my view. I didn't originally want to define them, because I thought RESTCONF handles that — but as we can't use the RESTCONF encoding solution... I like them, but it's very easy to remove them. I ask again: are there any other supporters of this formulation in the draft currently?

So, is there any reason to handle these two annotations in a specific way?

Because these are well-defined bits of data, and they are quite basic information for the content.

Sometimes they are, sometimes they aren't. I can argue the same way about the origin annotation, for example — it's actually mentioned.

The origin annotation, I think, is defined somewhere else, so I don't need to redefine it.

Can I ask — what are you saying? You think that the instance file should not have the timestamp, or the entity-tag and last-modified?

No. What I am saying is that we should treat these two — and other — annotations in the same way, and the proper, neutral way is to define a YANG module that defines these annotations and include it as part of the content schema for this instance data. That's done either inline or in any other way, but it's done in a normal, standard way, rather than mentioning them specifically in this document.

So if I understand, it would be an annotation, right?

Yes. So it's a question of whether it's defined in this module or a separate module.

What I'm saying is that I would define a module that defines these two metadata annotations — entity-tag and last-modified — as a separate module, and then I would include this module in every content schema for data where these two annotations are of any use. Of course, if I have instance data that has no use for last-modified, I would probably avoid having this module in the schema.

Do you see other uses for these annotations — specifically these two — besides ietf-instance-data?

Are you asking about other annotations?

For these two annotations — are you foreseeing any other use?

I don't know; they could possibly be used in NETCONF, for example.

For me it's easy to remove them; it's not critical for me as an author that we have these annotations in the YANG file. I am not willing, at this point, to start a new draft to do this.
Of course it's not critical for me either, but I read it as being sort of tunneling the metadata you get from the NETCONF protocol or the RESTCONF protocol — you were trying to add a parallel piece of information, one we don't have on the wire — which is quite different from where a lot of this is going, where it becomes part of the content. What was your intent: are you thinking of it more as content, or as metadata that is a parallel?

I think it's a parallel to what we have in RESTCONF. But I also agree with both of you that it's not a critical item.

My understanding was that these annotations could just be attached to any instance node in the data, right?

Right — I want to be liberal in what can be here, so if you want other annotations, defined wherever, yes, that's also possible, of course.

That was my understanding: I can have any YANG module be part of the content schema, so why not any other annotations.

As a contributor: I think a lot of what you're saying is that in RESTCONF the top-level node must have the etag and last-modified tags, and inner nodes may have them — so, to your point, any node may have them. But then, separately, there was the question of why this information needs to be in the instance file format — like, who would consume it, what's the use case behind it, who would want to be consuming this data. I think we should have that discussion on the list, and then we'll conclude whether or not it's important to define them now or later.
I don't know, actually, who could use it; I think it's useful information.

The use cases section doesn't cover these tags at this point, but my point is that we needn't really care about use cases for these two annotations: if somebody does have a use for them, they can define a module defining the annotations and just go ahead and use them. So that's fine — it shouldn't be an issue for this document, because it can accommodate any annotations.

Yeah, the question really is: is it reasonable to have them defined here, or do we leave it to some later point — if that ever happens. It's a very, very small issue. So: how many like the approach in the draft right now? Interesting — we see a few hands that were not raised before, so I'm really confused.

Rob: I'm not sure how important this data is, so from that point of view I don't really care that much either way. At the moment, it just needs to be one sentence in the draft saying that these are done the same way as in RESTCONF, which is fairly minimal text. I do get Lada's point — if we're going to do it, why don't we do it generally — but I still also see that you could have this line in the draft saying this is how they're done, and it could still be done generically for anything else.

Even this one — I'll say it's less than the first one; none of these are statistically significant numbers, but still, it seems like there's a very, very slight preference in the room to stay as it is. I would say let's keep it, but also ask again on the list if people have any comments on this. I'm just looking down to see if there's anything from Jürgen — he says, roughly: "I do not support the annotations, and I have doubts that exposing HTTP-internal metadata is terribly useful; the instance format is protocol agnostic."

OK — so to me he just said two things there. I don't think that, let's say, the last-modified tag is HTTP specific; that's datastore specific and data
specific. OK — so here is an example of how this would look. The most interesting part here is that we have some metadata — the name, revision, description of the instance data set — and then we have the specification of the content schema. Here we have an example where the content schema is specified inline: first, yang-library is used to say that the yang-library module format will be used to specify the content schema, and then the inline content schema really specifies that it's the ietf-netconf-acm module that we are handling. This slide got somewhat garbled — it was nicer originally. And this is the content data: that's the real thing that we wanted to communicate. I don't know what happened with the slide — anyway, can you skip back one more? Can you skip back? ...OK, never mind. Here we have this inline content schema definition. There is also the possibility to just put a reference to the content schema, if you don't want to repeat it — for example, in the case where you produce diagnostic state data every five seconds, you don't want the content schema repeated every time. And that's it.
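The shape of the example being described is roughly the following — a hand-written sketch, not the slide's actual content; the element names follow my reading of the instance data draft and may differ in detail:

```xml
<!-- Hypothetical sketch of an instance data set with an inline
     content schema; element names are approximations. -->
<instance-data-set
    xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-instance-data">
  <name>example-acm-defaults</name>
  <description>Example NACM settings shipped with the system</description>
  <content-schema>
    <!-- inline case: yang-library data naming ietf-netconf-acm,
         its revision, supported features, and deviations -->
  </content-schema>
  <content-data>
    <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
      <enable-nacm>true</enable-nacm>
    </nacm>
  </content-data>
</instance-data-set>
```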
And I think I would like to bring this to working group last call.

One question — a non-critical question, along the lines of what we agreed with Lada on in the earlier part — regarding this inline specification: you are now using the new yang-library, but I think the idea is that you do not need to define datastores and things like that; you only really need the module set.

Yes.

So does it mean that you can use other parts of the yang-library here, or is it really restricted to this module set?

You can use the yang-library in the content part, and here you could have parts for deviations and for features. I didn't actually put in text saying that you should not have, I don't know, datastore lists — but they are kind of meaningless. Although datastores can be meaningful, if you specify which datastore you are taking the data from, in that part of the yang-library.

I think one question is whether we really need this flexibility of specifying different yang-library revisions and all the different ways of doing that, because possibly we could use the old yang-library everywhere, and that would probably do for this purpose.

I would also agree with you — except that in the last round I got explicit statements that you must use the new yang-library, and even that it should be possible to use something else besides the yang-library. So I was specifically asked for this flexibility.
One more thing that's not visible in this example: here in the inline spec you might have other modules. One thing that immediately comes to mind is the YANG versioning work, which augments the yang-library with a version label — that could be useful here. But I was asked to put in the flexibility, and it doesn't disturb —

It seems to add a lot of complexity, as you just said.

In my opinion, I am not sure it's that much complexity. I agree the complexity is a bit too much, but I was explicitly asked on the mailing list — here again, and maybe by some others — that we should have this flexibility.

Then reply to it, raising the issue — so we get a clear "yes, I want it" or "no, I don't want it".
Rob Wilton, Cisco: so I think three choices is what I'd prefer. One is a very simple one, which is just: this is the list of the modules and their revisions, and that defines the schema — without even needing the inline content schema, just that list. That's one choice. The second one is what you've done here, where you specify the schema in that way. And the third one is a remote schema. I just see that if you want to return the data for one or two YANG modules, listing them inline, having less metadata boilerplate at the top of the file is probably beneficial, because you tend to read these.

I don't agree with your first method, sorry. Because — even in the simple case — you need to have a place for features, for supported features; you need to have a place for deviations; and you also need to at least specify which version of the yang-library you are using. You say you just want the list, but you need to say what defines the format of that list. There are two ways: we already have one in yang-library — why define a new one — but then you might have multiple versions of that, and you need to handle them.

So I'm not saying take away what you have here; I'm saying add another, third option that's a simpler version of it — one that doesn't worry about deviations, doesn't worry about features; it's just the data that you're uploading. You could potentially just have the features enabled by default.
I think deviations and features are very — I think they are needed.

You could still express them using this format; my question really is, for some files, if you have this boilerplate at the top of every file, is that a good thing or not? Is it actually just noise? For the vast majority of cases, if you're just returning a couple of modules, you're doing it in quite a verbose way. You might gain, I don't know, five lines — but with the complexity that, if you add some more modules, and then deviations or features, you have to come back to this one anyway.

So yeah, you could add more; I don't think it's needed. Just for the record, I also think the new yang-library is overly complex for the purpose of creating an instance data file — and, as you said, you were told on the list to use it. I think no one actually thinks that an instance data file should contain multiple datastores; I would rather have multiple files containing multiple datastores, so it's more atomic and not overly complex. There was a discussion against having it done this way, and more people should have contributed to keeping the simple, single-datastore way. Now we have a new yang-library which is obsoleting the simple single-datastore one, so it's very difficult: if we are going to use the old one to make new RFCs, it is going to create even more confusion.
So I regret not having more support when there was a discussion that we should keep the same yang-library and use different mechanisms to achieve the goal. Then it was only me — and maybe Andy Bierman — opposing, and everyone else was agreeing; so now we just have to use the new yang-library, I guess.

While I agree I would have loved a simpler solution, I don't think these five lines are worth discussing. One question on what Rob said previously: in my opinion, if Rob writes a module with an IETF YANG module list, and then inserts this module into this inline spec, then the simplest of modules, without deviations, could possibly be used, given this flexibility here — right?

You still need at least the revision. Yeah, we could add the third method, if that's —

So yes — I've actually written such a draft; it's called ietf-yang-packages.

We could add the fourth method, because there's always the method where the receiver knows what the content schema is, due to some offline method or some implementation-specific method — which I think will be quite common as well. But if you want, we can add the fourth method.

As chair, I'd ask that we get to some decision on these, because none of them are critical and we are going back and forth — please help me get there.
Well, welcome everyone — I'm here to discuss the factory default setting draft. Next slide.

So, what is this draft about? The draft defines a new RPC — we call it the factory-reset RPC — and we also introduce a new factory-default datastore, which is a read-only datastore. The typical use cases: we use this factory default setting in the zero-touch provisioning stage; also, in some cases, if you hit an error during provisioning, you can leverage the factory-reset RPC to reset the device to the factory default state.
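The two constructs just described — the RPC and the read-only datastore — would look roughly like this in YANG; this is a sketch based on the description above, with invented module and namespace names, not the draft's actual text:

```yang
// Hypothetical sketch of the factory-reset RPC and the
// factory-default datastore identity described above.
module example-factory-default {
  yang-version 1.1;
  namespace "urn:example:factory-default";
  prefix exfd;

  import ietf-datastores { prefix ds; }

  identity factory-default {
    base ds:datastore;
    description
      "Read-only datastore holding the factory default configuration.";
  }

  rpc factory-reset {
    description
      "Reset the server's configuration to the factory default state.";
  }
}
```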
As for the current status: we have two open issues for this draft, and we have resolved several issues. The changes we made in this draft: the first was about terminology — the draft used "YANG server", and people had some concerns, so we now try to reuse the existing terminology, like "server" as defined in the NMDA architecture. The second major change is about how this factory reset applies to the datastores. We could apply it to all the datastores, but people had some concerns — maybe we should take out candidate — so we added some text to clarify. Since the call for adoption we also raised some issues, and in this version there were two issues we tried to resolve: one is the security issue, where we made some proposals and will discuss it as an open issue; and the other is
so as the open issue and otherwise
the copy config actually this is we
actually extend is a copy config
operation to support security for
setting but is not you know a beautiful
set in specific so we so the result is
we remove these copy config actually so
so this just to reflect the discussion
on the many needs that we already know
remove these copy configure and as I
mention actually diseases and not the
factory before the specific is so so
also we actually release the sev several
MDA protocol like American for MT and
the rest come dear support actually it
doesn't define the kaabah config like
RPC actually so but it will be useful to
have geta configured because we have
faculty for data stall so it's it would
be useful to to have the configure to to
to allow get a configure to you know get
it access to these data store so I
sounded the the many nice discussing we
actually remove the copy configure
accession from the module in the job
actually we defer these to the context
that your choice
The second issue is the security issue. The factory-reset RPC mainly focuses on resetting the device to the factory default state, but it would also be useful to use this RPC to clean out the files created by users or by some of the software processes, or to set the security passwords or data back to their default values. But all of this information may be sensitive, so we need to address that. There was also relevant discussion that came up in relation to the keystore draft — we think the factory-default datastore could be useful for the keystore draft — and we tried to resolve these concerns. The proposal: we proposed some text saying we can use some encryption or signing mechanism; also, talking with our co-authors, we think maybe we should rely on the access control rules to protect the sensitive data. That's what we have — but we are not security experts, and we want to hear if there is any additional input on these.

Joe Clarke, Cisco: any input on any of these bullets, or just the last one?

Just the last one.
Joe Clarke: okay, I don't have input on the last one, but I really don't like bullet number two the way it's defined in the draft. My device could reboot if I send this RPC, as a side effect. I would rather these things be more atomic: I use one RPC to reset the configuration, and then I might send another RPC to reboot the device. Okay, let me restate: I would rather there be more definition around this, so that I know what is necessarily going to happen. It might be that I only want to factory-reset <startup> and then send an RPC to reboot the device, as an example.
Balázs Lengyel (Ericsson), a co-author: about the last point, that the factory default might contain security data: I think that's actually not a question about the factory-default datastore, because the same data will be available in <running> after the reset. It's the responsibility of the data model to somehow protect the security-critical items, and that should be the same in <running> and in the factory-default datastore, so I don't see why this is a problem specific to this draft.
Kent Watsen, as a contributor and author of the keystore draft, where this is being discussed: the actual data lives in <operational>, but it would need to be promoted to configuration in order to be referenced by configuration. Since it's shipped from manufacturing, it would be ideal for it to be in the factory-default datastore, or perhaps <startup>, but the problem with <startup> is that it could be deleted thereafter. It could be a choice. Either way, it would be a convenience; it's not a security issue, though, because if the data is hidden it's hidden, and if it's encrypted it's encrypted. So it's not a security issue per se.
Tim Carey (Nokia):
Two points. One: I kind of agree with the last speaker; I don't understand the security issue with the factory default, because I would understand that when we reset something, the factory-default information is going to be used to populate <startup>; that's effectively what's happening. The other point is about the options. There are other protocols where we've done this: we've done factory resets for CPEs for going on twenty years now, in a standard way, and we allow for the other things you're talking about, cleaning up files or restarting, simply by making them options that go into the RPC. You just say: hey, I'm going to do a factory reset, and by the way, restart this thing when you're done.
A participant, to answer Tim's question of why this is a security issue: I think something a little different here is that this has to be done completely remotely. Many of the factory-reset options that come on equipment have to be done locally; you can't do them over your network management interface. There are some systems that do allow it over network management, but there are some that don't, and there are definitely security implications in allowing remote access to reset a network device.
Tim Carey: sure, but even with remotely resetting a network device to factory defaults, again, we've done this for billions of devices, so it's not like this is new. The information that's put in the manufacturing store is then...
Chair: sorry; yes, Dean Bogdanovich.
Dean Bogdanovich: TPM and trusted computing are solving those problems, and I know many hardware vendors are putting TPM modules inside their hardware, so that helps solve some of those problems.
Kent Watsen, as a contributor: again, in the keystore draft discussion, we are indeed expecting that TPMs will be used to protect the keys shipped from manufacturing; that's exactly what was discussed in this morning's presentation.
Rob Wilton (Cisco):
Just to add: if there are security concerns with the RPC, surely NACM gives you access control over who may invoke a factory reset, so I don't see that this is anything particularly different.
Presenter: I don't have a response to that; I just have another addition from the mic queue behind you.
From the mic queue: I agree, and the system-restart RPC in the ietf-system YANG module should be augmented if it's to receive the new factory-reset steps.
Presenter: system restart is something we already discussed; I think the properties are different, and I'm not sure whether we should augment system-restart.
A speaker from Cisco: I liked what Tim said; that made a lot of sense to me. You just have an option that says "also wipe files" and another option that says "restart the device". That seems fairly easy to define, and if you don't specify them, the device doesn't do them.
Chair: okay, I think that resolves the main issues, and we will probably ask to move this to the next step.
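Tim's "options in the RPC" suggestion can be sketched as follows. This is purely illustrative: the parameter names ("wipe-files", "restart") are invented for the example and are not defined in the draft.

```python
# Hypothetical sketch of the option style discussed above: extra behaviors
# are explicit opt-in flags in the RPC input, so nothing happens implicitly.
# The RPC and parameter names are illustrative, not from the draft.
def build_factory_reset(wipe_files: bool = False,
                        restart: bool = False) -> dict:
    """Assemble a factory-reset RPC input with explicit opt-in options."""
    rpc = {"rpc": "factory-reset", "input": {}}
    if wipe_files:
        rpc["input"]["wipe-files"] = True  # also remove created files/logs
    if restart:
        rpc["input"]["restart"] = True     # reboot once the reset completes
    return rpc

# With no options, the reset only restores configuration.
print(build_factory_reset())
print(build_factory_reset(wipe_files=True, restart=True))
```

This matches Joe Clarke's atomicity concern as well: a client that wants a reboot asks for it explicitly rather than getting it as a side effect.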
Chair: Alex, can you hear me?
Alex Clemm (remote): yes, hello.
Chair: you sound great.
Alex: okay. Just a quick update on the NMDA datastore comparison draft. As a reminder, the centerpiece of this draft is an RPC to compare NMDA datastores. The idea is that this provides a tool to report all the differences between datastores without needing to upload the entire contents. The primary applications are troubleshooting conditions that are due to unexpected failures, sync issues between datastores, lag in change propagation, and so forth. We posted a few new revisions; the current revision is -02. The main changes that were applied: on one hand, the YANG Patch format used to report differences was updated to add a new item, "source-value", that shows the values on both sides of the comparison. Per the comment that was made earlier: previously we had a source and a target, and the comparison was expressed in terms of a patch that would be applied to the source to reach the target, which does not tell you the value on both sides. For instance, when a value is replaced, you would only know the value in one of the datastores; you would know that it differs in the other one, but not what it is there. That is what has been added, and it is probably the most important change. In addition, we added a more extended example that shows the results of a comparison, and we also updated and extended the security considerations.
Next slide, please. This is a snippet of the new portion of the YANG data model that carries the differences format. What you see here is that YANG Patch is augmented to include a new "source-value", an anydata value that indicates the value of the source data item that is being replaced. It applies whenever you are deleting something from the source (the left side of the comparison), or merging, moving, replacing, or removing it. It is obviously not there for a create, because in that case the value did not exist before. Next slide.
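The behavior just described can be sketched in a few lines. This is an illustration of the idea, not the draft's encoding: the dict keys only mimic the ietf-yang-patch edit structure, with the "source-value" member added as described above.

```python
# Sketch of a comparison that emits YANG-Patch-style edits augmented with
# "source-value", so the client sees both sides of a change. Illustrative
# only: real YANG Patch edits are structured YANG data, not flat dicts.
def compare(source: dict, target: dict) -> list:
    edits = []
    for path in sorted(source.keys() | target.keys()):
        if path not in target:
            # Present only in the source: a delete, old value attached.
            edits.append({"operation": "delete", "target": path,
                          "source-value": source[path]})
        elif path not in source:
            # Present only in the target: a create; no source value exists.
            edits.append({"operation": "create", "target": path,
                          "value": target[path]})
        elif source[path] != target[path]:
            # Changed: report both sides, not just the target's value.
            edits.append({"operation": "replace", "target": path,
                          "value": target[path],
                          "source-value": source[path]})
    return edits

running = {"/interfaces/eth0/mtu": 1500, "/interfaces/eth0/enabled": True}
operational = {"/interfaces/eth0/mtu": 9000}
for edit in compare(running, operational):
    print(edit)
```

Without "source-value", the replace edit above would carry only the target's 9000; the client would know the MTU differs but not that the source held 1500.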
And this already brings me to the last slide, the discussion items: a couple of items we need to confirm and discuss with the group. The first concerns the patch format that has been newly proposed here. We do believe this augmentation addresses the requirement to show the values on both sides of the comparison. The earlier request was also that we should not allow for different formats; we should agree and settle on one, to help interoperability. So the question is whether this is the format we should go forward with. Other formats are possible, but we just need to settle on one; the question to the room is really whether there is, for instance, another proposal for a canonical diff format. If there are other formats, we would like to hear about them so we can make the choice; otherwise the proposal is what has been presented. That's the first aspect. The second discussion item
concerns the origin metadata of the data items. The idea is that if the operational datastore is used as a comparison target, it would be useful to indicate what the origin of the data is: for instance, whether it came from <intended> or from the system. That origin might offer explanations for why data is different, so it could be useful in troubleshooting. One question was whether it should always be included, or whether we should add a knob controlling whether or not to actually include it or to omit it. Currently we do not have such a knob; the more knobs you add, the more complexity you add, so the opinion here is that it is probably best just to include it by default. But again, that is something we should confirm here. Also, Andy, who I think is online, raised another issue we wanted to discuss with regard to the origin metadata, concerning the format; I'll let him speak for himself, but it is related to this same item.
The third item is one that has been open for a while and is also listed in the draft. Currently the comparison filter is defined using subtree and XPath, as per NETCONF, and the question is whether there would be a requirement to also allow the definition of filters relating to target resources, as per RESTCONF. Then the final item is one that was just brought up recently by Tim on the list,
concerning potentially adding a performance considerations section. The performance considerations are in a way implied in the security considerations: the concern is that there is potentially a hit on the system that is doing the comparison, and the request is to add a section which makes this more explicit. We can add that, but it has not made it into the current revision yet. That concludes what I have, so perhaps we can go through these items and get opinions in the room. Can you hear me properly?
Chair: I was trying to interrupt while you were presenting so we could ask the room case by case, but we'll run through the items one at a time now; we may have to ask you to repeat them. By the way, your voice was very low earlier.
Rob Wilton (Cisco): I have one other question that's not related to these at all, so maybe I could raise that one first?
Chair: go ahead.
Rob Wilton: so, Alex, looking at the draft, I'm still not entirely sure the diff is doing quite what I would look for. You have an "all" option that can be turned on or off. If the all option is off, you compare nodes that exist in both datastores, say <intended> and <operational>; is that right? Only where a node exists in both do you compare the values?
Alex: yes.
Rob: and if the all option is on, it says you would do a diff of the full contents of both datastores. I think that would mean that for <operational> you get all the operational state back; or would you still apply a filter, so that only config true items would come back?
Alex: no, no. The idea, of course, is first of all that you would only report the differences back.
Rob: yes, but if you compare <running> or <intended> to <operational>, all the config false data in <operational> is guaranteed to be different from <running>, because it doesn't exist in <running>.
All your statistics, for example, will never, ever be in <running>. Would you automatically exclude that?
Alex: no, we would not automatically exclude that. It is basically a question of what you request as a user; it is essentially up to the client to specify what it wants to have compared.
Rob: then I think that all option, if you compare any configuration datastore to <operational>, is going to give you back a lot more data than you're interested in, because it will give you back all the operational data as well as the applied configuration. I would be interested in an option that effectively compares the config true nodes in <operational> against what's in <running> or <intended>, but includes nodes that exist in one or the other. I want to see configuration items that are in <running> but haven't yet been applied, maybe because the line card is missing; and conversely, I'd also like to see configuration that I've removed from <running> but that still persists on the system due to some issue or error, say a BGP peer configuration that for some reason hasn't been deleted. So I'd like an option that filters the config true part of <operational> against what's in <intended> or <running>, if possible.
Alex: okay, so maybe what you're saying is that we need something between these two options, because right now you either include everything, or you exclude data from the comparison that does not pertain to both; you would want it restricted a little bit further, something in between.
Rob: yes. And I'd also question how useful the all option will be when comparing <operational> to a configuration datastore; given the amount of data that will come back, by and large that's probably not what's being looked for.
Alex: well, you also have a filter spec that you specify. When we say that all differences are returned, that would only be the case if your filter spec is empty, or if you're asking to compare the entire tree, which in general might not be the case. Of course, if you do that, then it's true that everything will come back, but typically you would have a filter spec as well.
Rob: but consider, for example, checking the configuration for one interface. You might have three or four lines of interface configuration, and in <operational> you'd have that plus hundreds of counters and other operational data, which would automatically always be returned, because they will never, ever be in <intended> or <running>, so they'll always be reported as a difference if you specified the all option. My question is: when is that useful? When does anyone actually need that data to be returned?
Alex: I cannot give you an answer right now; it's clearly something that we could control. So I think you're asking: should we remove this all option and just include another option instead?
Rob: yes, that's my question.
Alex: okay. I don't have a strong opinion on the all option right now. I do think that definitely having the option to exclude this data, as we have here, is important. As for wanting everything: I don't have a use case for that at the top of my head, so this is maybe a question to ask Rob Shakir.
Rob Shakir (Google): the only data point I can add is that we didn't implement the all option when we implemented this. We didn't implement this draft per se; we've had a service doing this since 2015, I think, and we don't have an all option.
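The option Rob Wilton asks for could look roughly like this. It is a sketch under the assumption that the server knows which paths are config true; nothing like it is defined in the draft.

```python
# Sketch of the comparison Rob describes (not defined in the draft):
# compare intended against only the config-true subset of operational, so
# config-false data (counters, statistics) never shows up as noise, while
# nodes present on only one side are still reported.
def diff_config_true(intended: dict, operational: dict,
                     config_true: set) -> dict:
    applied = {p: v for p, v in operational.items() if p in config_true}
    return {
        # In intended but not (yet) applied, e.g. a missing line card.
        "not-applied": {p: intended[p] for p in intended
                        if p not in applied},
        # Applied on the system but removed from intended, e.g. a BGP
        # peer whose deletion failed.
        "lingering": {p: applied[p] for p in applied if p not in intended},
        # Present on both sides with different values.
        "differs": {p: (intended[p], applied[p]) for p in intended
                    if p in applied and intended[p] != applied[p]},
    }

intended = {"/bgp/peer[1]": "10.0.0.1", "/line-card/1/port/1": "up"}
operational = {"/bgp/peer[2]": "10.0.0.2",   # removed from intended,
               "/line-card/1/port/1": "up",  # but still on the system
               "/stats/pkts": 123456}        # config false: ignored
config_true = {"/bgp/peer[1]", "/bgp/peer[2]", "/line-card/1/port/1"}
print(diff_config_true(intended, operational, config_true))
```

With this shape, the all-or-nothing choice in the draft becomes a third, middle setting: config true on both sides, union of paths.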
Chair: okay, so that drained the mic queue; we'll go back to going over these points and asking the room for opinions. First was the patch format. I think we already had an agreement that there should only be one format returned for diffs; the question is what that format should be. The current proposal is an augmentation to the YANG Patch format, and the question is whether that is sufficient, or whether there should be another format, in particular something that might be called a "YANG diff" format: instead of augmenting YANG Patch, maybe we should have a format that is specifically customized for returning diffs. Did I capture that correctly, Alex?
Alex: yes, this is correct, but bear in mind that no separate diff format has been proposed yet. We do think it can be done using the augmentation of the patch format, but we could of course define one; that is correct.
Rob Wilton: in this case I wouldn't define a diff format as part of this work; if that's to be done, I'd do it as a separate RFC. So I would say do the patch format now, but hopefully designed in such a way that it could be extended to cover a different format in the future, if that's feasible.
Chair: what I was saying before is that at the last meeting we concluded that there should only be one format; what you're saying is, let's reopen that and possibly support multiple formats.
Rob: I don't think so. I think we're going to choose one format, and we should choose the one we already have rather than define a new one. If we already concluded we only want one format, then use YANG Patch; that seems fine.
Chair: any objections? Does anyone else have a comment about this? Speaking as a contributor myself, I do worry just a little about augmenting YANG Patch; the patch format just seems a little bit awkward to me for this. I do think this is an important feature, one that we will be living with for a long time, and it would behoove us to ensure we pick a format that is ideal for this purpose. In particular, imagine us needing to extend the format in the future, or YANG Patch changing in the future, and the two being coupled in such a nasty way that we don't get to separate them.
Rob Wilton: I also don't mind if you want to define a diff format; again, I still wouldn't define it in this document. It's something that's better defined generically and referenced from this draft. If the proposal is to use a YANG diff instead of YANG Patch, that's also fine with me, but it's going to slow down this work by whatever time it takes to define a YANG diff format.
Chair: okay, so I guess that's the question; well, I guess I have three questions. One question is whether to revert on the earlier decision: allow the augmented YANG Patch now, and then maybe something different later. Let's start with that, because if we are able to support and return multiple diff formats, then we can almost let go of the remaining questions. We're going to ask two questions: first who supports, and then who does not. For those who support the idea of returning only a single format, with no option for the client to specify the format, please raise your hand.
Co-chair: I think the question you are asking is who supports restricting the return to a single format; so, who would like only a single format?
Chair: right, and then the next question is who wants to allow multiple. Okay: for a single format, please raise your hand. There are very few; thank you. And who supports multiple formats? Also a few, but statistically a lot more. Okay, then we have effectively reversed the decision we had from last time. If we allow for multiple formats, then my objection to moving forward with this format now is obviated; I no longer worry about it, because I know we can fix it later. So I think we don't need to ask any more questions; we can move forward with this format.
Rob Wilton: just to clarify, I support multiple formats, but restricted to a small set of them, probably limited to two.
Chair: sure; new formats would presumably go through the working group, and the working group would only adopt the work if it was reasonable, so it is self-limiting.
Alex: well, one question is how we would do that practically. Earlier we had a flag that allowed specifying the format, or the preference for a format; if you're saying we need to allow for future formats, I'm not sure how we could say it's only this one, or one other format which has not been defined yet.
Chair: okay, great; moving on to the second bullet point. Should a parameter be included to control whether or not to include origin metadata when <operational> is the comparison target? I know we all have to remember everything that Alex said before, but hopefully that's clear. If you believe that a parameter should be included to control whether or not the origin metadata is returned, please raise your hand. There are very few. Then the alternative is that there is no parameter. Alex, help me here: if no parameter is specified, is the metadata returned or not returned?
Alex: it would just be returned by default.
Chair: returned by default, okay. Actually, in case that wasn't clear, let me restart: if you think there should be no parameter and origin metadata should be returned by default, please raise your hand. Okay, there's no one. Sorry, go ahead at the mic.
Rob Wilton (Cisco): not all devices will necessarily support origin metadata, so the question really is whether it's better to have that as an input parameter, such that the request fails if the device can't support it, or whether you just don't return it if you don't have it. That's why I prefer having the parameter: because then at least I know, as a client, whether or not I'm going to get this data.
Chair: I guess that's a good reason; that's a good point. So this would be in favor of having a parameter to control it, in which case one would expect it, and if it's not supported then the request will be denied. Alex, you're agreeing with Rob?
Alex: yes, I think I am agreeing with Rob.
Chair: and as a contributor, I agree. Okay, so: third bullet point.
The filter is defined using subtree and XPath as per NETCONF; is there a requirement for the definition of filters related to target resources, per RESTCONF? Alex, can you walk us through this bullet point again?
Alex: well, this is something that has been in the draft for a while. The comparison filter, the filter spec, is what we include as part of the request to say which parts of the datastore to include, and this one is defined in a NETCONF-ish way, using subtree and XPath. The issue that was brought up in the past is: what about allowing a RESTCONF-ish way of defining the filters, a different format for the filter spec? If you look at the RPC, this parameter is defined as essentially a subtree filter or an XPath filter; those are the choices you have.
Chair: so, to be clear, you're saying that under the current definitions either a subtree or an XPath filter can be passed, and the question is whether or not we should extend this to support a third, which would mimic RESTCONF-like semantics?
Alex: right, that was the issue, yes.
Chair: okay. Now, when I think about RESTCONF filters, I think of the query parameters; "fields", for instance, would be one of them. Is that what we're thinking of?
Alex: on this particular item I'm not exactly sure; this was brought up in the past. When we defined it, we actually thought that the filter specs we have would be sufficient for what we need to accomplish. So from that perspective, as a contributor if you will, I don't see the need for it, but it was brought up by the group before, and we have listed it as an item in the draft, so we need to raise the question. If people think we need a different, RESTCONF-ish format, we should discuss it.
Chair: the current definition is an RPC, not an action, correct?
Alex: correct, it's an RPC.
Chair: the way RESTCONF works, for the most part, is that you specify the URL of the node that you wish to operate on, the resource you're operating on, which in NETCONF parlance would be more like an action as opposed to an RPC. To the extent that this is an RPC, in RESTCONF terms it would be under /operations at a global level, and hence the RESTCONF query parameters would not make sense in that context; you would have to have something like what you're suggesting, subtree or XPath filters. I don't believe the option exists; however, if we wanted to support actions instead of an RPC, then we could have that conversation.
Chair: does anyone else have an opinion about this? We're not going to poll the room on this; it's just a discussion point. Anyone want to comment?
Alex: all right, I think we should probably take this one to the list; it's pretty complicated, but some examples would help. Or, I guess, if nobody comes forward with a reason why subtree and XPath would not be sufficient, then we can probably just close this issue.
Chair: sure. And actually we might also raise: why support both? Should we support both subtree and XPath?
Alex: okay, that's a second question, I guess.
Chair: yes, because in NETCONF, subtree is mandatory to implement and XPath is not. Of course, if you're a RESTCONF server and you're not implementing NETCONF, that might be unfortunate; some may not want it.
Chair: okay, lastly: the question is, do we add a performance considerations section? This is Tim's item; Tim, do you want to walk us through exactly what you raised on the list? You're already approaching the mic.
Tim Carey (Nokia): when we read the draft, there were some concerns in terms of an implementation, because we have some very constrained servers. If I'm given a request to do a diff on some datastores where I don't have the compute resources to return the information being requested, the question that came back is: what do we do? What is the appropriate response we should give? Should we curtail it and just provide what we have? Should we refuse, or whatever it's supposed to be? We'd certainly like to have that response documented: what happens if we can't fulfill the request. Alex suggested that some of this is already covered in the security section, but in reality we probably do need some type of performance piece that says: look, if you can't fulfill the request, you should do this type of thing, for these types of behaviors that you can't fulfill.
Alex: there was an underlying question on this as well. One thing is, obviously, if you cannot fulfill the request you can always just decline it, because you can just deny it. But I guess the other underlying question is: do you want some kind of throttling operation, so that you can have only so many requests per time unit, or what have you? Is that something you would want?
Tim: we weren't worried so much about metering or the throttling aspect of it; someone else might be. We were just saying: look, we might not necessarily be able to meet the request coming in. What we wanted was for the RFC to specify the behavior specifically, so that people implementing this will know what to do.
Chair: sure. I think what you're asking for is mostly an editorial item, so let's just add it; if it clarifies things, why not do it. I would suggest we just add it in the next revision. According to my clock we almost have time, but Lou says we're out of time.
A participant: I don't fully understand why you wouldn't be able to fulfill the request, so let's take a second on the mailing list.
Chair: yes. And actually, Lou is incorrect: there are beverages and snacks in the break, and we'll see you in the room next door in twenty minutes. All right; thanks, Alex. Everyone who is remote will need to...

From: https://www.youtube.com/watch?v=9k7qggWAS5o

Yep, so we're going to work through the administrative details here. This is a continuation of the immediately previous meeting, which occurred in the adjacent room, so our agenda still holds as planned. I would observe that the Note Well still applies to this meeting, just as it did to the previous half; keep that in mind, as this is very much an IETF activity. Blue sheets are going around. It is not actually obligatory that you sign them both times, since we're having the same meeting, but if you didn't sign them previously, by all means do so, so that we have an accounting of the fact that you are in the room; these are different blue sheets.
Okay, administrative details: we do have a Meetecho session for this; I was scribing in there as to who was speaking. We have the same Etherpad as previously, so we can pretty much pick up where we left off. All right, agenda-wise, we are going to start in with the design team, which is actually about half of our agenda for this session, and first up is Joe Clarke with the versioning design team update.
Joe Clarke: all right, it will actually be three of us presenting today; I will be your first host. Our agenda today will be five minutes, hopefully less, for me, followed by, I'm going to guess, the bulk of the time with Rob, and then Balázs will round us out and Rob will take us through next steps. Next, please. The first thing we want to talk about is the versioning requirements changes, and the reason this is only five minutes is because it is going to be fairly short. Essentially, last time we agreed that we would adopt this document. It was still unclear what the fate of this document will be: whether it will be taken to RFC, or whether it will just be something the working group agrees to, namely that these are the requirements we're trying to solve, and instead we will adopt solution documents, work through those solutions as a working group, and take those to RFC. But we did adopt it, and there has been one change since transitioning it over to an IETF netmod document: a change to the text of requirement 1.4, based on feedback we got directly at the last meeting. Next, please.
that change was the original 1.4 said
essentially that non backwards
compatible changes must be allowed there
was some consternation over why must
they be allowed instead what they were
what we really wanted to focus on was
win non backwards compatible changes
occur those changes need to be signaled
or documented so that one can see that
between two revisions of a particular
yang module that
backwards-compatible changes have
occurred so what you can see down there
under the new heading is the new text we
arrived at this was posted to the
mailing list a little bit of while a
little while ago we haven't heard any
additional comments on this so at this
point the design team feels that the
requirements draft the ITF version at
revision 0-1 is final these are the
requirements that we have been working
towards in terms of a solution at this
point questions all right as to what the
final resting place of this document is, we of course leave that to the chairs and the working group. The design team feels, as I said earlier, that this document will not progress to an RFC; it will just be there to serve as the guidelines for the requirements that inform the solution or solutions that come forward.

[Comment] This is actually about the next document, but it may be pertinent to this question: why not combine the two? You'd have a solution overview document that repeats a whole bunch of requirements in it. If you really believe that we need requirements somewhere, let's keep them in one place rather than two, because the requirements, even if you don't think they need to be a ratified document, shouldn't be maintained twice. So it would be great to have just one document, and maybe what we end up with as a working group is a requirements-and-solution-overview, or requirements-and-framework, document, and that's how we end up publishing it: we fold what's now an individual document into the working group document and let that go forward, and maybe all the historic stuff goes in an appendix, just for context.

[Joe] Okay. So I do feel that documenting the requirements is important, not necessarily as a ratified document, but I get your point, and maybe that's what will become of the framework, the next document that will be presented.
Any other questions or comments?

[Comment] Once requirements are set in stone, even if they're in a document, we shouldn't spend our time reviewing them all the time, because otherwise you're back to square one.

[Joe] Right; well, we definitely shouldn't change this one once we're done with the process of agreeing that we've arrived at that point.

[Comment] Very good, and actually that's a hundred percent my objection to repeating requirements in another document: now I have to go review those, make sure they're right, now we have to go discuss whether it's the right way to summarize them. Let's not do that.

[Joe] So we're saying, yes, our intent is that these requirements now become fixed; these are the requirements we're trying to solve for. The requirements in the overview draft are only there to make it easy to read.

[Comment] If it's easy, have it now just refer to this one; it's easier to say "the requirements are documented here, go read them" than to give a second set which isn't written quite exactly the same way, so that you have to figure out what's different and what's not.

[Joe] Yes, that's right. Well then, Rob, why don't you get up here; I'm going to get out before something changes.
[Rob] Okay, so a quick update first; next slide, please. We have continued to have semi-regular meetings since IETF 104; quite a few people have dialed in on a weekly or semi-weekly basis, and we've continued to progress various documents. I'd like to thank everyone down there on that list; I won't repeat them. Our main outputs, really, are the tweak to the requirements document that Joe just talked about, an updated solutions overview draft that tries to pin everything together and show you where the different bits of the solution hang together, and then our main focus, which has been working on this new/updated YANG module revision handling draft that evolved from the semantic versioning draft we presented at IETF 104. My opinion is that this draft accommodates all the core feedback we received during the working group meetings, and also the versioning discussions we had at 104, so we think it reflects the consensus from that. Next slide, please.
So I'm now going to spend the next 20 minutes discussing the solution overview, just trying to set the scene of how these drafts are meant to hang together; then there will be subsequent talks on some of the different parts. Next slide, please. My intention here is that the solutions overview draft is meant to be a sort of transient, temporary document. The idea is really just to help the readers who are reviewing those drafts understand how they fit together, and what bits haven't yet been written but will be. My intention wasn't to take this to RFC; I mean, it could be done, but the intention was to help people during the review process. In terms of updates, there will be a separate presentation by Balázs on updated module revision handling, I will talk about YANG packages after this, and Reshad will talk about version selection, so those three will be covered in more detail. Next slide, please.
So the overall solution, as we see it now, is made up of five different parts. The first one is updated YANG module revision handling. This doesn't talk about semantic versioning at all, but about YANG module revisions: how they can allow non-backwards-compatible changes and express those, allow branching, and fix other things related to that area. Second, there's a semantic version number scheme, which could be described fairly abstractly; that hasn't been written yet, but we'll derive it from the text of the previous draft, draft-verdt-netmod-yang-semver-00: take the semantic versioning definition from that draft, extract that text, and put it in isolation. We'll do that in the next phase. Third, there's a versioned YANG packages draft; that update was published before IETF 104, and it needs some changes to take into account the changes in the updated YANG module revision handling. I will present that one this time; it wasn't presented at 104, but there need to be some updates to it. Fourth, there's a draft for protocol operations for package version selection, draft-wilton-netmod-yang-ver-selection-00; again, that one was published before 104, and Reshad will be talking about it. It is quite an early draft, and we expect there to be more substantial changes to it. And then the final part of this whole puzzle, we think, is tooling related to doing schema comparisons, either on a module-by-module basis or where you group a set of modules together into one, i.e. YANG packages, to actually version and be able to compare two schemas: to report at the schema level what's different, or maybe on a per-leaf or per-data-node basis what's changed between those two schemas. That work is our lowest priority at the moment; we're trying to get the other work out of the way
first. So, next slide, please.

[Question] Okay, can you talk about whether there's any significance to the naming of the documents? I notice they go through some transformations from "versioning design team" on.

[Rob] Well, I think probably not at this stage. The ones that came out of the design team obviously have the verdt prefix; the YANG packages and the package version selection drafts were not output from the design team, they are individual drafts that I wrote. That's not because the design team doesn't necessarily align with that direction; it's just more that that's when they were produced. The proposal later on here is for the design team to hopefully pick up those documents and work on them as design team documents; that's the plan, if that's okay.

[Chair] The name, being a file name, doesn't really matter, but whether we have a complete solution from the design team or a partial solution, I think, does matter, and the chairs are really hoping to get a complete solution from the design team. If for some reason the design team didn't feel that that was a noble goal, we should talk about that; so it's great to hear your plan.

[Rob] One comment on that: not everyone on the design team believes that version selection is required, I would say; that's the one caveat.

[Chair] We're not going to re-litigate our requirements, and it's in the requirements.

[Rob] Yeah, I'm sorry, that's true.
Good. So, of the first of those five drafts, the updated YANG module revision handling: there's a more detailed talk on this by Balázs that will cover all of the different parts in a lot more detail, so this is just a one-slide summary of what we've done and what the draft covers. You have the core enhancements to YANG to allow, but not necessarily encourage, nonlinear module development, e.g. for bug fixes. So we allow some branching to occur in a YANG module's history, and we document when non-backwards-compatible changes have occurred in the revision history. We're not encouraging people to do this, but when such changes do occur, at least we can describe that, so that when people read these documents, or client tools process them, they can know where non-backwards-compatible changes have occurred. The next part is that we still identify module revisions uniquely using the revision date, so nothing changes from what we have today, but we allow a new freeform text label to be associated with a revision. This text label can be used effectively to overlay a semantic versioning scheme, for example, or some other scheme that somebody else comes up with; we're not defining within this document what the form of those text labels must be. But we do allow these revision labels to be used in the file name, for example, in place of a revision date, and the import-by-derived-revision can also use this label as well. We define a new version of import: today YANG has an import that will choose any revision, or an import that chooses a very specific revision, and one of those is too general, the other too specific. So we are introducing import by "revision or derived", which effectively binds you to a revision that is either that one or one that has it within its history. The intention is that if a module that you're importing has, say, a new container added to it that you're referencing, you can put a dependency on that revision or later; you put in that source-code-style dependency. We define backwards-compatible versus non-backwards-compatible changes; the backwards-compatible changes are very close to what is defined in RFC 7950, with some tweaks, and the non-backwards-compatible ones follow fairly obviously from that. We clarify and improve YANG status handling: in particular, we clarified that "deprecated" means you still have to implement the node, or otherwise use deviations to indicate you're not implementing it, and "obsolete" means you don't implement the node. Finally, we've added some updates and guidelines for how you update YANG modules based on these updated rules. Next slide, please.
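To make the mechanisms above concrete, here is a sketch of a revision history using them. This is illustrative only: the module is invented, and the importing module name and extension names are assumed to follow the design team's module versioning draft, not confirmed from it.

```yang
module example-interfaces {
  yang-version 1.1;
  namespace "urn:example:interfaces";   // invented namespace
  prefix exif;

  // Assumed name of the draft module defining the extensions.
  import ietf-yang-revisions { prefix rev; }

  revision 2019-07-01 {
    description "Removed the 'speed' leaf.";
    rev:non-backwards-compatible;       // documents the NBC change
    rev:revision-label "2.0.0";         // optional freeform overlay label
  }
  revision 2019-01-01 {
    description "Initial revision.";
    rev:revision-label "1.0.0";
  }
}
```

The revision date remains the unique identifier; the label is purely an overlay that other schemes (such as semantic versioning) can define.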
So, the second draft, which has not been written yet, is the semantic version number scheme, so I'm just saying what the plan is. We intend, over the next window, the next four months, to define a semantic versioning scheme that allows bug fixes to released software assets. Effectively it's the algorithm already documented in draft-verdt-netmod-yang-semver-00: a version of SemVer 2.0.0, but with the ability to add bug fixes into already-released assets when necessary. The main change is that it no longer needs to define what constitutes a backwards-compatible or non-backwards-compatible change; that is assumed to be defined outside of it. This is what we expect some people will use as a revision label: effectively, the revision labels you have will use this semantic versioning scheme, so you end up having semantic versions for your YANG modules. It's also worth pointing out that this versioning scheme is not going to be tied to YANG; although it will have some references to YANG models, it could be used anywhere you want to do this sort of semantic versioning of assets but need a bit more flexibility to put bug fixes into older released code or released APIs.
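To illustrate why a SemVer-style label is useful to clients, here is a minimal sketch of the compatibility check it enables. This is my own toy code, not the draft's algorithm, and it ignores the draft's extra bug-fix modifiers; the function names are invented.

```python
from typing import Tuple


def parse_version(label: str) -> Tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' revision label into integers."""
    major, minor, patch = (int(part) for part in label.split("."))
    return major, minor, patch


def is_backwards_compatible(old: str, new: str) -> bool:
    """Under SemVer-style rules, moving from `old` to `new` is backwards
    compatible when the MAJOR number is unchanged and the version did
    not go backwards."""
    old_v, new_v = parse_version(old), parse_version(new)
    return new_v[0] == old_v[0] and new_v >= old_v
```

The point of the scheme is exactly this: a client can decide from the labels alone, without parsing two module revisions, whether an upgrade risks an NBC change.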
So, YANG packages: again, just an overview; I've got more detail on this later on. What is a YANG package? It identifies a set of YANG modules and their dependencies, so it's a bit like taking what's in YANG library and putting it into a file, as one example, but it's a bit more flexible than that. Whereas YANG library will tell you what the schema is for a particular datastore or an entire device, this allows you to take subsets of YANG modules, group bits of stuff together, and, the key thing, version those packages in the same way that modules are being versioned. So rather than each vendor choosing exactly which version of every single module they're implementing, I think the use of packages could encourage people to have more commonality in what they implement. If you took the IETF YANG models, for example, you could have a package that defines the base models and types, a separate package for routing, and a separate package for VPN services, say, and those would be versioned over time; so rather than somebody saying "I want this version of this and that version of that", you could choose something off the history of those versioned packages.

[Question about conflicting dependencies] Yes, it copes with that: there are dependencies between packages, and you may have dependencies between packages that conflict. The solution here requires that you resolve those conflicts at the time you combine the two packages into a combined package: if you have any conflicting module dependencies, you have to resolve them explicitly and say how they're resolved. In terms of availability, packages could be available offline using the YANG instance data document format. The plan is also to augment YANG library, so that rather than necessarily downloading the full module list off a device, if you knew and expected which package version it implements, you can just check that it's implementing what you expect, making an easier conformance check. As I said, there's a separate presentation on this following later, so that's all I'll cover now. Next slide. The fourth one is protocol operations for package version selection.
So one of the key aims of YANG packages is to allow devices to potentially implement different versions of modules as cohesive sets, and then to allow clients to select which ones to use. This could be done where they're choosing between different sets of vendor modules, or it might be that they want to use the IETF YANG models, or a particular version of the OpenConfig YANG models, as examples. The packages identify those sets of modules, and RPCs are then used to select which ones to use. It could be that the server only supports implementing one particular package version for the entire management interface, or it may be that some devices support multiple different package versions and allow selecting between them depending on, say, the session that's being used. Again, there's another presentation covering this, but as I say, it's a fairly early draft; there will be more changes here, but really it's just the ideas being presented. And next slide, please.
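The package idea described above is essentially a named, versioned list of module revisions. As a sketch only, and stressing that the field names below are invented for illustration rather than taken from the packages draft's actual schema, a package instance might look like:

```json
{
  "example-yang-package": {
    "name": "example-routing-pkg",
    "version": "1.2.0",
    "module": [
      { "name": "ietf-routing", "revision": "2018-03-13" },
      { "name": "ietf-ipv4-unicast-routing", "revision": "2018-03-13" }
    ]
  }
}
```

A client that expects this package can compare the single name/version pair against what the server advertises, instead of walking the device's full YANG library module list.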
And then the last one: there's no draft written yet, and it's slightly lower priority, but I do think it's a key part of this set. That is to define an algorithm to compare two schema trees that detects backwards-compatible versus non-backwards-compatible changes: it looks through the two schemas and reports where the changes are. pyang already does some of this. The result could then be given at the schema level, or at the data node level, and it could also take into account features, deviations, or the subset of the schema actually being used by the operator. If they are only using, say, a third of a particular protocol module and aren't using certain options, then when comparing two versions of that module they may not care whether stuff has changed in the parts they're not using. So being able to compare the schemas and subset them to the stuff you care about, I think, gets you almost the perfect answer. There are some cases where this is difficult to do: it's well known that it's very hard to have tooling that checks the description statements and finds out whether there's been some change in the semantics or behavior of a data node, at least until our machine learning gets a bit better; and XPath expressions and regular expressions are other cases where it's very hard to see what has changed, and very hard to tell whether a given change is backwards compatible. So I think there's a consideration as to whether we use some form of annotation to mark those cases where it might look like a backwards-compatible change but isn't, or, vice versa, looks like a non-backwards-compatible change but actually is backwards compatible. Any questions on this one? Okay.
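The kind of comparison described above can be sketched in a few lines. This toy version is entirely my own and far simpler than real tooling such as pyang: it models each schema as a map from data-node path to type name and classifies the differences as BC or NBC.

```python
from typing import Dict, List, Tuple


def compare_schemas(old_nodes: Dict[str, str],
                    new_nodes: Dict[str, str]) -> List[Tuple[str, str]]:
    """Classify changes between two schema trees, each given as a
    mapping of data-node path -> type name. Removals and type changes
    are reported as non-backwards-compatible (NBC); additions as
    backwards-compatible (BC)."""
    report = []
    for path in sorted(old_nodes.keys() | new_nodes.keys()):
        if path not in new_nodes:
            report.append(("NBC", f"removed {path}"))
        elif path not in old_nodes:
            report.append(("BC", f"added {path}"))
        elif old_nodes[path] != new_nodes[path]:
            report.append(("NBC", f"type of {path} changed"))
    return report
```

Subsetting to "the stuff you care about", as discussed in the talk, would just mean filtering the path set before comparing; the genuinely hard cases (description text, XPath, regular expressions) are exactly what a structural diff like this cannot see.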
Come on, automatic slides... So this is just a chart, and I'm not sure how accurate it is, but it's effectively trying to show you the dependencies between the drafts, with the darker blue arrows showing proper dependencies, and the lighter colored arrows showing user-level dependencies, where you may choose to use the semantic versioning scheme, for example, but there's not actually a dependency between the drafts. You can see that the module revision handling sits at the top, and pretty much everything has a key dependency on it. Everything that might want to use the semantic versioning scheme obviously has a dependency on that, but other schemes could be used; it's just one choice. The packages draft depends on the module versioning, the package version selection of course depends on that one, and then the schema comparison tooling depends on packages, if that's what you're comparing, or modules, if that's what you're comparing. Now, next slide, and that's it for me.
Any questions on that overview section, in terms of the overall picture of how the solution fits together?

[Question] Do you have any examples of the first one, the module revision handling, of what the new revision format looks like?

[Rob] Yes: the module revision handling draft will be covered in detail next, and there are a lot of examples, in much more detail, in there. I will talk about the packages/versioned-schema draft after that, and Reshad will talk about the package version selection. The two we're not covering in any more detail today are the semantic versioning scheme, which is based on what was there before but hasn't been written yet, and the schema comparison tooling, which we haven't really looked at in great detail yet. What the design team really wants from the working group today, I think, what I would like to know, is whether the people in this room think that, as a solution space for solving the entire problem, this looks like the right set of things to be solving. Does it look like it covers all the pieces you would expect it to cover? Does it look like broadly the right approach?
So obviously these will become individual drafts and things, but does this look like overall the right thing to be working on?

[Comment] On exactly all of this that you show there, I am not a hundred percent sure; maybe the scope you're trying to cover is too much. But what I see as a problem when doing YANG is the dependencies: figuring out what my dependencies are, what I have to load in order to enable a service. And we are planning to be able to create abstractions at different layers, which is going to change where things are happening, and to be able to take that into account and maybe even pre-package those things up and say: here's my whole package, it's getting deployed, you can unit-test it, etc. That would be very helpful; exactly how to do it is the question, because there are many issues there that you're just beginning to touch on.

[Rob] Any other comments? So, summing that up, I think the plan will be to try to continue down this track effectively and get these bits adopted as we go along.
[Chair] As said earlier, we're looking for the design team to come up with a full set of documents that answers all the requirements and then bring them to the working group for an adoption call. So it's important to make sure that the working group is aware of what's going on in the design team and is generally aligned with the answers. Once they become working group documents, we're going to follow the normal working group process, in which everyone in the working group will have an opportunity to provide comments and influence; so it's not that at adoption we're done, it's that we have a focused team trying to give us a starting point.

[Question] A question about the comparison. The semantic versioning sounds great; I was always wondering why it didn't start out that way. When you're talking about visualizing the difference between schemas, would it be the resolved schema, once you've resolved all the groupings, because usually that's what consumers of the APIs mentally visualize, or is it visualizing the module text?

[Rob] I think both is the answer. If you're comparing a module itself, then you might do it on the module text, but even then, actually, I think you have to resolve the submodules and allow groupings and things to move around. So I think it's the resolved schema, to some level, but it might not be fully resolving all dependencies: it might be this module with dependencies still hanging open, or it might be the entire schema, like a YANG package, where you take the whole package, resolve everything internally, and it's a complete package; you then look at the abstract syntax tree effectively constructed for the YANG, and could be comparing those on a node-by-node basis.

Okay, I think it's time for Balázs.
[Balázs] So this draft is, let's say, the main output at this point. We derived it from the complete solution, trying to separate out parts of it, and we believe, after the discussions at the last IETF meeting and also outside it afterwards with many people, that all the main comments have been included. One point is that this applies to YANG modules; submodules are very closely connected to the modules themselves, so they should just follow the modules themselves. Next, please.
This updates a number of current RFCs. One is the YANG RFC itself, because the update rules are sometimes not specific enough, and sometimes too strict. It updates YANG library, because the versioning provides additional information that we want to include there, and also the authoring guidelines: for example, revision information should always be present, and the life cycles of the different schema nodes should also be included. So what do we have here? We have an extension that goes into the revision statement and indicates that this revision of the YANG module is non-backwards-compatible (NBC) with a previous version; if it's not there, then the revision is assumed to be, or rather must be, backwards compatible. We also explicitly state that we allow nonlinear YANG module development, i.e. branching and updating of previous revisions, which was not forbidden in RFC 7950, but many people assumed we would have linear development only. We introduce revision labels, because many people and many companies have a nice versioning scheme, like SemVer or some other one, and they would like that information, which gives short compatibility and history information in some form, to be included. Import by revision-or-derived was also mentioned earlier: it states that you can import something newer than whatever module date you chose, but it can't be just a date anymore, because of the nonlinear development. We had to update what backwards-compatible and non-backwards-compatible changes mean, because we want to accept some NBC changes, and especially the status-related changes were not clear in RFC 7950. We have some additions for status handling, and the YANG library updates, as mentioned. Next slide, please.
So maybe the most important part is here, the red thing: we have an NBC-changes extension that indicates, inside the revision information, that this revision made an incompatible, non-backwards-compatible change compared to the previous revision, and it's always compared to the previous revision in the full list of revision statements. That means you should not remove revision statements; maybe you can remove or shorten the description part, but you should not remove the revision statement itself, because at that point you lose this information. Also, you should add a revision statement every time; I think that's not mandatory in RFC 7950. At the green point, where we just added functionality, we don't have this revision extension. This allows nonlinear development: here, the February 1st revision was also developed further, because some customer didn't want to upgrade to the new version, and then we have two branches. There's actually no limit on branching, so you can branch the branch, again and again, into an infinite tree. That's not very good practice, but if business needs force you, you can do it, because if you look at it, every YANG file in itself is linear, but the YANG module may have multiple files that sit on different branches of this tree. This is a possibility that can be misused; we don't recommend arbitrary branching, especially not for standards organizations. Each YANG module text, by itself, will represent one route from a leaf of this tree up to maybe the root, or at least to some level. Next, please.
So we have revision labels, because the NBC extension only tells you that, yes, we have an NBC change; to actually understand it, you have to parse the module and look up all the revision statements, which is quite a bit of work. As an alternative, you can have a revision label that contains similar information. This draft does not say what is in the revision label; these are examples, and we will have one or more separate drafts about, for example, SemVer in the revision label. The format is rather free; the only concern is that it should not be mistaken for a revision date, and it can be used later in the import statements as well. Next one, please. Okay, here is an example of what the revision labels for SemVer would look like: you can see that the major number changes every time we have this red NBC-changes extension, while when we just add something, we only go from 3.0 to 3.1. Next, please.
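Written out as a sketch, the slide's example corresponds to a revision history like the following. The extension names are assumed to follow the draft; the dates, descriptions, and labels are invented for illustration.

```yang
revision 2019-06-01 {
  description "Added a new optional leaf.";
  rev:revision-label "3.1.0";      // BC change: only the minor number moves
}
revision 2019-04-01 {
  description "Changed the type of an existing leaf.";
  rev:non-backwards-compatible;    // the red NBC-changes marker
  rev:revision-label "3.0.0";      // NBC change: the major number is bumped
}
revision 2019-02-01 {
  description "Previous release.";
  rev:revision-label "2.3.0";
}
```

A client reading only the labels can spot the 2.3.0 to 3.0.0 major bump without parsing the whole revision history for NBC markers.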
Okay, import by revision-or-derived; that's a very important part of this work. We said the simple import without any revision date is too liberal, and the import with a revision date is too strict. The usual case is that I want something that includes the functionality I'm depending on, and for anything later I very much hope it will not remove the functionality I depend on. It's not a strict promise that everything afterwards is good for me, but usually it is enough. Revision-or-derived can be based on the revision label, and it can also be based on the revision date.

[Question] Would it stop at a module revision that has NBC changes?

[Balázs] No.

[Question] Should it?

[Balázs] No, because most of the time the NBC changes don't impact the import. There is a risk there. In earlier versions we had very complex sets, where we said "from this version to that version", and it got more and more complex; usually you want to leave it open-ended, because you don't know what changes the new revisions will bring, and many times, even if the changes are NBC, they won't impact your importers. If you really want to be strict and really be sure, then you must check every date, but we don't want to go there; in the previous versions of this same draft we had stricter, more complicated solutions, and this was the most user-friendly and, we feel, helpful.

[Rob] The revision-or-derived dependency is almost like a source-time dependency. When you actually come to use these modules, you'd use something like a YANG package that constrains exactly which version you're going to use anyway. And in the case that something comes along and breaks your source dependency, because they made a non-backwards-compatible change, at that point you'd be expected to release a new revision of your module that fixes the import, pointing to a later revision-or-derived, or otherwise changes it to fix the import dependency again. So we think this is the right balance, not being too strict; otherwise, if you limit it, you'd often have to update your module anyway even though, as I said, you're not actually impacted. So we think it's better to ask forgiveness than to be too strict up front. Okay.

[Comment] I think my only
analogy here would be software: when you're building and you have dependencies on libraries, you'll say "I depend on library 3.1.2", so anything 3.X I'll support, but if it ever goes to 4.X, I don't want to support that. So sometimes there's the ability to put brackets or limits on how much of the future you want to support automatically. I accept it's been discussed by the design team; I just want to make sure that was something you have covered. And a previous question as well: the two new statements you're creating here, are these, as I understand it, possible extensions to the YANG language?

[Balázs] Yes. Okay.
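As a sketch of the new import form discussed above (the statement name is assumed from the draft, and the module names are invented):

```yang
module example-user {
  yang-version 1.1;
  namespace "urn:example:user";   // invented namespace
  prefix exu;

  import ietf-yang-revisions { prefix rev; }

  import example-interfaces {
    prefix exif;
    // Accept the 2019-01-01 revision, or any revision derived from it,
    // rather than pinning exactly one date or accepting any revision.
    rev:revision-or-derived "2019-01-01";
  }
}
```

This is the open-ended, source-code-style dependency from the discussion: it does not stop at later NBC revisions, on the reasoning that most NBC changes don't affect the importer.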
Okay, so in order to define what we put in those statements, we have to define what is backwards compatible and what is NBC. RFC 7950 gives us a very good basis for that, but not the full story, because deprecating and obsoleting nodes can actually be NBC or BC depending on how it's implemented. What we assume is that if you deprecate a node it's still there, so that's a backwards-compatible change; but if you obsolete a node it will be removed, so that we define as a non-backwards-compatible change. Also, reordering data definition statements in most cases should not hurt anyone, so we allow that, and anything else is non-backwards-compatible. Stating that obsoleting is NBC is a definite change from RFC 7950. Next.
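As a rough illustration of the classification just described (deprecation and statement reordering treated as backwards compatible, obsoleting and everything else as NBC), here is a minimal sketch; the change-kind labels are hypothetical, not from any published module.

```python
# Sketch of the BC/NBC rules stated above: deprecating a node is
# backwards-compatible (the node stays), obsoleting it is not (the node is
# removed), reordering data definition statements is tolerated, and any
# other change defaults to non-backwards-compatible.

BACKWARDS_COMPATIBLE = {"deprecate-node", "reorder-statements"}

def is_backwards_compatible(change_kind):
    """Anything not explicitly allowed is treated as NBC."""
    return change_kind in BACKWARDS_COMPATIBLE

print(is_backwards_compatible("deprecate-node"))  # True
print(is_backwards_compatible("obsolete-node"))   # False: node is removed
print(is_backwards_compatible("rename-leaf"))     # False: default is NBC
```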
Okay, we have the status statement in YANG, and we want to change what deprecated and obsolete mean. This is still not a mandatory statement, it's not a MUST or a SHALL, but we say that deprecated nodes should still be there, fully functional, while obsolete nodes should be removed. If you don't remove obsolete nodes, that can also result in surprises and errors, so even for obsolete I think it is important to define this. For backwards compatibility we put into the YANG library two extra leaves that state whether you follow what we recommend here or not. If you don't put in these extra leaves, then nothing is promised: you can do anything you like, and your clients will maybe be surprised.

Also, when you deprecate or obsolete something, often you have a reason for it, often you have a timeline for when you will actually remove it, and maybe you have a replacement, so a status-description statement was added for that.
I like the idea of having the ability to specify whether or not the server actually complies. That being the case, why aren't these MUSTs? For backwards compatibility, I can see the server has more data than it should; but for the first bullet point, where it says it should be implemented, it should be a MUST. How else can we have interoperability if the clients aren't sure that it's actually there?

Because you can check in the YANG library whether the deprecated nodes are implemented and functional.

Sorry, so by saying YANG library you're talking about the third bullet point? Yes, that indicates whether you follow the first two or not. But are you saying that with the third bullet point it is possible for the server to say that the first bullet point is not followed? Those two would contradict a MUST. Yes. Then I agree that we should try to get these to be MUSTs.

Since you're doing extensions, you can change the rules for implementations that implement that extension, so you can say: if you implement this extension, you must do these things. Okay, and it's probably worth reducing the number of options, so in the case where you have an extension, think about making things more MUSTs than SHOULDs.

Martin Bjorklund had, at previous meetings, a very strong opinion that we need to indicate in the YANG library whether these first two rules are followed or not, and that we should not make them MUSTs. We probably could get away with MUSTs now; that's what we want, isn't it? Yes, I agree with you, but we had some pushback; that's why we have this in the YANG library, to indicate whether these are followed or not.
Next please. So what we have in the YANG library is two nodes, deprecated-nodes-implemented and obsolete-nodes-absent; these are the first two bullet points we have discussed. If they are not there, then anything can happen with these nodes; that's one of the problems in YANG 1.1, that the definition is too open.

The other problem was that currently the YANG library doesn't specify which revision of a YANG module is imported if multiple revisions are present, and that might mean in some cases that mandatory data nodes are present when we thought they were not, or mandatory data nodes are absent when we thought they would be there. So it's kind of a bug, let's say, or a missing piece in RFC 7950. The data model of the YANG library is not modified, but the rules about what it means are modified, to make it very specific which module is imported in this case. Excellent.
A question on whether the second point here is really a bug fix: is it something that the draft failed to cover or not? My reading is that it's a change of behavior, so it couldn't be covered in an errata. He was hoping that if we could get it in as an errata that would be better; he's clarifying what he feels is ambiguous today. We're still trying to digest exactly what the second one means. It's both 1 and 2 of the second. So I need to see the errata before making a call as to the level of change that's being done, to judge whether it's suitable for an errata or for a bis or an update. Okay. Yeah, I think we have to look at the specific proposed text, not the problem but the solution. Ultimately, if you believe it's a clarification, you should probably just submit it; it will then be an unverified errata, and we can add it to the list or discuss it. You're right on track, but you're on track for about another 10 minutes.
Okay. For instance data there were requirements to say what happens with versioning. Instance data itself is not versioned, because what backwards compatibility would mean there is not defined. On the other hand, versioning is very useful for instance data, for understanding whether the schema-defining modules can be somewhat different. Next please.

Then we have some guidelines telling authors how to make changes: try not to make NBC changes, and try to make the changes that are painful for clients avoidable. Use deprecation, use this more flexible import, use status information and such things, and sometimes duplicate data so that both new and old versions can be served, or potentially server-side version selection. Next please.
This is a kind of requirement or recommendation for clients: yes, changes are coming, and clients should tolerate a number of changes, and they should also monitor what happens with modules, so they should check whether a module is backwards compatible.

On the question of what numbering scheme the revision label uses, my first answer is: I don't know, don't care. This draft only says that we have a revision label where you can put in some numbers; there will be a separate draft about the semver-based revision labels. Yes, semver has difficulty with unlimited branching; whatever you put into the revision label will have its limitations, or maybe you have a version numbering scheme that is all-powerful with no limitations, but that's not part of this draft. Here, in this draft, we just define that there is a place where you can put that in; further drafts will propose the semver-based versioning system, and there the labels will have meaning, but not in this draft. Here we just have a placeholder.

Just one comment on that: a vendor might choose their module labels as foo and bar, or whatever they want. Yes; whoever is defining that scheme defines the semantics of the scheme.
Very quickly, to cover the next steps; there's just one slide here, and we sort of discussed it earlier. At the beginning we were going to seek adoption for this first draft, but I think the feedback from the chairs was that they actually want to adopt this as a set, so effectively we're going to continue on the same track. Any comments we receive on the module versioning draft we will fold in, but we'll effectively work on the other ones: the semantic versioning scheme, the YANG packages and version selection drafts, and also the tooling one. I'm not sure we'll get those all done before the next IETF; there's still a lot of work there.

Then on to YANG packages, please. I already covered a little bit of this; this is just a little more detail. Next slide please, on YANG packages.
Effectively, the idea here, as I said before, is to version sets of YANG modules together with their dependencies. In terms of how the YANG packages draft is written at the moment, it's using the YANG semver solution as the version number scheme for these modules. That's the one thing I think we need to update, or at least discuss, relative to how we've changed the module versioning and made the semantic versioning number a label and an optional thing rather than tied in.

The packages can be hierarchical, so a package can import other packages, and that can recurse down. The packages themselves can be either complete or incomplete, so you don't necessarily have to tie off all your dependencies, depending on what you're doing; that might help if you had dependencies on, say, types modules and things like that, which you may not want to pin down. The packages themselves can be available from the YANG library, or they can be available offline in YANG instance data. In the YANG instance data discussion there was talk about how you identify the schema using the YANG library; I think YANG packages might be another example, a good way of defining that schema, maybe a better way, because it's closer to what you're actually trying to do, whereas the YANG library is returning slightly different information. And as I mentioned, the draft is slightly out of date, because it hasn't been updated with the changes that we've made to the module-level versioning.
You say it still works, though? I think it does, yes. Okay.

So why do YANG packages? We're trying to solve several problems here. We want to actually get consistent versioned sets of modules. Just versioning modules individually, when you have hundreds of them, or tens of them across different organizations, becomes too hard; it's too complex, and it's too likely that different vendors will implement different versions of the same modules and you get too many incompatibilities. So one vendor might implement the BGP v2 and OSPF v1 YANG models on one OS, while a different vendor does BGP v1 and OSPF v2, and when it comes to an operator that wants to use these, neither of those sets is ideal, because they can't get a consistent set across both of them. If you've got YANG packages, and you're versioning, say, the IETF routing protocols, then every time a routing module gets updated you add that into the YANG package and make a new version of that package, and you've now got a more linear update in terms of how these packages evolve. But you're not fixed to that: you can still deviate from those packages, you can still override and say, actually, I'm going to support this baseline package with these changes, these differences to it.

Another use case is to avoid downloading the full set of modules off the device and having to check it's exactly what you want, because that's a hard thing to do. It'd be much nicer to say: actually, I support this package, or these packages; these APIs are the things I support, with these modifications; then, rather than having to do an individual check on every single module, you have a higher-level conversation about the API that you're supporting. And again, you can do this with the YANG library. Making the schema available offline means you can check it in advance; you can design your tooling to expect to talk to a device, see which API, which package version it's using, and correlating those is easier than having to deal with full sets of modules when they differ.
Next please. So this is just an example. I've got a simple ietf-network-device package, version 1.1.2. There's some metadata: a description and other information, and a URL of where to download that package from. It lists a set of modules that it implements, and it lists a set of modules that are import-only, and again this could be fully resolved or it might not be resolved. The definition includes metadata like URLs of where to find things; it lists the features that are mandatory, that is, the features that, if you implement this package, you are obliged to implement and support; it has a list of the imported packages, though there are none shown here; and it lists the modules that are implemented by that package and the import-only modules. It's very similar to the YANG library from that point of view, except that it recurses and has a list of imported packages. And then finally, and again it's not shown here, there is import conflict resolution: if you combine two packages, and those packages each import different versions of modules, you have conflicts; so when your package pulls in two separate packages with conflicts, you surface that and you specify exactly which version of which module you resolve to. Next slide please.
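The conflict-resolution rule just described can be sketched as follows. This is a simplified illustration under the assumption that a package is just a map from module name to revision; the package and module names here are hypothetical.

```python
# Sketch of explicit conflict resolution when combining packages: if two
# imported packages pin different revisions of the same module, the
# combining package must state which revision wins.

def combine(imported_packages, resolutions):
    """Merge module->revision maps; any module pinned to two different
    revisions must appear in `resolutions`, otherwise it is an error."""
    combined = {}
    for pkg in imported_packages:
        for module, rev in pkg.items():
            if module in combined and combined[module] != rev:
                if module not in resolutions:
                    raise ValueError(f"unresolved conflict on {module}")
                combined[module] = resolutions[module]
            elif module not in combined:
                combined[module] = rev
    return combined

pkg_a = {"ietf-types": "1.0.0", "example-bgp": "2.1.0"}
pkg_b = {"ietf-types": "1.0.1"}

# The combining package explicitly resolves ietf-types to 1.0.1.
print(combine([pkg_a, pkg_b], {"ietf-types": "1.0.1"}))
```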
One more slide. This is another example package, an example IETF basic routing package, and this one is importing from the ietf-network-device package version 1.1.2; then it lists the extra modules that are implemented as part of that, and the extra imports on top of that. As I said before, any version conflicts or changes must be explicitly resolved; that's one thing that's critical here. For a given package definition, the list of modules that it's using is exactly, tightly defined, with the specific versions.

The package version indicates the nature of changes in the modules or package imports. The version number we're using here again follows a semantic versioning scheme: if you update your package to include a module that changed in a non-backwards-compatible way, which in semver would mean going from 2.0.0 to 3.0.0, then your package would also have a major version change in its version number. If you had minor version changes in your modules, or you're importing more modules in your package definition, then you have a minor version change. So it's using the same semver ideas and rules for versioning, applied at the package level rather than the module level.
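The package-level numbering rule above can be sketched as a small function. The version strings are illustrative only, and the boolean flag is a stand-in for the real analysis of whether any contained module changed in an NBC way.

```python
# Sketch of the package version bump rule described above: an NBC change
# in any contained module bumps the package MAJOR number; adding modules
# or taking only BC module updates bumps MINOR.

def bump_package_version(version, nbc_change):
    major, minor, patch = (int(p) for p in version.split("."))
    if nbc_change:
        return f"{major + 1}.0.0"   # e.g. a module moved 2.0.0 -> 3.0.0
    return f"{major}.{minor + 1}.0"

print(bump_package_version("1.1.2", nbc_change=True))   # 2.0.0
print(bump_package_version("1.1.2", nbc_change=False))  # 1.2.0
```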
Two slides back you said that if you say you support the package, you're obliged to implement all the module versions as they are; I just want to clarify, does that include features and/or deviations? What you'd be allowed to do is, rather than saying you implement this package directly, you would implement your own package on top of it that includes this one and then has extra modules, with deviations listed. In terms of features, you would list the features you support, but you're obliged to support the features that are listed here as mandatory, or otherwise deviate those nodes. So you can still have deviations and changes, but you have to make them explicit.
Then the key question in terms of updating this draft: the current packages draft uses YANG semver as the versioning scheme, because it was based on the work that we did for IETF 104, where it looked like that was the versioning scheme we were going to use. But the latest module-level versioning has now been decoupled: the module revision handling, and its concept of backwards-compatible versus non-backwards-compatible changes, is separated from the particular versioning scheme that you're using, so YANG semver is just one example of a version scheme you could use. So I think it's probably better for YANG packages to have that same decoupling, for it not to be hard-coded to a particular semantic versioning scheme, but just to say it can use any scheme that gives a partially ordered set of identifiers, to have more flexibility in terms of what it uses, and to allow YANG semver as one example of what could be used.

Then the question is how flexible you are in that: how do you do it, do you have an identifier or an enumeration to identify the different schemes? I'm not sure we want a huge proliferation of them, because every single different scheme will have an impact on clients, which need to be able to compare these things; but at the same time, tying this to YANG semver might be a mistake. Does anyone have any thoughts or opinions on that?
You said the version of the package would be ordered; is it implied that as the version of the package increases, the versions of the modules inside increase? I think they're entirely decoupled: there's no correspondence between the package version number and the version numbers inside, other than the semantic change to the package. So if you've made a change to what's included in that package in a non-backwards-compatible way, because you've changed the modules, or you've down-revved a module, that would be a non-backwards-compatible change to that package definition, so it would go from 1.0.0 to 2.0.0 if you were doing a down-rev. Does that answer your question? Yes.
Any other opinions on this issue at all? Are you coming up? Because you're speaking next. On the conflict resolution you mentioned, I think in one slide it was between two different packages, and in a different slide you had one package; how do you resolve it explicitly? Yes, you effectively say, in the conflict resolution, I want to have this particular version. And, trying to remember, having read the draft recently, I think in terms of import-only you might again need to identify which specific import you are overriding and the overriding version; but either way, you're explicitly saying this is the version I'm going to use in this circumstance. I think you are up now.
Can you elaborate, either from your view or the design team's view, on why you want to allow multiple schemes? In terms of the module versioning, the reason for doing that is because that's what some people wanted: some thought they don't like the YANG semver scheme because they find it too restrictive, while other people think, well, why don't we just use standard semver, and find YANG semver not flexible enough. So there's a difference of opinion as to what versioning scheme is required, and if you are a vendor and you end up having arbitrary branching of your modules for whatever reason, then the semver scheme we're defining can't accommodate that; it's limited deliberately, allowing some partial branching but not too much. So I think that's the reason we had that separation in the module-level stuff; and then in terms of packages, the question is, if you've got modules that have that flexibility and you package them together, whether you need a versioning scheme for the package that has the same flexibility.

As we progress from the design team to the working group, and towards whatever we eventually submit to the IESG, we'll really want to think hard about whether we want that flexibility, because anytime you have flexibility, what that really leads to is interoperability problems, and of course the reason we have standards is, you know, interoperability. So if there's good reason to allow those different options, sure, but if there's not, we should think really hard about the cost of that flexibility.
Quickly, one question about the conflicts: when a conflict occurs, how can we resolve it? If, for example, this package was importing two different packages, and one of them was using ietf-types version 1.0.0 and the other was using ietf-types 1.0.1, then this package, as well as doing the imports, would say: I'm going to use ietf-types 1.0.1 as my version for this package. Each time you're combining modules from two or more packages, you're explicitly resolving that; you're specifying exactly which module version you're using. But if one package depends on ietf-interfaces 1.0 and another module or package needs ietf-interfaces 2.0, and they are not compatible, and I want to use the two packages together, how do I do that?
I think that is something you can't do. I mean, it's like trying to build an application against two incompatible versions of the same library: you can't have both linked in, so you'd define two packages in that case. If you've got that sort of split in your YANG ecosystem, where you've got modules that are that different and there's no way you can combine them, and you've got some dependencies on the older one, then you're not combining them into one package. If half of my package depends on ietf-interfaces version 1 and half depends on ietf-interfaces version 2, and there are significant changes between the two, then you've got to split that: one set of packages for the ones depending on version 1 and a separate set for the ones depending on version 2. You can't magically put those things together; this won't fix that. If you want them in the same system together, YANG does not allow that: YANG says you can only implement a single version of a module. You can import multiple versions, but in terms of implementation, for a given schema, you can only implement one version of a YANG module. Actually, it's possible if you have two different NETCONF or RESTCONF servers, where each server has its own specific implementation exposed on the device, but in that case you're in two separate packages. Two separate packages may sometimes not be practical; there's too much complication there. But the same issue exists with the YANG library; it's no different in that scenario. If you can't do it with YANG packages, you can't do it with the YANG library either; there's no fundamental difference there, it's just how they're being pieced together.
Can you use the mic please? Only one person can be recorded at a time. Yes, so I'll be presenting the version selection draft on behalf of Robin and myself. This slide talks about why we are doing this. It comes from requirements 3.1 and 3.2 of the requirements draft we spoke about, and what this is basically saying is that because YANG modules change in non-backwards-compatible ways, we need to help clients migrate, so servers can support multiple versions and help clients select which version they actually want to run with. Next.

So, the summary: it allows servers to make non-backwards-compatible changes without forcing clients to migrate immediately. It makes use of the YANG packages draft Rob was presenting earlier, it provides a mechanism for servers to advertise which versions of the packages they support, and it allows clients to choose, among the ones advertised, which one they want to run with on a per-session basis.
Next please. Before people start throwing bricks: servers are not required to concurrently support clients using different schema versions; those things are optional. Servers are not required to support every published version, and they are not required to support all parts of all versioned schemas. The important thing, when we say supporting across non-backwards-compatible changes, is that if you remove functionality in the later YANG version, obviously you cannot support both the newer one and the older one; but in some cases, if you've reorganized your YANG module and moved your nodes around in a non-backwards-compatible way, the server should be able to support both the older version and the newer version.
Next please. Okay, so the overview, which you'll see in the YANG tree next: a versioned schema is a YANG schema, which Rob was talking about, with an associated revision; that could be the semantic version number from the draft which will be done soon, and, as I said, this could be a YANG package. The schema set is a set of related YANG schemas, one per datastore. The server supports configuration for the default schema, and it also supports configuration for what we call secondary, and maybe more, NETCONF and RESTCONF instances that use different schemas. This is done by a port number for both NETCONF and RESTCONF, and for RESTCONF we also support a different root path, which is used to choose which schema to use. That's it for this slide; next please.
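The selection mechanism just described can be sketched roughly as follows. This is only an illustration of the idea that an endpoint (here identified by port) maps to a schema set; the port numbers, package names, and versions are all hypothetical.

```python
# Sketch of version selection: a server offers a default schema set and
# secondary ones, and the NETCONF/RESTCONF endpoint a client connects to
# determines which schema set it sees.

SCHEMA_SETS = {
    "default": {"package": "example-routing", "version": "2.0.0"},
    "legacy":  {"package": "example-routing", "version": "1.3.0"},
}

ENDPOINTS = {830: "default", 8300: "legacy"}  # port -> schema set name

def schema_for_port(port):
    """Unknown ports fall back to the default schema set."""
    return SCHEMA_SETS[ENDPOINTS.get(port, "default")]

print(schema_for_port(8300))  # a client on 8300 keeps the legacy schema
```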
And this is basically the schema tree for what I was just describing. You can see that you can configure the schema sets and all that, it's read/write, but the information per datastore is all read-only: it's what the server decides to support. I don't believe we got any comments on this. We probably need to do a new revision based on the latest discussions since IETF 104, and hopefully we'll get comments on this draft and the YANG packages draft. As Rob was explaining earlier, in the next steps for the design team we actually want this draft, which is currently a private draft, to be part of the design team work. Questions?
Don't feel obligated to rename it; if you want to rename it, that's fine. Okay. And as a contributor, seeing the protocol names called out alerts me just a little bit: what about others, like CoAP? Do we have to call out those protocols by name here? Well, if we just support the port, we don't have to; my recollection is that they're separated because RESTCONF has the root path support. But the root path support has not really been discussed, so maybe we'll take that out; since they're optional, you can still have both of them. Do you have a suggestion here of how you think we should attack this? The keystore draft has got more separation; is that what you're suggesting might be a good approach to use? I was leading up to that: I was looking at the port, and I was wondering what the intention for that port was, and why we wouldn't be using the appropriate client-server drafts from NETCONF, rather than referencing nodes directly. Okay, as long as that works. Well, I mean, this isn't even adopted work yet, and that work in NETCONF will be done before this, so hopefully it'll just make sense by the time we're there. Related to that, I think eventually this draft would probably end up aligned with a YANG model for that.
Is the microphone on? Hello. Today I want to introduce a YANG data module for ECA policy management. This ECA model is a base for network policy management, and it can provide the ability to perform management functions to control the device: it monitors state changes on the network element, and if the trigger condition is met, it can perform some simple and instant action. This document has been discussed twice in the working group, and since many people are interested in the related work, we updated the document. In this version we added a new leaf, a group ID; it can be used to group a set of events, so that a grouped set of ECA scripts can be executed together to perform some specific task, for example to provide service assurance. We also optimized the condition handling, moving it from the action list to the trigger condition list, and we allow one event's trigger to reference another event's definition. We also changed the threshold condition into a variation condition, to further clarify the difference between a polling trigger and a variation trigger, and we simplified the action container.

This slide provides an example of how to use this module. Here we define event A, which is used to monitor whether trigger A fires, so that it can trigger event B; event B is a called event which, if the event exists and the polling condition is met, triggers the corresponding action to activate the standby, a failover. And we use a group ID to group these two events to perform the service assurance task. Okay.
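The event-condition-action chaining in the example above can be sketched as follows. This is a minimal illustration, not the draft's actual data model: the event names, the loss-rate condition, and the failover action are all hypothetical.

```python
# Minimal sketch of ECA chaining: event A polls a trigger condition and,
# when it is met, invokes event B, whose action performs the failover.

def make_event(name, condition, action):
    return {"name": name, "condition": condition, "action": action}

def run(event, state):
    """Fire the event's action only if its trigger condition is met."""
    if event["condition"](state):
        return event["action"](state)
    return state

event_b = make_event(
    "B", lambda s: s["primary-down"],
    lambda s: {**s, "active": "standby"})   # action: switch to standby

event_a = make_event(
    "A", lambda s: s["loss-rate"] > 0.5,    # polling condition
    lambda s: run(event_b, s))              # A's action triggers B

state = {"loss-rate": 0.8, "primary-down": True, "active": "primary"}
print(run(event_a, state)["active"])  # standby
```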
This document has been discussed three times already, and it seems many people are interested in this work, so I think maybe we can think about adopting the document. Is there a comment at the mic?

On the relationship to routing policy: the routing policy model just provides routing policy, whereas here we use this module to perform higher-level service assurance. For example, if packet loss crosses a threshold, then it can perform the related action: for example, logging that packet loss crossed the threshold, and then performing some action to activate the failover. Yes, we want to define a more generic model that can provide the ability to do service assurance and management. I understand your question, but our document does not want to cover the routing policy work; it is just to provide, if some condition is met, some management action, like restarting. Okay.
As one of the co-authors, I'd just like to state that this is more a data plane document, whereas the routing policy is strictly control plane: policies for route selection, redistribution, and import/export between protocols. So there isn't overlap, other than in the word policy. So, if I understand this correctly, there is no correlation between the routing working group's policy work, which uses event-condition-action as well, and this work here; is that accurate? Now, on this work I have mixed understandings, maybe incorrect: is this work just data plane, or is it meant to be general, for the control plane or other usages? The intention of this model is actually to provide a closed loop, to manage the whole life cycle of network management automation; you can combine the management model with this kind of policy so that it can trigger the automation; so it is management automation. So is it destined for applications on the control plane, the data plane, or above? I'm not sure we should restrict it to the control plane; I think it may apply to the management plane as well. Thank you. Who has read it?
Benoit Claise: let me just explain some background information. We created a working group called SUPA in the past, for generic policy with an event-condition-action structure, and the work there became an RFC; that is the main reference submitted out of the SUPA working group. The goal was to produce something generic, so that the routing policies were going to reuse it and the security policies were going to reuse it. So now this draft is based on that policy document, which came out of SUPA; it was meant to be something generic, but the world has been moving on, so there's a discrepancy: everyone still needs one of these, one for each of their applications.
So, who has read this? It's actually been here twice, I believe; the -00 and the -01 have been presented here. Who has read it, either in its present -02 form or in one of the previous forms? Okay, so some people have read it. Of those people who have read it, actually, let's just generalize that: who here is interested in the topic and thinks we should work on it? Okay, no hands went down there, so that's a reasonable number; that is at least as much attention as many of our drafts get. Who thinks we should be doing this somewhere else, or not at all? Okay, nobody, so it sounds like it's probably here, for better or worse. So we're at a starting point; no, we don't want to poll for working group adoption right after this. I think it will be interesting to see how many think this is a starting point; we asked that at the last meeting and there was a pretty small number, so it's interesting to see if that changed. Are there people who are not on the list of authors presently who are interested in this work, with an eye towards participating? Any hands for that? Okay, well, one partial hand from somebody I greatly respect, so I guess that's probably of some value. I think we'll probably poll for adoption on the list after the meeting. Okay, thank you.
Chair: Next is NMDA base notification, and Qin Wu is presenting.

Qin Wu: My colleague is not here, so I am presenting his topic. The topic is NMDA base notification for intent-based configuration update. This draft has been around for a while: it moved over from the NETCONF working group and has already been presented twice in netmod. My colleague revised this version based on his experience with network configuration verification. We think this work is complementary to NMDA and the two can work together. The problem is that NMDA right now can compare the difference between datastores, but the limitation is that the client lacks the ability to verify whether a configuration change from intended, or from another source, has taken effect. So the solution we propose is to define a notification to report this kind of verification event, to catch these misconfiguration issues. We cannot check all misconfiguration issues this way, because some cases rely on data exported from different devices: for example, if you want to maintain the consistency of a network-wide configuration that covers two different devices, that is something we cannot do with just this event. Here we give one use case: we think we can use NMDA to compare the intended datastore with the operational datastore.
We have two different cases. For some objects, data may be present in intended but not present in operational: for example, you may have configured an interface whose physical hardware does not exist, so it exists in intended but not in operational. In the other case, data may be present in operational but not present in intended; a typical case is interface MTU. The way we propose it, we need to make sure the server can detect hardware changes, which may rely on the system's interaction with the hardware. Based on this assumption we can detect whether there are misconfiguration issues: we can compare the difference between intended and operational and make sure that whatever remains is actually caused by one of these sources.
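The two mismatch cases described above can be sketched as a simple set comparison. This is a toy sketch, not code from the draft: the function name and path strings are illustrative, and a real client would retrieve and compare full YANG subtrees (for example via NETCONF <get-data>) rather than flat path maps.

```python
# Hypothetical sketch: classifying mismatches between the intended and
# operational datastores. Flat path->value dicts stand in for datastore
# snapshots; names and paths are illustrative only.

def classify_mismatches(intended, operational):
    """Return paths present only in intended, and paths only in operational."""
    intended_only = sorted(set(intended) - set(operational))
    operational_only = sorted(set(operational) - set(intended))
    return intended_only, operational_only

# Case 1: an interface is configured but the hardware is absent, so it
# never appears in operational.
# Case 2: the system learns a value (here a default MTU) that was never
# configured, so it appears only in operational.
intended = {
    "/interfaces/interface[eth0]/enabled": "true",
    "/interfaces/interface[eth7]/enabled": "true",   # card not inserted
}
operational = {
    "/interfaces/interface[eth0]/enabled": "true",
    "/interfaces/interface[eth0]/mtu": "1500",       # system-generated
}

missing, extra = classify_mismatches(intended, operational)
print(missing)  # configured but not applied
print(extra)    # applied/learned but never configured
```

Anything left after subtracting known causes like these (absent hardware, system defaults) is a candidate misconfiguration to report in the proposed notification.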
Here is the module structure. We made some changes: we introduced an application tag, which provides a new parameter to identify each update, and we also clarified the difference from NMDA compare; we think this can work together with NMDA compare. This is the position of the NMDA notification in the overall solution; we already presented this at a side meeting outside the IETF and showed where this message fits. So this draft has been around for a while, and we think it synergizes with NMDA, so again we want to hear what you think about it.
Chair: I think the challenge with discussing this, since I don't see anyone running to the mic at the moment, is that at 104 and at 103 there were pretty few people who had read it. I guess that deserves a quick show of hands: anybody who has read this, either in its present form or in, I guess, the -01 and -02 versions that have been presented previously? Okay, well, there are a few. Do any of those people have the willingness to express an opinion as to whether or not this work is worth progressing? Rob, I'm going to put you on the spot.

Rob Wilton: I'm not sure it matches how I imagine these systems working. I would imagine more that the devices are just monitored through operational, and you watch the system that way around: rather than getting errors back from the apply failures, you would monitor operational and detect the differences between what you as an operator wanted the device to do and what it is actually doing. So that's one observation. The other one is that I think it's possibly really hard for some systems to implement this, to be able to report back when some of these operations have failed: once the configuration is committed to the running datastore, the actual apply phase in the system might go through many different daemons, and there may be intermediate failures and things like that. So generally, in my experience, it's really hard to trace these errors back; we've always struggled with this sort of thing. So I'm not sure that helps; I'm not sure.
Qin Wu: One clarification: network configuration verification is actually very popular, and there's a lot of research on it. The idea is to monitor the system data that is put into the operational datastore; we can leverage these network verification mechanisms to do the detection where possible. That's why I bring up this kind of idea again.

Rob Wilton: For the verification side, checking that the configuration is valid and what your state is, I completely agree, and that makes sense. But the next step, applying it, which is asynchronous or semi-asynchronous, and whether systems are aware of that, is the big thing that is harder to trace back. I think that if you have a simple device you may be able to do it, and it may have value there, but I think there are many devices where this would be too hard to do, and instead you would expect the operator to be monitoring the operational state of the device and the applied configuration to see whether this has happened or not. And another observation here: some configuration takes a long time to be applied, so what do you do in that case?
Tim Carey (Nokia): When I read the draft there were a couple of things running through my mind, and I just couldn't form an opinion, because what I've seen here, and what we've done in the past, maps onto the examples that you use: if I have something in intended, did the system assign it or not; and there's a difference between that and comparing differences between datastores. That's basically the assignment problem, and in the past we said those are just status: is it assigned or not; I put my own TL1-style things in there, right; so that's usually solved with a status kind of mechanism. So I was struggling with what we wanted to produce notifications for. Typically you produce notifications when there's some sort of mismatch: I wanted it to be this, but someone plugged in that; and I didn't get any of that from this particular draft. The other thing I saw is that these particular notifications are generalized notifications, not notifications in the context of the problem you're trying to resolve, like "I'm going to do a quick check, I'm going to do that type of thing". So that's what I was mulling over. I don't know if this will realistically be used, because I think the problem is that the thing maintaining and interpreting this is not the server, the thing being actuated, but the thing that's doing the acting, the client. That's what was going through my head as I went through this: I don't know; there are other ways of doing it. Thank you.
Chair: I think what we're hearing here is some people who have given this a little thought, but they're not necessarily interested in being consumers of it, and that would actually be the sort of thing that would drive interest in its implementation and refinement, and would also get us better feedback. So I think, barring some expression of interest of that variety on the list, this is probably not something that we need to revisit. If we can find or muster that energy in the community, people who are interested in it, then I think it's pretty easy to come back here and say we've reviewed this enough that we can call for adoption. But I don't think we have the level of energy or enthusiasm, with respect to being consumers of this, that would cause us to really ask that question at this point. Okay, thank you for your diligent efforts to refine this into something and to seek feedback.

Qin Wu: Thank you.

[Noise from the other room.]
Chair: Moving on to the next one, which focuses on NMDA protocol translation. Can you see it? Okay.

Qin Wu: This is Qin Wu again, and I want to talk about NMDA protocol transition issues. There has already been a lot of discussion here on these kinds of issues, so let me recap a little bit.
Right now the NMDA core RFCs have been published, and the guidance is mostly that new YANG models should be NMDA compliant, and most YANG models under development are being made NMDA compliant. But we still see some temporary non-NMDA modules that exist to bridge the gap until the NMDA versions are available, so there is a transition stage. Our confusion is: how long will this transition period take, when does it start, and when does it end? We think there may be a misconception here: the current NMDA YANG guidelines only provide guidance for the YANG model transition; they don't provide transition guidelines for the protocols. In particular, there is the case where a protocol implementation does not support NMDA but you can still use some NMDA modules. So we think maybe there is a gap here, or maybe this is a misconception; we have already discussed this on the mailing list and also offline with several people.
Here we give several options. On the client side, the client can be an NMDA client or a non-NMDA client; for the device, the server can be an NMDA server or a non-NMDA server. But in between, we think there may be another case: a server that implements the NMDA mechanism but does not support NMDA-compliant YANG data models. Having talked with some people, we think this "semi-NMDA" case is very confusing, and maybe we should just remove it. Based on this, we think the core problem is that when a non-NMDA client talks to a server that supports NMDA, using the traditional <get> operation it cannot retrieve the system-generated configuration. So we try to address that issue.
We propose three different solutions. The first is to add the state copy nodes back into the NMDA YANG data models. We don't think this is a good approach: since we have already moved to NMDA, adding them back does not seem reasonable; but it is one solution that lets the traditional <get> operation retrieve the system-generated data. The second option is to define a -state module using the same structure as the NMDA module; this is something already discussed in the NMDA/YANG guidelines. The problem is that in some cases, when we moved to NMDA, no -state module was defined for the NMDA model, so to implement this we would need to implement non-standard -state modules. Should we ask all the NMDA model authors to provide a -state module? We think this could be the solution, and with these -state modules we can address the problem: we can retrieve the system-generated configuration from them. The third option is to enhance the <get> operation to allow users of the traditional <get> operation to retrieve system-generated data; the impact there is only on the protocol side. We think it is possible, but we have not evaluated the impact, and it may be very hard to sell. So those are the three different solutions we propose.
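To make the third option concrete in rough terms: a server shim could answer a legacy <get> by overlaying system-generated configuration underneath the explicit running configuration. This is a toy sketch under stated assumptions, not anything from the draft or a standard: the function name and paths are illustrative, and a real server would merge full YANG trees, not flat path maps.

```python
# Hypothetical sketch of an "enhanced <get>" view: merge system-generated
# configuration into the classic <get> reply, with explicit running
# configuration winning on any conflict.

def legacy_get(running, system_generated):
    """Return the merged view a legacy client would see from <get>."""
    merged = dict(system_generated)  # start from system-generated entries
    merged.update(running)           # explicit configuration overrides them
    return merged

running = {"/system/hostname": "r1"}
system_generated = {
    "/system/hostname": "default",                  # overridden by running
    "/interfaces/interface[lo0]/type": "loopback",  # now visible to old clients
}

view = legacy_get(running, system_generated)
print(view["/system/hostname"])                 # explicit config wins
print(view["/interfaces/interface[lo0]/type"])  # system-generated entry exposed
```

The design question the speakers raise is exactly whether changing what <get> returns like this is acceptable, since it alters protocol behavior that existing clients may depend on.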
The first question we want to ask is whether, as we assumed, we may have servers that only support semi-NMDA. That is: when we migrate to NMDA, should we first support NMDA in the protocols, NETCONF or RESTCONF, and then support the NMDA YANG data models; or, when we implement, should we implement the NMDA YANG data models and the NMDA protocol support at the same time? For this we propose three options. First, we think we could skip the transition stage entirely if we go directly to the NMDA solution; that has already been done by the IETF and is a complete standard solution. But the problem we are facing is that we assume many non-NMDA clients do not support NMDA right now, so at this stage, how do they migrate? So we propose two further options. One is that you take one of the solutions; we prefer solution two, where you define a -state module, though in some cases it is not a standard -state module. The other is to agree that there is a transition stage, during which we may have different non-standard solutions, and each implementor picks what to take. Those are the options we think can address these kinds of issues. Any comments on this?
Rob Wilton: I think pragmatically there is probably something to this. One comment here: there is the potential for a given server or vendor to have a bespoke configuration switch to choose whether it is running in NMDA mode; I think for a lot of deployments that is probably sufficient and fine, and the operators would either upgrade all their clients in one go or not. In terms of option two, there is a specification in the YANG guidelines of how to generate that -state module; it is quite prescriptive, with fairly easy steps. So one option here could be to say that we run the IETF YANG modules through that process and dump those -state trees, with well-defined names, into GitHub, for example, and then they are just there; I think that may be a pragmatic way through. The other suggestion I had was, I think on the third option, in terms of the new YANG library and <get> operations: you could have a case where the existing <get> returns the extra -state trees, the exact -state modules, but the new operations also work as before. So I think there are ways through this; whether we need another standard or document to define it, I don't know.

Qin Wu: The puzzle we are facing is that whichever approach we take, we need a complete standard solution: for every NMDA model you should define a -state module, but some NMDA models did not define a -state model, such as the module-tags draft, and maybe some other cases, and that will delay implementation. How do you address that?

Rob Wilton: I think you just need to publish it: if you publish, say, ietf-routing, you publish ietf-routing-state with the same version number, following the rules for how to generate it, and then you have got that -state tree. Even if it is not included as an appendix of the module, how to construct the equivalent -state module is well defined, so it exists logically, not just because there is a file somewhere; two different vendors could independently generate the same thing.
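That mechanical derivation can be sketched as a deterministic tree transform. This is a toy sketch over a nested-dict stand-in for a YANG schema, not real YANG tooling; the recipe, roughly per the non-NMDA transition guidance in the YANG authoring guidelines (RFC 8407), is to mirror the config tree under a name carrying a "-state" suffix and mark every mirrored node config false.

```python
# Hypothetical sketch: derive a read-only "-state" mirror of a config tree.
# A schema node is a dict with a "config" flag plus child nodes/leaf types.

def derive_state_tree(name, node):
    """Return (name + '-state', copy of node with config false throughout)."""
    mirrored = {"config": False}
    for key, child in node.items():
        if key == "config":
            continue  # replaced by the forced config-false flag above
        if isinstance(child, dict):
            mirrored[key] = derive_state_tree(key, child)[1]  # recurse
        else:
            mirrored[key] = child  # leaf type carried over unchanged
    return name + "-state", mirrored

interfaces = {
    "config": True,
    "interface": {"config": True, "name": "string", "mtu": "uint16"},
}

state_name, state_tree = derive_state_tree("interfaces", interfaces)
print(state_name)                         # interfaces-state
print(state_tree["interface"]["config"])  # False
```

Because the transform is purely mechanical, two vendors running it over the same published module get byte-identical results, which is the point Rob makes about the -state module existing logically even without a published file.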
Chair: Tim, do you want to go? Okay, Kent then.

Kent Watsen (as a contributor): You say "agree that there is an NMDA transition period"; that is in fact what we have right now, we are in an NMDA transition period, so I don't think there is anything to agree on there. Also, earlier, on your other slide, I think in solution number two, you mention the possibility that the -state module is missing, but I think that's hypothetical: do you actually have an example of it missing?

Qin Wu: Module-tags is actually a typical one.

Kent Watsen: That particular draft was discussed on the list just recently. It is okay for that draft not to have a -state equivalent, because it contains no config false nodes, and there is no meaningful operational need to track the operational value of its configuration nodes; so it is actually allowed to not have a -state variant.

Qin Wu: But I saw the response on the list saying this is wrong; I don't know. Maybe there is another example: we may have other drafts where, for some NMDA-compliant model, no -state module is defined. These -state modules should follow the YANG guidelines, but they are not standard -state modules, just defined in an appendix, so different developers may implement them in different ways, and that will cause a lot of interoperability issues.

Kent Watsen: Are there models progressing right now through the IETF that don't have -state and that are NMDA compliant?

Qin Wu: I haven't checked.

Kent Watsen: If that exists, we should go back to the working group that is progressing them and tell them they should add -state, unless there is a reason not to. And if the IESG is processing such drafts in other areas, the IESG should say you need to conform with the BCP. It is true that, as with module-tags, one of the ways you conform is "we have looked at it and don't think it is appropriate for this particular module"; that is an okay way to conform with the BCP. But if this is going on, then really this is a comment to bring back to the IESG: please make sure that the BCP is being followed.
Qin Wu: That is a curious situation: nothing in the draft, I think in the introduction or the abstract, says whether the module conforms to NMDA or not.

Kent Watsen: Well, it should also have in it something like "we have specifically looked at this and have concluded that it is not necessary to conform to NMDA", so that it is stated explicitly.

Qin Wu: Yes, so maybe there is a guideline that should be modified for that. Okay.

Chair: But regardless, I think what Rob said earlier is that if a -state is missing on a published draft, we can always retroactively go back and publish the -state for it.
Tim Carey (Nokia): Let me add something from the other SDOs; these were our two big models, and those organizations said, okay, we need your stuff, and the question is who is going to use it. The problem is that as we moved to NMDA, there are organizations that said "I'm not going to produce a module that has -state", and that is their standard operating procedure, and there are organizations that say they will do them. So there is a problem in the industry: we are not consistent on where we are in this transition. And I would say that, as the authors and the owners, it is probably incumbent upon this group to provide guidance where it is necessary, above and beyond what we have done; we do need some of that guidance. I will say that some of the groups I am involved in have looked at this and have made some decisions on it; I can't go into too much detail because of member-organization rules, but some of the decisions that have been made say everything has to have a state model, and they are getting caught up on some of the permutations that were missed in earlier drafts: "I have got a non-NMDA client", you know, the permutations of NMDA and non-NMDA client against server are not defined, and they are getting caught up around some of the operations that were introduced. So my concern is that if we don't talk about this, I don't know whether we have all the pieces in place for industry to get the necessary guidance to get through this; we are just creating modules, and they have to look at the existing BCPs and at some of the resolutions from this room.

Chair: Okay, so we are out of time.
Just quickly, three things, and we will have to take all of this to the list. One, I am wondering whether we should consider a flag day: currently the guideline says something on the order of "as soon as possible", but maybe we as a working group should say how many years: one year, two years, three years? Is there a deadline, should there be a deadline, and if so, could we do that? Secondly, Andy had written something in Jabber, but we don't have time to go over that text. And lastly, I wanted to ask if there is some interest in a liaison to address your issue, Tim. Again, we are out of time, so we will take all of that to the list. Thank you. And there is a YANG-next meeting tomorrow morning.

Qin Wu: For that meeting, actually, there is a YANG-next side meeting on Tuesday morning; remember you can check out the wiki page for the side meeting. We will organize that meeting to discuss some issues, maybe related to the NMDA transition, but not only that.

Chair: So it is a side meeting, and if you are interested in it you should go, but it is not one of our things. Okay, thank you. If you haven't signed the blue sheet, please come up and do so; this is your last chance.