Re: [homenet] Alia Atlas' Discuss on draft-ietf-homenet-dncp-09: (with DISCUSS and COMMENT)

Alia Atlas <akatlas@gmail.com> Fri, 09 October 2015 18:39 UTC

To: Steven Barth <steven@midlink.org>
Archived-At: <http://mailarchive.ietf.org/arch/msg/homenet/aIBmXpxFIlaVOqLbflXLUqzaPFY>
Cc: homenet-chairs@ietf.org, Mark Townsley <mark@townsley.net>, draft-ietf-homenet-dncp.shepherd@ietf.org, The IESG <iesg@ietf.org>, draft-ietf-homenet-dncp@ietf.org, HOMENET <homenet@ietf.org>

Hi Steven,

On Thu, Oct 8, 2015 at 2:19 AM, Steven Barth <steven@midlink.org> wrote:

> Hello Alia,
>
> unfortunately we haven't gotten any response from you yet regarding your
> DISCUSS on DNCP.  We would really like to address the issues you have
> raised, but would need some feedback from your side to do so.  Please note
> that we have pushed a new revision in the meantime and tried to clear off
> the remaining issues in our mail from September 17th, which I have quoted
> below again.
>

I was waiting for the updated version and have now read it.

I did find the changes and extra text to be good improvements.

What is still frequently missing is absolute clarity on how to do the
various parts.

If you want, I can take a pass at some more serious restructuring and
write some clarifications - but I will only do that if you feel it is
helpful.


> Please let us know how to proceed on the matter to resolve your DISCUSS.
>
>
>
> Thanks,
>
>
> Steven
>
>
>
> On 17.09.2015 17:10, Steven Barth wrote:
> > Hello Alia,
> >
> >
> > please find replies inline.
> >
> >
> > Cheers,
> >
> > Steven & Markus
> >
> >
> >> ----------------------------------------------------------------------
> >> DISCUSS:
> >> ----------------------------------------------------------------------
> >>
> >> First,  I have a number of specific comments.   Some of these are
> hazards
> >> to technical interoperability; I've tried
> >> to include those in my discuss - but the general point is that it is
> very
> >> hard to tell a number of details because the
> >> structure is assumed.   Having read this document, I do not think that I
> >> could properly implement DNCP from scratch.
> >
> > Please note that an independent DNCP (or more specifically an HNCP)
> > implementation has been successfully developed based on
> > an earlier version of this draft and has been shown to be
> > interoperable with the reference implementation of the draft authors.
> > We used the implementer’s feedback afterwards to further refine the draft
> > to avoid possible ambiguities.
>

I understand that, but there were still a number of gaps.  For instance, I
see nothing that describes how to compute the network state hash.  I would
expect a section like:

"4.1.1 Computing Node Data Hash

To compute the data hash associated with a node, the TLVs are ordered first
based
on the lowest type and then numerically increasing based on the values.
This creates
a bit string that is input to the Hash Function specified by the DNCP
application profile.  The
length of the output hash is dependent upon the DNCP application profile.
The following gives an example using the fictitious profile given in
Appendix C.

.....

A Node Data Hash may be computed for a locally stored node or for a Node
TLV that is
received in the following cases....

4.1.2  Computing a Network State Hash

To compute the Local Network State Hash, only the nodes which are
bidirectionally connected
to the local node can be used.  These nodes are determined by the algorithm
given in
Section 4.6 which determines the current network topology graph from the
local computing
node's perspective.  As discussed in Section 4.6, any nodes which are not
reachable
may be removed from the local node's knowledge; at a minimum, such nodes
MUST NOT
be used in computing the Local Network State Hash.

To compute a Received Network State Hash, the local node uses the
information from the
associated received Node TLVs.  If Node Data is included in a Node TLV,
then an updated
Node Data Hash MUST be computed as described in Sec 4.1.1.  Otherwise, the
Node Data
Hash contained in the Node TLV MUST be used.  It is assumed that all
nodes included in the
Network State TLV are considered to be bidirectionally reachable by the
originating node.

To compute either a Local or a Received network state hash, the nodes are
ordered based
upon their node identifiers in increasing numerical order.  Each node is
checked to see that
it has an updated Node Data Hash.  A node is considered to have an updated
Node Data Hash if ....
If a node doesn't have an updated Node Data Hash, then that must first be
computed before
the Network State Hash can be computed.  Finally, the bit string created by
taking the Node Data
Hash of each node in the specified order is input to the Hash Function
specified by the
DNCP application profile.  The  length of the output hash is dependent upon
the DNCP
application profile.

The following is an example using the fictitious profile given in
Appendix C.

...


The Local Network State Hash is recomputed when ....
"
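To be concrete about what I have in mind, here is a rough sketch in Python
of the two computations.  The hash function (SHA-256 truncated to 8 bytes)
and the exact bit-string layout are stand-ins I made up for illustration -
the real ones would come from the DNCP profile:

```python
import hashlib
import struct

def node_data_hash(tlvs, hash_len=8):
    """Sketch of the proposed Sec 4.1.1.  tlvs is a list of
    (type, value) pairs; they are ordered by ascending type, then
    value, and the resulting bit string is fed to the profile's hash
    function (SHA-256 truncated to hash_len bytes is a stand-in)."""
    blob = b''.join(struct.pack('!HH', t, len(v)) + v
                    for t, v in sorted(tlvs))
    return hashlib.sha256(blob).digest()[:hash_len]

def network_state_hash(nodes, hash_len=8):
    """Sketch of the proposed Sec 4.1.2.  nodes maps a node identifier
    to (sequence number, node data hash) and must contain only nodes
    that are bidirectionally reachable; leaves are ordered by
    ascending node identifier before hashing."""
    blob = b''.join(struct.pack('!I', seq) + h
                    for _, (seq, h) in sorted(nodes.items()))
    return hashlib.sha256(blob).digest()[:hash_len]
```

The point is that two implementations must produce identical bit strings
from the same data, so the ordering and layout rules have to be spelled
out exactly like this somewhere in the draft.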


>> a) What is a topology graph?  When is it created, modified, or destroyed?
> >>  Is it a conceptual artifact constructed
> >> from the various TLVs?  I think a quick paragraph describing it would
> >> help.
> >
> > The term “topology graph” is defined in the Terminology Section and based
> > on bidirectional peer reachability which is defined right afterwards. In
> the
> > latter definition it is mentioned that the process is solely based on
> published
> > TLVs so the topology graph is to my understanding already unambiguously
> > defined. It is solely up to the implementer if this is implemented as an
> actual
> > graph or not, as long as the outcome is consistent with what is
> described.
> >
> > If you still think it is ambiguous please provide some ideas on how this
> > could be worded better.
>

The draft has "Topology graph  the undirected graph of DNCP nodes produced
by retaining only bidirectional peer relationships between nodes."

In Section 4.6, I think that you are describing how to create the topology
graph,
rather than "traversing it".

At the start of Section 4.6, I would add something like:

"As described in Section 4.1.2, the local node's computation of an accurate
network state hash depends upon knowing which nodes are reachable.  Which
nodes are reachable is determined by computing the topology graph whenever a
topology change may have occurred.

A topology change can be a new node being added or removed, a node losing
bidirectional connectivity to its last bidirectionally connected neighbor,
or a node's
state timing out.   Bidirectional connectivity is determined based upon the
Peer
TLVs received.  A node's state will time out based on its origination time
as follows.

...

At the start of the algorithm, all nodes are marked as unreachable except
for the
local node.  It is assumed that the local node's state has not timed out."

"When a node R is marked as reachable, the node's Peer TLVs are examined.
For each Peer TLV, if the specified Peer Node Identifier identifies a node
N in the
database, then that node N is a candidate to be added to the graph.  Node N
is
marked as reachable if and only if...

.....

Once this algorithm completes because there are no more candidate nodes that
can be added, the nodes that are not marked as reachable are not used for
any
purpose. [Text on details from draft here]  The unreachable nodes MAY be
removed
from the database, but it is RECOMMENDED to store them as unreachable to
speed DNCP convergence. For instance, if a link has gone down and is coming
back
up, then a node may only be temporarily unreachable."
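If it helps, the reachability computation I am describing is just a
flood-fill over bidirectional Peer TLV references.  A minimal sketch, with
made-up data structures (this is my reading, not normative text):

```python
def reachable_nodes(local_id, peers):
    """Flood-fill sketch of the topology graph computation.  peers maps
    a node identifier to the set of node identifiers listed in that
    node's published Peer TLVs (a made-up structure).  A node N becomes
    reachable only via an already-reachable node R where R lists N and
    N lists R back, i.e. only bidirectional peer relationships are
    retained."""
    reachable = {local_id}   # all other nodes start unreachable
    frontier = [local_id]
    while frontier:
        r = frontier.pop()
        for n in peers.get(r, set()):
            if n not in reachable and r in peers.get(n, set()):
                reachable.add(n)
                frontier.append(n)
    return reachable
```

Everything not in the returned set would be marked unreachable and, at a
minimum, excluded from the Local Network State Hash.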

>> b) How are peer relationships discovered and established and destroyed?
> >> I really can't tell from the draft and even a quick scan of
> >> RFC 6206 doesn't give any hints.
> >
> > This is explained in section “4.5 Adding and Removing Peers”. As for
> > methods for discovery, this is actually mentioned multiple times across
> > the document, e.g. in the second paragraph of the Overview or in the
> > terminology.
> >
> > Could you please be more specific on what exactly is unclear here in your
> > opinion?
>

"Sec 4.5.1  Initiating Neighbor Relationships

A DNCP profile MAY specify a multicast address on which each DNCP node MUST
listen to learn about new neighbors.  If multicast discovery is specified
in the DNCP
profile, then when a node starts, it SHOULD send a Node Endpoint TLV to the
multicast
address using ??UDP??.

In addition to or instead of multicast discovery, a node MAY be configured
with a
set of DNCP neighbors and the associated interfaces to use.  When a node
needs to initiate
a neighbor relationship, that node SHOULD send a Node Endpoint TLV to the
unicast address
of the desired neighbor.  Note that security considerations may influence
whether the relationship
is established."

"Sec 4.5.2 Destroying a Neighbor Relationship

A node can destroy an established neighbor relationship, regardless of
whether that node initiated
or responded to create the neighbor relationship.  A node MUST send a Node
State TLV that does
not include the Node Endpoint TLV with the neighbor's Node Identifier and
the associated link's Endpoint
Identifiers.  A node MAY <somehow remove/expire its Node State TLV
information> as part of terminating DNCP."



> >> c) It looks like the TLVs are sent one after another in a datagram or
> >> stream across a socket.  The closest I see to any detail
> >> is "TLVs are sent across the transport as is".
> >
> > Section “4.2 Data Transport” says
> >    TLVs are sent across the transport as is, and they SHOULD be sent
> >    together where, e.g., MTU considerations do not recommend sending
> >    them in multiple batches.  TLVs in general are handled individually
> >    and statelessly, with one exception …
> >
> > Could you please be more specific on what is unclear?
> >
> > The section states that TLV handling is stateless except for the one
> exception
> > which is explained, so otherwise  it does not really matter in what order
> > they are sent or how you split them up onto datagrams (if that is your
> > transport).
>

At a minimum, you should specify "in network order" for how the TLVs are
sent.
It would be good to specify whether TLVs need to be sent in any particular
order.
It'd be nice for the receiver to know whether there's a way of seeing that
the last
TLV in the message has been received so it's easier to avoid doing work
multiple
times.

When does a node decide to use unicast transport?  When does the node decide
to use multicast transport?  There are some indications about what is sent,
but
not in a very normative way.

When and how does a node take into consideration MTU?  I know this is tied
to
the profile, but it needs to be discussed here.  Can a TLV be broken across
multiple packets?


> >> d) As far as I can tell, Trickle is used to decide basically how often
> to
> >> send information - but the details of this and the interactions
> >> aren't clear to me.
> >
> > This is explained in detail in section “4.3.  Trickle-Driven Status
> Updates”.
> >
> >    The Trickle state for all Trickle instances is considered
> >    inconsistent and reset if and only if the locally calculated network
> >    state hash changes.  This occurs either due to a change in the local
> >    node's own node data, or due to receipt of more recent data from
> >    another node.  A node MUST NOT reset its Trickle state merely based
> >    on receiving a Network State TLV (Section 7.2.2) with a network state
> >    hash which is different from its locally calculated one.
> >
> >    Every time a particular Trickle instance indicates that an update
> >    should be sent, the node MUST send a Network State TLV…
> >
> > Could you please be more specific on what is missing here?
>

First, I did read the entire draft.  Repeatedly pointing me to text that is
already there doesn't move the conversation forward.  I don't think the
section on this is clear.  What I would add is approximately the following
- assuming my understanding is correct:

"The Trickle algorithm is used to provide reliability across unreliable
unicast
or multicast transports.   When the DNCP profile is using reliable unicast,
then the Trickle algorithm is not needed.

As mentioned in the terminology, a Trickle Instance is associated with a
particular endpoint for multicast transport or (peer, endpoint) for
unreliable
unicast transport.  A Trickle Instance will indicate when an update should
be
sent.  To do this, the Trickle Instance must be aware of when the Local
Network
State Hash changes.  When the Local Network State Hash changes, the
Trickle state for all Trickle instances is considered inconsistent and reset.

....

When a reliable unicast transport is used, the local node MUST send a status
update consisting of ... whenever the Local Network State Hash changes.

"

What's missing is the context of what the Trickle Instance is for, when it's
actually used, and so on.
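For reference, my mental model of a Trickle Instance here is the plain
RFC 6206 machine, treating Imax as a number of doublings of Imin (per my
comment 3 below).  Roughly, with the timer plumbing and the DNCP send hook
omitted:

```python
import random

class TrickleInstance:
    """Bare-bones RFC 6206-style Trickle state machine (sketch only)."""

    def __init__(self, imin, imax_doublings, k):
        self.imin, self.imax, self.k = imin, imax_doublings, k
        self.reset()

    def reset(self):
        # Inconsistency (here: the locally calculated network state
        # hash changed) restarts the interval at Imin.
        self.interval = self.imin
        self._begin_interval()

    def _begin_interval(self):
        self.c = 0   # consistent messages heard this interval
        self.t = random.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        self.c += 1

    def interval_expired(self):
        # Double I, capped at Imin * 2^Imax, and start a new interval.
        self.interval = min(self.interval * 2, self.imin * 2 ** self.imax)
        self._begin_interval()

    def should_send(self):
        # At time t within the interval, transmit (a Network State TLV,
        # in DNCP's case) only if fewer than k consistent messages were
        # heard.
        return self.c < self.k
```

What the draft needs to say explicitly is which events map to reset(),
hear_consistent(), and the transmission at time t.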


> >> I suspect that there are dependencies on the HNCP draft that would make
> >> this a lot easier to understand but since we want it to progress
> >> separately, the document does need to stand alone.
> >
> > No, there are no dependencies. This was noted already in response to
> > reviews from Thomas Clausen and Les Ginsberg.
> >
> > Please refer to section “9.  DNCP Profile-Specific Definitions“ for the
> > comprehensive list of things we have intentionally left out in DNCP.
>

I do think that the updated draft with the example profile helps a bit here.
It's still hard to read and be clear on the details of DNCP in isolation.

>> 8) In Sec 4.6 "   o  The origination time (in milliseconds) of R's node
> >> data is greater
> >>       than current time in - 2^32 + 2^15."  Since origination time
> hasn't
> >> yet been introduced, I'm going
> >> on an assumption that it means when R's node data was originated from R.
> >> So - this requirement is
> >> saying that R's node data must have been generated in the future (but
> >> already known by the local node)???
> >
> > The term “origination time” is actually explained in Section “5. Data
> Model”,
> > -10 will have a forward-reference here. About the time itself,
> > the text says greater than “current time - 2^32+2^15”; that threshold is
> always
> > in the past.
>

Yes, you're right.  Please have the draft explain why that magic value was
picked and what concern the time check is resolving.  I assume that it's
that value because of concerns about the time value wrapping, so any older
origination time is considered aged out?
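In other words, I read the check as the following sketch, assuming
millisecond timestamps that wrap at 2^32 with 2^15 ms as a guard band -
please confirm or correct:

```python
def node_data_fresh(origination_time_ms, now_ms):
    """Freshness check from Sec 4.6 as I understand it: node data is
    usable only if it originated later than now - 2^32 + 2^15 ms, i.e.
    it is younger than ~49.7 days (2^32 ms) minus a ~32.8 s (2^15 ms)
    guard band against the timestamp wrapping."""
    return origination_time_ms > now_ms - 2**32 + 2**15
```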

>> 9) In Sec 4.6 "They MAY be removed immediately after the topology graph
> >> traversal"  The DNCP nodes can
> >> be removed from what??  The topology graph?  From some type of received
> >> TLV??  How would they be
> >> added back in later?
> >
> > Maybe “forgotten (about)” may be a more intuitive term here. Each node
> stores
> > each other node’s data and metadata so “removing” the node essentially
> means
> > removing that information from the local node’s memory. If it is not
> removed
> > immediately the node can still keep it in its memory in case the remote
> node
> > becomes bidirectionally reachable again to avoid synchronizing the node
> data
> > (if it did not change in the meantime).
> >
> > Clarified in -10.
>

It's better.  I suggested some additional text in the Network Graph
section.



> >> 11) In sec 6.1: "Trickle-driven status updates (Section 4.3) provide a
> >> mechanism for
> >>    handling of new peer detection on an endpoint, as well as state
> >>    change notifications."   Nothing in Sec 4.3 talked about a mechanism
> >> for detecting
> >> new peers on an endpoint.  It is entirely possible that Trickle does
> >> provide this (but what
> >> about the modes where Trickle isn't used/needed??) but there needs to be
> >> a description of how
> >> new peer detection is done - even if it's just a pointer to Trickle
> >> RFCs.
> >
> > As noted in the reply for c) addition is based on TLV reception as noted
> in 4.3.
> > Actual discovery is mentioned multiple times in this draft, e.g. 2nd
> paragraph
> > of Overview. Summarizing: it boils down to either one node receives that
> TLV
> > over multicast or one of the node knows the other’s unicast address based
> > on configuration or some other protocol then the other node discovers the
> > former when that TLV is unicasted to it.
> >
> > Could you please note how this could be improved?
>

The draft says "Trickle-driven status updates (Section 4.3) provide a
mechanism for
  handling of new peer detection on an endpoint, as well as state
   change notifications.  Another mechanism may be needed to get rid of
   old, no longer valid peers if the transport or lower layers do not
   provide one."

What I think you may mean is:

"When status updates with the Network State TLV are sent via a multicast
transport, this can provide a mechanism for new peers on that particular
link to
discover the sending node.  The status updates, regardless of the transport
method used, provide state change notifications.

The mechanism to remove old, no longer valid peers, is described in Section
....
In some DNCP profiles, the loss of the transport layer connectivity is also
an
indication that the associated peer should be removed.
"


> >> 12) In Sec 6.1.4: "   On receipt of a Network State TLV (Section 7.2.2)
> >> which is consistent
> >>    with the locally calculated network state hash, the Last contact
> >>    timestamp for the peer MUST be updated."   Could you add some
> >> rationale for why this
> >> is needed?
> >
> > This is to indicate the node is still alive (6.1. is the section on
> keep-alives).
> > The Last Contact timestamp is defined in 6.1.1 and 6.1.5 explains what
> > happens when it reaches a threshold.
>

What I'm looking for is how the timestamp is used.  Please add a forward
reference to 6.1.5 or reorganize.


> >> I suspect that part of my confusion is that the "locally
> >> calculated network state
> >> hash" could mean two different things.  Is it the hash computed by the
> >> local node on the
> >> data received in the Network State TLV to validate that the Network
> State
> >> TLV is not
> >> corrupted?
> >
> > Since there is no mention of any hashing to validate generic TLVs
> anywhere
> > in the RFC or any such hash values for comparison at all I cannot really
> follow
> > how one could come to that conclusion.
> >
>

Trying desperately to figure out what you are trying to say in the draft.



> >> Or is it the hash computed by the local node on its
> >> arbitrarily wide 1-hash tree
> >> that represents the local node's network state?   Since the term is
> never
> >> defined, it's hard
> >> to guess here.  The bottom of Sec 7.2.2 uses "current locally calculated
> >> network state hash"
> >> to refer to, I believe, the latter.
> >
> > The term “network state hash” is actually defined in the terminology
> section and hash tree section 4.1;
> > if the words “locally calculated” make it really ambiguous please let me
> know how to improve it.
>

All of the network state hashes are locally calculated!  The local node is
the one running this stuff.
Perhaps some of them are based off of local data (i.e., the
local node's view of the network) and some of them are based off of
received data (i.e. a received Network
State TLV).   Or maybe you never compute the Received Network State Hash
because you always use the value found in the Network State TLV???


> >> 13) In Sec 6.2: "normally form peer relationships with all peers."
> Where
> >> is forming a peer
> >> relationship defined?  Is this tied solely to Trickle instances?  What
> >> about with reliable unicast
> >> where presumably Trickle isn't used between peers as the Overview states
> >> "If used in pure unicast mode with unreliable transport, Trickle is also
> >> used between peers"?
> >
> > Please see replies to c) and 11).
>

Neither of those actually better describes what a peer relationship entails.

I'd suggest
"Section 4.5.3 A Peer Relationship

A peer relationship is established with a neighbor across one or more
endpoints.
When the local node needs to send a status update, that is sent to the
neighbor
across ??one or all?? of the connected endpoints.  While the relationship
exists,
the local node MUST include, for each associated endpoint, a Peer TLV that
specifies the neighbor and endpoint.

If possible with the DNCP profile's transport choice, each peer's transport
connectivity
SHOULD be monitored so that the peer relationship can be removed if all the
peer's
transports are down (or have one or more endpoints removed as the transport
connectivity
goes down).  Keepalives (Sec 6.1) may also be used to remove a peer
relationship....
"


> >> 14) In Sec 7: "   For example, type=123 (0x7b) TLV with value 'x' (120 =
> >> 0x78) is
> >>    encoded as: 007B 0001 7800 0000.  If it were to have sub-TLV of
> >>    type=124 (0x7c) with value 'y', it would be encoded as 007B 0009 7800
> >>    0000 007C 0001 7900 0000."   In this case, the padding between the
> >> TLV's value
> >> and the sub-TLV is included but the padding after the sub-TLV is not.
> >> What would
> >> happen if there were multiple sub-TLVs??  Would the padding between
> those
> >> sub-TLVs
> >> be included?
> >
> > Yes, that is correct, fixed in -10. The individual sub-TLVs themselves
> contain
> > length fields which will not include even their own padding, but the
> container
> > TLV’s length ends at the end of the last sub-TLV’s last non-padding byte
> > (and then padding is inserted if any).
>

Ok
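For concreteness, here is how I now understand the encoding and length
rules, as a small sketch that reproduces both of the draft's examples
(the helper name is mine, not the draft's):

```python
import struct

def encode_tlv(tlv_type, value, subs=()):
    """Encode one TLV: 16-bit type, 16-bit length, then the value,
    zero-padded to a 4-byte boundary.  The length counts the value,
    any padding inserted before sub-TLVs, and the sub-TLVs up to the
    last sub-TLV's last non-padding byte; trailing padding is never
    counted.  subs is a list of (type, value) pairs."""
    body = bytearray(value)
    for sub_type, sub_value in subs:
        body += b'\x00' * (-len(body) % 4)   # align before each sub-TLV
        body += struct.pack('!HH', sub_type, len(sub_value)) + sub_value
    out = struct.pack('!HH', tlv_type, len(body)) + bytes(body)
    return out + b'\x00' * (-len(out) % 4)   # trailing padding, uncounted

# The draft's two examples:
# encode_tlv(123, b'x')                -> 007B 0001 7800 0000
# encode_tlv(123, b'x', [(124, b'y')]) -> 007B 0009 7800 0000
#                                         007C 0001 7900 0000
```

If the -10 text pins down the rules this precisely, I think the padding
question is fully resolved.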


> >> Also related :"In this case the length field includes the length of the
> >> original TLV, the
> >> length of the padding that are inserted before the embedded TLVs and the
> >> length of the
> >> added TLVs."  Here, the phrase "length of the TLV" means different
> >> things.  In the first case,
> >> "length of the original TLV" means the "length of the value in the
> >> encapsulating TLV".  In
> >> the second case, "length of the added TLVs" appears to mean the length
> of
> >> the sub-TLVs
> >> including the type/length header.  As I mentioned above, I can't tell
> >> what happens to the
> >> padding in between sub-TLVs.
> >
> > Clarified in -10.
>

Ok


> >> ----------------------------------------------------------------------
> >> COMMENT:
> >> ----------------------------------------------------------------------
> >>
> >> 1) In Sec 4.1, I think it would be clearer if you moved the sentence
> >> "These leaves are ordered in ascending
> >>    order of the respective node identifiers. " to after the first
> >> sentence with appropriate text changes.  I was left
> >> confused by why the leaf would be represented by a node's sequence
> >> number.  I think it's that a leaf represents
> >>  a node and is ordered based upon that node's identifier.   The value of
> >> a leaf is a tuple of the node's sequence
> >> number and the hash value...
> >
> > Clarified as
> > “These leaves are ordered in ascending
> >    order of the node identifiers of the nodes they represent.”
> > in -10.
> >
> >
> >> 2) In Sec 4.2, it says "As multicast is used only to identify potential
> >> new DNCP
> >>    nodes and to send status messages which merely notify that a unicast
> >>    exchange should be triggered, the multicast transport does not have
> >>    to be secured."   There aren't attacks from processing fake potential
> >> new DNCP node
> >>   or triggering lots of unneeded/unterminated unicast exchanges?
> >
> > This is already covered in the Security Considerations section.
>
> ok


> >
> >> 3) In Sec 4.3, it says "Imin, Imax and k.   Imin and Imax represent the
> >> minimum and maximum values for I"
> >> I believe this isn't quite accurate.  Imax is a max multiplier of Imin
> >> for I and not a maximum value.
> >> I've seen this error in another draft also; I think it's important to be
> >> correct & clear here.
> >
> > Addressed for -10. 6206 defines it as “The maximum interval size, Imax“,
> > which probably led to this. (I wonder why it is not maximum interval
> exponent, or something..)
>

That would have made more sense, but water under the bridge now.


> >> 4) There are multiple references to different unintroduced TLVs.  There
> >> is also the idea that
> >> each DNCP protocol can have its own TLVs.  It'd be very useful in the
> >> Introduction to simply state
> >> what TLVs are required by DNCP and conceptually what they are for.
> >
> > Section 7 is entirely devoted to that purpose. I do not think that it
> should
> > be repeated in the general introduction.
> >
> > If there are particular forward references that are missing, please let
> us know.
>

Section 7 is defining them and the rules around them.  I'm looking for
something like

"As defined in Section 7, DNCP uses request TLVs and data TLVs.  The
request TLVs
are Request Network State TLV and Request Node State TLV.  These are used
for
synchronizing the database between two neighbors.  The Data TLVs are: Node
Endpoint,
Network State, Node State with optional sub-TLVs, Peer, and Keep-Alive.
DNCP profiles
may add additional TLVs or add new sub-TLVs to the Node State."

You don't want to lose or confuse your readers.


> >> 5) In Sec 4.3, it says "The Trickle state for all Trickle instances is
> >> considered
> >>    inconsistent and reset if and only if the locally calculated network
> >>    state hash changes."  but I have no idea yet what a Trickle instance
> >> is, when
> >>    a Trickle instance is created or associated with a node?  an
> >> interface? or something else?
> >
> > Please see Section “5. Data Model”
> >
> >    “For each remote (peer, endpoint) pair detected on a local endpoint, a
> >    DNCP node has:”
> >    [...]
> >    * Trickle instance: the particular peer's Trickle instance with
> >       parameters I, T, and c (only on an endpoint in Unicast mode, when
> >       using an unreliable unicast transport) .
> >
> > we will add an xref there and add “trickle instance” to the terminology.
>

The definition does help.  It's still not clear from the text what is
handled because of Trickle and what is handled because a status update is
sent, and so on.

>> 6) I think it would be more useful to describe generally at least what is
> >> in a DNCP profile earlier in the
> >> document.  This may be bringing Section 9 forward or it may be listing
> it
> >> earlier.  I'm seeing numerous
> >> forward references.
> >
> > There would be numerous forward references in that case too, as it refers
> > to the protocol behavior and individual TLVs. (There are currently 2
> forward
> > references in the terminology and 2 in the text; we probably cannot move
> it
> > before the terminology, and right after the terminology it would
> introduce
> > more forward references than it would eliminate.)
>

Try adding a pointer to Appendix C to show there's an example.

> >> 7) Start of Sec 4.6 talks about the topology graph - but there's been
> >> absolutely no introduction of
> >> what the topology graph is or how it was created, updated, etc.
> >
> > See reply to a).
> >
> >
> >> 10) In Sec 5:  This is a helpful section.  It tells me that a Trickle
> >> instance is per interface.  I don't see anything
> >> like a topology graph in it.  It clarifies origination time slightly.
> It
> >> talks about "For each remote (peer, endpoint)
> >> pair detected on a local endpoint" but I still have no ideas how that
> >> detection is done.  It implies some policy
> >> restrictions "Set of addresses: the DNCP nodes from which connections
> are
> >> accepted." but has no details on whether
> >> that is created via multicast messaging or via local configuration or
> >> something else.
> >
> > See previous answers referring to peer detection.
> >
> >
> >> 15) Also, in Sec. 4.6, the terminology for the fields of the Peer TLV is
> >> different than defined
> >> in Sec 7.3.1 - just "Endpoint Identifier" instead of "Local Endpoint
> >> Identifier".
> >
> > Addressed in -10.
> >
> >
> >> 16) In Sec 7.3.2: "   Endpoint identifier is used to identify the
> >> particular endpoint for
> >>    which the interval applies.  If 0, it applies for ALL endpoints for
> >>    which no specific TLV exists."   Is this the Local Endpoint
> Identifier
> >> or the remote?
> >>    The same question applies for Sec 7.2.1.
> >
> > Both refer to the endpoint identifier of the (local) sending node. Given
> the
> > TLV is published network-wide within the particular node’s node state,
> > identifying that of remote nodes would be nontrivial and probably not
> unique
> > either.
>

So clarify that in the text.  Don't make the implementer figure out all the
assumptions and risk getting them wrong.

Regards,
Alia



> > _______________________________________________
> > homenet mailing list
> > homenet@ietf.org
> > https://www.ietf.org/mailman/listinfo/homenet
> >
>