[ippm] Benjamin Kaduk's Discuss on draft-ietf-ippm-ioam-data-12: (with DISCUSS and COMMENT)
Benjamin Kaduk via Datatracker <noreply@ietf.org> Wed, 24 March 2021 04:43 UTC
From: Benjamin Kaduk via Datatracker <noreply@ietf.org>
To: The IESG <iesg@ietf.org>
Cc: draft-ietf-ippm-ioam-data@ietf.org, ippm-chairs@ietf.org, ippm@ietf.org, Al Morton <acm@research.att.com>, acm@research.att.com
Reply-To: Benjamin Kaduk <kaduk@mit.edu>
Message-ID: <161656101267.7087.8167870905178123095@ietfa.amsl.com>
Date: Tue, 23 Mar 2021 21:43:33 -0700
Archived-At: <https://mailarchive.ietf.org/arch/msg/ippm/xscID6PofK1xlifIEx2ynCmRdho>
Subject: [ippm] Benjamin Kaduk's Discuss on draft-ietf-ippm-ioam-data-12: (with DISCUSS and COMMENT)
Benjamin Kaduk has entered the following ballot position for
draft-ietf-ippm-ioam-data-12: Discuss

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)

Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.

The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-ippm-ioam-data/

----------------------------------------------------------------------
DISCUSS:
----------------------------------------------------------------------

I think we need some greater clarity on the relationship between IOAM
"layers" and IOAM-Namespaces.  For example, in Section 4 there is a
principle of "Layering" that seems to indicate that different layers
operate entirely independently, such as might occur when traffic from
one operator that uses IOAM is conveyed in a tunnel over a different
operator's network and both operators use IOAM independently.  But in
Section 5.3 we seem to see some discussion that IOAM-Namespaces can be
used to enforce a separation of layers ("IOAM-Namespaces provide
additional context [...] e.g. if an operator wishes to use different
node identifiers for different IOAM layers"), and that namespace
identifiers allow for determination of which IOAM-Option-Types need to
be processed "in case of a layered IOAM deployment".

I think there is also some internal inconsistency relating to the role
of IOAM transit nodes.  This may be localised in Section 5.2, where we
see both that a transit node is one that "read and/or write or process
[the] IOAM data" and that a transit node "updates one or more of the
IOAM-Data-Fields" (i.e., always writes), but I did not attempt an
exhaustive check for other instances.
I don't think the definition of the POSIX epoch is correct -- it seems
to be copied (roughly) from the definition of the PTP epoch (i.e., using
midnight TAI as the reference), but all the references I consulted
indicate that the POSIX epoch started at midnight UTC.

As foreshadowed in
https://mailarchive.ietf.org/arch/msg/last-call/Ak2NAIKQ7p4Rij9jfv123xeTXQY/
I think we need to have a discussion about the expectations and
provisions for cryptographic (e.g., integrity) protection of IOAM data.
From my perspective, IOAM is a new (class of) protocols that is seeking
publication on the IETF stream as Proposed Standard.  While we do make
exceptions for modifications to protocols that were developed before we
realized how important integrated security mechanisms are, it's
generally the case that new protocols are held to the current IETF
expectations for what security properties are provided; the default
expectation is that a protocol is expected to provide secure operation
in the internet threat model of RFC 3552.  This draft currently only
provides a brief note in the security considerations that there exists
an individual draft (draft-brockners-ippm-ioam-data-integrity) that
might provide ways to protect the integrity of IOAM data fields.
Shouldn't the security mechanisms be getting developed side-by-side with
the protocol mechanisms, to ensure that they are properly integrated and
fit for purpose?  (This does not necessarily have to be in the same
document and could be part of a cluster of related documents, but I
don't think that an informative reference to a non-WG draft really
qualifies.)

----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

Thanks to Shawn Emery for the secdir review.

I have grave misgivings about the proposed behavior that provides for a
separate domain boundary at the granularity of IOAM-Namespace.
It does not quite rise to a Discuss because it is technically possible
to implement successfully if done perfectly, but it seems incredibly
risky and very hard to get right, and there does not seem to be
sufficient motivation presented for the benefits of this behavior to
justify the risk.  Specifically, by partitioning all IOAM behaviors by
IOAM-Namespace, we require a node to know whether or not it is to behave
as ingress/egress for a domain on a per-namespace basis.  In effect, it
allows for a proliferation of as many distinct and partially overlapping
IOAM Domains as there are namespace values, and whether a node is
ingress (or egress) for any given domain has to be looked up on the
basis of the specific IOAM-Namespace value.  I find this concerning
because the security properties of the system rely on properly policing
the domain boundary, so that IOAM markings from outside the domain are
rejected at ingress, and IOAM data is not leaked out of the domain by
policing at egress.  While we have ample evidence to suggest that it is
feasible to maintain a single tightly controlled domain that properly
polices ingress and egress (e.g., MPLS), I don't know of evidence to
suggest that it is feasible to do so with a spate of partially
overlapping domains where domains and domain membership are malleable
based on the interaction of configuration and in-band protocol elements.
It seems incredibly likely that some domain boundary somewhere will be
inadvertently permeable, which would compromise the security properties
of the system.

Section 1

(side note) There's perhaps something of a philosophical question of
whether a mechanism really qualifies as "in situ" if it involves
encapsulating the packet in a full-fledged tunnel (as at least the IPv6
encapsulation does, as is needed to add the additional EHs that hold
IOAM data fields).  That said, I don't really have any alternative
suggestions, and I am sure that IOAM is a well-established terminology,
so this is just a side note.
Section 4

   Scope: This document defines the data fields and associated data
   types for in-situ OAM.  The in-situ OAM data field can be
   encapsulated in a variety of protocols, including NSH, Segment
   Routing, Geneve, IPv6, or IPv4.  Specification details for these

I found drafts (split between targeting ippm and "the relevant working
groups") that would cover Geneve and IPv6, but not anything for IPv4.
Are we fully confident that the aim is actually achievable for IPv4?

   Deployment domain (or scope) of in-situ OAM deployment: IOAM is a
   network domain focused feature, with "network domain" being a set of
   network devices or entities within a single administration.  For
   example, a network domain can include an enterprise campus using
   physical connections between devices or an overlay network using
   virtual connections / tunnels for connectivity between said devices.

If these virtual connections/tunnels do not provide cryptographic
confidentiality and integrity protection, then the security
considerations for IOAM need to include the full physical deployment
scope, including the underlay over which the overlay is constructed, not
just the "administrative domain" as defined here.

Furthermore, there seems to be a very questionable interaction between
labeling the IOAM deployment domain an administrative domain yet
allowing for multiple partially overlapping IOAM domains as
distinguished by namespace ID.  Does the administrative domain have to
encompass the union of all the different IOAM domains (as identified by
namespace ID)?

Section 5.2

   An "IOAM transit node" updates one or more of the IOAM-Data-Fields.

(per the discuss) The previous discussion suggested that even a node
that only reads IOAM-Data-Fields is still considered a "transit node",
and that updating the field contents is not necessary in order to
justify that moniker.
That said, the notion that a transit node is writing (e.g., to update
trace option-types) seems to prevail throughout later portions of the
document, so perhaps it is the text a few paragraphs earlier that is in
error.

   If both the Pre-allocated and the Incremental Trace Option-Types are
   present in the packet, each IOAM transit node based on configuration
   and available implementation of IOAM populates IOAM trace data in
   either Pre-allocated or Incremental Trace Option-Type but not both.

(per the discuss) Likewise, is this "populates" mandatory?

   A transit node MUST ignore IOAM-Option-Types that it does not
   understand.  A transit node MUST NOT add new IOAM-Option-Types to a
   packet, MUST NOT remove IOAM-Option-Types from a packet, and MUST
   NOT change the IOAM-Data-Fields of an IOAM Edge-to-Edge Option-Type.

Almost all of this (not the Edge-to-Edge stuff) seems fairly
tautological, in that if a node did that stuff it would be an
encapsulating and/or decapsulating node, possibly in addition to being
a transit node.

   The role of an IOAM-encapsulating, IOAM-transit or IOAM-
   decapsulating node is always performed within a specific IOAM-
   Namespace.  This means that an IOAM node which is e.g. an IOAM-
   decapsulating node for IOAM-Namespace "A" but not for IOAM-Namespace
   "B" will only remove the IOAM-Option-Types for IOAM-Namespace "A"
   from the packet.  Note that this applies even for IOAM-Option-Types
   that the node does not understand, for example an IOAM-Option-Type
   other than the four described above, that is added in a future
   revision.  An IOAM decapsulating node situated at the edge of an
   IOAM domain MUST remove all IOAM-Option-Types and associated
   encapsulation headers for all IOAM-Namespaces from the packet.

I can only make sense of this paragraph as a whole if the last sentence
instead says "MUST remove from the packet [...] for all IOAM-Namespaces
for which the node is a decapsulating node".
The current text says to remove literally all IOAM information for
literally all IOAM-Namespaces, which seems to contradict the separation
of namespaces depicted earlier in the paragraph.

Section 5.4

   A particular implementation of IOAM MAY choose to support only one
   of the two trace option types.  In the event that both options are
   utilized at the same time, the Incremental Trace-Option MUST be
   placed before the Pre-allocated Trace-Option.  Deployments which mix
   devices with either the Incremental Trace-Option or the
   Pre-allocated Trace-Option could result in both Option-Types being
   present in a packet.  Given that the operator knows which equipment
   is deployed in a particular IOAM, the operator will decide by means
   of configuration which type(s) of trace options will be used for a
   particular domain.

Up in Section 5.2 we said that "each IOAM transit node based on
configuration and available implementation" populates exactly one trace
option type.  I think I can read the two statements as being consistent
with each other, but it might be useful to harmonize the specific
language used to make it very clear that these refer to the same
behavior.

Section 5.4.1

Just to check my understanding: when I first read the discussion of
this being an "array" and there being a potential "Opaque State
Snapshot" component, I assumed that the length of this opaque snapshot
would need to be fixed per array element (i.e., as part of the
namespace definition).  Having read a bit more, it seems that my
initial assumption is incorrect, since there is a length field in the
opaque state snapshot framing, and so a node parsing the array of
elements will be able to skip over the appropriate (variable) length
for each element in the array.  Please confirm that my updated
understanding is correct.

   Namespace-ID:  16-bit identifier of an IOAM-Namespace.  The
      Namespace-ID value of 0x0000 is defined as the
      "Default-Namespace-ID" (see Section 5.3) and MUST be known to all
      the nodes implementing IOAM.
   For any other Namespace-ID value that does not match any
   Namespace-ID the node is configured to operate on, the node MUST NOT
   change the contents of the IOAM-Data-Fields.

Though it may seem banal to do so, explicitly listing "change, add, or
remove" might help avoid future questions of how to interpret this
text.  E.g., there is some similar text in RFC 2460 about "examined or
processed" that proved to be highly controversial and was changed in
RFC 8200.

   A node receiving an IOAM Pre-allocated or Incremental Trace-Option
   relies on the NodeLen value, or it can ignore the NodeLen value and
   calculate the node length from the IOAM-Trace-Type bits (see below).

Allowing an implementation to pick one or the other option like this
introduces fragility to the system as a whole and requires strict
controls that encapsulating nodes always set the value even if their
implementation does not use it.  I would prefer if we had a single
well-specified required behavior for all nodes, that could be easily
tested for conformance.

   Bit 0    "Overflow" (O-bit) (most significant bit).  If there are
      not enough octets left to record node data, the network element
      MUST NOT add any fields and MUST set the overflow "O-bit" to "1"
      in the IOAM-Trace-Option header.  This is useful for transit
      nodes to ignore further processing of the option.

This makes things awkward for a lot of the earlier statements that
require transit nodes to populate trace data.  How is the document
internally consistent in this regard, if transit nodes are both
required to populate trace data and required to not add trace data?

   NodeLen - sizeof(opaque snapshot) in 4 octet units.  If RemainingLen
   in a pre-allocated trace option exceeds the length of the option, as
   specified in the preceding header, then the node MUST NOT add any
   fields.

What is the "preceding header"?
This seems like perhaps it originated in a description of a concrete
protocol encoding of this data item and does not quite apply to the
abstract description thereof.

   Bit 12-21  Undefined.  An IOAM encapsulating node MUST set the value
      of each of these bits to 0.  If an IOAM transit node receives a
      packet with one or more of these bits set to 1, it MUST either:

      1.  Add corresponding node data filled with the reserved value
          0xFFFFFFFF, after the node data fields for the
          IOAM-Trace-Type bits defined above, such that the

Just to confirm: this means that there cannot be any future allocations
of bits to "wide format" items, and thus that only bits 8-10 will ever
consume 2 units each of four-octets?  (This does not preclude
allocating two adjacent bits to indicate a single logical data item, of
course, with appropriate error handling for when only one of the two is
set.)

   Bit 23     Reserved: MUST be set to zero upon transmission and
      ignored upon receipt.

I note that this description for the "reserved" bit has a different
specified behavior than the unallocated bits 12-21 (that require a node
to fill the data with ones if the bit is set).  Do we have a sense for
any ways in which this reserved bit's semantics could be used for
future extensibility, vs being locked into these (basically useless)
semantics forever?

Section 5.4.2

   Some IOAM-Data-Fields defined below, such as interface identifiers
   or IOAM-Namespace specific data, are defined in both "short format"
   as well as "wide format".  Their use is not exclusive.  A deployment
   could choose to leverage both.  [...]

While the text goes on to clarify that, based on per-domain
configuration, they can hold qualitatively different types of
information, I still must ask if there is any entity that is or might
be tasked with enforcing that there is consistency between the two
fields (mostly for when they are different representations of the same
information, but not necessarily exclusively so).
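If such enforcement were attempted, one simple (purely hypothetical)
convention would be to require the 3-octet short identifier to equal
the low-order octets of the wide identifier.  A sketch under that
assumption -- the draft itself mandates no such relationship, and the
function name is invented for illustration:

```python
def ids_consistent(short_id: int, wide_id: int) -> bool:
    """Hypothetical check that a short (3-octet) identifier is the
    low-order portion of the corresponding wide identifier.  This
    relationship would be a per-deployment convention, not a rule
    from the draft."""
    return short_id == (wide_id & 0xFFFFFF)
```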
Section 5.4.2.1

   Hop_Lim:  1-octet unsigned integer.  It is set to the Hop Limit
      value in the packet at the node that records this data.  Hop
      Limit

"In the packet" at ingress to, or egress from, the node?

   node_id:  3-octet unsigned integer.  Node identifier field to
      uniquely identify a node within the IOAM-Namespace and associated
      IOAM-Domain.  The procedure to allocate, manage and map the
      node_ids is beyond the scope of this document.

Even if we attempt to leave allocation/management of node_ids out of
scope for this document, I think we still need to talk about what goes
wrong and how to recover in case of collision.

Section 5.4.2.2

I kind of expected some note that the interpretation of the interface
identifier fields here is (or might be?) going to be specified by the
node they apply to and can only be interpreted in that context.  Or are
these expected to be allocated by a central entity?

Section 5.4.2.12

   The "buffer occupancy" field is a 4-octet unsigned integer field.
   This field indicates the current status of the occupancy of the
   common buffer pool used by a set of queues.  The units of this field
   are implementation specific.  Hence, the units are interpreted
   within the context of an IOAM-Namespace and/or node-id if used.  The
   authors acknowledge that in some operational cases there is a need
   for the units to be consistent across a packet path through the
   network, hence it is RECOMMENDED for implementations to use standard
   units such as Bytes.

I guess I'm not sure that I understand exactly what this field should
be indicating, even if my implementation does adhere to the
recommendation to "use standard units such as Bytes" (which, by the
way, could probably stand to be a stronger recommendation for a single
distinguished unit chosen by the authors).  That is, suppose that the
node in question has some common pool of buffers that's shared across
queues.  Maybe all the buffers are the same size, maybe not.
But in order to measure "occupancy", is it more important to know how
many bytes are occupied, how many buffers are in use, or what
proportion of the total buffers are in use?  Just knowing the number of
bytes or buffers in use does not convey much information if the total
capacity is unknown, and having 40 MB of buffers in use would mean
something very different for a CPE router vs "big iron".  Can we give a
bit more clarity into at least what portions of the semantics need to
be set at a per-namespace level, even if we can't nail them down more
tightly as part of the protocol spec?

Section 5.4.13

   Schema ID:  3-octet unsigned integer identifying the schema of
      Opaque data.

Just to check my understanding: the interpretation of this schema ID is
per IOAM-Namespace, which is mostly going to be something maintained by
the individual operators.  So in some sense each operator will have to
maintain and publish (internally) a registry of these Schema IDs and
avoid collisions.  Is this the sort of thing that is already a common
practice, or is there a risk of operational fragility being introduced?
Furthermore, some of the Namespace-IDs are not operator-managed, e.g.,
0x0000.  Is the opaque state snapshot functionality just not going to
be used for those well-known namespaces?  If so, we should say so
explicitly.

Section 5.5

   o  Random: Unique identifier for the packet (e.g., 64-bits allow
      for the unique identification of 2^64 packets).

We should probably say that this is generated using a
cryptographic-strength PRNG (i.e., not rand()).  BCP 106 covers
randomness requirements for security.  Also, due to the birthday
paradox, an actual 64-bit random identifier will produce collisions
well before 2^64 packets.
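The birthday bound can be checked numerically; this sketch (mine, not
from the draft) estimates how many randomly drawn 64-bit identifiers
can be assigned before a collision becomes likely:

```python
import math

ID_SPACE = 2 ** 64  # size of the 64-bit identifier space

def packets_until_collision(prob: float) -> float:
    # Birthday approximation: roughly sqrt(2 * N * ln(1 / (1 - p)))
    # draws from a space of N values give a collision with
    # probability p.
    return math.sqrt(2 * ID_SPACE * math.log(1 / (1 - prob)))
```

A 50% collision probability is already reached after roughly 5.1e9
packets, vastly short of the 2^64 (~1.8e19) figure quoted in the draft.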
Since these identifiers (AFAICT) need to be assigned in an
uncoordinated fashion, the random allocation scheme may well be the
best scheme (or "least bad"), but if that's the case I don't think we
should make bold statements like "allow for the unique identification
of 2^64 packets".

   IOAM POT flags:  8-bit.  Following flags are defined:

It's slightly surprising to me that the flags are defined as having
global semantics across all POT values, as opposed to being interpreted
in the context of the POT type they are used with, but that's not
inherently problematic.

Section 5.5.1

Should we just say that the Namespace-ID, POT Type, and flags are as
defined in Section 5 rather than repeating the definitions wholesale
(or, as we currently do, having to modify the definition text
slightly)?  In particular (but not exclusively), it's pretty
distracting to have Section 5 refer to "Bit 0" and "Bit 1-7" but
Section 5.5.1 refer to the "P bit" and "R (7 bits)".

   P bit:  1-bit.  "Profile-to-use" (P-bit) (most significant bit).
      Indicates which POT-profile is used to generate the Cumulative.
      Any node participating in POT will have a maximum of 2 profiles
      configured that drive the computation of cumulative.  The two
      profiles are numbered 0, 1.  This bit conveys whether profile 0
      or profile 1 is used to compute the Cumulative.

Is it worth saying a few more words about how the P bit is used as a
"generation count" for performing incremental/in-place updates of what
profile to use, and that profile 0 can be repurposed for something new
once all uses of its previous interpretation have been removed from the
network?  ("No" is a fine answer, but it might be worth proactively
allaying the concern that you can only have two profiles, ever.)

Section 5.6

I suggest giving some guidance as to whether the initial sequence
number should (not) start at zero, for the fields indicated by bits 0
and 1.
I note that draft-gont-numeric-ids-sec-considerations has some
discussion supporting starting at non-zero values.  It seems that the
"increment by one for each packet" behavior is needed, though, in order
to be able to use this value to detect loss.

   Bit 0  (Most significant bit)  When set indicates presence of a
      64-bit sequence number added to a specific "packet group" which
      is used to detect packet loss, packet reordering, or packet
      duplication within the group.  The "packet group" is deployment
      dependent and defined at the IOAM encapsulating node e.g. by
      n-tuple based classification of packets.

   Bit 1  When set indicates presence of a 32-bit sequence number added
      to a specific "packet group" which is used to detect packet loss,
      packet reordering, or packet duplication within that group.  The
      "packet group" is deployment dependent and defined at the IOAM
      encapsulating node e.g. by n-tuple based classification of
      packets.

When both of these bits are set, are the contained values agglomerated
into a hybrid 96-bit sequence number?  If so, in which order?

   Bit 2  When set indicates presence of timestamp seconds, [...]
      packet is encapsulated into the tunnel.  Each implementation has
      to document when the E2E timestamp that is going to be put in the
      packet is retrieved.  This

It seems a little awkward that an operator is going to have to base
their processing logic on knowledge of what *implementation* the
encapsulating node is, especially on a heterogeneous network.  But I
suppose it would be an RFC 6919 "MUST (BUT WE KNOW YOU WON'T)" if we
said that implementations had to allow configuring the different
possibilities, based on the IOAM-Namespace...

Section 6.3 (POSIX-based Timestamp Format)

   The POSIX-based timestamp format is affected by leap seconds; it
   represents the number of seconds since the epoch minus the number of
   leap seconds that have occurred since the epoch.  The value of a
   timestamp during or slightly after a leap second could be
   temporarily inaccurate.
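To illustrate the distinction at issue: POSIX seconds count from
midnight UTC and exclude leap seconds by definition, so a PTP/TAI-based
clock reads ahead of a POSIX clock by the accumulated TAI-UTC offset.
A small sketch (mine, not from the draft; the 37-second figure is the
TAI-UTC offset in effect since the end of 2016):

```python
import calendar

# POSIX time of 2017-01-01 00:00:00 UTC: seconds since the POSIX
# epoch (1970-01-01 00:00:00 UTC), with leap seconds excluded.
posix = calendar.timegm((2017, 1, 1, 0, 0, 0))

# A PTP clock counts TAI seconds from the PTP epoch (1970-01-01
# 00:00:00 TAI) and is not adjusted for leap seconds, so at the same
# instant it reads 37 seconds more.
ptp = posix + 37
```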
(If I'm wrong about the Discuss point, this description seems
inconsistent with the POSIX epoch being midnight TAI.)

Section 8

I note that the "RFC Required" policy lets ISE and IRTF stream
documents allocate values, in theory without any IETF review at all.
It seems unlikely that this is what is intended in all of the cases
where "RFC Required" is specified, such as the 4-bit IOAM Trace-Flags
registry that has only three unallocated codepoints.

Section 8.4

   0: 16 Octet POT data

I'd suggest a slightly more descriptive name, as there may well be
other POT formats that want to use 16 octets for their data.

Section 8.7

I suggest removing the sentence "Upon a new allocation request, the
responsible AD will appoint a designated expert, who will review the
allocation request." as it is not really accurate (the IESG appoints
DEs) and is not helpful for the potential registrant.  It's arguably
also needlessly restrictive, preventing the IESG from appointing an
expert before there is a request to review.

Section 10

"Direct Export" sounds like a self-induced DoS attack by traffic
amplification, but that's probably more a matter to be discussed in
that document, not this one.  (Though, do we need to mention direct
export here at all?)

We should probably say that the opaque snapshot, namespace specific
data, etc., will have security considerations corresponding to their
defined data contents that should be described where those formats are
defined.

   From a confidentiality perspective, although IOAM options do not
   contain user data, they can be used for network reconnaissance,

Given that we provide multiple fields that essentially carry opaque or
operator-defined data, the blanket "do not contain" may be too strong
of a statement.  (What if someone decides to put a subscriber
identifier in the namespace-specific data?)  So maybe "are not expected
to" is more appropriate.
   allowing attackers to collect information about network paths,
   performance, queue states, buffer occupancy and other information.

One possible application of such reconnaissance is to gauge the
effectiveness of an ongoing attack (e.g., if buffers and queues are
overflowing).  I don't know whether it's particularly useful to mention
that scenario here or not, though.

   IOAM can be used as a means for implementing Denial of Service (DoS)
   attacks, or for amplifying them.  For example, a malicious attacker
   can add an IOAM header to packets in order to consume the resources
   of network devices that take part in IOAM or entities that receive,
   collect or analyze the IOAM data.  [...]

Messing up the POT data seems worth calling out here as well (though
the particular behavior when proof of transit fails is not defined in
this document, of course).

   Notably, in most cases IOAM is expected to be deployed in specific
   network domains, thus confining the potential attack vectors to

Where does the "most cases" come from?  I thought the definitions
restricted IOAM to the IOAM domain.

   Indeed, in order to limit the scope of threats mentioned above to
   within the current network domain the network operator is expected
   to enforce policies that prevent IOAM traffic from leaking outside
   of the IOAM domain, and prevent IOAM data from outside the domain to
   be processed and used within the domain.

It would be great if we could provide a bit more detail on the scope of
consequences if the operator fails to do so.

Section 12.1

The 2008 POSIX reference has since been superseded by the 2017 version.

NITS

Abstract nits: NSH is not listed as "well known" at the RFC Editor's
abbreviation list, so should probably be written out in full.  Also,
please put commas before and after "e.g." (which will presumably help
with the spurious double-space as well).

Section 1

   cannot be considered passive.  In terms of the classification given
   in [RFC7799] IOAM could be portrayed as Hybrid Type 1.  IOAM

RFC 7799 writes it with a majuscule 'I', not the numeral '1'.

Section 3

Please use the updated RFC 8174 version of the BCP 14 boilerplate.

Section 4

   Scope: This document defines the data fields and associated data
   types for in-situ OAM.  The in-situ OAM data field can be
   encapsulated in a variety of protocols, including NSH, Segment
   Routing, Geneve, IPv6, or IPv4.  [...]

s/field/fields/, and s/or/and/

Section 5.3

   A subset or all of the IOAM-Option-Types and their corresponding
   IOAM-Data-Fields can be associated to an IOAM-Namespace.  IOAM-

The way this is written seems to imply that any given IOAM-Option-Type
is associated with at most one IOAM-Namespace, which I think is not the
intent.

   Namespaces add further context to IOAM-Option-Types and associated
   IOAM-Data-Fields.  Any IOAM-Namespace MUST interpret the
   IOAM-Option-Types and associated IOAM-Data-Fields per the definition
   in this document.  IOAM-Namespaces group nodes to support different

Presumably this ("MUST interpret") only applies to the option-types and
data fields defined in this document?

   deployment approaches of IOAM (see a few example use-cases below) as

IIUC the meaning here is "IOAM-Namespaces provide a way to group nodes"
and would be easier to read if formulated in that manner.

The RFC Editor will probably have a hard time in this section with
which things that end in 's' are possessives (and thus benefit from an
apostrophe) and which are not, though it may not be an efficient use of
time to try to tidy up before the document gets to them.

   o  whether IOAM-Option-Type(s) has to be removed from the packet,
      e.g. at a domain edge or domain boundary.

There's a singular/plural mismatch here (irrespective of the "(s)"):
"whether an [option-type] has to be removed" vs "whether [option-types]
have to be removed".

Section 5.4.2.3

   The "timestamp seconds" field is a 4-octet unsigned integer field.
   Absolute timestamp in seconds that specifies the time at which the

s/Absolute timestamp/It contains the absolute timestamp/

Section 5.4.2.4

   The "timestamp subseconds" field is a 4-octet unsigned integer
   field.  Absolute timestamp in subseconds that specifies the time at
   which the

s/Absolute timestamp/It contains the absolute timestamp/

Section 5.4.2.9

Please use the exact same wording in the description of the Hop_Lim
field that was used in Section 5.4.2.1 (or just incorporate that
definition by reference).  (The node_id descriptions properly differ
only in the 3-octet vs 7-octet phrase.)

Section 5.4.3

   An entry in the "node data list" array can have different formats,
   following the needs of the deployment.  [...]

This phrasing seems needlessly confusing.  Within a single "node data
list" (i.e., a single packet), all the list entries have the same
format.  What we want to be describing is that the per-entry format can
vary across packets and across deployments.  So perhaps just "the
format used for the entries in a packet's "node data list" array can
vary from packet to packet and deployment to deployment".  (Also,
there's a singular/plural mismatch between "an entry" and "different
formats".)

Section 5.5, 5.6

When we talk about how the different Data-Fields "MUST be 4-octet
aligned", having the figure show a variable-height entry might be
helpful; the current formulation looks like it's exactly a 32-bit
field.

Section 5.5

   IOAM Proof of Transit Option-Type is to support path or service
   function chain [RFC7665] verification use cases.  Proof-of-transit

s/is to/is used to/

Sections 6.1, 6.2, 6.3

   Seconds: specifies the integer portion of the number of seconds
      since the epoch.

I suggest writing out "the PTP epoch", "the NTP epoch", and "the Unix
epoch" in the respective sections, to avoid giving the impression (via
the definite article) that there is a single distinguished epoch, when
there is not.  (Similarly for the Fractions.)
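For what it's worth, the three formats do count from different
origins: the NTP epoch (1900-01-01 UTC), the Unix/POSIX epoch
(1970-01-01 UTC), and the PTP epoch (1970-01-01 TAI).  A sketch (mine,
not from the draft) computing the well-known NTP-to-Unix offset:

```python
from datetime import datetime, timezone

ntp_epoch = datetime(1900, 1, 1, tzinfo=timezone.utc)
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# 70 calendar years, 17 of them leap years (1900 itself is not a leap
# year): 2,208,988,800 seconds, the constant NTP implementations use
# to convert NTP seconds to Unix time.
ntp_to_unix = int((unix_epoch - ntp_epoch).total_seconds())
```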