Re: [Edm] Please review draft-iab-use-it-or-lose-it-01

Toerless Eckert <tte@cs.fau.de> Thu, 15 July 2021 15:47 UTC

Date: Thu, 15 Jul 2021 17:46:49 +0200
From: Toerless Eckert <tte@cs.fau.de>
To: Tommy Pauly <tpauly=40apple.com@dmarc.ietf.org>
Cc: edm@iab.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/edm/VOv5hTGdavGqEs4hkOXVipFCnEk>

Thanks for the work, Tommy.

Some high-level comments before inlining my more detailed feedback.

The document discusses several great examples, but it then
inlines the discussion and the conclusions it proposes to draw from them.

It would help a lot if the document was structured such that
examples/discussion are separated from structural restatement
of problems and guidelines.

For the structural part:

I think it would be good to state that the problems and solutions likely
all fall into three high-level categories:

a) two-party (p2p) protocol extension considerations.
b) extension considerations for protocols with three or more parties
   (including middleboxes as supported parties).
c) considerations for protocols with two or more parties plus undesirable middleboxes.

Separating b) and c) is IMHO most important.

For example, you argue that the IP version codepoint not "working"
when it was needed shows a bad extension point, whereas in my opinion
it is an unavoidable result of c) and of the fact that you cannot, or
will not, fight against such middleboxes (except via encryption).
My proof point, of course, is that the very same problem would happen
to the Ethernet protocol field (which you cite as the solution)
if and when some middlebox did the same to it.

Aka: judgements cannot easily assume that all problems have the
same constraints. Just because it may be possible to
build better extension points for one type of problem doesn't
mean the same can be done for other problems. And just because
that means that other extension points will continue to be more
problematic doesn't mean they can or should be avoided.

The other high-level suggestion is that, at least in my opinion,
there is more that could be written about the underlying reasons.

IMHO: The deeper the extension point sits within a protocol, and the
more complex the state machinery's reaction to unsupported
extensions has to be, the more difficult it is to make sure it will
work correctly when needed. Of course, the easiest case is when
extensible data structures are signalled and some parts can
just be ignored, but that is not always the semantic required.
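The easy case described above, extensible data structures where unknown parts can just be ignored, can be sketched as a TLV parser. This is an illustrative sketch only; the type numbers and field names are made up:

```python
def parse_tlvs(data: bytes) -> dict:
    """Parse a sequence of type-length-value records, skipping unknown types."""
    KNOWN_TYPES = {1: "name", 2: "address"}  # assumed registry for illustration
    fields = {}
    i = 0
    while i + 2 <= len(data):
        t, length = data[i], data[i + 1]
        value = data[i + 2:i + 2 + length]
        if len(value) < length:
            raise ValueError("truncated TLV")
        if t in KNOWN_TYPES:
            fields[KNOWN_TYPES[t]] = value
        # Unknown type: skip silently. This is what keeps the extension
        # point usable when future codepoints are introduced.
        i += 2 + length
    return fields
```

The deeper, state-machine-coupled extension cases have no such simple "skip it" rule, which is the point being made above.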


> ---
> title: "Long-term Viability of Protocol Extension Mechanisms"
> abbrev: Use It Or Lose It
> docname: draft-iab-use-it-or-lose-it-latest
> category: info
> ipr: trust200902
> 
> stand_alone: yes
> pi: [toc, sortrefs, symrefs, docmapping]
> 
> author:
>   -
>     ins: M. Thomson
>     name: Martin Thomson
>     org: Mozilla
>     email: mt@lowentropy.net
>   -
>     ins: T. Pauly
>     name: Tommy Pauly
>     org: Apple
>     email: tpauly@apple.com
> 
> normative:
> 
> 
> informative:
>   HASH:
>     title: "Deploying a New Hash Algorithm"
>     author:
>       -
>         ins: S. Bellovin
>         name: Steven M. Bellovin
>       -
>         ins: E. Rescorla
>         name: Eric M. Rescorla
>     date: 2006
>     target: "https://www.cs.columbia.edu/~smb/papers/new-hash.pdf"
>     seriesinfo: "Proceedings of NDSS '06"
> 
>   SNI:
>     title: "Accepting that other SNI name types will never work"
>     author:
>       -
>         ins: A. Langley
>         name: Adam Langley
>     date: 2016-03-03
>     target: "https://mailarchive.ietf.org/arch/msg/tls/1t79gzNItZd71DwwoaqcQQ_4Yxc"
> 
>   INTOLERANCE:
>     title: "Re: [TLS] Thoughts on Version Intolerance"
>     author:
>       -
>         ins: H. Kario
>         name: Hubert Kario
>     date: 2016-07-20
>     target: "https://mailarchive.ietf.org/arch/msg/tls/bOJ2JQc3HjAHFFWCiNTIb0JuMZc"
> 
>   RIPE-99:
>     title: "RIPE NCC and Duke University BGP Experiment"
>     author:
>       -
>         ins: E. Romijn
>         name: Erik Romijn
>     date: 2010-08-27
>     target: https://labs.ripe.net/Members/erik/ripe-ncc-and-duke-university-bgp-experiment/
> 
>   DNSFLAGDAY:
>     title: "DNS Flag Day 2019"
>     date: 2019-05
>     target: https://dnsflagday.net/2019/
> 
>   HTTP11: I-D.ietf-httpbis-messaging
> 
> 
> 
> --- abstract
> 
> The ability to change protocols depends on exercising the extension and version
> negotiation mechanisms that support change.  Protocols that don't use these
> mechanisms can find it difficult and costly to deploy changes.

qualify what "use" means - specifications of extensions, implementations of
such extensions, active deployments of such extensions.

rephrase the second part of the sentence to introduce the "lose it" part of the
document's catch phrase.

> --- middle
> 
> # Introduction
> 
> A successful protocol {{?SUCCESS=RFC5218}} needs to change in ways that allow it
> to continue to fulfill the needs of its users.  New use cases, conditions and
> constraints on the deployment of a protocol can render a protocol that does not
> change obsolete.

[major] Your text is overall very assertive, which I think will just challenge
everybody with free cycles to comment to find examples where the claim is
not true.

"may need to change", "may render" might alleviate this concern.

More fundamentally, this paragraph would only be true if the "new" stuff
completely replaced the old stuff, and that is most likely the least common
condition. There is "new" stuff, but more likely there is also still the "old"
stuff (use cases, conditions, constraints, deployments). In this more likely
case, the protocol will not become obsolete, but it may lead to a multitude of
slightly similar protocols for different use cases. Worse yet, it might
lead to often unresolvable problems of how to run these multiple protocols
in parallel when a deployment requires this.

> Usage patterns and requirements for a protocol shift over time.  In response,

More inclusive: s/shift/evolve/

> implementations might adjust usage patterns within the constraints of the
> protocol, the protocol could be extended, or a replacement protocol might be
> developed.  Experience with Internet-scale protocol deployment shows that each
> option comes with different costs.  {{?TRANSITIONS=RFC8170}} examines the
> problem of protocol evolution more broadly.
> 
> This document examines the specific conditions that determine whether protocol
> maintainers have the ability to design and deploy new or modified protocols.

Would it be appropriate to say the document attempts to provide guidance?
If it does, saying so would be useful. The way it is written right now,
it sounds more like an analysis from an uninterested bystander, which I
don't think it intends to be.

If the document intends to give guidance, consider whether BCP might be a
better goal than Informational.

> {{implementations}} highlights some historical examples of difficulties in
> transitions to new protocol features.  {{use-it}} argues that ossified protocols
> are more difficult to update and successful protocols make frequent use of new
> extensions and code-points.  {{use}} and {{other}} outline several strategies
> that might aid in ensuring that protocol changes remain possible over time.
> 
> The experience that informs this document is predominantly at "higher" layers of
> the network stack, in protocols that operate at very large scale and
> Internet-scale applications.  It is possible that these conclusions are less
> applicable to protocol deployments that have less scale and diversity, or
> operate under different constraints.
> 
> 
> # Imperfect Implementations Limit Protocol Evolution {#implementations}
> 
> It can be extremely difficult to deploy a change to a protocol if there are
> bugs in implementations with which the new deployment needs to interoperate.
> Bugs in how new codepoints or extensions are handled often mean that endpoints
> will react poorly to the use of extension mechanisms. This can manifest
> as abrupt termination of sessions, errors, crashes, or disappearances of
> endpoints and timeouts.

I have often seen the term "forward compatibility" used to characterize
this problem. Feel free to use the term.

I also think that "bugs in how ... handled" blames only one side (coders), and
I can think of at least one equally bad if not worse community: spec writers.
Handling of extension points is easily underspecified and may actually be a
challenging part of the specification to write bulletproof.

> Interoperability with other implementations is usually highly valued, so
> deploying mechanisms that trigger adverse reactions can be untenable.

How about all those security protocols that simply silently stop
when anything unexpected or undesirable happens, under the rule that any
additional information/diagnostics would help attackers. In my experience
with security systems (across different implementations), that has been the
biggest challenge.

>  Where
> interoperability is a competitive advantage, this is true even if the negative
> reactions happen infrequently or only under relatively rare conditions.

In the prior paragraph you use "adverse reaction"; here you use "negative
reaction". I would suggest you pick one term and maybe also explain it a bit.
Right now I can only think of puking protocol implementations reading those
terms. Which is fun, but something more practical would be more useful for the
text. For example, explicit diagnostics of an identifiable and traceable error
condition to the user, instead of system misbehavior the user cannot explain?

> Deploying a change to a protocol could require implementations fix a
> substantial proportion of the bugs that the change exposes.  This can
> involve a difficult process that includes identifying the cause of
> these errors, finding the responsible implementation(s), coordinating a
> bug fix and release plan, contacting users and/or the operator of affected
> services, and waiting for the fix to be deployed.

I fear that this explanation does not explain the root problem well to a
reader not already familiar with it. And even I am guessing whether what
I think the problem is matches what you think. If it does, then maybe text
like mine here would help:

Assume a protocol is deployed by a large user community utilizing
different implementations. Some subset of users (group A) should/want to get a
new protocol feature and are happy for their systems to be updated to get
it. The feature works fine amongst group A members. But when they
now communicate with non-group-A members (let's call this group B,
which may be a much larger number than group A), not only is there no
new benefit from the new feature, but instead the new feature
causes interoperability problems whose cause is not the new feature itself,
but bugs or bad specification of the implementations used by group B. Now,
how do you even persuade group B to be interested in getting that problem
fixed, given that their only benefit is to re-establish the old status quo
of being able to communicate with group A? And in many cases, those
implementations cannot even be upgraded because their vendor went out of
business.

> Given the effort involved in fixing problems, the existence of these sorts of

or often impossibility

> bugs can outright prevent the deployment of some types of protocol changes,
> especially for protocols involving multiple parties or that are considered
> critical infrastructure (e.g., IP, BGP, DNS, or TLS).  It could even be
> necessary to come up with a new protocol design that uses a different method to
> achieve the same result.
> 
> The set of interoperable features in a protocol is often the subset of its
> features that have some value to those implementing and deploying the protocol.
> It is not always the case that future extensibility is in that set.

Unfortunately, the IETF has stopped writing RFCs like RFC1025: test suites,
Protocol Implementation Conformance Statements, or whatever you want to
call them. Every time I raise the point, the answers I hear sound like
'we've given up' / 'not our job'.

> ## Good Protocol Design is Not Itself Sufficient {#not-good-enough}
> 
> It is often argued that the careful design of a protocol extension point or
> version negotiation capability is critical to the freedom that it ultimately
> offers.
> 
> RFC 6709 {{?EXTENSIBILITY=RFC6709}} contains a great deal of well-considered
> advice on designing for extension.  It includes the following advice:
> 
> > This means that, to be useful, a protocol version-negotiation mechanism
>   should be simple enough that it can reasonably be assumed that all the
>   implementers of the first protocol version at least managed to implement the
>   version-negotiation mechanism correctly.
> 
> This has proven to be insufficient in practice.  Many protocols have evidence of
> imperfect implementation of critical mechanisms of this sort.  Mechanisms that
> aren't used are the ones that fail most often.  The same paragraph from RFC
> 6709 acknowledges the existence of this problem, but does not offer any remedy:
> 
> > The nature of protocol version-negotiation mechanisms is that, by definition,
>   they don't get widespread real-world testing until *after* the base protocol
>   has been deployed for a while, and its deficiencies have become evident.

See above. IMHO you should mention things like RFC1025, test suites, and PICS
as the methods used in other industries (just not the IETF) for this.

> Indeed, basic interoperability is considered critical early in the deployment of
> a protocol.  A desire to deploy can result in an engineering practice that
> values simplicity, which could result in deferring implementation of version
> negotiation and extension mechanisms.  This leads to these mechanisms being
> particularly affected by this problem.

There is one big joker card you can and should pull here: security.
Whatever causes interop problems on extension points is likely also
some form of attack vector that, in a not insignificant subset of cases,
could also be exploited, even by third parties. Test suites that
exercise all such options, even illegal protocol options, are in security
circles well understood to be crucial for system protection/assessment.

> ## Disuse Can Hide Problems {#disuse}
> 
> There are many examples of extension points in protocols that have been either
> completely unused, or their use was so infrequent that they could no longer be
> relied upon to function correctly.
> 
> 
> ### TLS
> 
> Transport Layer Security (TLS) {{?TLS12=RFC5246}} provides examples of where a
> design that is objectively sound fails when incorrectly implemented.  TLS
> provides examples of failures in protocol version negotiation and extensibility.

Do you think a better, more explanatory spec could have helped to reduce the
amount of incorrect implementations? Of course, not in hindsight (although
that would still be good for a -bis), but when writing the original RFC.

Aka: if you agree with me that better specs can lead to fewer bugs, I would
appreciate anything you can write about that.
> 
> Version negotiation in TLS 1.2 and earlier uses the "Highest mutually supported
> version (HMSV)" scheme exactly as it is described in {{?EXTENSIBILITY}}.
> However, clients are unable to advertise a new version without causing a
> non-trivial proportions of sessions to fail due to bugs in server and middlebox
> implementations.

Ok, let's forget middleboxes for a moment.

For those server implementations that you claim have bugs:
does each instance of such a "bug" violate one or more clearly identifiable
MUST statements or otherwise clearly identified REQUIRED element(s) of the spec?
If not, then it is not an implementation bug but a spec shortcoming.
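To make the failure mode concrete, here is a hedged sketch, not actual TLS code and with arbitrary version numbers, of HMSV negotiation done per spec versus the intolerant behavior the draft describes:

```python
SERVER_SUPPORTED = {1, 2, 3}  # assumed supported versions, for illustration

def negotiate_correct(client_highest: int) -> int:
    """Spec behavior: reply with the highest mutually supported version,
    even if the client advertises a version the server has never seen."""
    if client_highest >= min(SERVER_SUPPORTED):
        return min(client_highest, max(SERVER_SUPPORTED))
    raise ConnectionError("no version in common")

def negotiate_buggy(client_highest: int) -> int:
    """Common intolerance bug: treat any unknown (newer) version as a
    fatal error instead of downgrading, so clients can never safely
    advertise a new version."""
    if client_highest not in SERVER_SUPPORTED:
        raise ConnectionError("unsupported version")  # should have downgraded
    return client_highest
```

A deployment containing even a small fraction of `negotiate_buggy` servers makes advertising version 4 untenable for clients, which is the intolerance the draft cites.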

> Intolerance to new TLS versions is so severe {{INTOLERANCE}} that TLS 1.3
> {{?TLS13=RFC8446}} has abandoned HMSV version negotiation for a new mechanism.

please add a reference to the TLS 1.3 section describing that new mechanism.
> 
> The server name indication (SNI) {{?TLS-EXT=RFC6066}} in TLS is another
> excellent example of the failure of a well-designed extensibility point.  SNI
> uses the same technique for extension that is used with considerable success in
> other parts of the TLS protocol.  The original design of SNI includes the
> ability to include multiple names of different types.
> 
> What is telling in this case is that SNI was defined with just one type of name:
^^^^^^^^^^^^^^^^^
> a domain name.  No other type has ever been standardized, though several have
> been proposed.

Not sure what the underlined phrase is meant to imply.

I guess a text like this would help: when you define
an extension point that should support multiple alternative options,
experience has shown that it is less likely to be implemented at
all, or without bugs, if only one such option is defined within the
protocol's base spec or before the first round of protocol
implementations is finished.

>  Despite an otherwise exemplary design, SNI is so inconsistently
                                                    ^^^^^^^^^^^^^^
> implemented that any hope for using the extension point it defines has been
> abandoned {{SNI}}.

Do you mean rarely? If it is really inconsistent, then it would be
interesting to see a short description of how the differences between
implementations cause problems.

> Even where extension points have multiple valid values, if the set of permitted
> values does not change over time, there is still a risk that new values are not
> tolerated by existing implementations.  If the set of values for a particular
> field remains fixed over a long period, some implementations might not correctly
> handle a new value when it is introduced.  For example, implementations of TLS
> broke when new values of the signature_algorithms extension were introduced.
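A receiver that tolerates new values in such a fixed-looking set simply skips codepoints it does not recognize rather than rejecting the whole message. A sketch of that intended behavior, using hypothetical codepoint values:

```python
LOCAL_ALGS = {0x0401, 0x0501}  # codepoints this receiver knows; illustrative only

def select_algorithm(offered: list[int]) -> int:
    """Pick the first mutually supported value, ignoring unknown codepoints.
    The intolerance bug described above is failing (or crashing) on the
    unknown value instead of skipping it."""
    for code in offered:
        if code in LOCAL_ALGS:
            return code
    raise ValueError("no mutually supported algorithm")
```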
> 
> 
> ### DNS
> 
> Ossified DNS code bases and systems resulted in fears that new Resource Record
> Codes (RRCodes) would take years of software propagation before new RRCodes
> could be used.

This was not fear but proven experience.
"Ossified..systems" is maybe unclear. "Ossified deployments" might be an
easy textual fix, but it would be great to further elaborate on the
fact that it is not about the software itself, but primarily about the
operationally federated, international nature of the deployment, which is
why it took so long. In these types of systems, extensibility can
reasonably only be used in an agile fashion if it does not require code
upgrades, but at most configuration upgrades in as few places as
possible. That is what RFC3597 enabled.

Here I would take the opposite stance from above and ask that "fear" be
reverted to actual "experience". But it is again not only the code basis;
in this case it is also the operational processes of deployments: "just
because a new version of the code basis has existed for 10 years doesn't
mean we want to deploy it".

>  The result for a long time was heavily overloaded use of the TXT
> record, such as in the Sender Policy Framework {{?SPF=RFC7208}}.  It wasn't
> until after the standard mechanism for dealing with new RRCodes
> {{?RRTYPE=RFC3597}} was considered widely deployed that new RRCodes can be
> safely created and used.
> 
> 
> ### SNMP
> 
> As a counter example, the first version of the Simple Network Management
> Protocol (SNMP) {{?SNMPv1=RFC1157}} defines that unparseable or unauthenticated
> messages are simply discarded without response:
> 
> > It then verifies the version number of the SNMP message. If there is a
>   mismatch, it discards the datagram and performs no further actions.
> 
> When SNMP versions 2, 2c and 3 came along, older agents did exactly what the
> protocol specifies.  Deployment of new versions was likely successful because
> the handling of newer versions was both clear and simple.
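The quoted SNMPv1 rule is simple enough to sketch. This is illustrative only, with made-up names; the one real detail is that a version mismatch means a silent discard with no response:

```python
SUPPORTED_VERSION = 0  # SNMPv1's wire value for the version field

def handle_datagram(version: int, payload: bytes):
    """Return the payload for further processing, or None to signal that
    the datagram was discarded with no further actions, per the quoted
    SNMPv1 rule."""
    if version != SUPPORTED_VERSION:
        return None  # discard; perform no further actions, send no error
    return payload
```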

I am just imagining replacing SNMP with IP (v4/v6) and wonder if the same
conclusion would make sense.

I just don't think it is a "counter" example, because the goals of the
previous example were of course those of extensions, whereas here we are
really talking about completely incompatible protocols that are just
multiplexed on a version number instead of a protocol number.

Aka: what's the message of this section? That creating different, completely
incompatible protocols is the easiest protocol evolution? I guess
that's what people might have thought when they decided on IPv6, but
20 years later we have >= 24 IPv4/IPv6 interop mechanisms:
https://en.wikipedia.org/wiki/IPv6_transition_mechanism

Aka: try to formulate some more useful conclusions out of the example.

> ### HTTP
> 
> HTTP has a number of very effective extension points in addition to the
> aforementioned header fields.  It also has some examples of extension points
> that are so rarely used that it is possible that they are not at all usable.
> 
> Extension points in HTTP that might be unwise to use include the extension point
> on each chunk in the chunked transfer coding {{Section 7.1 of HTTP11}}, the
> ability to use transfer codings other than the chunked coding, and the range
> unit in a range request {{Section 14 of HTTP}}.
> 
> 
> ### IPv4
> 
> Codepoints that are reserved for future use can be especially problematic.
> Reserving codepoints without attributing semantics to their use can result in
> diverse or conflicting semantics being attributed without any hope of
> interoperability.  An example of this is the "class E" address space in IPv4
> {{?RFC0988}}, which was reserved without assigning any semantics.

Address prefixes are not called codepoints AFAIK; this reads at least weird
to me.

IMHO it is not about failing to attribute semantics to extension points but
about underspecifying what to do.

Here is how I would suggest rewriting this paragraph:

[RFC-editor note: Yes, we want to keep the obsoleted-by-RFC1112 reference
 RFC0988 because we want to have an earliest-time-of-spec reference]

Extension points that are reserved for future use can become problematic
when their behavior is underspecified. {{?RFC0988}} introduced the
IP multicast semantics for the "class E" address space in IPv4, which was
previously reserved in {{?RFC900}} and not mentioned in {{?RFC791}},
resulting in underspecified use of class E addresses with IP
unicast semantics.  A more future-proof version of RFC791 today would
use normative language prohibiting the forwarding of packets by its
rules when they have source and/or destination addresses from a reserved
prefix.

[ A bit off-topic:
  I was actually deploying IP multicast in 1989 on the Internet, but I am
  not aware of this actually having been a real problem, so the example
  is somewhat theoretical to me. Maybe someone else remembers differently.
  
  However, we did have a real, significant problem as late as 2005 (if
  I remember the timeline correctly): illegal packets
  with a unicast destination address and a class E source address
  could be sent to some router address, which would turn this around
  into, for example, an ICMP port unreachable reply sent back to the
  source address. This resulted in some really bad DoS attacks when
  you knew an IP multicast address with many active receivers.
  
  So, now I am trying to figure out if/how that problem could have
  been stopped through better specs, but:
  RFC1112 is FULL of sentences highlighting that class E addresses
  in the source address field are illegal AND NOT IP multicast.

  Of course, the problem happened because of modular software
  in routers: the HOST stack in routers had not been updated to
  IP multicast (and/or implementers were lazy), and it was the
  host stack creating the ICMP port unreachable.
  
  So, actually, I am finally trying to get rfc1112 -bis'ed:
  
  https://datatracker.ietf.org/doc/draft-eckert-pim-rfc1112bis/

  And my best thinking right now is that we might want to make
  that rfc1112bis also an update to RFC791 (and RFC8200),
  with exactly the type of refinement above: that packets
  with an IP multicast source address are illegal and must be
  discarded, even if the host stack does not otherwise implement
  IP multicast (because the forwarding module may).
]
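The discard rule described in the aside above could be sketched as a source-address check. This is a hypothetical helper, using the class D range from RFC 1112 and the reserved class E range:

```python
import ipaddress

def source_address_ok(src: str) -> bool:
    """Return False for packets that should be discarded because their
    IPv4 source address is multicast (class D, 224.0.0.0/4) or reserved
    (class E, 240.0.0.0/4), regardless of whether this stack otherwise
    implements IP multicast."""
    addr = ipaddress.IPv4Address(src)
    class_d = addr in ipaddress.IPv4Network("224.0.0.0/4")
    class_e = addr in ipaddress.IPv4Network("240.0.0.0/4")
    return not (class_d or class_e)
```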

> For protocols that can use negotiation to attribute semantics to codepoints, it
> is possible that unused codepoints can be reclaimed for active use, though this
> requires that the negotiation include all protocol participants.  For something
> as fundamental as addressing, negotiation is difficult or even impossible, as
> all nodes on the network path plus potential alternative paths would need to be
> involved.

All your Internet communication is likely running primarily across multiple
MPLS/IP networks, and the semantics of the majority of MPLS labels are solely
based on eventually consistent control plane protocols (IGPs/BGP). Arguably,
IP unicast routing is not too different, but the number of different semantics
is smaller than in MPLS (except when using SRv6, which has even more different
address semantics).

Aka: expect this paragraph to be questioned by anybody who, like me,
thinks about the reality of the Internet.

If you have a very specific example you know is broken, let's start from that
and maybe expand into some better generic conclusion.

How about:

Three-party systems are more difficult than two-party systems.
Systems with more than three parties are terribly difficult.

> ## Multi-Party Interactions and Middleboxes {#middleboxes}
> 
> Even the most superficially simple protocols can often involve more actors than
> is immediately apparent.  A two-party protocol has two ends, but even at the
> endpoints of an interaction, protocol elements can be passed on to other
> entities in ways that can affect protocol operation.
> 
> One of the key challenges in deploying new features is ensuring compatibility
> with all actors that could be involved in the protocol.

If I think about my multicast example above, the problem is, as you said,
across multiple actors (forwarder, host stack), but also across multiple
protocols: IP/IP multicast, ICMP. So maybe
s/in the protocol/in the complete, likely multi-protocol communication/

> Protocols deployed without active measures against intermediation will tend to
> become intermediated over time, as network operators deploy middleboxes to
> perform some function on traffic {{?PATH-SIGNALS=RFC8588}}.

Is that the right RFC number: 8588 ?

>  In particular, one
> of the consequences of an unencrypted protocol is that any element on path can
> interact with the protocol. 

The problem is a bit that you do not provide a definition of "interact with the
protocol".

In its generality, the sentence fails to mention that authentication alone
removes everything I would call "interacting with the protocol", aka modifying
protocol elements. Encryption is additionally necessary only for protection
against interactions with the packet, such as filtering or protocol-aware
routing.

Suggest:

In particular, one consequence is that any element on path can
interact with any unauthenticated protocol element. Furthermore, for protection
against protocol-aware packet processing such as filtering, encryption is required.


> For example, HTTP was specifically designed with
> intermediation in mind, transparent proxies {{?HTTP=I-D.ietf-httpbis-semantics}}
> are not only possible but sometimes advantageous, despite some significant
> downsides.  Consequently, transparent proxies for cleartext HTTP are commonplace.

If there is some reference for the downsides, that would be lovely.

> The DNS protocol was designed with intermediation in mind through its use of
> caching recursive resolvers {{?DNS=RFC1034}}.  What was less anticipated was the
> forced spoofing of DNS records by many middle-boxes such as those that inject
> authentication or pay-wall mechanisms as an authentication and authorization
> check, which are now prevalent in hotels, coffee shops and business networks.
> 
> Middleboxes are also protocol participants, to the degree that they are able
> to observe and act in ways that affect the protocol.  The degree to which a
> middlebox participates varies from the basic functions that a router performs
> to full participation.  For example, a SIP back-to-back user agent (B2BUA)
> {{?B2BUA=RFC7092}} can be very deeply involved in the SIP protocol.

Alas, I have not read up on the IETF curriculum about middleboxes, but I would
suspect the term is primarily used for elements that are neither considered in
the design of the protocol nor operating at a higher layer than the protocol.
I think a SIP B2BUA is one of those two (considered as part of the design,
or operating at a higher layer), so maybe another, more clearly
middlebox-like function might be a better example.

The best (positive) examples of course are TCP accelerators via transparent
TCP proxies, which AFAIK are still commonplace in many, especially lower-WAN-speed,
edge routers. Alas, this is all non-IETF-standards based. We had some IRTF? BOF
where researchers wanted to start spec'ing something like this out recently.

Maybe the best reference for middleboxes, and even specs for them, are
NAT middleboxes, because NAT was clearly not part of the design and is still
considered by many in the IETF to be the enemy:
https://datatracker.ietf.org/wg/behave/documents/


> This phenomenon appears at all layers of the protocol stack, even when
> protocols are not designed with middlebox participation in mind.

"even" ? Isn't this "primarily"

> TCP's
> {{?TCP=RFC0793}} extension points have been rendered difficult to use, largely
> due to middlebox interactions, as experience with Multipath TCP
> {{?MPTCP=RFC6824}} and Fast Open {{?TFO=RFC7413}} has shown.

suggest rephrase:

as documented for Multipath TCP in {{?MPTCP=RFC6824}} and for TCP Fast Open in {{?TFO=RFC7413}}.

aka: the original sentence makes it sound as if the references are just pointers
to the functions, not to the middlebox impact on them.

>   IP's version field
> was rendered useless when encapsulated over Ethernet, requiring a new ethertype
> with IPv6 {{?RFC2464}}, due in part to layer 2 devices making
> version-independent assumptions about the structure of the IPv4 header.  The
> announcements of new optional transitive attributes in BGP caused significant
> routing instability {{RIPE-99}}.
> 
> By increasing the number of different actors involved in any single protocol
> exchange, the number of potential implementation bugs that a deployment needs to
> contend with also increases.

s/potential/likely/ ?
s/bugs/bugs and missing features/
s/a deployment needs to contend with/implementers need to waste time building workarounds for/

>  In particular, incompatible changes to a protocol
> that might be negotiated between endpoints in ignorance of the presence of a
> middlebox can result in a middlebox interfering in negative and
> unexpected ways.
> 
> Unfortunately, middleboxes can considerably increase the difficulty of
> deploying new versions or other changes to a protocol.

I think it is somewhat difficult to ignore the elephant in the room, which is
that a lot of middleboxes do perform security functions, that there
is significant market demand for those, and that the countermeasures against
middleboxes can make those self-defending protocols undeployable and/or
may ultimately create more problems at even higher layers.

Aka: it would be good to have a section tackling this.

> # Retaining Viable Protocol Evolution Mechanisms {#use-it}
> 
> The design of a protocol for extensibility and eventual replacement
> {{?EXTENSIBILITY}} does not guarantee the ability to exercise those options.
> The set of features that enable future evolution need to be interoperable in the
> first implementations and deployments of the protocol.  Implementations of

s/interoperable/validated for interoperability/

> mechanisms that support evolution is necessary to ensure that they remain
> available for new uses, and history has shown this occurs almost exclusively
> through active mechanism use.
> 
> The conditions for retaining the ability to evolve a design is most clearly
> evident in the protocols that are known to have viable version negotiation or
> extension points.  The definition of mechanisms alone is insufficient; it's the
> assured implementation through active use of those mechanisms that determines
> the existence of freedom.  Protocols that routinely add new extensions and code
                   ^^^^^^^

Something more specific, maybe? "agile protocol evolution"?
Or else specify what you mean by "freedom".
Without qualifiers I always think freedom means this: https://www.youtube.com/watch?v=CdKVX45wYeQ

> points rarely have trouble adding additional ones, especially when the handling
> of new versions or extension is well defined.

Maybe also add the observation that the extensions that create the fewest problems
are those that only extend the data model, but not the state machinery, of protocols.
That's how I would describe the hundreds of DHCP options that were easily added,
for example, or YANG models and the like.

> ## Examples of Active Use {#ex-active}
> 
> For example, header fields in email {{?SMTP=RFC5322}}, HTTP {{?HTTP}}
> and SIP {{?SIP=RFC3261}} all derive from the same basic design, which amounts to
> a list name/value pairs.  There is no evidence of significant barriers to
> deploying header fields with new names and semantics in email and HTTP as
> clients and servers can ignore headers they do not understand or need.  The
> widespread deployment of SIP B2BUAs means that new SIP header fields do not
> reliably reach peers, however, which doesn't necessarily cause interoperability
> issues but rather causes feature deployment issues due to the lack of
> option passing {{middleboxes}}.

I would suggest separating the SIP discussion from SMTP/HTTP and highlighting
the core aspect for each of them:

The SMTP/HTTP default behavior for extensions is "ignore when unknown". This
is the best case for agile development/deployment.

In SIP, because of B2BUAs, the default behavior is "filter/block". I was
involved when we tried to work around this in products for new extensions
we needed, and it really has led to much less adoption of extensions.

So I think the way you describe the SIP example in your current paragraph
is somewhat misleading: IMHO, active use is necessary, but not sufficient. You also
need actually working deployment models in the face of normal disturbances
(middleboxes such as B2BUAs).
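
To make the "ignore when unknown" default concrete, here is a minimal sketch in
Python (header names and handlers are invented for illustration, not taken from
any spec):

```python
# Sketch: "ignore when unknown" extension handling, as in SMTP/HTTP headers.
# Header names below are hypothetical examples.

KNOWN_HANDLERS = {
    "from": lambda v: ("from", v),
    "subject": lambda v: ("subject", v),
}

def process_headers(raw_headers):
    """Apply handlers for known header fields; silently skip unknown ones."""
    results = []
    for name, value in raw_headers:
        handler = KNOWN_HANDLERS.get(name.lower())
        if handler is not None:
            results.append(handler(value))
        # Unknown header: ignored, not an error -- so new extensions
        # remain deployable against old implementations.
    return results

headers = [
    ("From", "a@example.com"),
    ("X-New-Extension", "42"),   # unknown to this implementation
    ("Subject", "hello"),
]
print(process_headers(headers))
```

A B2BUA-style "filter/block" default would instead reject or strip the unknown
header, which is exactly what suppresses extension deployment.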

> As another example, the attribute-value pairs (AVPs) in Diameter
> {{?DIAMETER=RFC6733}} are fundamental to the design of the protocol.  Any use of
> Diameter requires exercising the ability to add new AVPs.  This is routinely
> done without fear that the new feature might not be successfully deployed.

This is the data-model point I made above. I do remember DHCP functions requiring
new state machinery. Those are of course harder to deploy and to ensure interop for.

> These examples show that extension points that are heavily used are also
> relatively unaffected by deployment issues preventing addition of new values
> for new use cases.
> 
> These examples also confirm the case that good design does not guarantee
> success.  On the contrary, success is often despite shortcomings in the design.
> For instance, the shortcomings of HTTP header fields are significant enough that
> there are ongoing efforts to improve the syntax
> {{?HTTP-HEADERS=RFC8941}}.
> 
> Only by using a protocol's extension capabilities does it ensure the
> availability of that capability.  Protocols that fail to use a mechanism, or a
> protocol that only rarely uses a mechanism, may suffer an inability to rely on
> that mechanism.

So I guess "use" is actually two-stage, and that's not clear from the text.
You need to use extensions in specs, but you also need to use them in deployments
in order not to lose them. Maybe you can rephrase/refine accordingly.

> 
> ## Dependency is Better {#need-it}
> 
> The best way to guarantee that a protocol mechanism is used is to make the
      ^^^^ easiest
> handling of it critical to an endpoint participating in that protocol.

I am thinking primarily of error-handling options. Only few deployments can
afford regular error insertion to harden implementations. Something is not
"best" if it's not feasible in key problem areas. But it is easiest.

> This means that implementations must rely on both the existence of extension
> mechanisms and their continued, repeated expansion over time.
> 
> For example, the message format in SMTP relies on header fields for most of its
> functions, including the most basic delivery functions.  A deployment of SMTP
> cannot avoid including an implementation of header field handling.  In addition
> to this, the regularity with which new header fields are defined and used
> ensures that deployments frequently encounter header fields that it does not yet
> (and may never) understand.  An SMTP implementation therefore needs to be able
> to both process header fields that it understands and ignore those that it does
> not.

It's easy to attempt to establish a generic principle from the easiest
instance (ignoring of data-model leaves). Try to come up with the most
complex instance of need-it extensibility and document that. Otherwise,
maybe clarify that need-it can only easily be applied to a subset
of extension cases.

> In this way, implementing the extensibility mechanism is not merely mandated by
> the specification, it is crucial to the functioning of a protocol deployment.
> Should an implementation fail to correctly implement the mechanism, that failure
> would quickly become apparent.
> 
> Caution is advised to avoid assuming that building a dependency on an extension
> mechanism is sufficient to ensure availability of that mechanism in the long
> term.  If the set of possible uses is narrowly constrained and deployments do
> not change over time, implementations might not see new variations or assume a
> narrower interpretation of what is possible.  Those implementations might still
> exhibit errors when presented with new variations.

The corollary of this correct analysis, though, is NOT to avoid
these types of extension points, because depending on the protocol they are
the only way to do things. Instead, IMHO the corollary is that
if an extension point cannot be sufficiently well exercised through
actual deployment use, then code-path analysis through testing is even more
important. In routers, where this problem is widespread, this is exactly
what is being done, through code-coverage analysis and, explicitly, a lot of
negative testing for extension code points.

Of course, third-party testing would always be better; see also examples
such as the OpenSSH security issues.

> ## Restoring Active Use
> 
> With enough effort, active use can be used to restore capabilities.
> 
> EDNS {{?EDNS=RFC6891}} was defined to provide extensibility in DNS.  Intolerance
> of the extension in DNS servers resulted in a fallback method being widely
> deployed (see {{Section 6.2.2 of EDNS}}).  This fallback resulted in EDNS being
> disabled for affected servers.  Over time, greater support for EDNS and
> increased reliance on it for different features motivated a flag day
> {{DNSFLAGDAY}} where the workaround was removed.
> 
> The EDNS example shows that effort can be used to restore capabilities.  This is
> in part because EDNS was actively used with most resolvers and servers.  It was
> therefore possible to force a change to ensure that extension capabilities would
> always be available.  However, this required an enormous coordination effort.  A
> small number of incompatible servers and the names they serve also become
> inaccessible to most clients.
> 
> 
> # Active Use {#use}
> 
> As discussed in {{use-it}}, the most effective defense against ossification of
> protocol extension points is active use.
> 
> Implementations are most likely to be tolerant of new values if they depend on
> being able to frequently use new values.  Failing that, implementations that
> routinely see new values are more likely to correctly handle new values.  More
> frequent changes will improve the likelihood that incorrect handling or
> intolerance is discovered and rectified.  The longer an intolerant
> implementation is deployed, the more difficult it is to correct.
> 
> What constitutes "active use" can depend greatly on the environment in which a
> protocol is deployed.  The frequency of changes necessary to safeguard some
> mechanisms might be slow enough to attract ossification in another protocol
> deployment, while being excessive in others.
> 
> 
> ## Version Negotiation
> 
> As noted in {{not-good-enough}}, protocols that provide version negotiation
> mechanisms might not be able to test that feature until a new version is
> deployed.  One relatively successful design approach has been to use the
> protocol selection mechanisms built into a lower-layer protocol to select the
> protocol.  This could allow a version negotiation mechanism to benefit from
> active use of the extension point by other protocols.
> 
> For instance, all published versions of IP contain a version number as the four
> high bits of the first header byte.  However, version selection using this
> field proved to be unsuccessful. Ultimately, successful deployment of IPv6
> over Ethernet {{?RFC2464}} required a different EtherType from IPv4.  This
> change took advantage of the already-diverse usage of EtherType.

last paragraph sounds familiar from earlier in the doc.

> Other examples of this style of design include Application-Layer Protocol
> Negotiation ({{?ALPN=RFC7301}}) and HTTP content negotiation ({{Section 12 of
> HTTP}}).
> 
> This technique relies on the codepoint being usable.  For instance, the IP
> protocol number is known to be unreliable and therefore not suitable
> {{?NEW-PROTOCOLS=DOI.10.1016/j.comnet.2020.107211}}.
> 
> 
> ## Falsifying Active Use {#grease}
> 
> "Grease" was originally defined for TLS {{?GREASE=RFC8701}}, but has been
> adopted by other protocols, such as QUIC {{?QUIC=RFC9000}}.  Grease identifies
> lack of use as an issue (protocol mechanisms "rusting" shut) and proposes
> reserving values for extensions that have no semantic value attached.
> 
> The design in {{?GREASE}} is aimed at the style of negotiation most used in TLS,
> where one endpoint offers a set of options and the other chooses the one that it
> most prefers from those that it supports.  An endpoint that uses grease randomly
> offers options - usually just one - from a set of reserved values.  These values
> are guaranteed to never be assigned real meaning, so its peer will never have
> cause to genuinely select one of these values.

Nice, but if code points are reserved never to be used in reality,
implementations may also follow them with specialized state machinery,
with the result that the state machinery for other, real future
code points may still not be exercised.
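
For readers less familiar with the pattern, a toy sketch of TLS-style greasing
(the values here are invented for illustration, not the real GREASE code points
from RFC 8701):

```python
import random

# Toy sketch of the grease pattern: the offerer mixes a reserved,
# never-to-be-assigned value into its option list; a correct peer
# simply never selects it, so it must tolerate unknown values.
GREASE_VALUES = {0x0A0A, 0x1A1A, 0x2A2A, 0x3A3A}  # illustrative only
REAL_OPTIONS = [0x0001, 0x0017]                   # illustrative only

def build_offer():
    """Offer the real options plus one grease value at a random position."""
    offer = REAL_OPTIONS + [random.choice(sorted(GREASE_VALUES))]
    random.shuffle(offer)
    return offer

def select_option(offer, supported):
    """Peer picks its most-preferred supported option, ignoring unknowns."""
    for preference in supported:
        if preference in offer:
            return preference
    return None  # no common option

# An intolerant peer that errors on the unknown grease value would be
# flushed out immediately; a correct peer just picks a real option.
offer = build_offer()
print(hex(select_option(offer, [0x0017, 0x0001])))
```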

> More generally, greasing is used to refer to any attempt to exercise extension
> points without changing endpoint behavior, other than to encourage participants
> to tolerate new or varying values of protocol elements.
> 
> The principle that grease operates on is that an implementation that is
> regularly exposed to unknown values is less likely to be intolerant of new
> values when they appear.  This depends largely on the assumption that the
> difficulty of implementing the extension mechanism correctly is as easy or
> easier than implementing code to identify and filter out reserved values.
> Reserving random or unevenly distributed values for this purpose is thought to
> further discourage special treatment.
> 
> Without reserved greasing codepoints, an implementation can use code points from
> spaces used for private or experimental use if such a range exists.  In addition
> to the risk of triggering participation in an unwanted experiment, this can be
> less effective.  Incorrect implementations might still be able to identify these
> code points and ignore them.
> 
> In addition to advertising bogus capabilities, an endpoint might also
> selectively disable non-critical protocol elements to test the ability of peers
> to handle the absence of certain capabilities.
> 
> This style of defensive design is limited because it is only superficial.  As
> greasing only mimics active use of an extension point, it only exercises a small
> part of the mechanisms that support extensibility.  More critically, it does not
> easily translate to all forms of extension points.  For instance, HMSV
> negotiation cannot be greased in this fashion.  Other techniques might be
> necessary for protocols that don't rely on the particular style of exchange that
> is predominant in TLS.
> 
> Grease is deployed with the intent of quickly revealing errors in implementing
> the mechanisms it safeguards.  Though it has been effective at revealing
> problems in some cases with TLS, the efficacy of greasing isn't proven more
> generally.  Where implementations are able to tolerate a non-zero error rate in
> their operation, greasing offers a potential option for safeguarding future
> extensibility.  However, this relies on there being a sufficient proportion of
> participants that are willing to invest the effort and tolerate the risk of
> interoperability failures.
> 
> 
> # Complementary Techniques {#other}
> 
> The protections to protocol evolution that come from [active use](#use) can be
> improved through the use of other defensive techniques. The techniques listed
> here might not prevent ossification on their own, but can make active use more
> effective.
> 
> 
> ## Cryptography
> 
> Cryptography can be used to reduce the number of middlebox entities that can
> participate in a protocol or limit the extent of participation.  Using TLS or
> other cryptographic tools can therefore reduce the number of entities that can
> influence whether new features are usable.
> 
> {{?PATH-SIGNALS=RFC8588}} recommends the use of encryption and integrity
> protection to limit participation.  For example, encryption is used by the QUIC
> protocol {{?QUIC=RFC9000}} to limit the information that is available to
> middleboxes and integrity protection prevents modification.

As mentioned above, add authentication (without encryption) as another
option before cryptography, and discuss the differences.

These complementary techniques should be listed/assigned to my initial c),
aka: unintended middleboxes.

> ## Fewer Extension Points
> 
> A successful protocol will include many potential types of extension.  Designing
> multiple types of extension mechanism, each suited to a specific purpose, might
> leave some extension points less heavily used than others.
> 
> Disuse of a specialized extension point might render it unusable.  In contrast,
> having a smaller number of extension points with wide applicability could
> improve the use of those extension points.  Use of a shared extension point for
> any purpose can protect rarer or more specialized uses.
> 
> Both extensions and core protocol elements use the same extension points in
> protocols like HTTP {{?HTTP}} and DIAMETER {{?DIAMETER}}; see {{ex-active}}.

What's missing from the techniques are things like:

a) Diagnostics for extension-point failures.
   The ability to distinguish extension-point failure between a), b), c),
   aka: designing an extension point so that it would be clear if, and at
   which element, it fails.

b) Upfront extension-point profile signaling.
   Reduces the depth of state machinery at which extension-point failure occurs,
   and allows proactively avoiding probing an extension point.

> ### Invariants
> 
> Documenting aspects of the protocol that cannot or will not change as
> extensions or new versions are added can be a useful exercise. Understanding
> what aspects of a protocol are invariant can help guide the process of
> identifying those parts of the protocol that might change.

Example ?

> As a means of protecting extensibility, a declaration of protocol invariants is
> useful only to the extent that protocol participants are willing to allow new
> uses for the protocol.  Like with greasing, protocol participants could still
> purposefully block the deployment of new features.  A protocol that declares
> protocol invariants relies on implementations understanding and respecting those
> invariants.

Examples? This also seems to leak into security, e.g.: the desire/need for
middleboxes to constrain protocols to a predefined functionality.

> Protocol invariants need to be clearly and concisely documented.  Including
> examples of aspects of the protocol that are not invariant, such as the
> appendix of {{?QUIC-INVARIANTS=RFC8999}}, can be used to
> clarify intent.
> 
> 
> ## Effective Feedback
> 
> While not a direct means of protecting extensibility mechanisms, feedback
> systems can be important to discovering problems.
> 
> Visibility of errors is critical to the success of techniques like grease (see
> {{grease}}).  The grease design is most effective if a deployment has a means of
> detecting and reporting errors.  Ignoring errors could allow problems to become
> entrenched.
> 
> Feedback on errors is more important during the development and early deployment
> of a change.  It might also be helpful to disable automatic error recovery
> methods during development.

I don't agree on the "early deployment". It is equally valuable whenever
new deployments happen. Just think about all the middlebox issues that reopened
in transport and higher-layer protocols years after initial deployment, when IPv4 was
amended with IPv6, or when new classes of middleboxes appeared (Intrusion
Detection Systems).

> Automated feedback systems are important for automated systems, or where error
> recovery is also automated.  For instance, connection failures with HTTP
> alternative services {{?ALT-SVC=RFC7838}} are not permitted to affect the
> outcome of transactions.  An automated feedback system for capturing failures in
> alternative services is therefore necessary for failures to be detected.
> 
> How errors are gathered and reported will depend greatly on the nature of the
> protocol deployment and the entity that receives the report.  For instance, end
> users, developers, and network operations each have different requirements for
> how error reports are created, managed, and acted upon.
> 
> Automated delivery of error reports can be critical for rectifying deployment
> errors as early as possible, such as seen in {{?DMARC=RFC7489}} and
> {{?SMTP-TLS-Reporting=RFC8460}}.

Locally on the system: circular, space-limited automatic logging of all events,
coupled with automatic compression/reduction by severity level. Especially
making it intelligent: around the points in time of severe errors, you also
retain the less severe errors.

Alas, I have not found good implementations of this.
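
A minimal sketch of the circular, severity-aware logging idea (all names and
levels are hypothetical, just to show the shape of it):

```python
from collections import deque

# Sketch: bounded ring of recent events; when a severe error occurs,
# snapshot the surrounding low-severity context so it survives the
# space-limited compression of the ring.
class RingLog:
    def __init__(self, capacity=100, severe_level=3):
        self.ring = deque(maxlen=capacity)  # recent events, oldest dropped
        self.pinned = []                    # preserved context around severe errors
        self.severe_level = severe_level

    def log(self, level, message):
        self.ring.append((level, message))
        if level >= self.severe_level:
            # Pin the severe event together with the events leading up to it,
            # so the low-severity context is not lost when the ring wraps.
            self.pinned.append(list(self.ring))

log = RingLog(capacity=4)
log.log(1, "probe sent")
log.log(1, "probe acked")
log.log(4, "extension negotiation failed")  # severe: context is preserved
```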

More easily: good management-plane models for monitoring, e.g. YANG
models for protocols where there are counters for all possible error
conditions. With YANG-Push and other YANG event models you can trigger/trace
errors.

> # Security Considerations
> 
> Many of the problems identified in this document are not the result of
> deliberate actions by an adversary, but more the result of mistakes, decisions
> made without sufficient context, or simple neglect.  Problems are therefore
> not the result of opposition by an adversary.  In response, the recommended measures
> generally assume that other protocol participants will not take deliberate
> action to prevent protocol evolution.

But equally many problems occur from middleboxes that want to be friendly
but don't work, and even more so from middleboxes needing to do a
different job, most often security.

> The use of cryptographic techniques to exclude potential participants is the
> only strong measure that the document recommends.

In these security considerations it is IMHO most important to explicitly
highlight the option of authentication without encryption. It prohibits
messing with the protocols, but it allows what is negatively called
perpass/monitoring, which is also a required security function, potentially
followed by mandatory filtering/blocking of inappropriate protocol flows.

>  However, authorized protocol
> peers are most often responsible for the identified problems, which can mean
> that cryptography is insufficient to exclude them.

Again: authenticated, but unencrypted. Firewalls need to keep bad stuff
out and good stuff in. So they need to look at the traffic and filter
accordingly.

> The ability to design, implement, and deploy new protocol mechanisms can be
> critical to security.  In particular, it is important to be able to replace

Please say that these new mechanisms will require new extension point values.

> cryptographic algorithms over time {{?AGILITY=RFC7696}}.  For example,
> preparing for replacement of weak hash algorithms was made more difficult
> through misuse {{HASH}}.
> 
> 
> # IANA Considerations
> 
> This document makes no request of IANA.

Actually, there is IMHO an IANA-related issue: we do not have the
tooling for automated updating of code-point values. Whereas
we have YANG and a lot of other formal models, if you want to build
diagnostic systems in particular, there is no automated way to pull in
current IANA registry information, because the only authoritative
representations are "read-by-human-only" web pages.

Aka: we should have an authoritative IANA registry description in a
formal modeling language. In addition to the information we have now in IANA
registries, such a formal model representation could/should allow
every code point to have formal-language "protocol information", aka:
formal code that could be pulled in by protocol toolchains to
automatically extend code.
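
To sketch what such tooling could look like, assuming a hypothetical
machine-readable registry export (the columns, names, and values below are
invented, not any actual IANA data):

```python
import csv
import io

# Hypothetical CSV export of a code-point registry, for illustration.
REGISTRY_CSV = """\
value,name,status
1,basic-option,permanent
2,fancy-option,permanent
250,experimental-use,reserved
"""

def load_registry(text):
    """Parse a registry export into {code point: (name, status)}."""
    reader = csv.DictReader(io.StringIO(text))
    return {int(row["value"]): (row["name"], row["status"]) for row in reader}

registry = load_registry(REGISTRY_CSV)

def classify(code_point):
    """A diagnostic tool could classify observed code points automatically."""
    return registry.get(code_point, ("unknown", "unassigned"))

print(classify(2))    # a registered option
print(classify(99))   # not in the registry
```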

Cheers
    Toerless

> --- back
> 
> # Acknowledgments
> {:numbered="false"}
> 
> Wes Hardaker, Mirja Kühlewind, Mark Nottingham, and Brian Trammell made
> significant contributions to this document.

On Wed, Jul 14, 2021 at 07:20:02AM -0700, Tommy Pauly wrote:
> Hello EDM,
> 
> We’ve posted a new version of the use-it-or-lose-it draft, which has been discussed in several of our past calls.
> 
> The IAB would like to get this document progressed to be published as an RFC between IETF 111 and IETF 112. Please take a look at the current version, and provide your reviews either here by email, or on the GitHub (https://github.com/intarchboard/draft-use-it-or-lose-it/issues <https://github.com/intarchboard/draft-use-it-or-lose-it/issues>).
> 
> Best,
> Tommy
> 
> > On Jul 14, 2021, at 6:21 AM, internet-drafts@ietf.org wrote:
> > 
> > 
> > A new version of I-D, draft-iab-use-it-or-lose-it-01.txt
> > has been successfully submitted by Cindy Morgan and posted to the
> > IETF repository.
> > 
> > Name:		draft-iab-use-it-or-lose-it
> > Revision:	01
> > Title:		Long-term Viability of Protocol Extension Mechanisms
> > Document date:	2021-07-14
> > Group:		iab
> > Pages:		19
> > URL:            https://www.ietf.org/archive/id/draft-iab-use-it-or-lose-it-01.txt
> > Status:         https://datatracker.ietf.org/doc/draft-iab-use-it-or-lose-it/
> > Html:           https://www.ietf.org/archive/id/draft-iab-use-it-or-lose-it-01.html
> > Htmlized:       https://datatracker.ietf.org/doc/html/draft-iab-use-it-or-lose-it
> > Diff:           https://www.ietf.org/rfcdiff?url2=draft-iab-use-it-or-lose-it-01
> > 
> > Abstract:
> >   The ability to change protocols depends on exercising the extension
> >   and version negotiation mechanisms that support change.  Protocols
> >   that don't use these mechanisms can find it difficult and costly to
> >   deploy changes.
> > 
> > 
> > 
> > 
> > The IETF Secretariat
> > 
> > 
> 

> -- 
> Edm mailing list
> Edm@iab.org
> https://www.iab.org/mailman/listinfo/edm


-- 
---
tte@cs.fau.de