Re: AD review for draft-ietf-bfd-multipoint-active-tail

Greg Mirsky <gregimirsky@gmail.com> Wed, 23 May 2018 03:56 UTC

Subject: Re: AD review for draft-ietf-bfd-multipoint-active-tail
To: Martin Vigoureux <martin.vigoureux@nokia.com>
Cc: draft-ietf-bfd-multipoint-active-tail@ietf.org, "Reshad Rahman (rrahman)" <rrahman@cisco.com>, rtg-bfd@ietf.org, bfd-chairs@ietf.org
Archived-At: <https://mailarchive.ietf.org/arch/msg/rtg-bfd/CbSZrgAw8f43m8Dc7THAFzg3Gu0>

Hi Martin,
thank you for the thorough review, thoughtful comments, and questions that
require discussion.
Please find my answers and notes in-line, tagged GIM>>.

Attached are the new working version of the draft and the diff to -07. I'll
update the references and share an update after that's done.

Regards,
Greg

On Fri, May 18, 2018 at 10:35 AM, Martin Vigoureux <
martin.vigoureux@nokia.com> wrote:

> WG, Authors,
>
> hello.
>
> thank you for this Document. I have done my AD review.
> I find the document much harder to apprehend than mpBFD. I have a number
> of comments but may have another round of them based on your replies.
>
> -m
>
> ---
>
> General:
> I find the use of reliable (and of its variants) not appropriate. You
> refer to reliability in terms disambiguating failure scenarios, you don't
> make packet transfer/delivery more reliable which is usually the context
> that comes in mind when talking about reliability. So I'd really prefer if
> you could use another word.
>
GIM>> Characterizing polling of the tails by the head over the multicast
path as an unreliable mechanism, versus polling over unicast as reliable,
may be a stretch. I think that we should replace the
"unreliable/semi-reliable/reliable" references with ones that characterize
the polling. The "unreliable" notification to the head doesn't use polling of
the tail(s) and may be referred to as the no-polling method. For the two
other methods I can propose two options (a small sketch follows the list):

   - "in-band" for polling over the multicast tree and "out-of-band" for
   polling over unicast;
   - "fate-sharing" for polling over the multicast tree and "disjoint" for
   polling over unicast.
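
For illustration only, the proposed method names could be pictured with a
minimal Python sketch (the enum and its member names follow the second
option above and are not draft text):

    from enum import Enum

    class TailPolling(Enum):
        NO_POLLING = 0    # tail-initiated notification only; the head never polls
        FATE_SHARING = 1  # head polls over the multipoint tree itself
        DISJOINT = 2      # head polls each tail over a disjoint unicast path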


> The discussion on fate sharing between unicast and multipoint paths is
> really reduced to the bare minimum while it is of key importance on the
> operation of the protocol and on the deduction the head can make of what it
> receives or not.
>
GIM>> The new text that introduces and explains the terms may be a good
place to expand on how the selection of the path used for tail polling
affects how useful the proposed extension is.

>
>
> Abstract
>
> Please specify here which document(s) this one updates. Please see further
> down on the Update point.
>
>
> 1.  Introduction
>
>    Multipoint BFD base document [I-D.ietf-bfd-multipoint] describes
>    procedures to verify only the head-to-tail connectivity over the
>    multipoint path.  Although it may use unicast paths in both
>    directions, Multipoint BFD does not verify those paths (and in fact
>    it is preferable if unicast paths share as little fate with the
>    multipoint path as is feasible, so to increase probability of
>    delivering the notification from the tail to the head).
> This is unclear. The first sentence sets the reader in the context of
> I-D.ietf-bfd-multipoint where unicast paths are not discussed at all. And
> the rest of this paragraph discusses the unicast paths which are in fact
> introduced later in this document. So this is totally confusing. One only
> understands this after having read the whole document...
> I would suggest to completely remove, or rephrase to indicate to the user
> that this is an aspect that is introduced later, or move to the relevant
> place in the document.
>
GIM>> Agreed, the text is confusing. I think that removing this paragraph
would not lose any helpful information, as the very next paragraph gives a
clear motivation for this extension:
   The goal of this application is for the head to reasonably rapidly
   have knowledge of tails that have lost connectivity from the head.

>
>    This document effectively modifies and adds to the base BFD
>    specification [RFC5880] and base BFD multipoint document
>    [I-D.ietf-bfd-multipoint].
> In the same was as for mpBFD, I don't see how this document updates 7880.
> Please clarify.
>
GIM>> It is all related, or so we thought, to bfd.SessionType, the BFD state
variable introduced in RFC 7880, to which the mp BFD specification adds new
values. In this draft one more value, MultipointClient, is added:
    bfd.SessionType

      The type of this session as defined in [RFC7880].  A new value
      introduced is:

         MultipointClient: A session on the head that tracks the state
         of an individual tail, when desirable.

We've discussed whether the base mp BFD specification updates RFC 7880 and
had agreed to remove it from the list. I'll be glad to do the same for this
draft.
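
Purely for illustration, a minimal Python sketch of the multipoint-related
bfd.SessionType values as they stand after this draft (member names are
descriptive, not normative):

    from enum import Enum, auto

    class MultipointSessionType(Enum):
        MULTIPOINT_HEAD = auto()    # head session sending over the multipoint path (mp BFD)
        MULTIPOINT_TAIL = auto()    # tail session listening to the head (mp BFD)
        MULTIPOINT_CLIENT = auto()  # head-side session tracking one tail (this draft)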

> In fact I also think this document does not update 5880.
> This document updates mpBFD so in principle that should be reflected in
> the header, but I'm not sure if we can reference anything else than RFCs
> there... But I'll push the two at the same time to IESG so that might work.
> And one could wonder why these two documents are separate and not merged.
>
GIM>> I think that you're right: this specification does not update RFC 5880
but does update the mp BFD specification (which, in turn, updates RFC 5880).
Should the references to sections of RFC 5880 in Sections 4.13.1 through
4.13.3 of this draft be replaced with references to the corresponding
Sections 4.13.1 through 4.13.3 of I-D.ietf-bfd-multipoint?

>
>
> 2.  Overview
>
>    A head may wish to be alerted to the tails' connectivity (or lack
>    thereof), there are a number of options.
> Something's wrong with the structure of that sentence.
>
> I find this:
>    First, if all that is needed is an unreliable failure notification,
>    as discussed in Section 3.2, the head can request the tails to
>    transmit unicast BFD Control packets back to the head when the path
>    fails, as described in Section 4.4.
> somehow in conflict with what is said in 3.2 (to which the reader is
> pointed) and which says:
>    In this scenario, the tail sends back unsolicited BFD packets in
>    response to the detection of a multipoint path failure.  It uses the
>    reverse unicast path, but not the forward unicast path.
> On one hand you say "request" on the other you say "unsolicited", and just
> before that word you say "sends back" which gives a sense of "in response
> to". Could you clarify?
> I have more comments on this section, and more precisely on the different
> modes of operations, but I'll discuss theses as part of the review of
> Section 3.x
>
GIM>> The new state variable bfd.ReportTailDown, set in the MultipointHead
or MultipointClient session, controls whether a tail sends unsolicited BFD
Control packets back to the head when it detects a failure.

>
> In the two subsequent paragraphs a pointer to 3.3. and 3.4 would not be
> superfluous.
>
GIM>> Will add.

>
>    If the head wishes to know the identity of the tails on the
>    multipoint path, it may solicit membership by sending a multipoint
> I don't think it is appropriate to talk about identity and membership.
> Head is polling a set of tails. You can't say much more than that.
>
GIM>> Agree. Would the following update be acceptable:
OLD TEXT:
   If the head wishes to know the identity of the tails on the
   multipoint path, it may solicit membership by sending a multipoint
   BFD Control packet with the Poll (P) bit set ...
NEW TEXT:
   If the head wishes to know of the active tails on the
   multipoint path, it may send a multipoint
   BFD Control packet with the Poll (P) bit set ...


>
>    Individual tails may be configured so that they never send BFD
>    control packets to the head, even when the head wishes notification
>    of path failure from the tail.  Such tails will never be known to the
>    head, but will still be able to detect multipoint path failures from
>    the head.
> ok, but how does the head knows of this config? How can the head
> distinguish between a failure and this? I guess the answer is oos of the
> document but I think that this situation is worth more than 4 lines.
> Or is there a requirement that a Head SHOULD/MUST NOT have a
> MulticastClient session with a tail who is silent?
>
GIM>> I think this refers to the case when bfd.SilentTail is set to 1. In
this case the tail is invisible to the head, as it would not respond to any
polling, multicast or unicast. As a result, such a tail would not notify the
head of a detected failure either. These tails act as in the mode defined in
the base mp BFD specification.

>
>
> 3.x Operational Scenarios
>
> I find the description of the different modes of operation quite
> confusing. Beyond this I have other comments/questions on 3.x that you'll
> find after.
> 3.1 is plain multipoint BFD I guess. Correct?
>
GIM>> Yes, it is the behavior of a MultipointTail as defined
in I-D.ietf-bfd-multipoint (can we refer to it as the base mp BFD
specification?). But with the active tail extension this behavior is the
result of setting the new BFD variables to specific values. Section 4.4
explains that the base mp BFD mode is the one where bfd.ReportTailDown is
set to 0 in the session with bfd.SessionType of MultipointHead.
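
To picture how the variables select the mode, a rough Python sketch
(illustrative only; the function and argument names are mine, not draft
text):

    def tail_may_send_to_head(report_tail_down: bool, silent_tail: bool) -> bool:
        """Whether a tail is allowed to send BFD Control packets to the head.

        report_tail_down mirrors the head's bfd.ReportTailDown request;
        silent_tail mirrors the tail's local bfd.SilentTail and always wins.
        """
        if silent_tail:
            return False            # tail stays invisible to the head
        return report_tail_down     # 0: base mp BFD mode (3.1); 1: modes of 3.2-3.4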

> In 3.2 you say that tails send packets to the source when they detect a
> failure (stop receiving packets). At this point of the reading it is not
> clear which element makes a difference between 3.1 and 3.2 such that,
> suddenly in 3.2, tails can send packets.
>
GIM>> For one, bfd.SilentTail must be set to 0.

> I believe it is worth clarifying that, though not giving too many details.
> Relates to 4.4?
> Also at this stage it is not clear what are those packets that tails send
> in 3.2. Are they replies to Polls? If so what is the difference between 3.2
> and 3.3?
>
GIM>> The MultipointTail may periodically send BFD Control packets with the
Poll bit set. If the MultipointHead does not send a unicast BFD Control
packet with Poll cleared and Final set, then, I believe, the MultipointTail
will continue sending its Poll packets periodically. Hence the difference
between polling by the MultipointHead per 3.3 and 3.4 and the unsolicited
periodic Polls from the tail.
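
A minimal sketch of that exchange as I understand it (Python pseudo-logic;
send_poll and final_received are hypothetical callables and real timer
handling is elided):

    import time

    def tail_notification(send_poll, final_received, interval_s=1.0, max_polls=10):
        """Tail: after detecting a multipoint path failure, keep sending BFD
        Control packets with the Poll (P) bit set until the head answers with
        a unicast packet carrying the Final (F) bit."""
        for _ in range(max_polls):
            send_poll()                # unsolicited Poll toward the head
            time.sleep(interval_s)
            if final_received():       # head answered with Final: stop
                return True
        return False                   # head never answered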

>
>
> 3.1.  No Head Notification
>
> You say:
>    Since the only path used in this scenario is the multipoint path
> as if it had been stated before that this scenario only uses the mpPath.
> It would be much more comprehensible if it was saying:
>    In this scenario only the multipoint path is used.
>
GIM>> Thank you, accepted. The text now reads:
In this scenario only the multipoint path is used and none of the others
matter.

>
>
> 3.3.  Semi-reliable Head Notification and Tail Solicitation
>
> You say (twice):
>    the head will see the BFD session fail
> Could you clarify which session fails,?
>
GIM>> It is the MultipointClient session. Would the new text make it
clearer:
OLD TEXT:
... the head will see the BFD session fail, but the state of the
   multipoint path will be unknown to the head.
NEW TEXT:
... the head will see that the particular MultipointClient
session fails ...

>
>
> 3.4.  Reliable Head Notification
>
> same comment as for 3.3
>
GIM>> Would the text proposed above be acceptable?

>
>
> 4.  Protocol Details
>
>    This section is an update to section 4 of [I-D.ietf-bfd-multipoint].
> Should you rather say that it's only some parts of this section which
> update mpBFD, and say which ones.
>
GIM>> Here's the proposed new text:
OLD TEXT:
   This section is an update to section 4 of [I-D.ietf-bfd-multipoint].
NEW TEXT:
   This section updates Section 4 of [I-D.ietf-bfd-multipoint] as
follows:
   - Section 4.3 introduces new state variables and modifies the usage of a
few existing ones;
   - Section 4.13 replaces the corresponding sections in the base BFD for
multipoint networks specification.



>
> 4.1.  Multipoint Client Session
>
>    If the head is keeping track of some or all of the tails, it has a
>    session of type MultipointClient per tail that it cares about.  All
>    of the MultipointClient sessions for tails on a particular multipoint
>    path are grouped with the MultipointHead session to which the clients
>    are listening.
> What do you mean by "grouped", associated?
>
GIM>> Yes, "associated" is better term, I agree. Will update.

>
>    These sessions receive any BFD Control packets sent by the tails, and
>    never transmit periodic BFD Control packets other than Poll Sequences
>    (since periodic transmission is always done by the MultipointHead
>    session).
> Should it be "MUST never send"?
>
GIM>> Would s/never/MUST NOT/ to make it into "MUST NOT transmit" be
acceptable?

>
>    A BFD Poll Sequence may be sent over such a session to a tail if the
>    head wishes to verify connectivity.
> It is not clear here if you are talking about sending a multipoint Poll
> sequence to all tails over the MultipointHead session or a unicast Poll
> sequence to a single tail over one MultipointClient session.
>
GIM>> s/such a session/a MultipointClient session/

>
>
> 4.3.2.  New State Variable Value
>
>       This variable MUST be initialized to the appropriate type when the
>       session is created, according to the rules in section 4.13 of
>       [I-D.ietf-bfd-multipoint].
> There is nothing in 4.13 of mpBFD that talks about the initialization of
> bfd.SessionType.
>
GIM>> This is the problem with keeping cross-references while updating both
documents. The correct reference now is to Section 4.4 of
[I-D.ietf-bfd-multipoint].

>
>
> 4.3.3.  State Variable Initialization and Maintenance
>
>    Some state variables defined in section 6.8.1 of the [RFC5880] needs
>    to be initialized or manipulated differently depending on the session
>    type (see section 4.4.2 of [I-D.ietf-bfd-multipoint]).
> s/of the/of/
> s/needs/need/
> s/ (see section 4.4.2 of [I-D.ietf-bfd-multipoint])./. The values of some
> of these variables relate to those of the same variables of a
> MultipointHead session (see section 4.4.2 of [I-D.ietf-bfd-multipoint])./
>
GIM>> All accepted and applied to the working version.

>
>
>       bfd.RequiredMinRxInterval
>          It should be noted that for sessions of type MultipointTail,
>          this variable only affects the rate of unicast Polls sent by
>          the head; the rate of multipoint packets is necessarily
>          unaffected by it.
> what is specific to MultipointClient here? If nothing, please remove.
> If something, please add it clearly.
>
GIM>> I propose the new text below.
OLD TEXT:
         It should be noted that for sessions of type MultipointTail,
         this variable only affects the rate of unicast Polls sent by
         the head; the rate of multipoint packets is necessarily
         unaffected by it.
NEW TEXT:
It MAY be set to zero at the head BFD system to suppress traffic from the
tails.
Setting it to zero in the MultipointHead session suppresses traffic from
all tails,
setting in a MultipointClient session suppresses traffic form a single tail.
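
To make the proposed override concrete, a small Python sketch (the function
and variable names are mine; only the override rule and the meaning of zero
come from the text above):

    def traffic_suppressed(tail, head_min_rx, client_min_rx_by_tail):
        """True if the head is suppressing BFD Control traffic from this tail.

        client_min_rx_by_tail maps a tail to the bfd.RequiredMinRxInterval of
        its MultipointClient session, if one exists; that per-tail value
        overrides the MultipointHead session's value."""
        effective = client_min_rx_by_tail.get(tail, head_min_rx)
        return effective == 0          # zero means "send me nothing"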


>
> 4.4.  Controlling Multipoint BFD Options
>
>    The most basic form of operation, as explained in
>    [I-D.ietf-bfd-multipoint], in which BFD Control packets flow only
>    from the head and no tracking is desired of tail state at the head,
>    is accomplished by setting bfd.ReportTailDown to 0 in the
>    MultipointHead session (Section 3.1).
> I am a bit concerned that mpBFD in fact works with a state variable
> defined in another document. Wouldn't it be better to introduce this
> variable in mpBFD and set it to always be zero and then allow in this doc
> to be set at 1. A bit like the M bit.
>
GIM>> Great idea, thank you! If we do that, would such an update to the
mpBFD document require a re-start of the WGLC?

>
> You have text relating to 3.1. What about 3.2/3/4?
>
GIM>> The fifth paragraph can be back-referenced to Section 3.4. The fourth
paragraph describes the use of bfd.ReportTailDown common to unsolicited
notifications and to multicast and unicast polling, i.e., Sections 3.2, 3.3,
and 3.4.

>
>    If the head wishes to know the identity of the tails, it sends
>    multipoint Polls as needed.  Previously known tails that don't
>    respond to the Polls will be detected (as per Section 3.3).
> Again, I'd prefer not to talk about identity, but simply about messages
> received from tails or not.
> I don't see the purpose of this paragraph here. What is the relation with
> state variables?
>
GIM>> It may be better to move this paragraph down by one, i.e., swap the
two paragraphs. And would the following re-wording make the text clearer:
OLD TEXT:
   If the head wishes to know the identity of the tails, it sends
   multipoint Polls as needed.  Previously known tails that don't
   respond to the Polls will be detected (as per Section 3.3).
NEW TEXT:
   If the head wishes to know of the active tails, it sends
   multipoint Polls as needed.  Previously known tails that don't
   respond to the Polls will be detected (as per Section 3.3).


>    If the head wishes to be notified by the tails when they lose
>    connectivity, it sets bfd.ReportTailDown to 1 in either the
>    MultipointHead session (if such notification is desired from all
>    tails) or in the MultipointClient session (if notification is desired
>    from a particular tail).  Note that the setting of this variable in a
>    MultipointClient session for a particular tail overrides the setting
>    in the MultipointHead session.
> Does that correspond to 3.2, 3.3, 3.4?
>
GIM>> Yes, it enables all three modes.

>
>    If the head wishes to verify the state of a tail on an ongoing basis,
>    it sends a Poll Sequence from the MultipointClient session associated
>    with that tail as needed.
>    If the head wants to more quickly be alerted to a session failure
>    from a particular tail, it sends a BFD Control packet from the
>    MultipointClient session associated with that tail.  This has the
>    effect of eliminating the initial delay, described in Section 4.13.3,
>    that the tail would otherwise insert prior to transmission of the
>    packet.
> I don't see the link with state variables here neither. Consider moving
> somewhere else.
>
GIM>> I read it as a continuation of what is described in the preceding
paragraph regarding setting bfd.ReportTailDown in the MultipointClient
session.

>
>    If a tail wishes to operate silently (sending no BFD Control packets
>    to the head) it sets bfd.SilentTail to 1 in the MultipointTail
>    session.  This allows a tail to be silent independent of the settings
>    on the head.
> The implications of that option are not really discussed. The head will
> likely consider the session down, no?
>
GIM>> The head would not learn of such a tail, nor would it be able to
notice the tail's state change. I think that s/be silent/be invisible to the
head/ may make the text clearer.

>
>
> 4.5.  State Machine
>
>    The state machine for session of type MultipointClient is same as
>    defined in section 4.5 of [I-D.ietf-bfd-multipoint].
> Is that really the case? MultipointHead only fails administratively while
> MultipointClient can fails based on received message, no?
>
GIM>> True. It is noted in Section 4.5 of mpBFD that for MultipointHead all
state transitions are administratively driven. But the diagram is still
applicable to a BFD MultipointClient session.

>
>
> 4.6.  Session Establishment
>
>    The head directly controls whether or not tails are allowed to send
>    BFD Control packets back to the head.
> Not fully true, because of SilentTail, no?
>
GIM>> It can be done by setting bfd.RequiredMinRxInterval to zero in the
MultipointHead session or in a MultipointClient session. The former forces
all tails not to send any BFD packets to the head; the latter, only the
particular tail. As stressed in the specification, the value in a
MultipointClient session overrides the value in the MultipointHead session.

>
>
> 4.13.1/2/3
>
> I think that, as said, you are not updating 5880. Also, you said that you
> update sections but really you are updating parts of them.
> I encourage you to find a clear way to indicate what you change/update.
>
GIM>> I'll remove the references to Sections 6.8.6 and 6.8.7 of RFC 5880
from these sections and link the updates to the mpBFD specification.

>
>
> 7. Security Considerations
>
> Can't you elaborate a bit? This look a bit like the bare minimum.
>
GIM>> You're right, and more should be said about the possible impact of
MultipointClient sessions. Proposed new text below:
NEW TEXT:
   Additionally, implementations that create MultipointClient sessions
   dynamically upon receipt of a BFD Control packet from a tail MUST
   implement protective measures to prevent an unbounded number of
   MultipointClient sessions from being created.  Some points to be
   considered in such implementations are listed below.

      When the number of MultipointClient sessions exceeds the number of
      expected tails, then the implementation should generate an alarm
      to users to indicate the anomaly.

      The implementation should have a reasonable upper bound on the
      number of MultipointClient sessions that can be created, with the
      upper bound potentially being computed based on the number of
      multicast streams that the system is expecting.

The text may be inserted as the second paragraph or replace the current
second paragraph.
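
A small Python sketch of the kind of guard the proposed text calls for
(illustrative only; the names, the alarm mechanism, and the limits are
assumptions):

    import logging

    def get_or_create_client_session(sessions, tail, expected_tails, hard_limit):
        """Create a MultipointClient session for a newly heard tail, alarming
        when the count exceeds the expected number of tails and refusing to
        grow past a configured upper bound."""
        if tail in sessions:
            return sessions[tail]
        if len(sessions) >= hard_limit:
            logging.error("MultipointClient session limit %d reached; ignoring %s",
                          hard_limit, tail)
            return None
        if len(sessions) >= expected_tails:
            logging.warning("%d MultipointClient sessions exceed %d expected tails",
                            len(sessions) + 1, expected_tails)
        sessions[tail] = {"type": "MultipointClient", "tail": tail}  # placeholder
        return sessions[tail]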

> ---
>