Re: WGLC for draft-ietf-bfd-large-packets

Jeffrey Haas <> Thu, 03 October 2019 20:09 UTC

Date: Thu, 3 Oct 2019 16:12:53 -0400
From: Jeffrey Haas <>
To: "Les Ginsberg (ginsberg)" <>
Cc: "Ketan Talaulikar (ketant)" <>, "Reshad Rahman (rrahman)" <>, "" <>
Subject: Re: WGLC for draft-ietf-bfd-large-packets
List-Id: "RTG Area: Bidirectional Forwarding Detection DT" <>


On Fri, Sep 27, 2019 at 09:14:08PM +0000, Les Ginsberg (ginsberg) wrote:
> > The primary reason this is a "may" in the non-RFC 2119 sense is that our
> > experience also suggests that when the scaling impacts are primarily pps
> > rather than bps, this feature will likely have no major impact on
> > implementations beyond your valid concerns about exercising bugs.
> > 
> > I suspect had this not been mentioned at all, you would have been happier.
> > But you're not the target audience for this weak caveat.
> > 
> [Les:] I am not opposed to a discussion of potential issues in the draft -
> rather, I am encouraging it. But the current text isn't really on the mark
> as far as the potential issues go - and we seem to agree on that. It also
> suggests lengthening detection time to compensate - which I think is not
> at all what you want to suggest, as it diminishes the value of the
> extension. Nor is it likely to address a real problem.

I think what I'm seeing from you is roughly:
- Note that larger MTUs may have impact on some implementations for BFD
- And simply stop there.

> For me, the potential issues are:
> a)Some BFD implementations might not be able to handle MTU sized BFD
> packets - not because of performance - but because they did not expect BFD
> packets to be full size and therefore might have issues passing a large
> packet through the local processing engine.

In such cases, the BFD session wouldn't be able to come up.  Are you
picturing a problem more dire than that?

> b)Accepted MTU is impacted by encapsulations and what layer is being
> considered (L2 or L3). And oftentimes link MTUs do not match on both ends
> ("shudder"), so you might end up with unidirectional connectivity.

Did you mean for BFD or more in the general sense?

For BFD, if you have one side testing for large MTU but not the other, we
can still have an Up BFD session with possible packet drops for large packets
on the opposite side.  But there's the chance in some paths that MTU may be
unidirectionally different - e.g. satellite down vs. land up.[1]

In such cases, configuring BFD large on both sides would be the right
answer.  But it's also possible that large packets may only need to be
unidirectionally delivered.
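To make the asymmetry concrete, here is a rough sketch (my own illustration,
not text from the draft) of how a sender might pad its BFD Control packets out
to a configured probe size. The padding sits in the UDP payload beyond the
control packet, so the BFD Length field still describes only the 24-byte
mandatory section; a peer not running the feature just sends unpadded packets,
and each direction tests its own size. Field values are placeholders.

```python
# Illustrative only: a minimal RFC 5880 BFD Control packet (24-byte
# mandatory section, no authentication) with zero padding appended to
# the UDP payload so the datagram exercises a larger MTU.
import struct

BFD_CTRL_LEN = 24  # mandatory section of a BFD Control packet

def bfd_control(my_disc: int, your_disc: int) -> bytes:
    return struct.pack(
        "!BBBBIIIII",
        (1 << 5),       # version 1, diag 0
        (3 << 6),       # state Up, no flags
        3,              # detect multiplier
        BFD_CTRL_LEN,   # Length covers only the control packet, not padding
        my_disc,
        your_disc,
        50_000,         # desired min TX interval (us)
        50_000,         # required min RX interval (us)
        0,              # required min echo RX interval
    )

def padded(ctrl: bytes, target_payload: int) -> bytes:
    """Pad the UDP payload out to the configured probe size (never truncate)."""
    return ctrl + b"\x00" * max(0, target_payload - len(ctrl))

# 1472-byte UDP payload -> ~1500-byte IPv4 datagram (20 IP + 8 UDP)
payload = padded(bfd_control(1, 2), 1472)
```

If only one side is configured this way, only that side's transmit direction
carries the large datagrams, which is exactly the unidirectional case above.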

> I
> appreciate that this is exactly the problem that the extensions are
> designed to detect. I am just asking that these issues be discussed more
> explicitly as an aid to the implementor. If that also makes Transports ADs
> happier that is a side benefit - but that's not my motivation.

We're happy to have that in the document.

> > > What might be better?
> > >
> > > 1)Some statement that MTU isn't necessarily a consistent value for all
> > > systems connected to an interface - which can impact the results when large
> > > BFD packets are used. Implementations might then want to consider
> > > supporting "bfd-mtu" configuration and/or iterating across a range of packet
> > > sizes to determine what works and what doesn't.
> > 
> > I'm not clear what you intend by this statement.
> > 
> > Are you asking that we emphasize the use case in a different way?  The
> > Introduction currently states:
> >   "However,
> >    some applications may require that the Path MTU [RFC1191] between
> >    those two systems meets a certain minimum criteria.  When the Path
> >    MTU decreases below the minimum threshold, those applications may
> >    wish to consider the path unusable."
> > 
> > I'm also unclear what "Implementations" may refer to here.  BFD?  An
> > arbitrary user application?  If the latter, the application may not have
> > strict control over the generation of a given PDU size; e.g. TCP
> > applications.
> > 
> [Les:] I am talking about BFD implementations.
> I suppose one can imagine each BFD client requesting a certain MTU value -
> but that wouldn't be my choice.

BFD conversations happen between pairs of devices.  In the case that you
have multiple devices connected to a network segment, each conversation
could (and may intentionally) have different properties.  

An easy example of this is two devices running an IGP may want fast failure
detection, while two other devices running BGP may be happy with detection at
second-level granularity.  So too could some device decide that it cares about
bidirectional path MTU while the others may not.
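As a sketch of that point (the knob names here are hypothetical, not from any
implementation or from the draft), per-session parameters on one shared
segment might look like:

```python
# Hypothetical per-peer session parameters on one multi-access segment.
# Each conversation carries its own timing and padding requirements;
# padded_size == 0 means the large-packet feature is off for that peer.
from dataclasses import dataclass

@dataclass
class BfdSession:
    peer: str
    min_tx_us: int    # desired transmit interval, microseconds
    detect_mult: int
    padded_size: int  # target UDP payload size; 0 = no padding

    def detection_time_us(self) -> int:
        # Simplified for illustration: the real detection time uses the
        # negotiated interval and the remote detect multiplier (RFC 5880).
        return self.detect_mult * self.min_tx_us

segment = [
    BfdSession("igp-peer", min_tx_us=50_000, detect_mult=3, padded_size=1472),
    BfdSession("bgp-peer", min_tx_us=300_000, detect_mult=3, padded_size=0),
]
```

The IGP pair gets 150 ms detection and MTU testing; the BGP pair gets ~1 s
detection and no padding, all on the same segment.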

Given prior BFD documents' lack of discussion about such multi-access
network considerations, I'm not sure it's in character to have it just for
such a case, if that's what you're concerned with.

> I would think the value we want is really the maximum L3 payload that the
> link is intended to support - which should be independent of the BFD
> client. This might be larger than any client actually uses - but that
> seems like a good thing.

In this case we have actual existence proof of desired behavior.  The links
may be 9k but the user cares only about 1500 bytes end to end.  If 1500 bytes
for BFD large works but 9k doesn't, we've not tested what the user actually
cares about.
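Stated as a trivial helper (illustrative only, not from the draft): the padded
probe size should come from the user's end-to-end requirement, not from the
link MTU.

```python
# Illustrative: derive the probe payload size from what the user needs
# end to end, not from the (possibly much larger) link MTU.
def probe_payload_size(user_required: int, link_mtu: int) -> int:
    if user_required > link_mtu:
        raise ValueError("end-to-end requirement exceeds link MTU")
    return user_required  # e.g. test 1500 on a 9k link, not 9000
```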

-- Jeff