Re: WGLC for draft-ietf-bfd-large-packets

Robert Raszuk <> Sat, 28 September 2019 20:04 UTC

From: Robert Raszuk <>
Date: Sat, 28 Sep 2019 22:04:39 +0200
Message-ID: <>
Subject: Re: WGLC for draft-ietf-bfd-large-packets
To: Jeffrey Haas <>
Cc: "Les Ginsberg (ginsberg)" <>, "Reshad Rahman (rrahman)" <>, "" <>

Hi Jeff,

Imagine two scenarios, both already highlighted as justification for this
work:

*Scenario 1 -* IGP nodes interconnected with ECMP links.

*Scenario 2 -* IGP nodes interconnected with emulated L2 circuits which in
turn ride on a telco IP network using ECMP or LAGs.

*Questions Ad 1 -*

Is the idea in those cases to use a vendor's "ECMP-Aware BFD for LDP LSPs"
feature to detect MTU issues on any of the L3 paths? Is there a feature
extension that accomplishes the same without LDP, when ECMP is used with
plain OSPF?

How do you solve this when L2 LAG hashing across N links is enabled?
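To make the hashing question concrete, here is a minimal sketch (my own
illustration, not any vendor's feature) of probing member links by varying
the UDP source port, since ECMP/LAG hashes typically key on the 5-tuple.
Each source port may land on a different member; a DF-marked probe of the
candidate size that fails locally with EMSGSIZE, or is never echoed back,
flags a path whose MTU is below the tested size. The constants are Linux
numeric values, and the address/ports are hypothetical:

```python
# Sketch: exercise ECMP/LAG hashing by varying the UDP source port while
# sending probes of the candidate size. Not a BFD implementation.
import socket

# Linux values; the socket module may not export these names.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2  # set DF, never fragment locally


def probe_paths(dst, dport, size, sports):
    """Send one `size`-byte UDP probe per source port; return sport -> sent-ok."""
    results = {}
    payload = b"\x00" * size
    for sport in sports:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            try:
                # Mark probes don't-fragment so an undersized hop rejects them.
                s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
            except OSError:
                pass  # non-Linux platform: probe without DF
            s.bind(("", sport))
            s.sendto(payload, (dst, dport))
            results[sport] = True
        except OSError:
            results[sport] = False  # e.g. EMSGSIZE on an undersized path
        finally:
            s.close()
    return results
```

A real check would additionally need the far end to echo the probes, since
a silently dropped packet only shows up as a missing reply, not a send
error.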

*Question Ad 2 -*

How do you detect it if your L2 circuit provider maps the BFD flow to one
underlay path while some encapsulated data packets are hashed onto the
other path(s)? Clearly, running multiple BFD sessions is not going to help
much in this scenario. For example, if someone is using the IPv6 flow
label, it may be copied directly into the outer service header.
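The pinning effect above can be shown with a toy model (my own
illustration; the hash function and link count are stand-ins, not any real
router's algorithm). A BFD session keeps one flow label, so if the
provider's outer-header hash keys on a copied inner label, every BFD packet
picks the same underlay member, while data flows with varied labels spread
across all of them:

```python
# Toy model of provider ECMP keyed on a copied IPv6 flow label.
import zlib

N_LINKS = 4  # hypothetical LAG/ECMP member count


def pick_link(flow_label):
    # Stand-in for the provider's hash; flow label is a 20-bit field.
    return zlib.crc32(flow_label.to_bytes(3, "big")) % N_LINKS


bfd_label = 0x12345  # constant for the lifetime of the BFD session
bfd_links = {pick_link(bfd_label) for _ in range(100)}

# Data flows use many distinct labels and therefore many members.
data_links = {pick_link(lbl) for lbl in range(0x1000, 0x1100)}

# BFD monitors exactly one member; data may ride links BFD never sees.
```

The point of the toy: multiplying BFD sessions does not help, because every
session with the same copied label lands on the same member link.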

Many thx,

On Thu, Sep 26, 2019 at 9:46 PM Jeffrey Haas <> wrote:

> Les,
> On Tue, Sep 24, 2019 at 10:48:51PM +0000, Les Ginsberg (ginsberg) wrote:
> > A few more thoughts - maybe these are more helpful than my previous
> comments - maybe not. I am sure you will let me know.
> >
> > Protocol extensions allowing negotiation and/or advertisement of support
> for larger PDUs may well be useful - but let's agree that it is desirable
> to deploy this without protocol extensions just to keep the
> interoperability bar low.
> >
> > My primary angst is with the following paragraph in Section 3:
> >
> > "It is also worthy of note that even if an implementation can function
> >    with larger transport PDUs, that additional packet size may have
> >    impact on BFD scaling.  Such systems may support a lower transmission
> >    interval (bfd.DesiredMinTxInterval) when operating in large packet
> >    mode.  This interval may depend on the size of the transport PDU."
> >
> > Given long experience that CPU use correlates more highly with number of
> packets than with number of bytes, the first sentence would seem to be
> weakly supported.
> > Given the previously mentioned concerns about detection time, the second
> sentence seems to compromise the value of the extension.
> My experience is largely identical to yours.
> The motivation for mentioning anything at all here is TANSTAAFL[1], and
> we've already had people ask about possible impacts.  And, as we discussed
> previously in the thread we shall inevitably get asked about it during TSV
> review in IESG.
> The primary reason this is a "may" in the non-RFC 2119 sense is that our
> experience also suggests that when the scaling impacts are primarily pps
> rather than bps, this feature will likely have no major impact on
> implementations beyond your valid concerns about exercising bugs.
> I suspect had this not been mentioned at all, you would have been happier.
> But you're not the target audience for this weak caveat.
> > What might be better?
> >
> > 1)Some statement that MTU isn't necessarily a consistent value for all
> systems connected to an interface - which can impact the results when large
> BFD packets are used. Implementations might then want to consider
> supporting "bfd-mtu" configuration and/or iterating across a range of
> packet sizes to determine what works and what doesn't.
> I'm not clear what you intend by this statement.
> Are you asking that we emphasize the use case in a different way?  The
> Introduction currently states:
>   "However,
>    some applications may require that the Path MTU [RFC1191] between
>    those two systems meets a certain minimum criteria.  When the Path
>    MTU decreases below the minimum threshold, those applications may
>    wish to consider the path unusable."
> I'm also unclear what "Implementations" may refer to here.  BFD?  An
> arbitrary user application?  If the latter, the application may not have
> strict control over the generation of a given PDU size; e.g. TCP
> applications.
> > 2)Use of both padded and unpadded packets in combination with
> draft-ietf-bfd-stability to determine whether a BFD failure is due to
> padding or a generic forwarding failure.
> >
> > Either of these suggestions is really a "diagnostic mode" which may help
> diagnose a problem but isn't meant to be used continuously as part of fast
> failure detection.
> We could certainly add a paragraph or two as an application note about
> using
> this for BFD stability purposes as well.
> -- Jeff