Re:Rtg-bfd Digest, Vol 163, Issue 25

"Albert Fu (BLOOMBERG/ 120 PARK)" <afu14@bloomberg.net> Mon, 30 September 2019 04:11 UTC

Date: Mon, 30 Sep 2019 04:11:33 -0000
From: "Albert Fu (BLOOMBERG/ 120 PARK)" <afu14@bloomberg.net>
Reply-To: "Albert Fu" <afu14@bloomberg.net>
To: rtg-bfd@ietf.org
Subject: Re:Rtg-bfd Digest, Vol 163, Issue 25
Archived-At: <https://mailarchive.ietf.org/arch/msg/rtg-bfd/M3or80adSPp33gdZq-C59KuoAqM>
List-Id: "RTG Area: Bidirectional Forwarding Detection DT" <rtg-bfd.ietf.org>

Hi Robert,


> Imagine two scenarios which were already highlighted as justification for
> this work:

> *Scenario 1 -* IGP with nodes interconnected with ECMP links

> *Scenario 2 -* IGP nodes interconnected with L2 emulated circuits which in
> turn are riding on telco IP network with ECMPs or LAGs.

> *Questions Ad 1 - *

> Is the idea to use in those cases "ECMP-Aware BFD for LDP LSPs" vendor's
> feature to be able to detect MTU issues on any of the L3 paths ? Is there
> feature extension to accomplish the same without LDP just when using ECMP
> with OSPF ?
The draft does not go into specific use cases. I think most BFD use cases (certainly ours) are on p2p IGP/eBGP links. (By the way, some vendors do not support control-plane-independent BFD for multihop sessions, making it unreliable for fast detection.)


The end-to-end path may traverse multiple ECMP links/paths. The BFD session on each individual link along the path will detect the large-packet issue on that link.
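As an illustration of why per-link sessions localize the fault (my sketch, not from the draft): the usable end-to-end MTU is the minimum of the member-link MTUs, and a padded session on each link flags only its own link. Link values below are hypothetical:

```python
def path_mtu(link_mtus):
    """The usable end-to-end MTU is the minimum link MTU along the path."""
    return min(link_mtus)

def links_failing_padding(link_mtus, required_mtu):
    """Indices of links whose padded BFD packets at required_mtu would be
    dropped -- each per-link session flags only its own faulty link."""
    return [i for i, mtu in enumerate(link_mtus) if mtu < required_mtu]

# Hypothetical path: the middle link misconfigured to a 1500-byte MTU.
links = [9100, 1500, 9100]
print(path_mtu(links))                     # 1500
print(links_failing_padding(links, 9000))  # [1]
```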


> How do you solve this when there is L2 LAG hashing across N links enabled ?
This is a situation where you need more than standard BFD. It is one reason why customers like us prefer not to run LAGs on parallel WAN circuits: we can then diagnose individual interface issues easily with standard tools like ping. It is a design compromise.
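For reference, the usual ping-based check pads the probe to the full MTU with fragmentation disallowed, so the payload size must leave room for the IP and ICMP headers. A small sketch of the arithmetic (assuming IPv4 header sizes):

```python
def df_ping_payload(mtu, ip_header=20, icmp_header=8):
    """Largest ICMP payload that still fits in one unfragmented IPv4 packet,
    i.e. the -s value for 'ping -M do -s <size> <peer>' on Linux."""
    return mtu - ip_header - icmp_header

print(df_ping_payload(1500))  # 1472 for a standard Ethernet MTU
print(df_ping_payload(9100))  # 9072 for a jumbo-frame circuit
```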


> *Question Ad 2 - *

> How do you detect it if your L2 circuit provider maps BFD flows to one
> underlay path and some encapsulated data packets is hashed to traverse the
> other path(s) ? Clearly running multiple BFD sessions is not going to help
> much in this scenario .... For example if someone is using v6 flow label it
> may be directly copied to the outer service header.
We have no control over how the provider maps our traffic (BFD or data). In my experience, the chance of this happening is probably small: in all of the incidents I have been involved with, every packet above a certain size would fail, rather than some getting through and some failing.
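To illustrate why a single BFD session cannot cover every underlay path (a toy model, not any vendor's actual hash): ECMP/LAG hashing pins each flow to one member link, so the BFD control flow and a given data flow can land on different links. Addresses and ports below are examples:

```python
import hashlib

def ecmp_member(flow, n_links):
    """Toy ECMP hash: map a 5-tuple flow onto one of n member links."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return digest[0] % n_links

# Hypothetical flows between the same endpoints.
bfd_flow  = ("192.0.2.1", "192.0.2.2", 49152, 3784, "udp")  # BFD control
data_flow = ("192.0.2.1", "192.0.2.2", 55000,  443, "udp")

# With several member links, the two flows may hash to different ones,
# so a BFD session can miss an MTU fault on the link the data takes.
print(ecmp_member(bfd_flow, 4), ecmp_member(data_flow, 4))
```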


Thanks

Albert