Re: BFD WG adoption for draft-haas-bfd-large-packets

"Albert Fu (BLOOMBERG/ 120 PARK)" <afu14@bloomberg.net> Tue, 23 October 2018 16:45 UTC

Date: Tue, 23 Oct 2018 16:45:43 -0000
From: "Albert Fu (BLOOMBERG/ 120 PARK)" <afu14@bloomberg.net>
Reply-To: Albert Fu <afu14@bloomberg.net>
To: rtg-bfd@ietf.org, ginsberg@cisco.com, acee@cisco.com
Subject: Re: BFD WG adoption for draft-haas-bfd-large-packets

Hi Acee,

You are right that this issue does not happen frequently, but when it does, it is time-consuming to troubleshoot and causes unnecessary network downtime for some applications (e.g. between two end hosts, some applications worked fine, but others would intermittently fail when they tried to send large packets over the failing ECMP path).

I believe the OSPF MTU detection is a control-plane mechanism to check config, and it may not necessarily detect a data-plane MTU issue (since OSPF does not support padding). Also, most of our issues occurred after the routing adjacency had been established, and without any network alarms.
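
(Purely as an illustration, not something from the draft: a minimal sketch of the kind of data-plane probe this implies, assuming Linux socket options and a hypothetical peer address. A padded BFD session performs the same kind of check end to end, tied to the routing protocols.)

    import socket

    # Illustrative data-plane MTU probe (assumptions: Linux, hypothetical peer).
    # Setting DF means an undersized local or cached path MTU is reported as an
    # error rather than the packet being silently fragmented. A silently dropping
    # path (no ICMP coming back) is exactly the case where only an end-to-end
    # padded probe, such as the padded BFD packets under discussion, will notice.
    PEER = ("192.0.2.1", 9)            # hypothetical peer (documentation address)
    PAYLOAD = 1500 - 20 - 8            # fill a 1500-byte IPv4 packet: minus IP and UDP headers

    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)  # Linux option number
    IP_PMTUDISC_DO = 2                                         # always set DF, never fragment

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    try:
        s.sendto(b"\x00" * PAYLOAD, PEER)
        print("1500-byte packet accepted by the local stack")
    except OSError as exc:
        print("local/path MTU below 1500:", exc)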

Thanks
Albert

From: acee@cisco.com At: 10/23/18 12:30:55
To: Albert Fu (BLOOMBERG/ 120 PARK), rtg-bfd@ietf.org, ginsberg@cisco.com
Subject: Re: BFD WG adoption for draft-haas-bfd-large-packets

Hi Albert, Les,

I tend to agree with Les that BFD doesn’t seem like the right protocol for this. Note that if you use OSPF as your IGP and flap the interface when the MTU changes, you’ll detect MTU mismatches immediately due to OSPF’s DB exchange MTU negotiation. Granted, control-plane detection won’t detect data-plane bugs resulting in MTU fluctuations, but I don’t see this as a frequent event.

Thanks,
Acee

From: Rtg-bfd <rtg-bfd-bounces@ietf.org> on behalf of "Albert Fu (BLOOMBERG/ 120 PARK)" <afu14@bloomberg.net>
Reply-To: Albert Fu <afu14@bloomberg.net>
Date: Tuesday, October 23, 2018 at 11:44 AM
To: "rtg-bfd@ietf.org" <rtg-bfd@ietf.org>, "Les Ginsberg (ginsberg)" <ginsberg@cisco.com>
Subject: RE: BFD WG adoption for draft-haas-bfd-large-packets 

  

Hi Les,

Given that it takes a relatively lengthy time to troubleshoot the MTU issue, and given the associated impact on customer traffic, it is important to have a reliable and fast mechanism to detect the issue.

I believe BFD, especially in the single-hop, control-plane-independent case (btw, this covers the majority of our BFD use cases), is indeed an ideal and reliable solution for this purpose. It is also closely tied to the routing protocols, and enables traffic to be diverted very quickly.

The choice of BFD timer is also one of the design tradeoffs - a low BFD detection timer will cause more network churn. We do not need an extremely aggressive BFD timer to achieve fast convergence. For example, with protection, we can achieve end-to-end sub-second convergence by using a relatively high BFD interval of 150ms.

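(A quick sketch of the arithmetic behind those numbers; the detect multiplier of 3 is my assumption here, since only the 150ms interval is stated above.)

    # BFD detection time = negotiated transmit interval * detect multiplier (RFC 5880).
    tx_interval_ms = 150        # the relatively relaxed interval mentioned above
    detect_multiplier = 3       # assumed, commonly used value
    print(tx_interval_ms * detect_multiplier)   # 450 ms, leaving headroom for sub-second end-to-end convergence
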
In the case where the path will be used for a variety of encapsulations (e.g. pure IP and L3VPN traffic), we would set the BFD padding to cater for the largest possible payload. So, in our case, where the link needs to carry a mix of pure IP (1500-byte max payload) and MPLS traffic (1500 bytes plus 3 label headers), we would set the padding so that the total padded BFD packet size is 1512 bytes.

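(Likewise, just spelling out the sizing arithmetic implied above, assuming the usual 4-byte MPLS label-stack entries:)

    # Pad the BFD packet up to the largest frame payload the link must carry.
    max_ip_payload = 1500          # largest pure-IP payload on the link
    mpls_label_entry = 4           # bytes per MPLS label-stack entry
    label_depth = 3                # the "3 headers" worst case above
    print(max_ip_payload + label_depth * mpls_label_entry)   # 1512 bytes, the target padded size
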
As you rightly pointed out, the IS-IS routing protocol does support hello padding, but since this is a control-plane process, we cannot use aggressive timers. The lowest hello interval that can be configured is 1s, so with the default multiplier of 3, the best we can achieve is a 3s detection time.

What we would like is a simple mechanism to validate that a link can indeed carry the expected max payload size before we put it into production. If an issue occurs where this is no longer the case (e.g. due to outages or re-routing within the Telco circuit), we would like a reliable mechanism to detect this, and also to divert traffic around the link quickly. I feel BFD is a good method for this purpose.

Thanks

Albert

From: ginsberg@cisco.com At: 10/23/18 10:45:02
To: Albert Fu (BLOOMBERG/ 120 PARK), rtg-bfd@ietf.org
Subject: RE: BFD WG adoption for draft-haas-bfd-large-packets


Albert – 
  
Please understand that I fully agree with the importance of being able to detect/report MTU issues. In my own experience this can be a difficult problem to diagnose. You do not have to convince me that some improvement in detection/reporting is needed. The question really is whether using BFD is the best option. 
  
Could you respond to my original questions – particularly why sub-second detection of this issue is a requirement? 
  
For your convenience: 
  
<snip> 
It has been stated that there is a need for sub-second detection of this condition – but I really question that requirement.  
What I would expect is that MTU changes only occur as a result of some maintenance operation (configuration change, link addition/bringup, insertion of a new box in the physical path, etc.). The idea of using a mechanism which is specifically tailored for sub-second detection to monitor something that is only going to change occasionally seems inappropriate. It makes me think that other mechanisms (some form of OAM, enhancements to routing protocols to do what IS-IS already does 🙂) could be more appropriate and would still meet the operational requirements. 
  
I have listened to the Montreal recording – and I know there was discussion related to these issues (not sending padded packets all the time, use of BFD echo, etc.) – but I would be interested in more discussion of the need for sub-second detection. 
  
Also, given that a path might be used with a variety of encapsulations, how do you see such a mechanism being used when multiple BFD clients share the same BFD session and their MTU constraints are different? 
<end snip> 
  
Thanx. 
  
   Les