Re: BFD WG adoption for draft-haas-bfd-large-packets

"Acee Lindem (acee)" <acee@cisco.com> Tue, 23 October 2018 17:04 UTC

From: "Acee Lindem (acee)" <acee@cisco.com>
To: Albert Fu <afu14@bloomberg.net>, "rtg-bfd@ietf.org" <rtg-bfd@ietf.org>, "Les Ginsberg (ginsberg)" <ginsberg@cisco.com>
Subject: Re: BFD WG adoption for draft-haas-bfd-large-packets
Thread-Topic: BFD WG adoption for draft-haas-bfd-large-packets
Thread-Index: AQHUavKD2obCS5tWFUCJRL2ls+mZdA==
Date: Tue, 23 Oct 2018 17:04:51 +0000
Message-ID: <59FA61C0-3B50-49C7-82B2-BCF69E1A4C55@cisco.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/rtg-bfd/ZxWb-fbTg1WH1EUrsSQjPZqt5yA>

Hi Albert,

Resending due to a related problem: the message was too big for the IETF list's size filter, so I pruned part of the thread at the end.

From: "Albert Fu (BLOOMBERG/ 120 PARK)" <afu14@bloomberg.net>
Reply-To: Albert Fu <afu14@bloomberg.net>
Date: Tuesday, October 23, 2018 at 12:45 PM
To: "rtg-bfd@ietf.org" <rtg-bfd@ietf.org>, "Les Ginsberg (ginsberg)" <ginsberg@cisco.com>, Acee Lindem <acee@cisco.com>
Subject: Re: BFD WG adoption for draft-haas-bfd-large-packets

Hi Acee,

You are right that this issue does not happen frequently, but when it does, it is time-consuming to troubleshoot and causes unnecessary downtime for some applications (e.g. between two end hosts, some applications worked fine, while others would intermittently fail when they tried to send large packets over the failing ECMP path).

So you’re saying there is a problem where the data plane interfaces do not support the configured MTU due to a SW bug? I hope these are not our routers 😉

I believe the OSPF MTU detection is a control plane mechanism to check configuration, and may not necessarily detect a data plane MTU issue (since OSPF does not support padding). Also, most of our issues occurred after the routing adjacency had been established, and without any network alarms.

Right. However, if the interface is flapped when the MTU changes, OSPF would detect dynamic MTU changes (e.g., configuration changes) that the control plane is aware of.

Thanks,
Acee

Thanks
Albert

From: acee@cisco.com At: 10/23/18 12:30:55
To: Albert Fu (BLOOMBERG/ 120 PARK ) <mailto:afu14@bloomberg.net> , rtg-bfd@ietf.org<mailto:rtg-bfd@ietf.org>, ginsberg@cisco.com<mailto:ginsberg@cisco.com>
Subject: Re: BFD WG adoption for draft-haas-bfd-large-packets
Hi Albert, Les,

I tend to agree with Les that BFD doesn't seem like the right protocol for this. Note that if you use OSPF as your IGP and flap the interface when the MTU changes, you'll detect MTU mismatches immediately due to OSPF's Database Exchange MTU negotiation. Granted, control plane detection won't catch data plane bugs resulting in MTU fluctuations, but I don't see this as a frequent event.
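
For illustration, a rough sketch of the check OSPF performs during Database Exchange (my own sketch following RFC 2328, not any particular implementation; the helper name is made up):

    import struct

    def dd_mtu_mismatch(dd_body: bytes, local_mtu: int) -> bool:
        # The Interface MTU is the first 16-bit field of the Database
        # Description packet body (RFC 2328, A.3.3).
        (neighbor_mtu,) = struct.unpack("!H", dd_body[:2])
        # Per RFC 2328, 10.6, a DD packet advertising a larger MTU than the
        # receiving interface can handle is rejected, so the adjacency
        # sticks in ExStart and the mismatch is visible immediately.
        return neighbor_mtu > local_mtu

    # Neighbor advertises 9000 bytes, local interface is 1500 -> mismatch.
    print(dd_mtu_mismatch(struct.pack("!H", 9000) + bytes(6), 1500))  # True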

Thanks,
Acee


From: Rtg-bfd <rtg-bfd-bounces@ietf.org> on behalf of "Albert Fu (BLOOMBERG/ 120 PARK)" <afu14@bloomberg.net>
Reply-To: Albert Fu <afu14@bloomberg.net>
Date: Tuesday, October 23, 2018 at 11:44 AM
To: "rtg-bfd@ietf.org" <rtg-bfd@ietf.org>, "Les Ginsberg (ginsberg)" <ginsberg@cisco.com>
Subject: RE: BFD WG adoption for draft-haas-bfd-large-packets

Hi Les,

Given the relatively lengthy time it takes to troubleshoot the MTU issue, and the associated impact on customer traffic, it is important to have a reliable and fast mechanism to detect it.

I believe BFD, especially in the single-hop, control-plane-independent case (btw, this covers the majority of our BFD use cases), is indeed an ideal and reliable solution for this purpose. It is also closely tied to the routing protocols, and enables traffic to be diverted very quickly.

The choice of BFD timer is also one of the design tradeoffs: a low BFD detection timer will cause more network churn. We do not need an extremely aggressive BFD timer to achieve fast convergence. For example, with protection, we can achieve end-to-end sub-second convergence using a relatively high BFD interval of 150ms.
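
To make the arithmetic concrete, a minimal sketch (my own illustration, using our example figures rather than a recommendation):

    def bfd_detection_time_ms(tx_interval_ms: int, detect_mult: int) -> int:
        # Detection time is roughly the remote Detect Mult times the agreed
        # transmit interval (RFC 5880, Section 6.8.4).
        return detect_mult * tx_interval_ms

    # A 150ms interval with the usual multiplier of 3 gives 450ms detection,
    # leaving headroom for protection switchover well under one second.
    print(bfd_detection_time_ms(150, 3))  # 450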

Where the path will be used for a variety of encapsulations (e.g. pure IP and L3VPN traffic), we would set the BFD padding to cater for the largest possible payload. So, in our case, where the link needs to carry a mix of pure IP (1500-byte max payload) and MPLS traffic (1500 bytes plus 3 label headers), we would set the padding so that the total padded BFD packet size is 1512 bytes.
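
The padding target is simply the largest encapsulated payload the link must carry; as a rough sketch (assuming 4-byte MPLS label stack entries):

    MPLS_LABEL_LEN = 4  # bytes per MPLS label stack entry

    def padded_bfd_size(max_ip_payload: int, mpls_labels: int) -> int:
        # Size the padded BFD packet to the largest payload expected on the link.
        return max_ip_payload + mpls_labels * MPLS_LABEL_LEN

    print(padded_bfd_size(1500, 3))  # 1512 bytes, as in the example above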

As you rightly pointed out, the ISIS routing protocol does support hello padding, but since this is a control plane process, we cannot use an aggressive timer. The lowest hello interval that can be configured is 1s, so with the default multiplier of 3, the best we can achieve is a 3s detection time.

What we would like is a simple mechanism to validate that a link can indeed carry the expected max payload size before we put it into production. If an issue occurs where this is no longer the case (e.g. due to outages or re-routing within the Telco circuit), we would like a reliable mechanism to detect this, and also to divert traffic around the link quickly. I feel BFD is a good method for this purpose.
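
For the one-off pre-production check, something as simple as a don't-fragment probe can confirm the size once; a rough sketch, assuming a Linux host and a made-up destination address:

    import socket

    IP_MTU_DISCOVER = 10   # Linux-specific socket option values
    IP_PMTUDISC_DO = 2     # set DF and refuse to fragment locally

    def can_send_unfragmented(dst_ip: str, udp_payload: int, dport: int = 33434) -> bool:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        try:
            s.sendto(b"\x00" * udp_payload, (dst_ip, dport))
            return True
        except OSError:    # EMSGSIZE: larger than the locally known path MTU
            return False
        finally:
            s.close()

    # A 1500-byte IP MTU leaves 1500 - 20 (IP) - 8 (UDP) = 1472 bytes of payload.
    print(can_send_unfragmented("192.0.2.1", 1472))

The catch is that this only reflects what the local stack currently believes about the path; a silent MTU drop inside the Telco circuit would still go unnoticed until traffic fails, which is exactly why a continuously padded BFD session is attractive for ongoing detection.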

Thanks
Albert