Re: [mpls-tp] BFD sessions wrt protection types

David Allan I <david.i.allan@ericsson.com> Wed, 09 June 2010 18:02 UTC

From: David Allan I <david.i.allan@ericsson.com>
To: Lavanya Srivatsa <lavanya.srivatsa@aricent.com>, MPLS TP <mpls-tp@ietf.org>
Date: Wed, 9 Jun 2010 14:02:28 -0400

Hi Lavanya:

When draft-ietf-mpls-tp-cc-cv-rdi states:
"A single bi-directional BFD session is used for fate sharing operation. Two independent BFD sessions are used for independent operation.
....
The normal usage is that 1:1 protected paths must use fate sharing, and independent operation applies to 1+1 protected paths."

Does this mean that every time the protection switching architecture is changed from one type to the other by configuration/administrator action, the number of BFD sessions must also change? Would changing the number of BFD sessions involve tearing down existing sessions or setting up new ones?

As currently written, the answer would be "yes". However, how frequently do you actually expect this to happen in practice? Isn't this a change that moves at the speed of lawyers?
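To make the "yes" concrete, here is a minimal sketch of the session-count rule the draft implies, modeled as a reconfiguration step. All names here are illustrative, not from any real BFD implementation or from the draft itself:

```python
# Sessions required per protection architecture, per the quoted draft text:
# 1:1 (fate sharing) uses one bidirectional BFD session; 1+1 (independent
# operation) uses two independent sessions.
SESSIONS_PER_ARCH = {
    "1:1": 1,
    "1+1": 2,
}

def reconfigure(current_sessions, new_arch):
    """Return (sessions_to_tear_down, sessions_to_set_up) when the
    protection architecture changes. Purely illustrative."""
    wanted = SESSIONS_PER_ARCH[new_arch]
    have = len(current_sessions)
    if have > wanted:
        # Moving to fate sharing: surplus sessions must be torn down.
        return current_sessions[wanted:], []
    if have < wanted:
        # Moving to independent operation: extra sessions must be set up.
        return [], ["new-session-%d" % i for i in range(have, wanted)]
    return [], []

# Switching a 1:1 path (one session "s0") to 1+1 requires one new session:
down, up = reconfigure(["s0"], "1+1")
```

So a configuration change in the protection "application" does ripple down into session setup/teardown at the OAM layer, which is exactly the coupling being questioned below.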

Since protection switching is an "application" that uses the OAM functionality to protect traffic, why should the "underlying" OAM operation be disturbed just because the "application" above it undergoes a change?

I am unable to understand why the number of BFD OAM sessions should depend on the protection switching architecture/type.

In a perfect world, specifying a greenfield technology, you'd get that decoupling. My take is that BFD cares about whether something is up or down in order to conserve nodal resources. When something is broken, the node has better things to do than expend compute cycles originating large numbers of messages that simply black-hole. Those nodal resources are better spent being available for management or control-plane activity, and in some implementations restoration will happen faster as a consequence of dialing back superfluous BFD message generation.
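The "dialing back" behavior described above can be sketched as follows. This loosely follows the base BFD specification (RFC 5880), which requires a system not to transmit faster than once per second while a session is not Up; the constants and function names are my own, for illustration only:

```python
# One second, expressed in microseconds (BFD intervals are in microseconds).
ONE_SECOND_US = 1_000_000

def effective_tx_interval(session_state, configured_interval_us):
    """Transmit interval actually used, in microseconds.

    While the session is not Up, back off to no faster than one packet
    per second, so a broken path does not consume node resources that
    are better spent on management or control-plane recovery work.
    """
    if session_state != "Up":
        return max(configured_interval_us, ONE_SECOND_US)
    return configured_interval_us

# A fast 3.3 ms interval is throttled to 1 s once the session goes Down:
assert effective_tx_interval("Down", 3300) == ONE_SECOND_US
```

This is the resource-conservation argument in miniature: the OAM layer's transmit behavior is tied to session state, not to the protection application above it.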

I hope this helps
D