TEAS Working Group                                         Haomian Zheng
Internet Draft                                                    Yi Lin
Category: Informational                              Huawei Technologies
                                                               Yang Zhao
                                                            China Mobile
                                                               Yunbin Xu
                                                                   CAICT
                                                           Dieter Beller
                                                                   Nokia
Expires: January 17, 2025                                  July 16, 2024

    Interworking of GMPLS Control and Centralized Controller Systems
              draft-ietf-teas-gmpls-controller-inter-work-15
Abstract

   Generalized Multi-Protocol Label Switching (GMPLS) control allows
   each network element (NE) to perform local resource discovery,
   routing and signaling in a distributed manner.

   The advancement of software-defined transport networking technology
   enables a group of NEs to be managed through centralized controller
   hierarchies. This helps to tackle challenges arising from multiple
   domains, vendors, and technologies. An example of such a centralized
   architecture is the Abstraction and Control of Traffic Engineered
   Networks (ACTN) controller hierarchy, as described in RFC 8453.

   Both the distributed and centralized control planes have their
   respective advantages and should complement each other in the
   system, rather than competing. This document outlines how the GMPLS
   distributed control plane can work together with a centralized
   controller system in a transport network.
Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 17, 2025.
Copyright Notice

   Copyright (c) 2024 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction
   2. Abbreviations
   3. Overview
      3.1. Overview of GMPLS Control Plane
      3.2. Overview of Centralized Controller System
      3.3. GMPLS Control Interworking with a Centralized Controller
           System
   4. Discovery Options
      4.1. LMP
   5. Routing Options
      5.1. OSPF-TE
      5.2. ISIS-TE
      5.3. NETCONF/RESTCONF
   6. Path Computation
      6.1. Controller-based Path Computation
      6.2. Constraint-based Path Computing in GMPLS Control
      6.3. Path Computation Element (PCE)
   7. Signaling Options
      7.1. RSVP-TE
   8. Interworking Scenarios
      8.1. Topology Collection & Synchronization
      8.2. Multi-domain Service Provisioning
      8.3. Multi-layer Service Provisioning
         8.3.1. Multi-layer Path Computation
         8.3.2. Cross-layer Path Creation
         8.3.3. Link Discovery
      8.4. Recovery
         8.4.1. Span Protection
         8.4.2. LSP Protection
         8.4.3. Single-domain LSP Restoration
         8.4.4. Multi-domain LSP Restoration
         8.4.5. Fast Reroute
      8.5. Controller Reliability
   9. Manageability Considerations
   10. Security Considerations
   11. IANA Considerations
   12. References
      12.1. Normative References
      12.2. Informative References
   13. Contributors
   14. Authors' Addresses
   Acknowledgements
1. Introduction

   Generalized Multi-Protocol Label Switching (GMPLS) [RFC3945] extends
   MPLS to support different classes of interfaces and switching
   capabilities such as Time-Division Multiplex Capable (TDM), Lambda
   Switch Capable (LSC), and Fiber-Switch Capable (FSC). Each network
   element (NE) running a GMPLS control plane collects network
   information from other NEs and supports service provisioning through
   signaling in a distributed manner. A more generic description of
   Centralized controllers can collect network information from each
   node and provision services on corresponding nodes. One example is
   the Abstraction and Control of Traffic Engineered Networks (ACTN)
   [RFC8453], which defines a hierarchical architecture with
   Provisioning Network Controller (PNC), Multi-domain Service
   Coordinator (MDSC) and Customer Network Controller (CNC) as
   centralized controllers for different network abstraction levels. A
   Path Computation Element (PCE) based approach has been proposed as
   Application-Based Network Operations (ABNO) in [RFC7491].
   GMPLS can be used to control Network Elements (NEs) in such
   centralized controller architectures. A centralized controller may
   support GMPLS-enabled domains and communicate with a GMPLS-enabled
   domain where the GMPLS control plane handles service provisioning
   from ingress to egress. In this scenario, the centralized controller
   sends a request to the ingress node and does not need to configure
   all NEs along the path within the domain from ingress to egress,
   thus leveraging the GMPLS control plane. This document describes how
   the GMPLS control plane interworks with a centralized controller
   system in a transport network.
2. Abbreviations

   The following abbreviations are used in this document.

   ABNO     Application-Based Network Operations
   ACTN     Abstraction and Control of Traffic Engineered Networks
   APS      Automatic Protection Switching
   BRPC     Backward-Recursive PCE-Based Computation
   CNC      Customer Network Controller
   CSPF     Constraint-based Shortest Path First
   DoS      Denial-of-Service
   E2E      End-to-end
   ERO      Explicit Route Object
   FA       Forwarding Adjacency
   FRR      Fast Reroute
   FSC      Fiber-Switch Capable
   GMPLS    Generalized Multi-Protocol Label Switching
   H-PCE    Hierarchical PCE
   IDS      Intrusion Detection System
   IGP      Interior Gateway Protocol
   IoC      Indicators of Compromise
   IPS      Intrusion Prevention System
   IS-IS    Intermediate System to Intermediate System protocol
   LMP      Link Management Protocol
   LSC      Lambda Switch Capable
   LSP      Label Switched Path
   LSP-DB   LSP Database
   MD       Multi-domain
   MDSC     Multi-domain Service Coordinator
   MITM     Man-In-The-Middle
   ML       Multi-layer
   MPI      MDSC to PNC Interface
   NE       Network element
   NETCONF  Network Configuration Protocol
   NMS      Network Management System
   OSPF     Open Shortest Path First protocol
   PCC      Path Computation Client
   PCE      Path Computation Element
   PCEP     Path Computation Element communication Protocol
   PCEP-LS  Link-state PCEP
   PLR      Point of Local Repair
   PNC      Provisioning Network Controller
   RSVP     Resource Reservation Protocol
   SBI      Southbound Interface
   SDN      Software-Defined Networking
   TDM      Time-Division Multiplex Capable
   TE       Traffic Engineering
   TED      Traffic Engineering Database
   TLS      Transport Layer Security
   VNTM     Virtual Network Topology Manager

3. Overview

   This section provides an overview of the GMPLS control plane,
   centralized controller systems and their interactions in transport
   networks.
   A transport network [RFC5654] is a server-layer network designed to
   provide connectivity services for client-layer connectivity. This
   setup allows client traffic to be carried seamlessly across the
   server-layer network resources.

3.1. Overview of GMPLS Control Plane

   GMPLS separates the control plane and the data plane to support
   time-division, wavelength, and spatial switching, which are
   significant in transport networks. For the NE level control in
   GMPLS, each node runs a GMPLS control plane instance.
   Functionalities such as service provisioning, protection, and
   restoration can be performed via GMPLS communication among multiple
   NEs. At the same time, the GMPLS control plane instance can also
   collect information about node and link resources in the network to
   construct the network topology and compute routing paths for serving
   Several protocols have been designed for the GMPLS control plane
   [RFC3945], including link management [RFC4204], signaling [RFC3471],
   and routing [RFC4202] protocols. The GMPLS control plane instances
   applying these protocols communicate with each other to exchange
   resource information and establish Label Switched Paths (LSPs). In
   this way, GMPLS control plane instances in different nodes in the
   network have the same view of the network topology and provision
   services based on local policies.
3.2. Overview of Centralized Controller System

   With the development of SDN technologies, a centralized controller
   architecture has been introduced to transport networks. One example
   architecture can be found in ACTN [RFC8453]. In such systems, a
   controller is aware of the network topology and is responsible for
   provisioning incoming service requests.

   Multiple hierarchies of controllers are designed at different levels
   to implement different functions. This kind of architecture enables
   multi-vendor, multi-domain, and multi-technology control. For
   example, a higher-level controller coordinates several lower-level
   controllers controlling different domains, for topology collection
   and service provisioning. Vendor-specific features can be abstracted
   between controllers, and a standard API (e.g., generated from
   RESTCONF [RFC8040] / YANG [RFC7950]) may be used.
3.3. GMPLS Control Interworking with a Centralized Controller System

   Besides GMPLS and the interactions among the controller hierarchies,
   it is also necessary for the controllers to communicate with the
   network elements. Within each domain, GMPLS control can be applied
   to each NE. The bottom-level centralized controller can act as an NE
   to collect network information and initiate LSPs. Figure 1 shows an
   example of GMPLS interworking with centralized controllers (ACTN
   terminologies are used in the figure).
     Figure 1: Example of GMPLS interworking with centralized controllers
     in the domain with PNC implementing a PCE, Path Computation
     Clients (PCCs) (e.g. NEs, other controller/PCE) use PCEP to ask
     the PNC for a path and get replies. The Multi-Domain Service
     Coordinator (MDSC) communicates with PNCs using, for example,
     REST/RESTCONF based on YANG data models. As a PNC has learned its
     domain topology, it can report the topology to the MDSC. When a
     service arrives, the MDSC computes the path and coordinates PNCs
     to establish the corresponding LSP segment;
   - Alternatively, the NETCONF protocol can be used to retrieve
     topology information utilizing the [RFC8795] YANG model and the
     technology-specific YANG model augmentations required for the
     specific network technology. The PNC can retrieve topology
     information from any NE (the GMPLS control plane instance of each
     NE in the domain has the same topological view), construct the
     topology of the domain, and export an abstract view to the MDSC.
     Based on the topology retrieved from multiple PNCs, the MDSC can
     create a topology graph of the multi-domain network, and can use
     it for path computation. To set up a service, the MDSC can exploit
     the [TE-Tunnel] YANG model together with the technology-specific
     YANG model augmentations.
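   As a rough illustration of this workflow, the following Python
   sketch retrieves the [RFC8795] topology tree from a PNC over
   RESTCONF [RFC8040] and lists the TE links it advertises. The PNC
   address and credentials are hypothetical, and the leaf names follow
   the RFC 8345 / RFC 8795 structure but may need adjustment for a
   given implementation.

      # Minimal sketch: fetch the ietf-network tree (with TE topology
      # augmentations) from a PNC over RESTCONF and list its TE links.
      import requests

      PNC_RESTCONF = "https://pnc.example.net/restconf"  # assumed root
      AUTH = ("admin", "admin")                          # assumed creds
      HEADERS = {"Accept": "application/yang-data+json"}

      def fetch_networks():
          """GET the whole ietf-network:networks tree from the PNC."""
          resp = requests.get(f"{PNC_RESTCONF}/data/ietf-network:networks",
                              auth=AUTH, headers=HEADERS, timeout=30)
          resp.raise_for_status()
          return resp.json()

      def te_links(networks):
          """Yield (network-id, link-id, te-link-attributes) per link."""
          nets = networks.get("ietf-network:networks", {}).get("network", [])
          for net in nets:
              for link in net.get("ietf-network-topology:link", []):
                  te = link.get("ietf-te-topology:te", {})
                  yield (net.get("network-id"), link.get("link-id"),
                         te.get("te-link-attributes", {}))

      if __name__ == "__main__":
          for net_id, link_id, attrs in te_links(fetch_networks()):
              print(net_id, link_id, attrs.get("admin-status"))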
   - L-Controller(N): A domain controller controlling the lower-layer
     non-GMPLS domain, in the context of multi-layer networks;

   - Orchestrator(MD): An orchestrator used to orchestrate the multi-
     domain networks;

   - Orchestrator(ML): An orchestrator used to orchestrate the multi-
     layer networks.
4. Discovery Options

   In GMPLS control, the link connectivity must be verified between
   each pair of nodes. In this way, link resources, which are
   fundamental resources in the network, are discovered by both ends of
   the link.

4.1. LMP

   The Link Management Protocol (LMP) [RFC4204] runs between nodes and
   manages TE links. In addition to the setup and maintenance of
   control channels, LMP can be used to verify the data link
   connectivity and correlate the link properties.
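   To make the idea of link property correlation concrete, the
   following Python fragment compares the attributes each end has
   configured for the same TE link and reports any mismatch before the
   link is announced. The attribute names and values are illustrative
   only and do not represent the LMP wire encoding.

      # Illustrative link property correlation (not the LMP message
      # format): flag attributes on which the two link ends disagree.
      ATTRS = ("te-link-id", "switching-capability", "encoding",
               "max-bandwidth")

      def correlate(local, remote):
          """Return (attribute, local value, remote value) mismatches."""
          return [(a, local.get(a), remote.get(a))
                  for a in ATTRS if local.get(a) != remote.get(a)]

      local_view  = {"te-link-id": "1.1.1.1-2.2.2.2",
                     "switching-capability": "OTN", "encoding": "G.709",
                     "max-bandwidth": "100G"}
      remote_view = {"te-link-id": "1.1.1.1-2.2.2.2",
                     "switching-capability": "OTN", "encoding": "G.709",
                     "max-bandwidth": "10G"}
      print(correlate(local_view, remote_view))
      # -> [('max-bandwidth', '100G', '10G')]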
5. Routing Options

   In GMPLS control, link state information is flooded within the
   network as defined in [RFC4202]. Each node in the network can build
   the network topology according to the flooded link state
   information. Routing protocols such as OSPF-TE [RFC4203] and ISIS-TE
   [RFC5307] have been extended to support different interfaces in
   GMPLS.

   In a centralized controller system, the centralized controller can
   be placed in the GMPLS network and passively receive the IGP
   information flooded in the network. In this way, the centralized
   controller can construct and update the network topology.
5.1. OSPF-TE

   OSPF-TE is introduced for TE networks in [RFC3630]. OSPF extensions
   have been defined in [RFC4203] to advertise GMPLS-related link state
   information. Based on this work, OSPF has been extended to support
   technology-specific routing. The routing protocol extensions for
   Optical Transport Network (OTN), Wavelength Switched Optical Network
   (WSON) and optical flexi-grid networks are defined in [RFC7138],
   [RFC7688] and [RFC8363], respectively.
5.2. ISIS-TE

   ISIS-TE is introduced for TE networks in [RFC5305] and is extended
   to support GMPLS routing functions in [RFC5307], which has been
   updated by [RFC7074] to support the latest GMPLS Switching
   Capability and Type fields.
5.3. NETCONF/RESTCONF

   The NETCONF [RFC6241] and RESTCONF [RFC8040] protocols were
   originally designed for network configuration. These protocols can
   also utilize topology-related YANG models, such as [RFC8345] and
   [RFC8795]. These protocols provide a powerful mechanism for
   notification of topology changes to the client.
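   As a sketch of the notification mechanism, the following Python
   fragment subscribes to the default NETCONF notification stream of an
   NE or domain controller and prints whatever events (e.g., topology
   changes) the server emits. It assumes the ncclient library with RFC
   5277 notification support; the host, credentials, and the content of
   the notifications are illustrative assumptions.

      # Sketch: subscribe to NETCONF notifications and print events.
      from ncclient import manager

      with manager.connect(host="pnc.example.net", port=830,
                           username="admin", password="admin",
                           hostkey_verify=False) as m:
          m.create_subscription()              # default notification stream
          while True:
              notif = m.take_notification(block=True, timeout=300)
              if notif is None:                # nothing within the timeout
                  break
              print(notif.notification_xml)    # e.g., a topology update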
6. Path Computation

6.1. Controller-based Path Computation

   Once a controller learns the network topology, it can utilize the
   available resources to serve service requests by performing path
   computation. Due to abstraction, the controllers may not have
   sufficient information to compute the optimal path. In this case,
   the controller can interact with other controllers by sending, for
   example, YANG-based path computation requests [PAT-COMP] or PCEP
   requests, to compute a set of potentially optimal paths. Based on
   its constraints, policy, and specific knowledge (e.g. the cost of an
   access link), it can then choose the most feasible path for end-to-
   end (E2E) service path setup.

   Path computation is one of the key objectives in various types of
   controllers. In the given architecture, several different components
   may have the capability to compute the path.
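   The following Python sketch illustrates the selection step: candidate
   paths returned by lower-level controllers are combined with knowledge
   that only the requesting controller has (here, the cost of the access
   links), and the cheapest end-to-end option is chosen. All identifiers
   and cost values are invented for illustration.

      # Sketch: pick the best end-to-end option by adding locally known
      # access-link costs to the candidate costs reported by other
      # controllers.
      def select_path(candidates, access_cost):
          """candidates: dicts with 'ingress', 'egress', 'path', 'cost';
          access_cost: node -> local access-link cost."""
          def total(c):
              return (c["cost"] + access_cost.get(c["ingress"], 0)
                      + access_cost.get(c["egress"], 0))
          return min(candidates, key=total)

      candidates = [
          {"ingress": "A1", "egress": "Z1", "path": ["A1", "B", "Z1"],
           "cost": 10},
          {"ingress": "A2", "egress": "Z1", "path": ["A2", "C", "Z1"],
           "cost": 8},
      ]
      access_cost = {"A1": 1, "A2": 6, "Z1": 1}
      print(select_path(candidates, access_cost))  # the "A1" option wins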
6.2. Constraint-based Path Computing in GMPLS Control

   In GMPLS control, a routing path may be computed by the ingress node
   ([RFC3473]) based on the ingress node Traffic Engineering Database
   (TED). In this case, constraint-based path computation is performed
   according to the local policy of the ingress node.
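   A minimal sketch of constraint-based path computation is shown below:
   TE links that do not satisfy the bandwidth constraint are pruned from
   the TED, and a shortest-path search is run on the remaining graph.
   The topology and bandwidth figures are a toy example, not a real TED.

      # Minimal CSPF sketch: prune links lacking the requested bandwidth,
      # then run a shortest-path (Dijkstra-style) search on the rest.
      import heapq

      def cspf(links, src, dst, bw):
          """links: (a, b, cost, available_bw); returns node list or None."""
          adj = {}
          for a, b, cost, avail in links:
              if avail >= bw:                  # keep only feasible links
                  adj.setdefault(a, []).append((b, cost))
                  adj.setdefault(b, []).append((a, cost))
          best = {src: 0}
          heap = [(0, src, [src])]
          while heap:
              d, node, path = heapq.heappop(heap)
              if node == dst:
                  return path
              for nxt, cost in adj.get(node, []):
                  nd = d + cost
                  if nd < best.get(nxt, float("inf")):
                      best[nxt] = nd
                      heapq.heappush(heap, (nd, nxt, path + [nxt]))
          return None

      links = [("A", "B", 1, 10), ("B", "C", 1, 2), ("A", "C", 5, 10)]
      print(cspf(links, "A", "C", bw=5))       # ['A', 'C']; B-C is pruned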
6.3. Path Computation Element (PCE)

   The PCE was first introduced in [RFC4655] as a functional component
   that offers services for computing paths within a network. In
   [RFC5440], path computation is achieved using the TED, which
   maintains a view of the link resources in the network. The
   introduction of PCE has significantly improved the quality of
   network planning and offline computation. However, there is a
   potential risk that the computed path may be infeasible when there
   is a diversity requirement, as stateless PCE lacks knowledge about
   previously computed paths.
   To address this issue, a stateful PCE has been proposed in [RFC8231].
   Besides the TED, an additional LSP Database (LSP-DB) is introduced
   to archive each LSP computed by the PCE. This way, the PCE can easily
   determine the relationship between the path being computed and
   previously computed paths. In this approach, the PCE provides
   computed paths to the PCC, and then the PCC decides which path is
   deployed and when it is to be established.
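   The diversity check enabled by the LSP-DB can be sketched as follows
   in Python: a newly computed path is accepted only if it shares no
   link with the existing LSPs it must be diverse from. Paths are plain
   node lists and all identifiers are invented.

      # Sketch of a stateful PCE diversity check against the LSP-DB.
      def links_of(path):
          """Set of undirected links used by a path (list of nodes)."""
          return {frozenset(hop) for hop in zip(path, path[1:])}

      def is_link_disjoint(candidate, lsp_db, diverse_from):
          """lsp_db: name -> node list; diverse_from: names to avoid."""
          used = set()
          for name in diverse_from:
              used |= links_of(lsp_db[name])
          return not (links_of(candidate) & used)

      lsp_db = {"working": ["A", "B", "C", "Z"]}
      print(is_link_disjoint(["A", "D", "C", "Z"], lsp_db, ["working"]))
      # -> False (shares link C-Z)
      print(is_link_disjoint(["A", "D", "E", "Z"], lsp_db, ["working"]))
      # -> True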
   With PCE-Initiated LSPs [RFC8281], PCE can trigger the PCC to
   perform setup, maintenance, and teardown of the PCE-initiated LSP
   under the stateful PCE model. This would allow a dynamic network
   that is centrally controlled and deployed.
   In a centralized controller system, the PCE can be implemented
   within the centralized controller. The centralized controller then
   calculates paths based on its local policies. Alternatively, the PCE
   can be located outside of the centralized controller. In this
   scenario, the centralized controller functions as a PCC and sends
   path computation requests to the PCE using PCEP. A reference
   architecture for this can be found in [RFC7491].
7. Signaling Options

   Signaling mechanisms are used to set up LSPs in GMPLS control.
   Messages are sent hop by hop between the ingress node and the egress
   node of the LSP to allocate labels. Once the labels are allocated
   along the path, the LSP setup is accomplished. Signaling protocols
   such as Resource Reservation Protocol - Traffic Engineering (RSVP-
   TE) [RFC3473] have been extended to support different interfaces in
   GMPLS.
7.1. RSVP-TE

   RSVP-TE is introduced in [RFC3209] and extended to support GMPLS
   signaling in [RFC3473]. Several label formats are defined for a
   generalized label request, a generalized label, a suggested label
   and label sets. Based on [RFC3473], RSVP-TE has been extended to
   support technology-specific signaling. The RSVP-TE extensions for
   OTN, WSON, and optical flexi-grid networks are defined in [RFC7139],
   [RFC7689], and [RFC7792], respectively.
8. Interworking Scenarios

8.1. Topology Collection & Synchronization

   Topology information is necessary on both network elements and
   controllers. The topology on a network element is usually raw
   information, while the topology used by the controller can be either
   raw, reduced, or abstracted. Three different abstraction methods
   have been described in [RFC8453], and different controllers can
   select the corresponding method depending on the application.

   When there are changes in the network topology, the impacted network
   elements need to report changes to all the other network elements,
   together with the controller, to sync up the topology information.
   The inter-NE synchronization can be achieved via the protocols
   mentioned in Sections 4 and 5. The topology synchronization between
   NEs and controllers can be achieved either by the routing protocols
   OSPF-TE/PCEP-LS in [PCEP-LS] or by NETCONF protocol notifications
   with the corresponding YANG models.
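   A trivial Python sketch of the synchronization step is shown below:
   the controller compares its cached topology snapshot with a freshly
   reported one and derives which links were added, removed, or
   modified. Snapshots are plain dictionaries and all identifiers are
   invented.

      # Sketch: diff two topology snapshots (link-id -> attributes).
      def topology_diff(cached, reported):
          added    = {l: reported[l] for l in reported.keys() - cached.keys()}
          removed  = {l: cached[l]   for l in cached.keys() - reported.keys()}
          modified = {l: reported[l] for l in cached.keys() & reported.keys()
                      if cached[l] != reported[l]}
          return added, removed, modified

      cached   = {"L1": {"bw": "100G"}, "L2": {"bw": "10G"}}
      reported = {"L1": {"bw": "50G"},  "L3": {"bw": "10G"}}
      print(topology_diff(cached, reported))
      # -> ({'L3': {'bw': '10G'}}, {'L2': {'bw': '10G'}},
      #     {'L1': {'bw': '50G'}})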
8.2. Multi-domain Service Provisioning

   Service provisioning can be deployed based on the topology
   information on controllers and network elements. Many methods have
   been specified for single-domain service provisioning, such as the
   PCEP and RSVP-TE methods.

   Multi-domain service provisioning would require coordination among
   the controller hierarchies. Given the service request, the end-to-
   end delivery procedure may include interactions at any level (i.e.
   interface) in the hierarchy of the controllers (e.g. MPI and SBI for
   ACTN). The computation for a cross-domain path is usually completed
   by controllers that have a global view of the topologies. Then the
   configuration is decomposed into lower-level controllers, to
   configure the network elements to set up the path.
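   The decomposition step can be sketched in Python as follows: the
   orchestrator splits the end-to-end path, expressed as (domain, node)
   hops, into per-domain segments that are then handed to the
   respective domain controllers. Domain and node names are invented.

      # Sketch: decompose an E2E path into per-domain segments.
      from itertools import groupby

      def decompose(e2e_path):
          """Split [(domain, node), ...] into {domain: [node, ...]}."""
          return {dom: [node for _, node in hops]
                  for dom, hops in groupby(e2e_path, key=lambda h: h[0])}

      e2e = [("D1", "A"), ("D1", "B"), ("D2", "C"), ("D2", "D"),
             ("D2", "E")]
      for domain, segment in decompose(e2e).items():
          print(f"ask controller of {domain} to set up segment {segment}")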
   interworking of RSVP-TE protocols between different domains.

   There are two possible methods:

   1.1) One single end-to-end RSVP-TE session

   In this method, an end-to-end RSVP-TE session from the source node
   to the destination node will be used to create the inter-domain
   path. A typical example would be the PCE Initiation scenario, in
   which a PCE message (PCInitiate) is sent from the controller(G) to
   the source node, triggering an RSVP procedure along the path.
   Similarly, the interaction between the controller and the source
   node of the source domain can be achieved by using the NETCONF
   protocol with corresponding YANG models, and then it can be
   completed by running RSVP among the network elements.
   1.2) LSP Stitching

   The LSP stitching method defined in [RFC5150] can also create the
   E2E LSP. I.e., when the source node receives an end-to-end path
   creation request (e.g., using PCEP or NETCONF protocol), the source
   orchestrator(MD) the list of available labels (e.g. timeslots if OTN
   is the scenario) using the IETF Topology YANG model and related
   technology-specific extension. Once the orchestrator(MD) has
   computed the E2E path, RSVP-TE or PCEP can be used in the different
   domains to set up the related segment tunnel consisting of label
   inter-domain information, e.g. for PCEP, the label Explicit Route
   Object (ERO) can be included in the PCInitiate message to indicate
   the inter-domain labels, so that each border node of each domain can
   configure the correct cross-connect within itself.
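   The choice of an inter-domain label can be sketched as a simple set
   intersection over the available labels that the two adjacent domains
   advertise in their topologies, as in the Python fragment below. The
   timeslot values are invented, and the result stands in for the label
   that would be carried in the per-domain ERO; this is not a PCEP or
   RSVP-TE encoding.

      # Sketch: pick a common label (e.g., an OTN timeslot) for an
      # inter-domain link from the label sets reported by both domains.
      def pick_inter_domain_label(avail_d1, avail_d2):
          common = sorted(set(avail_d1) & set(avail_d2))
          if not common:
              raise ValueError("no common label on the inter-domain link")
          return common[0]              # simplest policy: lowest in common

      ts = pick_inter_domain_label(avail_d1=[3, 5, 7, 9],
                                   avail_d2=[2, 5, 9])
      print(f"use timeslot {ts} at the domain boundary")   # timeslot 5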
8.3. Multi-layer Service Provisioning

   GMPLS can interwork with centralized controller systems in multi-
   layer networks.
      Figure 4: GMPLS-controller interworking in multi-layer networks
   An example with two layers of network is shown in Figure 4. In this
   example, the GMPLS control plane is enabled in at least one layer
   network (otherwise, it is out of the scope of this document) and
   interworks with the controller of its domain (H-Controller and L-
   Controller, respectively). The Orchestrator(ML) is used to
   coordinate the control of the multi-layer network.
8.3.1. Multi-layer Path Computation

   [RFC5623] describes three inter-layer path computation models and
   four inter-layer path control models:

   - 3 Path computation:

      o Single PCE path computation model

      o Multiple PCE path computation with inter-PCE communication
        model
        the centralized controller system, the path computation
        requests are typically from the Orchestrator(ML) (acting as
        VNTM).

      o For the other two path control models, "PCE-VNTM cooperation"
        and "Higher-layer signaling trigger", the path computation is
        triggered by the NEs, i.e., the NE performs PCC functions.
        These two models can still be used, although they are not the
        main methods.
8.3.2. Cross-layer Path Creation

   In a multi-layer network, a lower-layer LSP in the lower-layer
   network can be created, which will construct a new link in the
   higher-layer network. Such a lower-layer LSP is called a
   Hierarchical LSP, or H-LSP for short; see [RFC6107].

   The new link constructed by the H-LSP can then be used by the
   higher-layer network to create new LSPs.

   As described in [RFC5212], two methods are introduced to create the
   The source node of the higher-layer LSP follows the procedure
   defined in Section 4 of [RFC6001] to trigger the GMPLS control
   plane in both the higher-layer network and the lower-layer network
   to create the higher-layer LSP and the lower-layer H-LSP.

   On success, the source node of the H-LSP should report the
   information of the H-LSP to the L-Controller(G) via, for example, a
   PCRpt message.
8.3.3. Link Discovery

   If the higher-layer network and the lower-layer network are under
   the same GMPLS control plane instance, the H-LSP can be a Forwarding
   Adjacency LSP (FA-LSP). Then the information of the link constructed
   by this FA-LSP, called a Forwarding Adjacency (FA), can be advertised
   in the routing instance, so that the H-Controller can be aware of
   this new FA. [RFC4206] and the subsequent updates to it (including
   [RFC6001] and [RFC6107]) describe the detailed extensions to support
   advertisement of an FA.

   If the higher-layer network and the lower-layer network are under
   separate GMPLS control plane instances, or one of the layer networks
   is a non-GMPLS domain, then after an H-LSP is created in the lower-
   layer network, the link discovery procedure will be triggered in the
   higher-layer network to discover the information of the link
   constructed by the H-LSP. The LMP protocol defined in [RFC4204] can
   be used if the higher-layer network supports GMPLS. The information
   of this new link will be advertised to the H-Controller.
8.4. Recovery

   The GMPLS recovery functions are described in [RFC4426]. Span
   protection, end-to-end protection, and restoration are discussed
   with different protection schemes and message exchange requirements.
   Related RSVP-TE extensions to support end-to-end recovery are
   described in [RFC4872]. The extensions in [RFC4872] include
   protection, restoration, preemption, and rerouting mechanisms for an
   end-to-end LSP. Besides end-to-end recovery, a GMPLS segment
   recovery mechanism is defined in [RFC4873], which also intends to be
   compatible with Fast Reroute (FRR) (see [RFC4090], which defines
   RSVP-TE extensions for the FRR mechanism, and [RFC8271], which
   describes updates to the GMPLS RSVP-TE protocol for FRR of GMPLS TE-
   LSPs).
8.4.1. Span Protection

   Span protection refers to the protection of the link between two
   neighboring switches. The main protocol requirements include:

   - Link management: link property correlation on the link protection
     type;

   - Routing: announcement of the link protection type;

   - Signaling: indication of the link protection requirement for that
     LSP.

   GMPLS already supports the above requirements, and there are no new
   requirements in the scenario of interworking between GMPLS and a
   centralized controller system.
8.4.2. LSP Protection

   The LSP protection includes end-to-end and segment LSP protection.
   For both cases:

   - In the provisioning phase:

     In both single-domain and multi-domain scenarios, the disjoint
     path computation can be done by the centralized controller system,
     as it has the global topology and resource view (see the sketch
     after this list). The path creation can then be done by the
     procedure described in Section 8.2.
   - In the protection switchover phase:

     In both single-domain and multi-domain scenarios, the existing
     standards provide a distributed way to trigger the protection
     switchover, for example, the data plane Automatic Protection
     Switching (APS) mechanism described in [G.808.1], [RFC7271] and
     [RFC8234], or the GMPLS Notify mechanism described in [RFC4872]
     and [RFC4873]. In the scenario of interworking between GMPLS and a
     centralized controller system, using these distributed mechanisms
     rather than a centralized mechanism (i.e., the controller triggers
     the protection switchover) can significantly shorten the
     protection switching time.
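   The disjoint path computation mentioned in the provisioning phase
   can be sketched in Python with the simple two-step approach below:
   compute the working path first, then remove its links and compute
   the protecting path on what remains. This is not optimal in all
   cases (an optimal disjoint pair would use, for example, Suurballe's
   algorithm), and the topology is a toy example.

      # Sketch: two-step computation of a link-disjoint protection path.
      import heapq

      def shortest(links, src, dst):
          """Plain Dijkstra over (a, b, cost) links; returns a node list."""
          adj = {}
          for a, b, cost in links:
              adj.setdefault(a, []).append((b, cost))
              adj.setdefault(b, []).append((a, cost))
          seen, heap = set(), [(0, src, [src])]
          while heap:
              d, node, path = heapq.heappop(heap)
              if node == dst:
                  return path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, cost in adj.get(node, []):
                  if nxt not in seen:
                      heapq.heappush(heap, (d + cost, nxt, path + [nxt]))
          return None

      def disjoint_pair(links, src, dst):
          working = shortest(links, src, dst)
          used = {frozenset(h) for h in zip(working, working[1:])}
          spare = [l for l in links if frozenset(l[:2]) not in used]
          return working, shortest(spare, src, dst)

      links = [("A", "B", 1), ("B", "Z", 1), ("A", "C", 2), ("C", "Z", 2)]
      print(disjoint_pair(links, "A", "Z"))
      # -> (['A', 'B', 'Z'], ['A', 'C', 'Z'])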
8.4.3. Single-domain LSP Restoration

   - Pre-planned LSP protection (including shared-mesh restoration):

     In pre-planned protection, the protecting LSP is established only
     in the control plane in the provisioning phase, and will be
     activated in the data plane once a failure occurs.

     In the scenario of interworking between GMPLS and a centralized
     controller system, the route of the protecting LSP can be computed
     by the centralized controller system. This takes the advantage of
     network.

     In the scenario of interworking between GMPLS and a centralized
     controller system, the pre-computation of the alternate route
     could take place in the centralized controller (and may be stored
     in the controller or the head-end node of the LSP). In this way,
     any changes in the network can trigger a refresh of the alternate
     route by the centralized controller. This ensures that the
     alternate route does not become out of date.
7.4.4. Multi-domain LSP Restoration 8.4.4. Multi-domain LSP Restoration
A working LSP may traverse multiple domains, each of which may or
may not support a GMPLS distributed control plane.
If all the domains support GMPLS, both the end-to-end rerouting
method and the domain segment rerouting method could be used.
If only some domains support GMPLS, the domain segment rerouting
method could be used in those GMPLS domains. For domains that do not
support GMPLS, other mechanisms may be used to protect the LSP
segments; these are out of scope for this document.
1) End-to-end rerouting:
In this scenario, a failure on the working LSP inside any domain or
on an inter-domain link triggers end-to-end restoration.
In both pre-planned and full LSP rerouting, the end-to-end
protecting LSP could be computed by the centralized controller
system and could be created by the procedure described in Section
8.2. Note that the end-to-end protecting LSP may traverse different
domains from the working LSP, depending on the result of the multi-
domain path computation for the protecting LSP.
[Figure not fully reproduced here: an Orchestrator(MD) coordinating
multiple per-domain Controllers.]
skipping to change at page 24, line 9
- Report of the result of intra-domain segment rerouting to its
Controller(G), and then to the Orchestrator(MD). The former could be
supported by the PCRpt message in [RFC8231], while the latter could
be supported by the MPI interface of ACTN.
- Report of an inter-domain link failure to the two Controllers
(e.g., Controller(G) 1 and Controller(G) 2 in Figure 7) that control
the two ends of the inter-domain link, and then to the
Orchestrator(MD). The former could be done as described in Section
8.1 of this document, while the latter could be supported by the MPI
interface of ACTN.
- Computation of the rerouting path or path segment crossing
multiple domains by the centralized controller system (see
[PAT-COMP]);
- Creation of the rerouting LSP segment in each related domain. The
Orchestrator(MD) can send the LSP segment rerouting request to the
source Controller(G) (e.g., Controller(G) 1 in Figure 7) via the MPI
interface, and then the Controller(G) can trigger the creation of
the rerouting LSP segment through multiple GMPLS domains using GMPLS
skipping to change at page 25, line 5
[Figure body not fully reproduced here: three domains are shown,
with the working LSP (====), the rerouting LSP segment (****), and
the failure location (/~/) marked.]

Figure 7: Inter-domain segment rerouting
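The segment rerouting flow listed above can be pictured with the
following purely illustrative Python sketch; the classes and method
names are assumptions that loosely mirror the steps (failure report
towards the orchestrator, multi-domain computation, and segment
creation request), and they are not real PCEP or ACTN MPI bindings.

   # Illustrative sketch only: not real PCEP or ACTN MPI bindings.

   class DomainController:
       """A per-domain Controller(G) in front of a GMPLS domain."""

       def __init__(self, name, orchestrator):
           self.name = name
           self.orchestrator = orchestrator

       def report_link_failure(self, link):
           # The inter-domain link failure is reported upwards over
           # the MPI towards the Orchestrator(MD).
           self.orchestrator.on_link_failure(self.name, link)

       def create_segment(self, segment):
           # Here the controller would trigger GMPLS signaling
           # (RSVP-TE) of the rerouted LSP segment in its domain(s).
           print(self.name, "signals rerouted segment", segment)


   class Orchestrator:
       """The multi-domain Orchestrator(MD)."""

       def __init__(self):
           self.source_controller = None

       def on_link_failure(self, reporter, link):
           print("failure of", link, "reported by", reporter)
           # Multi-domain computation of the rerouting path segment;
           # a fixed answer stands in for a real computation here.
           segment = ["D1.border", "D2.border", "D2.egress"]
           # Request creation of the rerouted segment at the source
           # domain's Controller(G) over the MPI.
           self.source_controller.create_segment(segment)


   orch = Orchestrator()
   c1 = DomainController("Controller(G) 1", orch)
   orch.source_controller = c1
   c1.report_link_failure("inter-domain link D1-D2")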
8.4.5. Fast Reroute
[RFC4090] defines two methods of fast reroute: the one-to-one backup
method and the facility backup method. For both methods:
1) Path computation of the protecting LSP:
In Section 6.2 of [RFC4090], the protecting LSP (detour LSP in one-
to-one backup, or bypass tunnel in facility backup) could be
computed by the Point of Local Repair (PLR) using, for example,
Constraint-based Shortest Path First (CSPF) computation. In the
skipping to change at page 25, line 39
[RFC4873] could be used to explicitly indicate the route of the
protecting LSP.
3) Failure detection and traffic switchover:
If a PLR detects a failure, it may significantly shorten the
protection switching time by using the distributed mechanisms
described in [RFC4090] to switch the traffic to the related detour
LSP or bypass tunnel, rather than switching in a centralized way.
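The following sketch illustrates this division of labour under
stated assumptions: it is not an implementation of RFC 4090, a
generic graph library (networkx) stands in for the controller's path
computation, only the link-protection bypass case is shown, and the
function name is invented for this example. The controller pre-
computes a bypass around each downstream link of the working path;
each PLR then switches onto its bypass locally when the protected
link fails.

   # Illustrative sketch only: not an implementation of RFC 4090.

   import networkx as nx

   def precompute_bypasses(topology, working_path):
       """For every PLR on the working path, compute a bypass around
       its immediate downstream link (facility backup, link
       protection)."""
       bypasses = {}
       for plr, downstream in zip(working_path, working_path[1:]):
           spare = topology.copy()
           spare.remove_edge(plr, downstream)   # assume this link fails
           try:
               bypasses[plr] = nx.shortest_path(spare, plr, downstream)
           except nx.NetworkXNoPath:
               bypasses[plr] = None             # hop cannot be protected
       return bypasses

   g = nx.Graph([("A", "B"), ("B", "C"), ("A", "D"),
                 ("D", "C"), ("B", "E"), ("E", "C")])
   print(precompute_bypasses(g, ["A", "B", "C"]))
   # {'A': ['A', 'D', 'C', 'B'], 'B': ['B', 'E', 'C']}

Each bypass would then be installed at its PLR (for instance, the
controller could convey the hops in an explicit route); on failure,
the PLR switches traffic onto its bypass without waiting for the
controller, as described in [RFC4090].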
8.5. Controller Reliability
The reliability of the controller is crucial, given its important
role in the network. If the controller is shut down or disconnected
from the network, it is essential that all services currently
provisioned in the network continue to function and carry traffic.
In addition, protection switching to pre-established paths should
also continue to work. It is desirable to have protection
mechanisms, such as redundancy, to maintain full operational control
even if one instance of the controller fails. This can be achieved
through controller backup or functionality backup; several
controller backup and federation mechanisms exist in the literature.
It is also more reliable to have function backup in the network
elements to guarantee performance in the network.
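As a toy sketch of such redundancy (the class, timeout value, and
selection logic are assumptions made for illustration, not taken
from any specification), an active/standby controller pair could be
monitored with simple heartbeats:

   # Illustrative sketch only: a toy active/standby controller pair.

   import time

   class ControllerInstance:
       def __init__(self, name):
           self.name = name
           self.last_heartbeat = time.monotonic()

       def heartbeat(self):
           self.last_heartbeat = time.monotonic()

       def alive(self, timeout=3.0):
           return time.monotonic() - self.last_heartbeat < timeout


   def active_controller(primary, standby):
       """Select the instance that currently owns provisioning.

       While neither instance is reachable, already provisioned LSPs
       keep carrying traffic and local protection switching still
       works; the controller is only needed for new provisioning."""
       return primary if primary.alive() else standby


   primary = ControllerInstance("controller-1")
   standby = ControllerInstance("controller-2")
   print(active_controller(primary, standby).name)   # controller-1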
9. Manageability Considerations
Each network entity, including controllers and network elements,
should be managed properly and with the relevant trust and security
policies applied (see Section 10 of this document), as they will
interact with other entities. The manageability considerations in
controller hierarchies and network elements still apply,
respectively. The overall manageability of the protocols applied in
the network should also be a key consideration.
The responsibility of each entity should be clarified. The control
of function and policy among different controllers should be
consistent via a proper negotiation process.
10. Security Considerations
This document outlines the interworking between GMPLS and controller
hierarchies. The security requirements specific to both systems
remain applicable. The protocols referenced in this document have
their own security considerations, which must be followed; their
core specifications, cited earlier in this document, detail the
known risks.
Security is a critical aspect of both GMPLS and controller-based
networks. Robust security mechanisms in these environments are
essential to safeguard against potential threats and
vulnerabilities. The following considerations expand on this, with
some relevant IETF RFC references.
- Authentication and Authorization: It is essential to implement
strong authentication and authorization mechanisms to control
access to the controller from multiple network elements. This
ensures that only authorized devices and users can interact with
the controller, preventing unauthorized access that could lead to
network disruptions or data breaches. Transport Layer Security
(TLS) Protocol [RFC8446] and Enrollment over Secure Transport
[RFC7030] provide guidelines on secure communication and
certificate-based authentication that can be leveraged for these
purposes (a minimal illustrative sketch is given after this list).
- Controller Security: The controller's security is crucial as it
serves as the central control point for the network elements. The
controller must be protected against various attacks, such as
Denial-of-Service (DoS), Man-In-The-Middle (MITM), and
unauthorized access. Security mechanisms should include regular
security audits, application of security patches, and deployment of
firewalls and Intrusion Detection/Prevention Systems (IDS/IPS).
- Data Transport Security: Security mechanisms on the controller
should also safeguard the underlying network elements against
unauthorized usage of data transport resources. This includes
encryption of data in transit to prevent eavesdropping and
tampering, as well as ensuring data integrity and confidentiality.
- Secure Protocol Implementation: Protocols used within the GMPLS
and controller frameworks must be implemented with security in
mind. Known vulnerabilities should be addressed, and secure
versions of protocols should be used wherever possible.
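The following sketch, given purely for illustration, shows how a
network element might open a mutually authenticated TLS 1.3 session
towards its controller, in the spirit of the first bullet above; the
host name, port number, and file names are placeholders, and the
client certificate could, for example, have been enrolled via EST
[RFC7030].

   # Illustrative sketch only: host, port, and file names are
   # placeholders for this example.

   import socket
   import ssl

   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
   ctx.minimum_version = ssl.TLSVersion.TLSv1_3
   ctx.load_verify_locations("controller-ca.pem")    # trust anchor
   ctx.load_cert_chain("ne-cert.pem", "ne-key.pem")  # NE credential

   with socket.create_connection(("controller.example.net", 4189)) as tcp:
       with ctx.wrap_socket(
               tcp, server_hostname="controller.example.net") as tls:
           # Only after both peers have authenticated each other would
           # control-channel messages (e.g., PCEP or NETCONF) be
           # exchanged over this session.
           print("negotiated", tls.version())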
Finally, robust network security often depends on Indicators of
Compromise (IoCs) to detect, trace, and prevent malicious activities
in networks or endpoints. These are described in [RFC9424] along
with the fundamentals, opportunities, operational limitations, and
recommendations for IoC use.
11. IANA Considerations
This document requires no IANA actions.
12. References

12.1. Normative References
[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
          and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
          Tunnels", RFC 3209, December 2001.

[RFC3473] Berger, L., Ed., "Generalized Multi-Protocol Label
          Switching (GMPLS) Signaling Resource ReserVation Protocol-
          Traffic Engineering (RSVP-TE) Extensions", RFC 3473,
          January 2003.
skipping to change at page 28, line 40
(MLN/MRN)", RFC 6001, October 2010.

[RFC6107] Shiomoto, K. and A. Farrel, "Procedures for Dynamically
          Signaled Hierarchical Label Switched Paths", RFC 6107,
          February 2011.

[RFC6241] Enns, R., Bjorklund, M., Schoenwaelder, J., and A.
          Bierman, "Network Configuration Protocol (NETCONF)",
          RFC 6241, June 2011.
[RFC7030] Pritikin, M., Yee, P., and D. Harkins, "Enrollment over
          Secure Transport", RFC 7030, October 2013.
[RFC7074] Berger, L. and J. Meuric, "Revised Definition of the GMPLS
          Switching Capability and Type Fields", RFC 7074, November
          2013.

[RFC7491] King, D. and A. Farrel, "A PCE-Based Architecture for
          Application-Based Network Operations", RFC 7491, March
          2015.

[RFC7926] Farrel, A., Drake, J., Bitar, N., Swallow, G., Ceccarelli,
          D., and X. Zhang, "Problem Statement and Architecture for
skipping to change at page 29, line 21
[RFC8271] Taillon, M., Saad, T., Gandhi, R., Ali, Z., and M. Bhatia,
          "Updates to the Resource Reservation Protocol for Fast
          Reroute of Traffic Engineering GMPLS Label Switched
          Paths", RFC 8271, October 2017.

[RFC8282] Oki, E., Takeda, T., Farrel, A., and F. Zhang, "Extensions
          to the Path Computation Element Communication Protocol
          (PCEP) for Inter-Layer MPLS and GMPLS Traffic
          Engineering", RFC 8282, December 2017.
[RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol
          Version 1.3", RFC 8446, August 2018.
[RFC8453] Ceccarelli, D. and Y. Lee, "Framework for Abstraction and
          Control of Traffic Engineered Networks", RFC 8453, August
          2018.

[RFC8795] Liu, X., Bryskin, I., Beeram, V., Saad, T., Shah, H., and
          O. Gonzalez de Dios, "YANG Data Model for Traffic
          Engineering (TE) Topologies", RFC 8795, August 2020.
[RFC9424] Paine, K., Whitehouse, O., Sellwood, J., and A. Shaw,
          "Indicators of Compromise (IoCs) and Their Role in Attack
          Defence", RFC 9424, August 2023.

12.2. Informative References
[RFC3471] Berger, L., Ed., "Generalized Multi-Protocol Label
          Switching (GMPLS) Signaling Functional Description",
          RFC 3471, January 2003.

[RFC4202] Kompella, K., Ed. and Y. Rekhter, Ed., "Routing Extensions
          in Support of Generalized Multi-Protocol Label Switching
          (GMPLS)", RFC 4202, October 2005.

[RFC4204] Lang, J., Ed., "Link Management Protocol (LMP)", RFC 4204,
skipping to change at page 31, line 46
Engineering Tunnels and Interfaces", draft-ietf-teas-yang-te,
          work in progress.

[sPCE-ID] Dugeon, O., et al., "PCEP Extension for Stateful Inter-
          Domain Tunnels", draft-ietf-pce-stateful-interdomain,
          work in progress.

[G.808.1] ITU-T, "Generic protection switching - Linear trail and
          subnetwork protection", G.808.1, May 2014.
13. Contributors
Xianlong Luo
Huawei Technologies
G1, Huawei Xiliu Beipo Village, Songshan Lake
Dongguan
Guangdong, 523808 China
Email: luoxianlong@huawei.com

Sergio Belotti
Nokia
Email: sergio.belotti@nokia.com
14. Authors' Addresses
Haomian Zheng
Huawei Technologies
H1, Huawei Xiliu Beipo Village, Songshan Lake
Dongguan
Guangdong, 523808 China
Email: zhenghaomian@huawei.com

Yunbin Xu
CAICT
skipping to change at page 32, line 36
Nokia
Email: Dieter.Beller@nokia.com

Yi Lin
Huawei Technologies
H1, Huawei Xiliu Beipo Village, Songshan Lake
Dongguan
Guangdong, 523808 China
Email: yi.lin@huawei.com
Acknowledgements
The authors would like to thank Jim Guichard, Area Director of the
IETF Routing Area, Vishnu Pavan Beeram, Chair of the TEAS WG, Jia He
and Stewart Bryant, rtgdir reviewers, Thomas Fossati, Gen-ART
reviewer, Yingzhen Qu, opsdir reviewer, David Mandelberg, secdir
reviewer, and David Dong, IANA Services Sr. Specialist, for their
reviews and comments on this document.