Provision Transponder and Muxponder Cards

This chapter describes the transponder and muxponder cards used in Cisco NCS 2000 SVO and their related tasks.

The following table lists the package support for the transponder and muxponder cards.

| Card | SSON Package (12.xx-xxxx-xx.xx-S-SPA) | MSTP Package (12.xx-xxxx-xx.xx-L-SPA) |
| --- | --- | --- |
| 10x10G-LC | Yes | Yes |
| CFP-LC | Yes | Yes |
| MR-MXP | Yes | Yes |
| 100G-LC-C | Yes | Yes |
| 100G-CK-C | Yes | Yes |
| 100GS-CK-C | Yes | Yes |
| 200G-CK-C | Yes | Yes |
| 400G-XP | Yes | Yes |
| 40E-MXP-C | Yes | Yes |
| OTU2-XP | Yes | Yes |
| 1.2T-MXP | Yes | Yes |

10x10G-LC Card

10x10G-LC Card

In this chapter, "10x10G-LC" refers to the 15454-M-10x10G-LC card.

The 10x10G-LC card is a DWDM client card, which simplifies the integration and transport of 10 Gigabit Ethernet interfaces and services to enterprises or service provider optical networks. The 10x10G-LC card is supported on Cisco NCS 2000 Series platforms.

The 10x10G-LC card is a single-slot card and can be installed in any service slot of the chassis. The 10x10G-LC card consists of ten SFP+ based ports (with grey-colored, coarse wavelength division multiplexing [CWDM], and DWDM optics available) and one 100 G CXP-based port.

The 10x10G-LC card interoperates with 100G-LC-C, 100G-CK-C, 100GS-CK-C, and 200G-CK-C cards through a chassis backplane.

The 10x10G-LC card supports the following signal types:

  • 10 Gigabit Ethernet LAN PHY (10.3125 Gbps)

  • OTU-2

  • G.709 overclocked to transport 10 Gigabit Ethernet as defined by ITU-T G.Sup43 Clause 7.1 (11.0957 Gbps)

  • IB_5G (supported only in TXP-10G operating mode)


Note

You may observe traffic glitches in the receive direction of client ports 7, 8, 9, and 10 on a 400G-XP-LC card that is connected to the CXP port of a 10x10G-LC card in fanout mode. To bring up traffic in such cases, change the admin state of the CXP port from OOS-DSBLD to IS-NR. Repeat the same action if you continue to observe glitches.


The key features of 10x10G-LC card are listed in Key Features of 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C, 10x10G-LC, CFP-LC, and MR-MXP Cards.

Operating Modes for 10x10G-LC Card

The 10x10G-LC card supports the following operating modes:

  • MXP-10x10G (10x10G Muxponder)

  • RGN-10G (5x10G Regenerator)/TXP-10G (5x10G Transponder)

  • Low Latency

  • Fanout-10X10G

  • TXPP-10G

Each operating mode can be configured using a specific set of cards and client payloads. Key Features of 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C, 10x10G-LC, CFP-LC, and MR-MXP Cards lists the valid port pairs for a specific operating mode and the supported payloads, and describes how each mode can be configured.

MXP-10x10G (10x10G Muxponder)

The 10x10G-LC card can be configured as a 10x10G muxponder. It can be connected with a 100G-LC-C, 100G-CK-C, or 100GS-CK-C card to support 10-port 10 G muxponder capabilities. The 100G-LC-C, 100G-CK-C, 100GS-CK-C, or 200G-CK-C card can be connected through the chassis backplane (no client CXP/CPAK is required) with the 10x10G-LC card to provide OTN multiplexing of the 10 G data streams into a single 100 G DWDM OTU4 wavelength. When the 10x10G-LC card is configured with the 100GS-CK-C card, 10 Gigabit Ethernet LAN PHY payloads are supported. The allowed slot pairs are 2-3, 4-5, 6-7, 8-9, 10-11, 12-13, or 14-15.

The 10x10G muxponder mode supports client signals that are a combination of any 10 Gigabit Ethernet LAN-PHY or OTU2 data rates.
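As a quick illustration of the slot-pairing rule above, the following Python sketch validates a candidate slot pair. This is illustrative only; the function name and data structure are ours and are not part of any SVO interface.

```python
# Allowed backplane slot pairs for the MXP-10x10G mode (from this section).
ALLOWED_SLOT_PAIRS = {(2, 3), (4, 5), (6, 7), (8, 9), (10, 11), (12, 13), (14, 15)}

def is_valid_mxp_slot_pair(slot_a: int, slot_b: int) -> bool:
    """Return True if the two cards sit in an allowed backplane slot pair."""
    return tuple(sorted((slot_a, slot_b))) in ALLOWED_SLOT_PAIRS

assert is_valid_mxp_slot_pair(3, 2)      # 2-3 is an allowed pair
assert not is_valid_mxp_slot_pair(3, 4)  # 3-4 crosses two backplane pairs
```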

RGN-10G (5x10G Regenerator)/TXP-10G (5x10G Transponder)

The 10x10G-LC card works as a standalone card, supporting the multitransponder functionality. The 10 Gbps SFP+ ports should be paired to provide the 10 G transponder functionality for each port of the port pair. By using the grey optics SFP+ to provide the client equipment connectivity and DWDM SFP+ on the WDM side, up to five 10 G transponders are supported by a single 10x10G-LC card. Up to six 10x10G-LC cards are supported on the Cisco NCS 2006 chassis, allowing for thirty 10 Gbps transponders in a single shelf.

All ports can be equipped with or without the G.709 Digital Wrapper function that provides wide flexibility in terms of the supported services.

As the client and trunk ports are completely independent, it is also possible to equip both SFP+ of the same pair of ports with the DWDM SFP+, thereby allowing them to function as a WDM regenerator. The CXP pluggable is unused in this configuration.

Each of the SFP+ ports can be provisioned as a client or trunk. When one port is selected as a trunk, the other port of the pair is automatically selected as the client port. The allowed port pairs are 1-2, 3-4, 5-6, 7-8, or 9-10.

For RGN-10G mode, both ports are trunk ports.

It is not a constraint to provision five pairs of TXP-10G mode or five pairs of RGN-10G mode. A mix of TXP-10G and RGN-10G modes can be configured. For example, pairs 1-2 and 5-6 can be configured as TXP-10G mode and the remaining pairs as RGN-10G mode.
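The pairing rule and the mode mix can be expressed in a short sketch. The following Python fragment is illustrative only (the helper and the plan structure are ours, not an SVO command); it derives the peer of a port and checks the example plan from the text above:

```python
ALLOWED_PORT_PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]

def peer_port(port: int) -> int:
    """When one SFP+ port of a pair is the trunk, its peer is the client."""
    return port + 1 if port % 2 else port - 1

# A mix of modes is allowed, one mode per port pair (example from the text).
plan = {(1, 2): "TXP-10G", (5, 6): "TXP-10G",
        (3, 4): "RGN-10G", (7, 8): "RGN-10G", (9, 10): "RGN-10G"}
assert set(plan) <= set(ALLOWED_PORT_PAIRS)
assert peer_port(5) == 6 and peer_port(6) == 5
```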

Table 1. Supported Payload Mapping Between Two SFP+ Ports

| SFP+ Payload (Peer 1) | SFP+ Payload (Peer 2) |
| --- | --- |
| 10GE-LAN (CBR Mapped) | OTU2e or 10GE-LAN (CBR Mapped) |
| OTU2 | OC192 or OTU2 |

Low Latency

The 10x10G-LC card can be configured in low latency mode. This configuration minimizes the time spent by the signal to cross the card during the regeneration process. Although each SFP port functions as a unidirectional regenerator, adjacent SFP ports must be selected when provisioning this mode. Both ports are trunk ports. The allowed port pairs are 1-2, 3-4, 5-6, 7-8, or 9-10. A mix of TXP-10G, RGN-10G, and low latency modes can be configured.

The low latency mode supports 10GE data rates. The same payload must be provisioned on both SFP ports involved in this operating mode. GCC cannot be provisioned on the ports used in low latency mode. The low latency mode does not support terminal and facility loopback.

Fanout-10X10G

The 10x10G-LC card can be configured in the fanout-10x10G mode. The fanout configuration sets the CXP side as the client and the SFP side as the trunk. This configuration functions as ten independent transponders. The CXP lanes are managed independently, and the payload for each CXP-lane-SFP+ pair is independent of the other pairs.

The fanout configuration provides the following mapping for the port pairs:

  • CXP lane 2-SFP1

  • CXP lane 3-SFP2

  • CXP lane 4-SFP3

  • CXP lane 5-SFP4

  • CXP lane 6-SFP5

  • CXP lane 7-SFP6

  • CXP lane 8-SFP7

  • CXP lane 9-SFP8

  • CXP lane 10-SFP9

  • CXP lane 11-SFP10


    Note

    CXP lane 1 and CXP lane 12 are not supported in this configuration.


The fanout configuration supports the following payload types and mapping modes:

  • 10GE (CXP line), transparent (no mapping), 10GE (SFP)

  • 10GE (CXP line), GFP mapping, OTU2 (SFP)

  • 10GE (CXP line), CBR mapping, OTU2e (SFP)
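The CXP-lane-to-SFP mapping listed above is a fixed offset of one. The following Python sketch captures the rule, including the unsupported lanes; it is illustrative only, and the function name is ours:

```python
def fanout_sfp_for_cxp_lane(lane: int) -> int:
    """Map CXP lane n to SFP port n-1; lanes 1 and 12 are unused in fanout mode."""
    if lane in (1, 12):
        raise ValueError("CXP lanes 1 and 12 are not supported in fanout mode")
    return lane - 1

# Lanes 2 through 11 map onto SFP ports 1 through 10.
assert [fanout_sfp_for_cxp_lane(n) for n in range(2, 12)] == list(range(1, 11))
```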

TXPP-10G

Splitter protection can be implemented on the 10x10G-LC card in TXPP-10G mode. The 10x10G-LC card supports up to two splitter protection groups with one client and two trunk ports. The client and trunk ports on the two groups are:

  • Port 3 (client), port 4, and port 6 (trunks) on the first protection group

  • Port 7 (client), port 8, and port 10 (trunks) on the second protection group

Port 1 and port 2 are available for unprotected transponders and can be configured in the standard TXP-10G mode, with the first port selected as the trunk and the other port selected as the client. Two ports, port 5 and port 9, are left unused. A Y-Cable protection group cannot be defined on the same 10x10G-LC card when it is provisioned in the TXPP-10G mode. The splitter protection is supported only for 10GE traffic, with trunk ports set to disabled FEC, standard FEC, or enhanced FEC (E-FEC) mode.
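The two protection groups and the resulting port roles can be summarized programmatically. The following Python sketch is illustrative only; the data structure and the classifier function are ours, not an SVO API:

```python
# TXPP-10G splitter protection groups on the 10x10G-LC card (from this section).
PROTECTION_GROUPS = {
    1: {"client": 3, "trunks": (4, 6)},
    2: {"client": 7, "trunks": (8, 10)},
}
UNPROTECTED_PAIR = (1, 2)   # still available as a standard TXP-10G pair
UNUSED_PORTS = (5, 9)

def role_of_port(port: int) -> str:
    """Classify an SFP+ port under the TXPP-10G provisioning rules."""
    for group in PROTECTION_GROUPS.values():
        if port == group["client"]:
            return "client"
        if port in group["trunks"]:
            return "protected trunk"
    if port in UNPROTECTED_PAIR:
        return "unprotected TXP-10G"
    return "unused"

assert role_of_port(6) == "protected trunk" and role_of_port(9) == "unused"
```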

The following figure shows the 10x10G-LC card configured for splitter protection.

Figure 1. Splitter Protection on the 10x10G-LC card

For more information about the 10x10G-LC card, see http://www.cisco.com/en/US/prod/collateral/optical/ps5724/ps2006/data_sheet_c78-713296.html.

CFP-LC Card

In this chapter, "CFP-LC" refers to the _15454-M-CFP-LC card.

The CFP-LC card is a client card, which simplifies the integration and transport of 40 GE and 100 GE interfaces and services to enterprises or service provider optical networks. The CFP-LC card is supported on the Cisco NCS 2006 and NCS 2015 platforms. The CFP-LC card provides 100 Gbps services to support 100 G DWDM wavelengths generated by the 100G-LC-C card. The traffic coming from CFP interfaces is switched to the trunk port through a cross-switch.

The CFP-LC card supports the following signal types:

  • 100 Gigabit Ethernet

  • 40 Gigabit Ethernet

  • OTU-3

  • OTU-4

Client ports can be equipped with a large variety of CFP pluggables.

Key Features

The key features of CFP-LC card are listed in Key Features of 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C, 10x10G-LC, CFP-LC, and MR-MXP Cards.

The CFP-LC card is a double-slot card and can be installed in Slot 3 or Slot 5 in the Cisco NCS 2006 chassis; the 100G-LC-C peer cards must be placed in the adjacent slots (2 and 5, or 4 and 7). If the card is plugged into one of the unsupported slots or into a Cisco NCS 2002 chassis, the system raises an EQPT::MEA (Mismatch of Equipment Alarm) notification. Up to two CFP-LC cards can be installed per Cisco NCS 2006 shelf assembly, supporting up to 28x 40-Gbps or 14x 100-Gbps interfaces per 42-rack-unit (RU) bay frame.

The CFP-LC card is equipped with two 100 G CFP pluggable modules and a cross-bar embedded switch module. The CFP-LC card provides two backplane interfaces (operating at either 100 Gb or 40 Gb) that are suitable for the cross-switch application on the incoming CFP signals. The CFP-LC card can be configured to send all client CFP services towards the backplane to be connected with up to two 100G-LC-C cards placed in the two adjacent slots (upper and lower) of the Cisco NCS 2006 chassis, in order to provide two 100 G transponder configurations.

Operating Modes for CFP-LC Card

The CFP-LC card supports the following operating modes:

  • 2x40G Muxponder

  • CFP-TXP (100G Transponder)

Each operating mode can be configured using a specific set of cards and client payloads. Key Features of 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C, 10x10G-LC, CFP-LC, and MR-MXP Cards lists the valid port pairs for a specific operating mode and the supported payloads, and describes how each mode can be configured.

2x40G Muxponder

The CFP-LC card can be configured as a 2-port 40 G muxponder. It can be connected with the 100G-LC-C or 100G-CK-C card to support 2-port 40 G muxponder capabilities. The 100G-LC-C or 100G-CK-C card can be connected through the Cisco NCS 2006 backplane (no client CXP/CPAK required) with the CFP-LC card to provide OTN multiplexing of the 40 G data streams into a single 100 G WDM OTU4 wavelength. The 2x40G muxponder mode supports client signals that are any combination of 40 Gigabit Ethernet LAN-PHY and OTU3 data rates.

CFP-TXP (100G Transponder)

The CFP-LC card can be configured as a 100G transponder. It can be connected with the 100G-LC-C or 100G-CK-C card to support the client interface for the 100-Gbps transponder capabilities. The 100G CXP pluggable available on the 100G-LC card supports only the 100GE-BASE-SR10 client interface, while the 100GE-BASE-LR4 is supported using only a CFP form factor. The 100G CPAK pluggable available on the 100G-CK-C card supports the CPAK-100G-SR10 and CPAK-100G-LR4 client interfaces.

The CFP-LC card can be connected through the Cisco NCS 2006 backplane with up to two 100G-LC cards placed in the upper or lower slot of the same shelf to provide the equivalent functionalities of two 100 G LR4 transponders, leveraging on CFP pluggables as the client interface.

For more information about the CFP-LC card, see http://www.cisco.com/en/US/prod/collateral/optical/ps5724/ps2006/data_sheet_c78-713295.html

MR-MXP Card

In this chapter, "MR-MXP" refers to the NCS2K-MR-MXP card.

The MR-MXP card is a mixed rate 10G and 40G client muxponder that is supported on Cisco NCS 2000 Series platforms. The card is equipped with one CPAK port, two SFP ports, and two QSFP+ ports. The card can interoperate with 100GS-CK-C, 200G-CK-C, and 10x10G-LC cards through a chassis backplane.

Operating Modes for MR-MXP Card

The MR-MXP card supports the following 200G operating modes:

  • MXP-200G

  • MXP-10x10G-100G

  • MXP-CK-100G

Each operating mode can be configured using a specific set of cards and client payloads. The operating mode is configured on the companion trunk card (100GS-CK-C or 200G-CK-C). For more information about these operating modes, see Key Features of 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C, 10x10G-LC, CFP-LC, and MR-MXP Cards.

The MR-MXP card supports the following 100G operating modes:

  • MXP-100G

  • TXP-100G

  • 100G-B2B


Note

All 100G and 200G operating modes support the encryption feature except for the MXP-CK-100G mode.


MXP-100G

The MXP-100G operating mode is provisioned with the MR-MXP card on the client side and the adjacent 200G-CK-C or 100GS-CK-C card on the trunk side. The operating mode can be provisioned only from the client side but can be deleted from both the client and trunk sides. This mode supports 10GE as the payload. This mode uses the SFP+ and QSFP+ ports on the MR-MXP client card and the DWDM port on the 200G-CK-C or 100GS-CK-C card. The aggregate signal from the client is sent to the trunk through the backplane.

The MXP-100G operating mode can also be provisioned with the MR-MXP card on the client side and the adjacent 200G-CK-C card on the trunk side. The operating mode can be provisioned only from the client side but can be deleted from both the client and trunk sides. This mode supports 2X10GE+2X40GE as the payload. This mode uses the SFP+ and QSFP+ ports on the MR-MXP client card and the DWDM port on the 200G-CK-C card. The aggregate signal from the client is sent to the trunk through the backplane.

The operating mode can be provisioned on the following slots:

  • NCS 2006: Slots 2 and 3, 4 and 5, 6 and 7

  • NCS 2015: Slots 2 and 3, 4 and 5, 6 and 7, 8 and 9, 10 and 11, 12 and 13, 14 and 15

TXP-100G

The TXP-100G operating mode is provisioned with the MR-MXP card on the client side and the adjacent 200G-CK-C or 100GS-CK-C card on the trunk side. The operating mode can be provisioned only from the client side but can be deleted from both the client and trunk sides. This mode supports 100GE as the payload. This mode uses the CPAK port on the MR-MXP client card and the DWDM port on the 200G-CK-C or 100GS-CK-C card. The aggregate signal from the client is sent to the trunk through the backplane.

The operating mode can be provisioned on the following slots:

  • NCS 2006: Slots 2 and 3, 4 and 5, 6 and 7

  • NCS 2015: Slots 2 and 3, 4 and 5, 6 and 7, 8 and 9, 10 and 11, 12 and 13, 14 and 15

100G-B2B

The 100G-B2B operating mode can be provisioned with the MR-MXP card on the client side and the adjacent MR-MXP card on the trunk side. The operating mode performs encryption of a 100GE client signal taken from the CPAK interface, or of a 10x10GE client signal taken from the two QSFP and SFP interfaces of the client MR-MXP card, and maps it to an OTU4 signal with encryption. The OTU4 signal is passed to the trunk MR-MXP card in the peer slot through the backplane. The trunk MR-MXP card converts the OTU4 signal to a grey wavelength with either an SR-10 or an LR-4 pluggable through its CPAK interface. The 100GE client payload can be divided into either four or ten sub-lanes.

The CPAK port, or the two QSFP and two SFP+ ports, can be selected on the client card during provisioning. The operating mode can be provisioned from any MR-MXP card in the peer slot pair. When the operating mode is created, the card that you select acts as the client card, and its peer card acts as the trunk card.

The operating mode can be provisioned on the following slots:

  • NCS 2006: Slots 2 and 3, 4 and 5, 6 and 7

  • NCS 2015: Slots 2 and 3, 4 and 5, 6 and 7, 8 and 9, 10 and 11, 12 and 13, 14 and 15

Provisioning operations, such as payload or operating mode creation and FEC settings, take longer in the 100G-B2B operating mode of the MR-MXP card than in other operating modes.

Sub Operating Modes

The sub OpMode in MR-MXP cards determines the operating mode on the card client ports. For example, the QSFP+ port can be provisioned either as a 40GE port or can be divided into four 10G ports. This provisioning is controlled by the sub OpMode. The sub OpMode is created by default when the operating mode is configured on the card.

  • OPM_10x10G—This is the default sub OpMode for the MXP-100G, MXP-200G, and MXP-10x10G-100G operating modes. In this sub OpMode, the SFP and QSFP+ ports are divided in such a way that ten 10GE payloads can be provisioned. When a PPM is provisioned on a QSFP+ port, four internal ports are created. A 10 GE payload can be provisioned on each of these ports. The OPM-10x10G operating mode is provisioned with the MR-MXP card on the client side and the adjacent MR-MXP card on the trunk side. The operating mode can be provisioned only from the client side but can be deleted from both the client and trunk sides. The aggregate signal from the client is sent to the trunk through the backplane.

  • OPM_100G—This is the default sub OpMode for the MXP-CK-100G operating mode, where the CPAK port can be provisioned with a 100GE or OTU4 payload. The 100GE payload can be divided into either four or ten sub-lanes. For a 100GE payload, the OPM-100G operating mode is provisioned with the MR-MXP card on the client side and the adjacent MR-MXP card on the trunk side. For an OTU4 payload, the OPM-100G operating mode is provisioned with the MR-MXP card on the client side and the adjacent 200G-CK-C card on the trunk side. The operating mode can be provisioned only from the client side but can be deleted from both the client and trunk sides. The aggregate signal from the client is sent to the trunk through the backplane.

  • OPM_2x40G_2x10G—This sub OpMode is provisioned for the MXP-100G operating mode to support the 2X10GE+2X40GE payload. This operating mode is provisioned with the MR-MXP card on the client side and the adjacent 200G-CK-C card on the trunk side. The operating mode can be provisioned only from the client side but can be deleted from both the client and trunk sides. The aggregate signal from the client is sent to the trunk through the backplane.

    This sub OpMode is also provisioned for the MXP-200G operating mode to support the following sub OpMode combinations on both peer and skip MR-MXP cards.

    • OPM_10x10G and OPM_10x10G

    • OPM_2x40G_2x10G and OPM_2x40G_2x10G

    • OPM_2x40G_2x10G and OPM_10x10G

    • OPM_10x10G and OPM_2x40G_2x10G
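The defaults and the MXP-200G combinations listed above can be captured in a small lookup, as in the following illustrative Python sketch (the names and structures are ours, not an SVO API):

```python
# Default sub OpMode per operating mode (from this section).
DEFAULT_SUB_OPMODE = {
    "MXP-100G": "OPM_10x10G",
    "MXP-200G": "OPM_10x10G",
    "MXP-10x10G-100G": "OPM_10x10G",
    "MXP-CK-100G": "OPM_100G",
}

# MXP-200G also accepts these (peer, skip) sub OpMode combinations.
MXP_200G_COMBINATIONS = {
    ("OPM_10x10G", "OPM_10x10G"),
    ("OPM_2x40G_2x10G", "OPM_2x40G_2x10G"),
    ("OPM_2x40G_2x10G", "OPM_10x10G"),
    ("OPM_10x10G", "OPM_2x40G_2x10G"),
}

assert DEFAULT_SUB_OPMODE["MXP-200G"] == "OPM_10x10G"
assert ("OPM_2x40G_2x10G", "OPM_10x10G") in MXP_200G_COMBINATIONS
```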

Limitations for MR-MXP Card

  • Line timing is not supported.

  • GCC0 communication channel is not supported.

  • Overclocking of OTU2 payload is not supported.

  • Y cable protection is not supported.

  • Only G-FEC is supported on OTN payloads.

  • The lanes in a QSFP+ port support only homogeneous payloads.

  • Terminal loopback on the client port is not supported for the CPAK-FRS pluggable.

100G-LC-C, 100G-CK-C, 100GS-CK-C, and 200G-CK-C Cards

100G-LC-C and 100G-CK-C Cards

In this chapter, "100G-LC-C" refers to the _15454-M-100G-LC-C card. "100G-CK-C" refers to the NCS2K-100G-CK-C card.

The 100G-LC-C and 100G-CK-C cards are tunable DWDM trunk cards. These cards simplify the integration and transport of 100-Gigabit Ethernet interfaces and services to enterprises or service provider optical networks. The 100GS-CK-C and 200G-CK-C cards simplify the integration and transport of 100- and 200-Gigabit Ethernet interfaces and services to enterprises or service provider optical networks. These cards are supported on Cisco NCS 2006 and Cisco NCS 2015 platforms.

The cards interoperate with 10x10G-LC and CFP-LC cards through a chassis backplane.


Note

The 100GS-CK-C and 200G-CK-LC cards do not operate with the CFP-LC card.


The cards provide the following benefits:

  • Provide 100-Gbps wavelengths transport over fully uncompensated networks, with more than 2,500 km of unregenerated optical links

  • Enable 100-Gbps transport over very high Polarization Mode Dispersion (PMD)

  • Improve overall system density with up to 100 Gbps per slot, which is five times greater than what can be achieved with 40-Gbps units

You can install up to six cards per Cisco NCS 2006 shelf, supporting up to 42 100-Gbps interfaces per 42-rack units (RU) bay frame. It is possible to place up to two 100G TXPs, one 100 G Regen, or one 100 G MXP on a Cisco NCS 2006 shelf.


Note

You must install the fan-tray assembly NCS2006-FTA= (for the NCS 2006 chassis) on the shelf where the cards are installed. When you use an ONS-SC+-10G-C pluggable along with the 10x10G-LC card, the maximum operating temperature of the shelf must not exceed 50 degrees Celsius.


The 100G-CK-C card works in a similar way as the 100G-LC-C card. The 100G-CK-C card has the new CPAK client interface replacing the CXP client interface of the 100G-LC-C card. The CPAK client interface enables different payload combinations, and so this card can be used instead of the 100G-LC-C and CFP-LC cards.

The 100G-CK-C card supports the following pluggables:

  • CPAK-100G-SR10 pluggable with 100GE/OTU4 and 40GE payloads

  • CPAK-100G-LR4 pluggable with 100GE/OTU4 payloads

  • CPAK-100G-SR4 pluggable with 100GE payloads

The 100G-LC-C card supports the following client signal types:

  • 100GE/OTU4

  • OTU4 from BP OTL4.10 (interconnect with the CFP client)

  • 100GE from BP CAUI (interconnect with CFP client)

  • 3 x OTU3e(255/227) from BP OTL3.4 (interconnect with the 10x10G client)

  • 2 x OTU3 from BP OTL3.4 (interconnect with the CFP client)

  • 2 x 40 GE from BP XLAUI (interconnect with the CFP client)

In addition to the above, the 100G-CK-C card supports the following client signal types:

  • 100GE/OTU4 for the CPAK-100G-SR10/CPAK-100G-LR4 client interface

  • 40GE for the CPAK-100G-SR10 client interface

The 100G-LC-C and 100G-CK-C cards provide a 100G DWDM trunk interface that supports up to 70000 ps/nm of CD robustness. These cards enable configuration of the CD dispersion tolerance to 50000 ps/nm and 30000 ps/nm to reduce power consumption.

100GS-CK-C and 200G-CK-C Cards

In this chapter, "100GS-CK-C" refers to the NCS2K-100GS-CK-C card. "200G-CK-C" refers to the NCS2K-200G-CK-C card.

The 100GS-CK-C and 200G-CK-C cards are tunable DWDM trunk cards, which simplify the integration and transport of 100- and 200-Gigabit Ethernet interfaces and services to enterprises or service provider optical networks. The 200G-CK-C card is an enhancement of the 100GS-CK-C card.

The 100GS-CK-C and 200G-CK-C cards provide the following benefits:

  • Allow choosing 16 QAM and QPSK as the modulation formats at the line side

  • Provide Standard G-FEC (Reed-Solomon), Soft Decision FEC (SD-FEC) encoding with 20% overhead, and Hard Decision FEC (HD-FEC) encoding with 7% overhead

  • Provide Nyquist filtering for best performance and optimal band usage

  • Support gridless tunability

  • Allow client access either through the local 100G CPAK interface or through backplane lines

  • In MXP-10X10G operating mode, allow 10GE clients (multiplexed on 100G trunk)

Key Features of 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C, 10x10G-LC, CFP-LC, and MR-MXP Cards

The 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C, 10x10G-LC, CFP-LC, and MR-MXP cards support the following key features:

  • Operating Modes—You can configure the cards into multiple operating modes. The cards can be equipped with pluggables for client and trunk options, and offer a large variety of configurations. When you configure the card into multiple operational modes, make sure that you complete the following tasks:

    • The card must be preprovisioned and the modes must be configured. None of the modes are provisioned on the card by default. All operating modes are created at the card level. This provisioning is card-specific and determines the behavior of the card.

    • Depending on the card mode selected, the supported payload for that particular card mode must be provisioned on the PPMs. The payloads can be provisioned after configuring the operational mode on the card.

For a detailed list of the supported pluggables, see http://www.cisco.com/c/en/us/td/docs/optical/spares/gbic/guides/b_ncs_pluggables.html.

For the operating modes of the respective cards, see Operating Modes for 10x10G-LC Card, Operating Modes for CFP-LC Card, Operating Modes for MR-MXP Card, and Operating Modes for 100G-LC-C, 100G-CK-C, 100GS-CK-C, and 200G-CK-C Cards.

  • Protocol Transparency—The 100G-LC-C, 100G-CK-C, 100GS-CK-C, and 200G-CK-C cards deliver any 100-Gbps services for cost-effective, point-to-point networking. The 10x10G-LC card delivers any 10-Gbps services for cost-effective, point-to-point networking. In the case of the 100 G muxponder, clients are mapped into an OTU4 DWDM wavelength.

Table 2. Transponder Client Configurations and Mapping for 100G-LC-C and 100G-CK-C Cards

| Client Format | Client Rate (Gbps) | Mapping | Trunk Format | Trunk Rate with 7% GFEC, 20% GFEC, or EFEC OH (Gbps) |
| --- | --- | --- | --- | --- |
| 100GE LAN-PHY | 103.125 | Bit transparent through standard G.709v3 mapping | OTU4 | 111.809 |
| OTU4 | 111.809 | Transparent G.709 standard | OTU4 | 111.809 |

Table 3. Transponder Client Configurations and Mapping for a 10x10G-LC Card

| Client | Rate (Gbps) | Mapping |
| --- | --- | --- |
| 10GE LAN-PHY (MXP-10x10G mode) | 10.3125 | CBR-BMP clause 17.2.4 (ex G.Sup43 7.1) + GMP ODU2e to OPU3e4 |
| 10GE LAN-PHY (MXP-10x10G mode) | 10.3125 | GFP-F clause 17.4.1 (ex G.Sup43 7.3) + GMP ODU2 to OPU3e4 |
| 10GE LAN-PHY (TXP-10G mode) | 10.3125 | CBR-BMP clause 17.2.4 (ex G.Sup43 7.1) |
| 10GE LAN-PHY (TXP-10G mode) | 10.3125 | GFP-F clause 17.4.1 (ex G.Sup43 7.3) |
| OTU2 | 10.709 | ODU transparent + GMP ODU2 to OPU3e4 |
| OTU2e | 11.095 | ODU transparent + GMP ODU2e to OPU3e4 |
| IB-5G | 5.0000 | GMP ODU2e to OPU3e4 |

Table 4. Client Configurations and Mapping for CFP-LC Card

| Client Format | Client Rate (Gbps) | Mapping | Trunk Format | Trunk Rate with 7% GFEC or EFEC OH (Gbps) |
| --- | --- | --- | --- | --- |
| 100GE LAN-PHY | 103.125 | Bit transparent through standard G.709v3 mapping | OTU4 | 111.809 |
| OTU4 | 111.809 | Transparent G.709 standard | OTU4 | 111.809 |
| 40GE LAN-PHY | 41.250 | 1024b/1027b transcoding + OPU4 GMP per G.709 Appendix VIII | OTU4 | 111.809 |
| OTU3 | 43.018 | Transparent G.709 standard | OTU4 | 111.809 |

  • Flow-Through Timing—The cards allow the timing to flow through from the client to the line optical interface. The received timing from the client interface is used to time the line transmitter interface. This flow-through timing allows multiple cards to be placed in the same shelf while each is timed independently of the NE timing.

  • Far-End Laser Control (FELC)—FELC is supported on the cards.

  • Performance Monitoring—The 100-Gbps DWDM trunk provides support for both transparent and non-transparent signal transport performance monitoring. The Digital Wrapper channel is monitored according to the G.709 (OTN) and G.8021 standards. Performance monitoring of optical parameters on the client and DWDM line interfaces includes loss of signal (LOS), Laser Bias Current, Transmit Optical Power, and Receive Optical Power. Calculation and accumulation of the performance monitoring data are supported in 15-minute and 24-hour intervals as per G.7710. Physical system parameters measured at the wavelength level, such as Mean PMD, accumulated Chromatic Dispersion, or Received OSNR, are also included in the set of performance monitoring parameters. These measurements can greatly simplify troubleshooting operations and enhance the set of data that can be collected directly from the equipment.
The performance monitoring for the CFP-LC card takes into account that the CFP-LC card is a host board supporting CFP client equipment, while the digital framing of the incoming client is implemented on the 100G cards. A virtual port connection displays the Digital Wrapper monitoring according to G.709 (OTN) and the RMON counters for Ethernet signals, while the optical performance monitoring is directly available on the CFP-LC card. Calculation and accumulation of the performance monitoring data are supported in 15-minute and 24-hour intervals according to G.7710.

  • Loopback—The terminal, facility, or backplane loopback can be provisioned on all the ports of the 100G-LC-C, 100G-CK-C, 10x10G-LC, 100GS-CK-C, and 200G-CK-C cards, configured in any operating mode except the low latency mode. The backplane facility loopback cannot be configured on the 10x10G-LC card that is configured in the MXP-10x10G mode. The loopback can be provisioned only when the port is in the OOS-MT state. A new port cannot be provisioned when the backplane loopback is configured on the 10x10G-LC card. For the CFP-LC card configured in the CFP-TXP or CFP-MXP mode, the facility or terminal loopback can be configured on the backplane of the peer 100G-LC-C, 100G-CK-C, 100GS-CK-C, and 200G-CK-C cards. Terminal and facility loopback can be provisioned on MR-MXP cards that are configured in any operating mode.

  • Fault propagation on 10GE, 40GE, and 100GE clients—A squelch option named LF is supported for GigE payloads. A local fault (LF) indication is forwarded to the client port in the downstream direction when a failure occurs on the trunk port. The LF option is supported for:

    • 10GE payloads on 10x10G-LC cards configured in the:

      • RGN-10G or TXP-10G mode

      • MXP-10x10G mode (paired with 100G-LC-C, 100G-CK-C, or 100GS-CK-C card)

      • MXP-10x10G-100G mode (paired with a 100GS-CK-C or 200G-CK-C card)

    • 100GE payloads on:

      • 100G-LC-C, 100G-CK-C, 100GS-CK-C, or 200G-CK-C cards configured in the TXP-100G mode

      • CFP-LC cards configured in the CFP-TXP mode (paired with 100G-LC-C or 100G-CK-C card)

    • 40GE payloads on:

      • CFP-LC card configured in the 2x40G Muxponder mode (paired with a 100G-LC-C or 100G-CK-C card)

      • 100G-CK-C card configured in the MXP-2x40G mode

  • Trail Trace Identifier—The Trail Trace Identifier (TTI) in the path monitoring overhead is supported in OTU and ODU OTN frames.

    • 10x10G-LC—OTU4 and ODU4 payloads

    • CFP-LC—OTU4, ODU4, OTU3, and ODU3 payloads

    • 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C—OTU4 and ODU4 payloads

    The Trail Trace Identifier Mismatch (TTIM) alarm is raised after checking only the SAPI bytes.

  • Automatic Laser Shutdown (ALS) can be configured on all ports; however, ALS is functional only on ports that are configured with OTU2 and OTU4 payloads.

  • GCC channels—GCC channels can be provisioned on the OTU2 client and trunk ports of the 10x10G-LC card, the OTU3 port (virtual port on the peer 100G-LC-C or 100G-CK-C card) of the CFP-LC card, and the OTU4 client and trunk ports of the 100G-LC-C or 100G-CK-C card.

  • Pseudo Random Binary Sequence (PRBS)—PRBS allows you to perform data integrity checks on encapsulated packet data payloads using a pseudo-random bit stream pattern. PRBS generates a bit pattern and sends it to the peer router, which uses this feature to detect whether the sent bit pattern is intact. The supported PRBS patterns are PRBS_NONE and PRBS_PN31.

  • Multivendor Interoperability—The 200G-CK-C line card can be configured to interoperate with other vendor interfaces. An option called Interop Mode is available to disable or enable interoperability. This option is available when the:

    • Modulation format is 100G-QPSK.

    • FEC is set to 7% High Gain FEC.

    • Admin state of the trunk port is set to OOS-DSBLD (Out of service and disabled).

    The behavior and performance of the card configured with the multivendor HG-FEC option are the same as with the old HG-FEC mode. There is no optical performance variation.

Operating Modes for 100G-LC-C, 100G-CK-C, 100GS-CK-C, and 200G-CK-C Cards

100G Operating Modes

Each operating mode can be configured using a specific set of cards and client payloads. The Key Features section lists the valid port pairs for a specific operating mode and the supported payloads, and describes how each mode can be configured.

The 100G-LC-C, 100G-CK-C, 100GS-CK-C, and 200G-CK-LC cards support the following 100G operating modes. You can perform the operating mode configuration for the 100G operating modes on the client card.

  • TXP-100G (Standalone 100GE Transponder)

  • RGN-100G (100G Regenerator)

TXP-100G (Standalone 100GE Transponder)

You can configure the cards as standalone 100-Gigabit Ethernet transponders. The 100-Gigabit Ethernet traffic is supported through the CXP or CPAK client and the coherent optical trunk. The 100-Gigabit Ethernet or OTU4 payload traffic is routed from the CXP or CPAK to the optical trunk, passing through the T100 framer, and in the opposite direction. The supported client signals in this mode are 100-Gigabit Ethernet LAN-PHY or OTU4 data rates.

RGN-100G (100G Regenerator)

You can configure the cards as a regenerator. You can connect the two cards to work in back-to-back mode connecting through the chassis backplane in the same shelf. The allowed slot pairs are 2–3, 4–5, 6–7, 8–9, 10–11, 12–13, or 14–15.

The card supports 100-Gigabit Ethernet or OTU4 client signals. Regeneration is performed leveraging on the OTU4 backplane interconnection. OTU4 overhead is terminated, allowing ODU4 to transparently pass through. GCC0 is terminated, while GCC1 and GCC2 are allowed to pass through.

The CXP client is not required because communication between the two cards acting as a regeneration group is supported through the chassis backplane.

MXP-2x40G

The 100G-CK-C card supports the MXP-2x40G operating mode. The 100G-CK-C card can be configured as a 2-port 40 GE muxponder. Two 40 GE flows are received through the CPAK client interface and are multiplexed into the 100G trunk interface. You can configure the traffic on the second client interface only after provisioning the traffic on the first client interface. This operating mode is not supported on the 100GS-CK-C card.


Note

The synchronization for the 100G-CK-C card is derived only from port 1. Hence, the traffic on port 2 must originate from the same synchronization source as the traffic on port 1.


200G Operating Modes

The 100GS-CK-C and 200G-CK-LC cards also support the 200G operating modes. You can perform the operating mode configuration for these modes on the trunk card.

  • MXP-200G

  • MXP-10x10G-100G

  • MXP-CK-100G

MXP-200G

Three cards are required to configure this operating mode: a trunk card, a peer card, and a skip card. The skip card is next to the peer card.

The trunk card is a 100GS-CK-C or 200G-CK-LC card; the peer and skip cards are MR-MXP cards. You can use the first 10x10G from the two SFP and two QSFP+ ports of the peer MR-MXP card. You can use the second 10x10G from the two SFP and two QSFP+ ports of the skip MR-MXP card.

The 200G-CK-LC card supports another configuration in the MXP_200G operating mode. In this configuration, 2x40GE clients on QSFP+ ports and 2x10GE clients on SFP+ ports of both peer MR-MXP and skip MR-MXP cards are multiplexed into 200G traffic on the trunk 200G-CK-LC card.

You can provision the operating mode on the following slots:

  • NCS 2006: 100GS-CK-C or 200G-CK-LC card in slots 2 or 7. The peer and skip MR-MXP cards in adjacent slots 3, 4 or 5, 6.

  • NCS 2015: 100GS-CK-C or 200G-CK-LC card in slots 2, 7, 8, 13, or 14. The peer and skip MR-MXP cards in adjacent slots.

MXP-10x10G-100G

Three cards are required to configure this operating mode: a trunk card, a peer card, and a skip card.

The trunk card is a 100GS-CK-C or 200G-CK-LC card; the peer card is a 10x10G-LC card, and the skip card is an MR-MXP card. You can use the first 10x10G from the ten SFP ports of the peer 10x10G-LC card. You can use the second 10x10G from the two SFP and two QSFP+ ports of the skip MR-MXP card.

You can provision the operating mode on the following slots:

  • NCS 2006: The 100GS-CK-C or 200G-CK-LC card in slots 2 or 7, peer, and skip MR-MXP cards in adjacent slots 3, 4 or 5, 6.

  • NCS 2015: The 100GS-CK-C or 200G-CK-LC card in slots 2, 7, 8, 13, or 14, peer, and skip MR-MXP cards in adjacent slots.

MXP-CK-100G

Two cards, a trunk card and a peer card, are required to configure this operating mode. The trunk card is a 100GS-CK-C or 200G-CK-LC card; the peer card is an MR-MXP card. The first 100G is taken from the CPAK client port of the trunk 100GS-CK-C or 200G-CK-LC card, and the second 100G is taken from the CPAK client port of the MR-MXP card.

The 200G-CK-LC card supports another configuration in the MXP_CK_100G operating mode. In this configuration, 10x10GE clients on the QSFP+ or SFP+ ports of the peer MR-MXP card and a 100GE client on the CPAK port of the 200G-CK-LC card are multiplexed into a 200G configuration on the trunk 200G-CK-LC card.

The operating mode can be provisioned on the following slots:

  • NCS 2006: 100GS-CK-C or 200G-CK-LC card and the peer MR-MXP card must be in adjacent slots 2–3, 4–5, and 6–7.

  • NCS 2015: 100GS-CK-C or 200G-CK-LC card and the peer MR-MXP card must be in adjacent slots 2–3, 4–5, 6–7, 8–9, 10–11, 12–13, and 14–15.

400G-XP Card

In this chapter, "400G-XP" refers to the NCS2K-400G-XP card.

The 400G-XP card is a tunable DWDM trunk card that simplifies the integration and transport of 10 Gigabit and 100 Gigabit Ethernet interfaces and services to enterprises and service provider optical networks. The card is a double-slot unit that provides 400 Gbps of client and 400 Gbps of trunk capacity. The card supports six QSFP+ based client ports that can be equipped with 4x 10 Gbps optics and four QSFP28 or QSFP+ based client ports that can be equipped with 100 Gbps QSFP28 and 4x 10 Gbps QSFP+ optics. The card is capable of aggregating client traffic to either of the two 200 Gbps coherent CFP2 trunk ports. The CFP2-11 trunk port of the 400G-XP card can interoperate with the 10x10G-LC card through the chassis backplane. To enable this interoperability between the 400G-XP and 10x10G-LC cards, the OPM_PEER_ODU2 and OPM_PEER_ODU2e slice modes are supported on Slice 2 when the 400G-XP card is configured in the MXP mode.

The table below details the layout constraints when the 400G-XP card is paired with the 10x10G-LC card in the Cisco NCS 2006 and Cisco NCS 2015 chassis.

Table 5. Slot Constraints for the 400G-XP and 10x10G-LC Cards

| Chassis | Slot (10x10G-LC) | Slot (400G-XP) | Notes |
| --- | --- | --- | --- |
| Cisco NCS 2006 | 2 | 3-4 | Only one of these two combinations can be deployed at a time. |
| Cisco NCS 2006 | 4 | 5-6 | |
| NCS 2015 | 2 | 3-4 | A maximum of four of these combinations can be deployed at a time. |
| NCS 2015 | 4 | 5-6 | |
| NCS 2015 | 6 | 7-8 | |
| NCS 2015 | 8 | 9-10 | |
| NCS 2015 | 10 | 11-12 | |
| NCS 2015 | 12 | 13-14 | |
| NCS 2015 | 14 | 15-16 | |

The 400G-XP card supports the following client signals:

  • 10 GE: The payload can be provisioned for the OPM_10x10G, OPM_PEER_ODU2, or OPM_PEER_ODU2e slice mode for any trunk configuration. 10GE is provisioned for the OPM_PEER_ODU2 and OPM_PEER_ODU2e slice modes in the GFP and CBR mapping modes respectively. The cross-connect circuit bandwidth is ODU2e.

  • 100 GE: The payload can be provisioned for the OPM_100G slice mode for any trunk configuration. The cross-connect circuit bandwidth is ODU4.

  • OTU2: This payload is supported only on the QSFP-4X10G-MLR pluggable. The payload can be provisioned for the OPM_10x10G or OPM_PEER_ODU2 slice mode for any trunk configuration. The cross-connect circuit bandwidth is ODU2.

  • OTU2e: This payload is supported only on the QSFP-4X10G-MLR pluggable. The payload can be provisioned for the OPM_10x10G or OPM_PEER_ODU2e slice mode for any trunk configuration. The cross-connect circuit bandwidth is ODU2e.

  • OC192/STM64: This payload is supported only on the QSFP-4X10G-MLR pluggable. The payload can be provisioned for the OPM_10x10G or OPM_PEER_ODU2 slice mode for any trunk configuration. The cross-connect circuit bandwidth is ODU2.


    Note

    This payload is not supported in R12.0.


  • OTU4: This payload is supported only on the ONS-QSFP28-LR4 pluggable. The payload can be provisioned for the OPM_100G slice mode for any trunk configuration. The cross-connect circuit bandwidth is ODU4.
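The payload rules in this list can be summarized as a lookup from payload to allowed slice modes and cross-connect circuit bandwidth. The following Python sketch is illustrative only; the names and structures are ours, not an SVO API:

```python
# Client payloads on the 400G-XP card: allowed slice modes and the
# bandwidth of the resulting cross-connect circuit (from this section).
PAYLOADS = {
    "10GE":        {"slice_modes": {"OPM_10x10G", "OPM_PEER_ODU2", "OPM_PEER_ODU2e"},
                    "circuit": "ODU2e"},
    "100GE":       {"slice_modes": {"OPM_100G"}, "circuit": "ODU4"},
    "OTU2":        {"slice_modes": {"OPM_10x10G", "OPM_PEER_ODU2"}, "circuit": "ODU2"},
    "OTU2e":       {"slice_modes": {"OPM_10x10G", "OPM_PEER_ODU2e"}, "circuit": "ODU2e"},
    "OC192/STM64": {"slice_modes": {"OPM_10x10G", "OPM_PEER_ODU2"}, "circuit": "ODU2"},
    "OTU4":        {"slice_modes": {"OPM_100G"}, "circuit": "ODU4"},
}

def can_provision(payload: str, slice_mode: str) -> bool:
    """Check whether a payload is allowed on a slice configured in a given mode."""
    return slice_mode in PAYLOADS[payload]["slice_modes"]

assert can_provision("OTU2e", "OPM_PEER_ODU2e")
assert not can_provision("100GE", "OPM_10x10G")  # 100G circuits need OPM_100G
```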


Note

For any card mode except REGEN, with the slice mode as OPM-10x10G, you can configure a mix of 10G payloads (OTU2, 10GE) on the same slice or client port, with the exception of the CDR ports (7, 8, 9, and 10). On the CDR ports, the first configured 10G lane determines the configurable payloads for the other three port lanes.



Note

If a slice is configured using the OPM_10x10G slice mode, it can be used only for 10G circuit creation whereas if a slice is configured using the OPM_100G slice mode, it can be used only for 100G circuit creation.



Note

Until R11.1, ODU alarms and PMs on cross-connected trunk ODUs are raised under the OTU4C2 trunk port of the 400G-XP card for both near-end and far-end directions. From R11.1, ODU alarms and PMs are raised under the specific cross-connected trunk ODUs for both near-end and far-end directions for the OTU4 client payload. OTN alarms and PMs are raised under the OTU4C2 trunk port of the 400G-XP card for both near-end and far-end directions.


The 400G-XP card is supported on Cisco NCS 2002, Cisco NCS 2006, and Cisco NCS 2015 platforms.

One 400G-XP card can be installed in the Cisco NCS 2002 DC chassis that is powered by NCS2002-DC or NCS2002-DC-E. Three 400G-XP cards can be installed in the Cisco NCS 2006 chassis that is powered by NCS2006-DC, NCS2006-DC40, or NCS2006-AC (180V AC to 264V AC). Seven 400G-XP cards can be installed in the Cisco NCS 2015 chassis that is powered by DC 2 + 2, DC 3 + 1, or AC 2 + 2 PSU.

Limitations

  • Terminal loopback on the client port is not supported for the QSFP28-FRS pluggable.

  • Terminal loopback is not supported on the client port having non-FRS pluggable at the near-end node when the peer client port at the far-end node has QSFP28-FRS pluggable or vice versa.

  • Encrypted traffic is not supported on the client port with a QSFP28-FRS pluggable.


Note

The maximum short-term operating temperature of the Cisco NCS 2002 shelf must not exceed 50 degrees Celsius when the 400G-XP card is installed.



Note

You may observe traffic glitches in the receive direction of client ports 7, 8, 9, and 10 on a 400G-XP-LC card that is connected to the CXP port of a 10x10G-LC card in fanout mode. To bring up traffic in such cases, change the admin state of the CXP port from OOS-DSBLD to IS-NR. Repeat the same action if you continue to observe glitches.


For more information about the 400G-XP card, see http://www.cisco.com/c/en/us/products/collateral/optical-networking/network-convergence-system-2000-series/datasheet-c78-736916.html.

Key Features

The 400G-XP card supports the following key features:

    • Operating Modes—The card can be configured in various operating modes. The cards can be equipped with pluggables for client and trunk ports, and offer a large variety of configurations. When you configure the card, make sure that the following tasks are completed:
      • The trunk port PPMs must be preprovisioned before configuring the card operating mode. When the 400G-XP card is paired with the 10x10G-LC card, all the operating mode provisioning must be performed on the 400G-XP card. The client payloads can be provisioned after configuring the operational mode on the card.

      The table below details the configurations supported on the 400G-XP card for the supported card modes.
      Table 6. Configuration Options for the 400G-XP Card Modes

      | Configuration | MXP | OTNXC (1) | REGEN | MXP_2x150G (8QAM) |
      | --- | --- | --- | --- | --- |
      | Trunk configuration (per trunk) | None, M_100G, M_200G | None, M_100G, M_200G | None, M_100G, M_200G | M_150G |
      | Slice configuration | None, OPM_100G, OPM_10x10G, OPM_6x16G_FC, OPM_2x40G_2x10G, OPM_PEER_ODU2 (available only for Slice 2 when the 400G-XP is paired with the 10x10G-LC), OPM_PEER_ODU2e (available only for Slice 2 when the 400G-XP is paired with the 10x10G-LC) | None, OPM_100G, OPM_10x10G, OPM_6x16G_FC | Slice configuration is not supported | None, OPM_100G, OPM_10x10G |

      1. Not supported in R12.0.
      For more information about the trunk and slice configuration, see Slice Definition and Line Card Configuration for 400G-XP Card.
  • Each trunk port functions as a muxponder instance and has the following features:

    • The trunk port supports Analog Coherent Optical (ACO) CFP2 coherent pluggable.


      Note

      Before removing the CFP2 pluggable from any of two trunk ports, ensure that the relevant trunk port is set to the OOS (Out-of-service) state. Wait until the trunk port LED turns off. Wait for a further 120 seconds before extracting the CFP2 pluggable.


    • Configurable trunk capacity:

      • 100 Gbps coherent DWDM transmission with quadrature phase shift keying (QPSK) modulation.

      • 200 Gbps coherent DWDM transmission with 16-state quadrature amplitude modulation (16-QAM) modulation.

    • Configurable trunk FEC: SD-FEC with 15% or 25% overhead.

    • Configurable differential/non-differential line encoding.

    • Nyquist shaping of channels at the trunk TX.

    • Flex spectrum tunability over the full extended C-Band.

    • 100 Gbps through 100 Gbps QSFP28 client ports.

    • 10 Gbps through 4x 10 Gbps QSFP+ client ports.

    • 16 Gbps through 4 x 16 Gbps QSFP+ client ports.

  • The supported CD ranges are detailed in the table below:

    Table 7. CD Range for 400G-XP Card (ps/nm)

    | | 200G 16-QAM (Low) | 200G 16-QAM (High) | 100G QPSK (Low) | 100G QPSK (High) |
    | --- | --- | --- | --- | --- |
    | Default Working CD Range | -10000 | 50000 | -20000 | 90000 |
    | Default CD Thresholds | -9000 | 45000 | -18000 | 72000 |
    | Allowed CD Range (Working and Thresholds) | -60000 | 60000 | -280000 | 280000 |
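A configured CD working range or threshold must stay within the allowed range for the trunk mode. The following Python sketch shows the check; it is illustrative only (the function and dictionary are ours), with the limits taken from Table 7:

```python
# CD limits for the 400G-XP trunk, in ps/nm (from Table 7).
CD_LIMITS = {
    "200G-16QAM": {"default_working": (-10000, 50000),
                   "default_thresholds": (-9000, 45000),
                   "allowed": (-60000, 60000)},
    "100G-QPSK":  {"default_working": (-20000, 90000),
                   "default_thresholds": (-18000, 72000),
                   "allowed": (-280000, 280000)},
}

def cd_setting_is_allowed(trunk_mode: str, low: int, high: int) -> bool:
    """Working range and thresholds must stay within the allowed CD range."""
    lo_lim, hi_lim = CD_LIMITS[trunk_mode]["allowed"]
    return lo_lim <= low <= high <= hi_lim

assert cd_setting_is_allowed("100G-QPSK", -20000, 90000)
assert not cd_setting_is_allowed("200G-16QAM", -10000, 90000)  # exceeds +60000
```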

  • Loopback—The following loopback types are supported:

    • Client ports - Terminal (Inward), Facility (Line)

    • Trunk ports - Terminal (Inward)

    • iports - Facility (Line), Terminal loopback (Drop)


      Note

      Before you provision loopback on the iports, place the relevant trunk ports in the OOS-MT state. This causes the iports to move to the OOS-MT state.


  • Automatic Laser Shutdown (ALS) can be configured on all the ports.

  • 100GE client ports can be provisioned with or without IEEE 802.3bj FEC. The options are Auto, Force-Fec-On, and Force-Fec-Off.

  • Trail Trace Identifier (TTI)—TTI in the section monitoring overhead is supported. The Source Access Point Identifier (SAPI), Destination Access Point Identifier (DAPI), and User Operator Data fields are supported in Release 10.6.2 and later releases.

  • Trunk Port Interworking—The two CFP2 trunk ports can interoperate with each other when the source and destination 400G-XP cards have the same trunk mode and slice mode configuration. For more information, see Trunk Port Interworking in 400G-XP Cards.

  • GCC0 Support—The 400G-XP card supports provision of GCC0 channel on the trunk port. For more information, see GCC0 Support on the 400G-XP Card.

  • Interoperability—The 400G-XP card is interoperable with the NC55-6X200-DWDM-S card supported on the NCS 5500 and the NCS4K-4H-OPW-QC2 card supported on the NCS 4000.

    The following table describes the configurations, payload types, and pluggables supported for interoperability between the 400G-XP card and the NCS4K-4H-OPW-QC2 card.

    Table 8. 400G-XP Interoperability with the NCS4K-4H-OPW-QC2 Card

    | Payload Type | Trunk Configuration | Pluggables for Trunk Ports on 400G-XP | Pluggables for Client Ports on 400G-XP | Pluggables for Trunk Ports on 4H-OPW-QC2 | Pluggables for Client Ports on 4H-OPW-QC2 |
    | --- | --- | --- | --- | --- | --- |
    | 100GE | OTU4 | CFP2 | QSFP-100G-SR4-S | CFP2 | QSFP-100G-SR4-S |
    | 100GE | OTU4C2 | CFP2 | QSFP-100G-SR4-S | CFP2 | QSFP-100G-SR4-S |
    | OTU2 | OTU4 | CFP2 | ONS-QSFP-4X10 MLR | CFP2 | ONS-QSFP28-LR4 |
    | OTU2 | OTU4C2 | CFP2 | ONS-QSFP-4X10 MLR | CFP2 | ONS-QSFP28-LR4 |
    | 10GE | OTU4 | CFP2 | ONS-QSFP-4X10 MLR | CFP2 | ONS-QSFP-4X10 MLR |
    | 10GE | OTU4C2 | CFP2 | ONS-QSFP-4X10 MLR | CFP2 | ONS-QSFP-4X10 MLR |

    The following table describes the configurations, payload types, and pluggables supported for interoperability between the 400G-XP card and the NC55-6X200-DWDM-S card.

    Table 9. 400G-XP Interoperability with the NC55-6X200-DWDM-S Card

    | Payload Type | Trunk Configuration | Pluggables for Trunk Ports on 400G-XP | Pluggables for Client Ports on 400G-XP | Pluggables for Trunk Ports on 6X200-DWDM-S | Pluggables for Client Ports on 6X200-DWDM-S |
    | --- | --- | --- | --- | --- | --- |
    | 100GE | OTU4 | CFP2 | QSFP-100G-SR4-S | CFP2 | QSFP-100G-SR4-S |
    | 100GE | OTU4C2 | CFP2 | QSFP-100G-SR4-S | CFP2 | QSFP-100G-SR4-S |

For a detailed list of the supported pluggables, see http://www.cisco.com/c/en/us/td/docs/optical/spares/gbic/guides/b_ncs_pluggables.html.

Interoperability

The 400G-XP card has two trunk ports, each supporting up to 20 ODU2es. These ODU2es are numbered from 1 through 20. ODU2es 1 through 10 belong to the first ODU4 slice and ODU2es 11 through 20 belong to the second ODU4 slice. Each ODU number has a pre-defined group of timeslots as seen in the following table.

| Trunk Port | ODU4 Slice | ODU Trunk Number | ODU Trunk FAC | Tributary Port Number | Timeslots |
| --- | --- | --- | --- | --- | --- |
| Trunk 1 (FAC 10) | Slice 1 | 1 | 96 | 1 | 1, 11, 21, 31, 41, 51, 61, 71 |
| | | 2 | 97 | 2 | 2, 12, 22, 32, 42, 52, 62, 72 |
| | | 3 | 98 | 3 | 3, 13, 23, 33, 43, 53, 63, 73 |
| | | 4 | 99 | 4 | 4, 14, 24, 34, 44, 54, 64, 74 |
| | | 5 | 100 | 5 | 5, 15, 25, 35, 45, 55, 65, 75 |
| | | 6 | 101 | 6 | 6, 16, 26, 36, 46, 56, 66, 76 |
| | | 7 | 102 | 7 | 7, 17, 27, 37, 47, 57, 67, 77 |
| | | 8 | 103 | 8 | 8, 18, 28, 38, 48, 58, 68, 78 |
| | | 9 | 104 | 9 | 9, 19, 29, 39, 49, 59, 69, 79 |
| | | 10 | 105 | 10 | 10, 20, 30, 40, 50, 60, 70, 80 |
| | Slice 2 | 11 | 106 | 1 | 1, 11, 21, 31, 41, 51, 61, 71 |
| | | 12 | 107 | 2 | 2, 12, 22, 32, 42, 52, 62, 72 |
| | | 13 | 108 | 3 | 3, 13, 23, 33, 43, 53, 63, 73 |
| | | 14 | 109 | 4 | 4, 14, 24, 34, 44, 54, 64, 74 |
| | | 15 | 110 | 5 | 5, 15, 25, 35, 45, 55, 65, 75 |
| | | 16 | 111 | 6 | 6, 16, 26, 36, 46, 56, 66, 76 |
| | | 17 | 112 | 7 | 7, 17, 27, 37, 47, 57, 67, 77 |
| | | 18 | 113 | 8 | 8, 18, 28, 38, 48, 58, 68, 78 |
| | | 19 | 114 | 9 | 9, 19, 29, 39, 49, 59, 69, 79 |
| | | 20 | 115 | 10 | 10, 20, 30, 40, 50, 60, 70, 80 |
| Trunk 2 (FAC 11) | Slice 1 | 1 | 116 | 1 | 1, 11, 21, 31, 41, 51, 61, 71 |
| | | 2 | 117 | 2 | 2, 12, 22, 32, 42, 52, 62, 72 |
| | | 3 | 118 | 3 | 3, 13, 23, 33, 43, 53, 63, 73 |
| | | 4 | 119 | 4 | 4, 14, 24, 34, 44, 54, 64, 74 |
| | | 5 | 120 | 5 | 5, 15, 25, 35, 45, 55, 65, 75 |
| | | 6 | 121 | 6 | 6, 16, 26, 36, 46, 56, 66, 76 |
| | | 7 | 122 | 7 | 7, 17, 27, 37, 47, 57, 67, 77 |
| | | 8 | 123 | 8 | 8, 18, 28, 38, 48, 58, 68, 78 |
| | | 9 | 124 | 9 | 9, 19, 29, 39, 49, 59, 69, 79 |
| | | 10 | 125 | 10 | 10, 20, 30, 40, 50, 60, 70, 80 |
| | Slice 2 | 11 | 126 | 1 | 1, 11, 21, 31, 41, 51, 61, 71 |
| | | 12 | 127 | 2 | 2, 12, 22, 32, 42, 52, 62, 72 |
| | | 13 | 128 | 3 | 3, 13, 23, 33, 43, 53, 63, 73 |
| | | 14 | 129 | 4 | 4, 14, 24, 34, 44, 54, 64, 74 |
| | | 15 | 130 | 5 | 5, 15, 25, 35, 45, 55, 65, 75 |
| | | 16 | 131 | 6 | 6, 16, 26, 36, 46, 56, 66, 76 |
| | | 17 | 132 | 7 | 7, 17, 27, 37, 47, 57, 67, 77 |
| | | 18 | 133 | 8 | 8, 18, 28, 38, 48, 58, 68, 78 |
| | | 19 | 134 | 9 | 9, 19, 29, 39, 49, 59, 69, 79 |
| | | 20 | 135 | 10 | 10, 20, 30, 40, 50, 60, 70, 80 |
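The table follows a regular pattern, so the mapping can be computed rather than looked up. The following Python sketch is illustrative only (the function name is ours); it derives the slice, tributary port number, FAC, and timeslots for any ODU number:

```python
def odu2e_mapping(trunk: int, odu: int) -> dict:
    """Derive slice, tributary port, FAC, and timeslots for an ODU2e (1-20)
    on a 400G-XP trunk port (1 or 2), following the table above."""
    if trunk not in (1, 2) or not 1 <= odu <= 20:
        raise ValueError("trunk must be 1 or 2; ODU number must be 1..20")
    slice_no = 1 if odu <= 10 else 2              # ODUs 1-10 -> slice 1, 11-20 -> slice 2
    tpn = (odu - 1) % 10 + 1                      # tributary port number restarts per slice
    fac = 95 + odu if trunk == 1 else 115 + odu   # FACs 96-115 (trunk 1), 116-135 (trunk 2)
    timeslots = [tpn + 10 * k for k in range(8)]  # e.g. TPN 3 -> 3, 13, ..., 73
    return {"slice": slice_no, "tpn": tpn, "fac": fac, "timeslots": timeslots}

assert odu2e_mapping(1, 1)["fac"] == 96
assert odu2e_mapping(2, 20) == {"slice": 2, "tpn": 10, "fac": 135,
                                "timeslots": [10, 20, 30, 40, 50, 60, 70, 80]}
```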

When the 400G-XP card interoperates with the NCS4K-4H-OPW-QC2 card, the first ODU4 slice of the 400G-XP trunk is connected to the second ODU4 slice of the same NCS4K-4H-OPW-QC2 trunk.


Note

The ODU circuit between the 400G-XP and NCS4K-4H-OPW-QC2 cards is created even when the ODU number is incorrect. Ensure that the correct source and destination ODU numbers are selected.


Regeneration Mode for 400G-XP

From Release 10.8.0, the 400G-XP can be configured as a regenerator. The regeneration functionality is available only on the trunk ports. A new card operating mode, REGEN, is available. No client ports are involved. The two trunk ports must have the same rate to achieve regeneration (wavelengths and FEC of the trunks can vary).


Note

For traffic to flow in the REGEN mode, it is mandatory that the 400G-XP should be running on firmware (SCP) version 5.24 or later.

We recommend that you use the REGEN mode only with the MXP operating mode (the output from the MXP trunk of a 400G-XP can be connected to trunk ports in REGEN mode).


Slice Definition and Line Card Configuration for 400G-XP Card

The image below displays the client and trunk ports of the 400G-XP card.

Figure 2. 400G-XP Card


The client-to-trunk port mapping is fixed in the 400G-XP card, as detailed in this table:

Table 10. Trunk-Client Port Mapping on the 400G-XP Card

| Trunk | Client Ports | Pluggable Type |
| --- | --- | --- |
| Trunk 1 (CFP2-11) - Slice 1 and Slice 2 | Ports 1, 2, 3 | QSFP+ |
| Trunk 1 (CFP2-11) - Slice 1 and Slice 2 | Ports 7, 8 | QSFP+ or QSFP28 (2) |
| Trunk 2 (CFP2-12) - Slice 3 and Slice 4 | Ports 4, 5, 6 | QSFP+ |
| Trunk 2 (CFP2-12) - Slice 3 and Slice 4 | Ports 9, 10 | QSFP+ or QSFP28 |

2. QSFP+ and QSFP28 share the same form factor.

The trunk ports can be configured with either 100G or 200G rates. The client ports are grouped into four slices. The slice mode defines the aggregation capacity and can be configured independently.

The configuration of each of the two trunk ports is independent of the configuration of the other and is done using either one of the two trunk operating modes.

Trunk Operating Modes (trunk capacity)

  • M-100G: 100G QPSK. One slice is enabled on the trunk. Slice 2 is enabled for Trunk 1 and Slice 4 is enabled on Trunk 2.

  • M-200G: 200G 16 QAM. Two slices are enabled on the trunk.

The following provisioning options are available for the shared ports:

  • Provision an ONS-QC-16GFC-SW= pluggable on the shared ports and have 6x 16G-FC payloads and 8x 10G payloads (10GE or OTU2).

  • Provision a QSFP-4X10G-MLR (or ONS-QSFP-4X10G-LR-S) pluggable on the shared ports and have 4x 16G-FC + 10x 10G payloads (10GE and/or OTU2).

Slice Mode:
  • OPM-100G: Enables a 100G client on the QSFP28 port.

  • OPM-10x10G: Enables 10G client over a set of QSFP+ ports.

  • OPM_2x40G_2x10G: Enables 2x40G and 2x10G clients over a set of QSFP+ ports.

Traffic from the client ports is aggregated on the 100G or 200G trunk at the intermediate ports. There are four intermediate ports (iports), two per trunk. The iports are automatically configured when the slices are configured.
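Because the client-to-trunk mapping is fixed, it can be expressed as a static lookup. The following Python sketch is illustrative only; the data structures are ours, with the slice membership of each port inferred from Table 10 and the configuration tables that follow:

```python
# Fixed client-to-trunk mapping on the 400G-XP card (from Table 10);
# ports 2 and 5 are shared between the two slices of their trunk.
TRUNK_OF_PORT = {1: 1, 2: 1, 3: 1, 7: 1, 8: 1,
                 4: 2, 5: 2, 6: 2, 9: 2, 10: 2}
SLICES_OF_PORT = {1: (1,), 2: (1, 2), 3: (2,), 7: (1,), 8: (2,),
                  4: (3,), 5: (3, 4), 6: (4,), 9: (3,), 10: (4,)}

def slices_for(port: int):
    """Return the slice(s) a client port can feed; shared ports return two."""
    return SLICES_OF_PORT[port]

assert TRUNK_OF_PORT[8] == 1 and TRUNK_OF_PORT[9] == 2
assert slices_for(5) == (3, 4)   # lanes of port 5 split between slices 3 and 4
```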

The relation between the two trunk ports (Ports 11 and 12), the client ports (Ports 1 through 10), and the four slices is represented in the tables below.

The OPM-6x16G_FC mode is referred to as 6x16G_FC and the OPM_2x40G_2x10G mode is referred to as 2x40G_2x10G in the following table.

Table 11. Trunk, Slice, and Port Configuration for Trunk 1 of the 400G-XP Card

| Trunk Mode | Slice 1 Mode | Slice 2 Mode | Port 1 Lanes | Port 2 Lanes (3) | Port 3 Lanes | Port 7 Lanes | Port 8 Lanes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M-200G | OPM-100G | OPM-100G | - | - | - | 4x (4) | 4x |
| M-200G | OPM-100G | OPM-10x10G | - | 3,4 | 1 to 4 | 4x | 1 to 4 |
| M-200G | OPM-10x10G | OPM-100G | 1 to 4 | 1,2 | - | 1 to 4 | 4x |
| M-200G | OPM-10x10G | OPM-10x10G | 1 to 4 | 1 to 4 | 1 to 4 | 1 to 4 | 1 to 4 |
| M-200G | 6x16G_FC (5) | OPM-100G | 1 to 4 | 1,2 | - | - | 4x |
| M-200G | 6x16G_FC | OPM-10x10G | 1 to 4 | 1,2 or 3,4 (6) | 1 to 4 | - | 1 to 4 |
| M-200G | OPM-100G | 6x16G_FC | - | 3,4 | 1 to 4 | 4x | - |
| M-200G | OPM-10x10G | 6x16G_FC (7) | 1 to 4 | 1,2 or 3,4 (8) | 1 to 4 | 1 to 4 | - |
| M-200G | 6x16G_FC | 6x16G_FC | 1 to 4 | 1,2 and 3,4 | 1 to 4 | - | - |
| M-200G | 2x40G_2x10G | 2x40G_2x10G | 40G | 1,2 + 3,4 | 40G | 40G | 40G |
| M-200G | 2x40G_2x10G | OPM-10x10G | 40G | 1,2 + 3,4 | 1 to 4 | 40G | 1 to 4 |
| M-200G | 2x40G_2x10G | OPM_100G | 40G | 1,2 | - | 40G | 4x |
| M-200G | OPM-10x10G | 2x40G_2x10G | 1 to 4 | 1,2 + 3,4 | 40G | 1 to 4 | 40G |
| M-200G | OPM_100G | 2x40G_2x10G | - | 3,4 | 40G | 4x | 40G |
| M-100G | NA | OPM-100G | - | - | - | - | 4x |
| M-100G | NA | OPM-10x10G | - | 3,4 | 1 to 4 | - | 1 to 4 |
| M-100G | NA | 6x16G_FC | - | 3,4 | 1 to 4 | - | - |
| M-100G | NA | 2x40G_2x10G | - | 3,4 | 40G | - | 40G |

3. Port 2 is shared between Slice 1 and Slice 2.
4. 4x refers to all four lanes of the QSFP28 pluggable.
5. This slice mode is not supported in R12.0.
6. Depending on the PPM provisioned, ports 1 and 2 can be 16G FC or ports 3 and 4 can be 10GE/OTU2.
7. This slice mode is not supported in R12.0.
8. Depending on the PPM provisioned, ports 3 and 4 can be 16G FC or ports 1 and 2 can be 10GE/OTU2.

The OPM-6x16G_FC mode is referred to as 6x16G_FC and the OPM_2x40G_2x10G mode is referred to as 2x40G_2x10G in the following table.

Table 12. Trunk, Slice, and Port Configuration for Trunk 2 of the 400G-XP Card

| Trunk Mode | Slice 3 Mode | Slice 4 Mode | Port 4 Lanes | Port 5 Lanes (9) | Port 6 Lanes | Port 9 Lanes | Port 10 Lanes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| M-200G | OPM-100G | OPM-100G | - | - | - | 4x | 4x |
| M-200G | OPM-100G | OPM-10x10G | - | 3,4 | 1 to 4 | 4x | 1 to 4 |
| M-200G | OPM-10x10G | OPM-100G | 1 to 4 | 1,2 | - | 1 to 4 | 4x |
| M-200G | OPM-10x10G | OPM-10x10G | 1 to 4 | 1 to 4 | 1 to 4 | 1 to 4 | 1 to 4 |
| M-200G | 6x16G_FC (10) | OPM-100G | 1 to 4 | 1,2 | - | - | 4x |
| M-200G | 6x16G_FC | OPM-10x10G | 1 to 4 | 1,2 or 3,4 (11) | 1 to 4 | - | 1 to 4 |
| M-200G | OPM-100G | 6x16G_FC | - | 3,4 | 1 to 4 | 4x | - |
| M-200G | OPM-10x10G | 6x16G_FC | 1 to 4 | 1,2 or 3,4 (12) | 1 to 4 | 1 to 4 | - |
| M-200G | 6x16G_FC | 6x16G_FC | 1 to 4 | 1,2 and 3,4 | 1 to 4 | - | - |
| M-200G | 2x40G_2x10G | 2x40G_2x10G | 40G | 1,2 + 3,4 | 40G | 40G | 40G |
| M-200G | 2x40G_2x10G | OPM-10x10G | 40G | 1,2 + 3,4 | 1 to 4 | 40G | 1 to 4 |
| M-200G | 2x40G_2x10G | OPM_100G | 40G | 1,2 | - | 40G | 4x |
| M-200G | OPM-10x10G | 2x40G_2x10G | 1 to 4 | 1,2 + 3,4 | 40G | 1 to 4 | 40G |
| M-200G | OPM_100G | 2x40G_2x10G | - | 3,4 | 40G | 4x | 40G |
| M-100G | NA | OPM-100G | - | - | - | - | 4x |
| M-100G | NA | OPM-10x10G | - | 3,4 | 1 to 4 | - | 1 to 4 |
| M-100G | NA | 6x16G_FC | - | 3,4 | 1 to 4 | - | - |
| M-100G | NA | 2x40G_2x10G | - | 3,4 | 40G | - | 40G |

9. Port 5 is shared between Slice 3 and Slice 4.
10. This slice mode is not supported in R12.0.
11. Depending on the PPM provisioned, ports 1 and 2 can be 16G FC or ports 3 and 4 can be 10GE/OTU2.
12. Depending on the PPM provisioned, ports 3 and 4 can be 16G FC or ports 1 and 2 can be 10GE/OTU2.

Trunk Port Interworking in 400G-XP Cards

To provide greater flexibility in network design and deployment, the two CFP2 trunk ports of the 400G-XP card can interoperate with each other when the same trunk operating mode and slice configuration exist on both the source and destination cards.

OCHCC circuits can be created between compatible client ports, as detailed in the tables below.
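The interworking rule reduces to an equality check on the trunk operating mode and the per-slice configuration. A minimal sketch, assuming each card end is summarized as a plain dictionary (illustrative only, not an SVO API):

```python
def can_interwork(src: dict, dst: dict) -> bool:
    # The two CFP2 trunk ports interoperate only when the trunk operating
    # mode and the slice configuration match on both ends; Tables 13 and 14
    # then give the compatible client ports.
    return (src["trunk_mode"] == dst["trunk_mode"]
            and src["slices"] == dst["slices"])

a = {"trunk_mode": "M-200G", "slices": {1: "OPM_100G", 2: "OPM_10x10G"}}
b = {"trunk_mode": "M-200G", "slices": {1: "OPM_100G", 2: "OPM_10x10G"}}
print(can_interwork(a, b))  # True: OCHCC circuits can be created
```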

Table 13. Compatible Client Ports for M-100G Trunk Port Configuration

Trunk 1 - CFP2 Port 11 | Client Ports (Trunk 1 <-> Trunk 2) | Trunk 2 - CFP2 Port 12
Slice configuration 1 / Slice 2: OPM_100G | 8 <-> 10 | Slice 4: OPM_100G / Slice configuration 1
Slice configuration 2 / Slice 2: OPM_10x10G | 2-3 <-> 5-3, 2-4 <-> 5-4, 3-1 <-> 6-1, 3-2 <-> 6-2, 3-3 <-> 6-3, 3-4 <-> 6-4, 8-1 <-> 10-1, 8-2 <-> 10-2, 8-3 <-> 10-3, 8-4 <-> 10-4 | Slice 4: OPM_10x10G / Slice configuration 2
Slice configuration 2 / Slice 2: OPM_6x16G_FC (13) | 2-3 <-> 5-3, 2-4 <-> 5-4, 3-1 <-> 6-1, 3-2 <-> 6-2, 3-3 <-> 6-3, 3-4 <-> 6-4 | Slice 4: OPM_6x16G_FC / Slice configuration 2

(13) This slice mode is not supported in R12.0.
Table 14. Compatible Client Ports for M-200G Trunk Port Configuration

Trunk 1 - CFP2 Port 11 | Client Ports (Trunk 1 <-> Trunk 2) | Trunk 2 - CFP2 Port 12
Slice configuration 1 / Slice 1: OPM_100G | 7 <-> 9 | Slice 3: OPM_100G / Slice configuration 1
Slice 2: OPM_10x10G | 2-3 <-> 5-3 | Slice 4: OPM_10x10G
Slice 1: OPM_100G | 7 <-> 9 | Slice 3: OPM_100G / Slice configuration 2
Slice configuration 2 / Slice 2: OPM_10x10G | 2-3 <-> 5-3, 2-4 <-> 5-4, 3-1 <-> 6-1, 3-2 <-> 6-2, 3-3 <-> 6-3, 3-4 <-> 6-4, 8-1 <-> 10-1, 8-2 <-> 10-2, 8-3 <-> 10-3, 8-4 <-> 10-4 | Slice 4: OPM_10x10G / Slice configuration 2
Slice configuration 3 / Slice 1: OPM_10x10G | 1-1 <-> 4-1, 1-2 <-> 4-2, 1-3 <-> 4-3, 1-4 <-> 4-4, 2-1 <-> 5-1, 2-2 <-> 5-2, 7-1 <-> 9-1, 7-2 <-> 9-2, 7-3 <-> 9-3, 7-4 <-> 9-4 | Slice 3: OPM_10x10G / Slice configuration 3
Slice 2: OPM_100G | 8 <-> 10 | Slice 4: OPM_100G
Slice configuration 4 / Slice 1: OPM_10x10G | 1-1 <-> 4-1, 1-2 <-> 4-2, 1-3 <-> 4-3, 1-4 <-> 4-4, 2-1 <-> 5-1, 2-2 <-> 5-2, 7-1 <-> 9-1, 7-2 <-> 9-2, 7-3 <-> 9-3, 7-4 <-> 9-4 | Slice 3: OPM_10x10G / Slice configuration 4
Slice 2: OPM_10x10G | 2-3 <-> 5-3, 2-4 <-> 5-4, 3-1 <-> 6-1, 3-2 <-> 6-2, 3-3 <-> 6-3, 3-4 <-> 6-4, 8-1 <-> 10-1, 8-2 <-> 10-2, 8-3 <-> 10-3, 8-4 <-> 10-4 | Slice 4: OPM_10x10G
Slice configuration 1 / Slice 1: OPM_6x16G_FC (14) | 1-1 <-> 4-1, 1-2 <-> 4-2, 1-3 <-> 4-3, 1-4 <-> 4-4, 2-1 <-> 5-1, 2-2 <-> 5-2 | Slice 3: OPM_6x16G_FC / Slice configuration 3
Slice configuration 2 / Slice 2: OPM_6x16G_FC | 2-3 <-> 5-3, 2-4 <-> 5-4, 3-1 <-> 6-1, 3-2 <-> 6-2, 3-3 <-> 6-3, 3-4 <-> 6-4 | Slice 4: OPM_6x16G_FC / Slice configuration 4

(14) This slice mode is not supported in R12.0.

GCC0 Support on the 400G-XP Card

  • The 400G-XP card supports provisioning one GCC0 channel for each trunk port in the MXP and MXP-2x150G (8QAM) operating modes.

  • In case of the OTU4C3 (8QAM) payload, only one GCC0 channel is configurable on the first trunk port (Port-11). The configuration on the second trunk port (Port-12) is automatically blocked.

  • In case of the MXP-2x150G(8QAM) payload, the GCC0 channel is configurable only on the second trunk port (Port-12); no GCC0 channel configuration is supported on the first trunk port (Port-11).

  • The OTU4 and OTU2 client ports do not support GCC0 channels on the card.

  • The 400G-XP card supports a maximum of two GCC0 channels, one on each trunk port.

  • The OTU4C2 trunk port supports the low-speed GCC rate (196K) and the high-speed GCC rate (1200K). The 400G-XP card supports only the high-speed GCC rate (1200K). Therefore, GCC0 channel provisioning is not supported on 400G-XP cards that are part of 15454-M12 Node Controller (NC) configurations.

  • The OTNXC or OCHTRAIL circuits are not supported over the direct GCC0 link on the 400G-XP card.

  • The GCC0 channel provisioning is not supported on REGEN card mode on the 400G-XP-LC card. However, GCC0 tunneling is enabled.

  • From R11.1.1.2, GCC0 channel provisioning is supported in the REGEN card mode on the 400G-XP-LC card. If GCC0 is not provisioned, the card is transparent to the GCC0 channel.

  • GCC0 channel provisioning is supported with hardware FPGA image versions later than 0.28. GCC0 provisioning fails with a deny error message if the FPGA version is 0.28 or earlier (see the sketch after this list).

  • In the presence of the TIM-SM alarm, the GCC0 link remains down.
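The FPGA gate in the list above is a simple version comparison. A minimal sketch, assuming the image version is reported as a "major.minor" string (the helper is hypothetical):

```python
def gcc0_provisioning_allowed(fpga_version: str) -> bool:
    """GCC0 provisioning requires an FPGA image newer than 0.28;
    a version of 0.28 or earlier is denied with an error."""
    major, minor = (int(part) for part in fpga_version.split("."))
    return (major, minor) > (0, 28)

print(gcc0_provisioning_allowed("0.28"))  # False -> deny error
print(gcc0_provisioning_allowed("0.30"))  # True
```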

2x150G Support on the 400G-XP Card

From Release 10.9, the 400G-XP card supports the configuration of 2x150G mode in 8QAM modulation format. It is configurable on the trunk ports of the card by selecting M_150G as the Trunk Operating mode.

The M_150G mode does not support muxponder, cross connection, and regeneration configurations.

The M_150G trunk mode configuration supports client slices 1, 3, and 4. The available ports are 1[1:4], 2[1:2], 4[1:4], 5[1:4], 6[1:4], 7[1:4], 9[1:4], and 10[1:4]. When the M_150G trunk mode is configured, slices 1, 3, and 4 are independently configurable as OPM_100G, OPM_10x10G, or OPM_6x16G-FC. A slice mode can be changed without affecting traffic on the other provisioned slices. The admin state of both trunk ports is aligned.

In the M_150G trunk mode, all client payloads and options are the same as in the standard M_200G MXP mode.

The M_150G trunk mode applies to both trunk ports. This is required because this mode splits the ODU4 line frames into two interleaved 150G signals that are transported separately by the two trunk ports, as the toy sketch below illustrates.
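As a mental model of the split, the sketch below alternates units of a frame across the two trunks. It is purely illustrative; the real OTU4C3 interleaving is performed in hardware at the ODU layer.

```python
def split_2x150g(frame: bytes) -> tuple[bytes, bytes]:
    # Toy model: alternate bytes of the ODU4 line frames between the
    # two interleaved 150G trunk signals (ports 11 and 12).
    return frame[0::2], frame[1::2]

trunk11, trunk12 = split_2x150g(bytes(range(8)))
print(list(trunk11), list(trunk12))  # [0, 2, 4, 6] [1, 3, 5, 7]
```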

The trunk ports configured as M_150G support the same optical and FEC alarms and monitors as the M_200G mode. An LOS-P or LOF alarm on either of the two trunk ports of M_150G correlates all the OTU4C3 container OTN alarms.

The line OTN alarms and performance monitors of the 2x150G mode container frame (OTU4C3) are evaluated as the summarization of the alarms or PMs related to the three embedded ODU4 internal ports 1, 3, and 4. The resulting values are available at the OTN layer of Trunk 12.

Limitations of 2x150G Support on the 400G-XP Card

  • Put the trunk ports in the Out-of-Service state before unplugging any CFP2 trunk. Extracting an in-service CFP2 trunk shuts down the other trunk.

  • The loopback setting of both M_150G trunks is aligned. However, the internal loopback ports are configurable independently, with the same limitations as in the M_100G and M_200G modes.

  • The TTI-SM of the OTU4C3 container is configurable and is monitored only on Trunk 12.

  • The GCC0 provisioning is supported only for Trunk-12.

  • The FEC setting of both M_150G trunks is aligned.

40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C Cards

Table 15. Feature History

Feature Name | Release Information | Feature Description
40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C Cards | Cisco NCS 2000 Release 12.2 | The 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards are supported on SVO. The cards have CP-DQPSK extended performance.

The 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards aggregate a variety of client service inputs (10GE, OTU2, OTU2e, and OC-192) into a single 40-Gbps OTU3/OTU3e signal on the trunk side.


Note

The 40E-MXP-C, 40EX-MXP-C, or 40ME-MXP-C card is displayed with the same card name, 15454-40E-MXP-C.


The 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards support aggregation of the following signals:

  • With overclock enabled on the trunk port:

    • OTU2e

    • 10 Gigabit Ethernet LAN-Phy (CBR mapping)

  • With overclock disabled on the trunk port:

    • OC-192/STM-64

    • OTU2

You can install and provision the cards in:

  • Slots 2 to 6 in Cisco NCS 2006 chassis

  • Slots 2 to 15 in Cisco NCS 2015 chassis

The client ports of the 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards interoperate with all the existing TXP or MXP (OTU2 trunk) cards.

For OTU2 and OTU2e client protocols, Enhanced FEC (EFEC) is not supported on Port 1 of the 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards.

Client Port | FEC Configuration
Port 1 | Only Standard FEC
Port 2 | Standard and Enhanced FEC
Port 3 | Standard and Enhanced FEC
Port 4 | Standard and Enhanced FEC

Key Features

The 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards provide the following key features:

  • The cards use the CP-DQPSK modulation format.

  • Onboard E-FEC processor—The E-FEC functionality improves the correction capability of the transponder, allowing operation at a lower OSNR compared to the standard RS (239,255) correction algorithm. The BCH algorithm implemented in E-FEC (according to ITU-T G.975.1 I.7) allows recovery of an input BER up to 1E-3. The 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards support both the standard RS FEC (specified in ITU-T G.709) and the E-FEC standard, which allows improved gain on trunk interfaces with a resultant extension of the transmission range on these interfaces.

  • Automatic Laser Shutdown (ALS)—A safety mechanism, Automatic Laser Shutdown (ALS), is used in the event of a fiber cut. The Auto Restart ALS option is supported only for OC-192/STM-64 and OTU2 payloads. The Manual Restart ALS option is supported for all payloads.

  • Automatic timing source synchronization—The 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards synchronize to the control card clocks. If the control cards are not available because of a maintenance or upgrade activity, the cards automatically synchronize to one of the input client interface clocks.

  • Squelching policy—The cards are set to squelch the client interface output if there is LOS at the DWDM receiver, or if there is a remote fault. In the event of a remote fault, the card manages MS-AIS insertion.

  • The 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards are tunable across the full C-band wavelength range.

Wavelength Identification

The 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C cards use trunk lasers that are wave-locked, which allows the trunk transmitter to operate on the ITU grid effectively. These cards implement the UT2 module; they use a C-band version of the UT2. The laser is tunable over 82 wavelengths in the C-band at 50-GHz spacing on the ITU grid.
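The grid itself is pure arithmetic: channel n sits at 196.00 THz minus (n − 1) × 50 GHz, and the wavelength follows from λ = c/f. The short Python check below reproduces the values in Table 16.

```python
SPEED_OF_LIGHT = 299_792_458  # m/s

def channel(n: int) -> tuple[float, float]:
    """Frequency (THz) and wavelength (nm) of channel n on the 50-GHz
    ITU grid anchored at 196.00 THz (channel 1)."""
    freq_thz = 196.00 - 0.05 * (n - 1)
    wavelength_nm = SPEED_OF_LIGHT / (freq_thz * 1e12) * 1e9
    return round(freq_thz, 2), round(wavelength_nm, 3)

print(channel(1))   # (196.0, 1529.553)
print(channel(82))  # (191.95, 1561.826)
```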

Table 16. 40E-MXP-C, 40EX-MXP-C, and 40ME-MXP-C Trunk Wavelengths

Channel | Frequency (THz) | Wavelength (nm) | Channel | Frequency (THz) | Wavelength (nm)
1 | 196.00 | 1529.55 | 42 | 193.95 | 1545.72
2 | 195.95 | 1529.94 | 43 | 193.90 | 1546.119
3 | 195.90 | 1530.334 | 44 | 193.85 | 1546.518
4 | 195.85 | 1530.725 | 45 | 193.80 | 1546.917
5 | 195.80 | 1531.116 | 46 | 193.75 | 1547.316
6 | 195.75 | 1531.507 | 47 | 193.70 | 1547.715
7 | 195.70 | 1531.898 | 48 | 193.65 | 1548.115
8 | 195.65 | 1532.290 | 49 | 193.60 | 1548.515
9 | 195.60 | 1532.681 | 50 | 193.55 | 1548.915
10 | 195.55 | 1533.073 | 51 | 193.50 | 1549.32
11 | 195.50 | 1533.47 | 52 | 193.45 | 1549.71
12 | 195.45 | 1533.86 | 53 | 193.40 | 1550.116
13 | 195.40 | 1534.250 | 54 | 193.35 | 1550.517
14 | 195.35 | 1534.643 | 55 | 193.30 | 1550.918
15 | 195.30 | 1535.036 | 56 | 193.25 | 1551.319
16 | 195.25 | 1535.429 | 57 | 193.20 | 1551.721
17 | 195.20 | 1535.822 | 58 | 193.15 | 1552.122
18 | 195.15 | 1536.216 | 59 | 193.10 | 1552.524
19 | 195.10 | 1536.609 | 60 | 193.05 | 1552.926
20 | 195.05 | 1537.003 | 61 | 193.00 | 1553.33
21 | 195.00 | 1537.40 | 62 | 192.95 | 1553.73
22 | 194.95 | 1537.79 | 63 | 192.90 | 1554.134
23 | 194.90 | 1538.186 | 64 | 192.85 | 1554.537
24 | 194.85 | 1538.581 | 65 | 192.80 | 1554.940
25 | 194.80 | 1538.976 | 66 | 192.75 | 1555.343
26 | 194.75 | 1539.371 | 67 | 192.70 | 1555.747
27 | 194.70 | 1539.766 | 68 | 192.65 | 1556.151
28 | 194.65 | 1540.162 | 69 | 192.60 | 1556.555
29 | 194.60 | 1540.557 | 70 | 192.55 | 1556.959
30 | 194.55 | 1540.953 | 71 | 192.50 | 1557.36
31 | 194.50 | 1541.35 | 72 | 192.45 | 1557.77
32 | 194.45 | 1541.75 | 73 | 192.40 | 1558.173
33 | 194.40 | 1542.142 | 74 | 192.35 | 1558.578
34 | 194.35 | 1542.539 | 75 | 192.30 | 1558.983
35 | 194.30 | 1542.936 | 76 | 192.25 | 1559.389
36 | 194.25 | 1543.333 | 77 | 192.20 | 1559.794
37 | 194.20 | 1543.730 | 78 | 192.15 | 1560.200
38 | 194.15 | 1544.128 | 79 | 192.10 | 1560.606
39 | 194.10 | 1544.526 | 80 | 192.05 | 1561.013
40 | 194.05 | 1544.924 | 81 | 192.00 | 1561.42
41 | 194.00 | 1545.32 | 82 | 191.95 | 1561.83

1.2T-MXP Card

Table 17. Feature History

Feature | Release Information | Description
1.2T-MXP Card | Cisco NCS 2000 Release 12.2 | This card triples the per-slot throughput of the NCS 2000 system from 200 Gbps to 600 Gbps. The DCO trunk ports of the card can support up to 400-Gbps data rates with multiple modulation formats, encoding types, and FEC options. This card can be installed in the NCS 2006 and NCS 2015 chassis.

In this chapter, "1.2T-MXP" refers to the NCS2K-1.2T-MXP card.

The 1.2Tbps Transponder or Muxponder line card (1.2T-MXP) is the first line card to have a 400G trunk and 400GE client interface in the NCS 2000 platform. It is a two-slot card that triples the per slot throughput of the NCS 2000 system from 200 Gbps to 600 Gbps.

The 1.2T-MXP card has three QSFPDD56 or QSFP28 client ports, five QSFP28 client ports, and three CFP2 DWDM Digital Coherent Optics (DCO) trunk ports. Each QSFPDD56 client port can alternatively be used as a QSFP28 port. The DCO ports support up to 400-Gbps data rates with multiple modulation formats, encoding types, and FEC options. You can configure the 1.2T-MXP card in different ways, with a maximum of 1.2 Tbps of total traffic on the client side (QSFP-DD/QSFP28) and 1.2 Tbps of total traffic on the trunk side.

The 1.2T-MXP card can be installed in:

  • NCS 2006 chassis that can accommodate a maximum of three 1.2T-MXP cards.

  • NCS 2015 chassis that can accommodate a maximum of seven 1.2T-MXP cards.

The 1.2T-MXP card coexists with other NCS 2000 line cards without restricting their functionalities. However, it does not interoperate with any other line cards.

Operating Modes and Slice Definition in the 1.2T-MXP Card

Operating Modes

You can configure the 1.2T-MXP card in the TXPMXP mode. The following are the suboperating modes:

  • OPM-400G―Enables 400GE client on the QSFP DD port, when the trunk is at 400G rate.

  • OPM-4x100G-DD―Enables four 100GE clients that use four-level Pulse Amplitude Modulation (PAM4), on one QSFP DD port, when the trunk is at 400G rate.

  • OPM-3x100G-DD―Enables three 100GE clients that use PAM4, on one QSFP DD port, when the trunk is at 300G rate.

  • OPM-4x100G―Enables 100GE clients over four QSFP28 ports, when the trunk is at 400G rate.

  • OPM-3x100G―Enables 100GE clients over three QSFP28 ports, when the trunk is at 300G rate.

The slices are configured based on the required data path configuration. The following table explains the suboperating modes that are enabled on trunk ports for each slice:

Table 18. Sub-Operating Modes

Slice | Trunk Port | Supported Sub-Operating Modes
Slice 1 | 9 | OPM-400G, OPM-4x100G-DD, OPM-3x100G-DD
Slice 2 | 10 | OPM-400G, OPM-4x100G, OPM-4x100G-DD, OPM-3x100G
Slice 3 | 11 | OPM-400G, OPM-4x100G, OPM-3x100G
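A provisioning pre-check can encode Table 18 directly. The sketch below is illustrative; the validation function is hypothetical and not an SVO API.

```python
# Supported sub-operating modes per 1.2T-MXP slice (Table 18).
SUPPORTED_MODES = {
    1: {"OPM-400G", "OPM-4x100G-DD", "OPM-3x100G-DD"},             # trunk port 9
    2: {"OPM-400G", "OPM-4x100G", "OPM-4x100G-DD", "OPM-3x100G"},  # trunk port 10
    3: {"OPM-400G", "OPM-4x100G", "OPM-3x100G"},                   # trunk port 11
}

def validate_slice_mode(slice_id: int, mode: str) -> None:
    if mode not in SUPPORTED_MODES[slice_id]:
        raise ValueError(f"{mode} is not supported on slice {slice_id}")

validate_slice_mode(1, "OPM-3x100G-DD")      # passes
try:
    validate_slice_mode(3, "OPM-4x100G-DD")  # not allowed on slice 3
except ValueError as err:
    print(err)
```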

You can use the 1.2T-MXP card in different configurations. The following table describes the combinations of suboperating modes, the trunk ports, and client ports for each slice:


Note

In the combinations described, you can also choose to configure only one of the slices 1–3.


Table 19. Different Combinations of Sub-Operating Modes

Configuration | Sub-Operating Modes | Trunk Port | Client Ports
400GE Transponder (3x400G trunk, 3x400GE client with 3xQSFP-DD pluggables) | Slice 1: OPM-400G | 9 | 6
 | Slice 2: OPM-400G | 10 | 7
 | Slice 3: OPM-400G | 11 | 8
12x100GE Muxponder (3x400G trunk, 12x100GE client with 2xQSFP-DD breakout + 4xQSFP28 pluggables) | Slice 1: OPM-4x100G-DD | 9 | Port 6 lanes (6.1, 6.2, 6.3, 6.4)
 | Slice 2: OPM-4x100G-DD | 10 | Port 7 lanes (7.1, 7.2, 7.3, 7.4)
 | Slice 3: OPM-4x100G | 11 | 2, 3, 4, 8
9x100GE Muxponder (3x300G trunk, 9x100GE client with 1xQSFP-DD breakout + 6xQSFP28 pluggables) | Slice 1: OPM-3x100G-DD | 9 | Port 6 lanes (6.1, 6.2, 6.3; 6.4 is unused)
 | Slice 2: OPM-3x100G | 10 | 1, 5, 7
 | Slice 3: OPM-3x100G | 11 | 3, 4, 8
Mixed Configuration 1 | Slice 1: OPM-400G | 9 | 6
 | Slice 2: OPM-4x100G-DD | 10 | Port 7 lanes (7.1, 7.2, 7.3, 7.4)
 | Slice 3: OPM-3x100G | 11 | 3, 4, 8

Key Features of 1.2T-MXP

The key features are:

  • Enhanced muxponder and transponder capabilities in a double-slot card with 100GE or 400GE client types.

  • O-FEC encoding on the trunk interface.

  • Nyquist filtering for OSNR.

  • Supports configurable modulation formats such as 300G 8QAM, 400G 16QAM, PAM4, and 16QAM in both Open ROADM and 400ZR+ framing modes.

  • Flex spectrum support with Nyquist filtering.

  • ZR+ based framing on trunk.

  • 3x400GE client bandwidth using QSFP-DD or 12x100GE client bandwidth using QSFP-DD (break-out mode)

  • LLDP support on 100GE or 400GE clients.

  • Supports secure boot.

  • Alarms for the ZR interface; alarms, performance monitoring, and statistics for the GE interfaces and optical pluggables.

  • Diagnostics and maintenance support.

  • Supports GroupId that uniquely identifies a group of physical ports in a ZR frame.

Supported Pluggables

The supported pluggables are:

  • Three CFP2 400G DCO trunk pluggables

  • Eight QSFP28 or three QSFP-DD pluggables

  • Five QSFP28 client pluggables

Limitations of 1.2T-MXP Card

The following are the limitations of the 1.2T-MXP card:

  • Optics PMs are not supported on the Active Optical Cable (AOC) PPM.

  • GroupID feature is not supported for 400GE transponder configuration.

  • I-port management is not supported.

  • OTN is not supported on trunk.

  • Traffic does not go down when the FlexO-SR Interface Trail Trace Identifier Mismatch (FOIC-TIM) alarm is raised.

  • There might be traffic fluctuations affecting some switches and routers due to the following scenario:

    When there is a 400GE or 4x100GE traffic congestion on the pluggables, an electrical squelch or unsquelch is performed for one second on the transmit side of the pluggables. This operation relocks the transmit Clock and Data Recovery (CDR) of the pluggables. This results in an out-of-range frequency on the client for four to six seconds before the traffic clears.

    This issue occurs in pluggables such as QSFP-DD DR4, QDD-400G-FR4, QDD-400G-LR8, QDD-400-AOC1M, QDD-400-AOC2M, QDD-400-AOC3M, QDD-400-AOC5M, QDD-400-AOC7M, QDD-400-AOC10M, and QDD-400-AOC15M.

OTU2-XP Card

Table 20. Feature History

Feature Name | Release Information | Feature Description
OTU2-XP Card | Cisco NCS 2000 Release 12.2 | The OTU2-XP card simplifies the integration and transport of 10 Gigabit Ethernet (10GE), OC192, and STM64 services into metro and regional service provider networks. This card can be installed in the Cisco NCS 2006 and NCS 2015 chassis.

In this chapter, "OTU2-XP" refers to the 15454-M-OTU2-XP card.

The OTU2-XP card is a single-slot card that can be installed in slots 2 to 7 in the NCS 2006 chassis and slots 2 to 16 in the NCS 2015 chassis. All four ports on the card are ITU-T G.709 compliant and support 40 channels (wavelengths) at 100-GHz channel spacing in the C-band (that is, the 1530.33 to 1561.42 nm wavelength range).

The following operating modes are supported on the OTU2-XP card.

  • OTU2XP-2XP—This is the default card mode. In this mode, both the port pairs (1-3 and 2–4) are configured in Transponder mode.

  • OTU2XP-2REG—In this mode, both the port pairs (1-3 and 2–4) are configured in Regenerator mode.

  • OTU2XP-XP-REG—In this mode, port pair 1–3 is configured in Transponder mode and port pair 2–4 is configured in Regenerator mode.

  • OTU2XP-REG-XP—In this mode, port pair 1–3 is configured in Regenerator mode and port pair 2–4 is configured in Transponder mode.

Table 21. OTU2-XP Card Configurations and Ports

Configuration | Port 1 | Port 2 | Port 3 | Port 4
2 x 10G transponder | Client port 1 | Client port 2 | Trunk port 1 | Trunk port 2
2 x 10G regenerator (with enhanced FEC (E-FEC) only on one port) | Trunk port 1 | Trunk port 2 | Trunk port 1 | Trunk port 2

Key Features

The OTU2-XP card has the following key features:

  • 10G transponder and regenerator capability.

  • Four ports with multirate (OC-192/STM-64, 10G) client interface. The client signals are mapped into an ITU-T G.709 OTU2 signal using standard ITU-T G.709 multiplexing.

  • The supported payloads are TEN-GE, OTU2, OTU2e, OC192, and STM64. 10G FC and IB-5 payloads are not supported.

  • The default configuration is transponder, with trunk ports configured as ITU-T G.709 standard FEC.

  • In transponder or regenerator configuration, if one of the ports is configured, the corresponding port is automatically created.

  • In regenerator configuration, only ports 3 and 4 can be configured as E-FEC (Standard ITU-T G.975.1 (subclause I.7)). The ports 1 and 2 can be configured only with standard FEC (Standard ITU-T G.709).

  • When the port pair 1–3 or 2–4 is configured as regenerator, the default configuration on ports 3 and 4 is automatically set to standard FEC.

  • The following OTU2 link rates are supported on the OTU2-XP trunk port (the sketch after this list shows the rate arithmetic):

    • Standard G.709 (10.70923 Gbps) when the client is provisioned as “SONET” (including 10G Ethernet WAN PHY) (9.95328 Gbps).

    • G.709 overclocked to transport 10GE as defined by ITU-T G. Sup43 Clause 7.2 (11.0491 Gbps) and ITU-T G. Sup43 Clause 7.1 (11.0957 Gbps) when the client is provisioned as “10G Ethernet LAN Phy” (10.3125 Gbps).

  • The Fixed Stuff parameter is applicable only in transponder operating mode with TEN-GE payload. The trunk port is OTU2e when this parameter is set to Enable (default).
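The listed link rates follow from the standard OTN wrapping ratios: the OTU2 frame carries the client payload at 255/237 of the client rate, and the ITU-T G.Sup43 Clause 7.2 mapping uses 255/238. The arithmetic below reproduces the figures above.

```python
def otu2_line_rate(client_gbps: float, divisor: int = 237) -> float:
    """OTU2 line rate = client rate x 255/237 (255/238 for G.Sup43 7.2)."""
    return round(client_gbps * 255 / divisor, 5)

print(otu2_line_rate(9.95328))       # 10.70923 -> standard G.709 OTU2
print(otu2_line_rate(10.3125))       # 11.09573 -> G.Sup43 Clause 7.1
print(otu2_line_rate(10.3125, 238))  # 11.04911 -> G.Sup43 Clause 7.2
```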

OTU2-XP Card Configuration Rules

The following rules apply to OTU2-XP card configurations:

  • When you provision port pairs 1-3 or 2-4, they come up in the default Transponder mode.

  • The port pairs 1–3 and 2–4 can be configured in different modes only when the card configuration is Mixed. If the card configuration is Mixed, you must choose different modes on port pairs 1–3 and 2–4 (that is, one port pair in Transponder mode and the other port pair in Regenerator mode).

    The Mixed card configurations are as follows:

    • OTU2XP-XP-REG—In this mode, port pair 1–3 is configured in Transponder mode and port pair 2–4 is configured in Regenerator mode.

    • OTU2XP-REG-XP—In this mode, port pair 1–3 is configured in Regenerator mode and port pair 2–4 is configured in Transponder mode.

  • If the card is in Transponder configuration, you can change the configuration to Regenerator or Mixed.

  • If the card is in Regenerator configuration and you have configured only one port pair, then configuring payload rates for the other port pair automatically changes the card configuration to Mixed, with the new port pair in Transponder mode.

  • If the card is in Regenerator configuration, you cannot change the payload rate of the port pairs. You must change the configuration to Transponder, change the payload rate, and then move the card configuration back to Regenerator.

  • If the card is in Transponder configuration with TEN-GE payload, the trunk port interfaces 3 and 4 are created as OTU2e.

  • If the card is in Regenerator configuration with TEN-GE payload, all the port interfaces are created as OTU2e.

  • If the card is in Transponder configuration, the OTN Disable parameter can be set to True (G.709 disabled). In this case, both port pairs (1–3 and 2–4) do not have OTU and ODU interfaces.

  • If any of the affected ports are in IS (ANSI) or Unlocked-enabled (ETSI) state, you cannot change the card configuration.

ODU Transparency

A feature of the OTU2-XP card is the ability to configure the ODU overhead bytes (EXP bytes and RES bytes 1 and 2) using the ODU Transparency parameter. SVO supports ODU Transparency parameter on OTU client interfaces 1 and 2 when the card mode is set as Regenerator. The valid values are Disable (Cisco Extended Use) or Enable (Transparent Standard Use).

  • Cisco Extended Use—ODU overhead bytes are terminated and regenerated on both the ports of the regenerator group.

  • Transparent Standard Use—ODU overhead bytes are transparently passed through the card. This option allows the OTU2-XP card to act transparently between the two trunk ports when the card is configured in Regenerator mode.

The ODU Transparency parameter is configurable only for Regenerator configuration. For Transponder and Mixed configurations, this parameter defaults to Interop-None and cannot be changed.

Installing the Card

Use this task to install the card.


Warning

During this procedure, wear grounding wrist straps to avoid ESD damage to the card. Do not directly touch the backplane with your hand or any metal tool, or you could shock yourself. Statement 94



Warning

Class 1 laser product. Statement 1008



Warning

Invisible laser radiation may be emitted from the end of the unterminated fiber cable or connector. Do not view directly with optical instruments. Viewing the laser output with certain optical instruments (for example, eye loupes, magnifiers, and microscopes) within a distance of 100 mm may pose an eye hazard. Statement 1056



Warning

Class I (CDRH) and Class 1M (IEC) laser products. Statement 1055



Note

You can install the cards on the NCS 2006 chassis that is mounted with either of the following:

  • Standard brackets on a 19-inch or 23-inch ANSI rack configuration or on an ETSI rack configuration.

  • Inlet air deflectors on a 23-inch ANSI rack configuration or on an ETSI rack configuration. The exhaust air deflectors cannot be used.




Note

For US installations, complies with the US Food and Drug Administration Code of Federal Regulations Title 21, Sections 1040.10 and 1040.11, except for deviations pursuant to Laser Notice No. 50, dated July 26, 2001.



Note

If protective clips are installed on the backplane connectors of the cards, remove the clips before installing the cards.



Note

If you install a card incorrectly, the FAIL LED flashes continuously.



Note

Until a card is provisioned, the card is in the standby condition and the ACT or STBY LED remains amber in color.


Procedure


Step 1

Open the card latches or ejectors.

Step 2

Use the latches or ejectors to firmly slide the card along the guide rails until the card plugs into the receptacle at the back of the slot.

Step 3

Verify that the card is inserted correctly and simultaneously close the latches or ejectors on the card.

Note 

It is possible to close the latches and ejectors when the card is not plugged into the backplane. Ensure that you have inserted the card all the way.

Note 

If you install the card in the wrong slot, an MEA alarm is raised. To clear this alarm, open the latches, slide out the card, then insert it in the correct slot.

After you install the card, the FAIL, ACT, and SF LEDs go through a sequence of activities. They turn on, turn off, and blink at different points. After approximately two or three minutes, the ACT or ACT/STBY LED turns on. The SF LED might persist until all card ports connect to their far-end counterparts and a signal is present.

Note 

Until a card is provisioned, it is in the standby condition, and the ACT or STBY LED remains amber in color.

Step 4

If the card does not boot up properly or the LEDs do not progress through the activities described earlier, check the following:

  • When a physical card type does not match the type of card that is provisioned for that slot in the nodal craft, the card might not boot, and the nodal craft displays an MEA alarm. If the card does not boot, open the nodal craft and ensure that the slot is not provisioned for a different card type before assuming that the card is faulty.

  • If the red FAIL LED does not turn on, check the power.

  • If you insert a card into a slot that is provisioned for a different card, all LEDs turn off.

  • If the red FAIL LED is on continuously or the LEDs behave erratically, the card is not installed properly.

    If any of these conditions are present, remove the card and repeat Steps 1 to 3. If the card does not boot up properly the second time, contact your next level of support.

Step 5

If the card requires a pluggable, complete one of the following tasks:

  • DLP-G723 Install PPM on a Line Card—Complete this task to install the physical pluggable post module into the transponder or muxponder card.

  • Provision PPM—(Optional) Complete this task if you do not have the physical pluggable and must preprovision the PPM slot.

    Note 

    Pluggable port modules are hot-swappable I/O devices that plug into a transponder or muxponder card.


Provision PPM

Use this task to provision a PPM on a line card.

Before you begin

Procedure


Step 1

Click the Provisioning > Pluggable Port Modules tabs.

Step 2

In the Pluggable Port Modules area, click the + button.

The Create PPM dialog box appears.

Step 3

Choose the PPM port from the PPM drop-down list, and click Apply.

The newly created PPM appears in the Pluggable Port Modules area.

Step 4

Repeat the steps to provision additional PPMs, if needed.


Provision an Operating Mode

Use this task to provision an operating mode on the card.

The following table lists the operating modes that are supported on the cards.

Card | Operating Mode | Peer Cards | Client-Trunk Ports
MR-MXP | TXP-100G | 200G-CK-LC card or 100GS-CK-C card | -
MR-MXP | MXP-100G | 200G-CK-LC card or 100GS-CK-C card | -
MR-MXP | 100G-B2B-CPAK | MR-MXP | CPAK
MR-MXP | 100G-B2B-SFP-QSFP | MR-MXP | 2xSFP+2xQSFP
MR-MXP | MXP-2X40G-2X10G | 200G-CK-LC | -
100G-CK-C | TXP-100G, RGN-100G | 100G-CK-C, 100G-LC-C | -
100G-CK-C | MXP-2x40G | - | -
100G-LC-C | TXP-100G, RGN-100G | 100G-LC-C, 100G-CK-C | -
100GS-CK-C | TXP-100G, RGN-100G | 200G-CK-LC or 100GS-CK-C | -
100GS-CK-C | MXP-CK-100G-SFP-QSFP | MR-MXP | 2xSFP+2xQSFP
100GS-CK-C | MXP-CK-100G-CPAK | MR-MXP | CPAK
100GS-CK-C | MXP-200G | MR-MXP (skip card is MR-MXP) | -
100GS-CK-C | MXP-10x10G-100G | 10x10G-LC (skip card is MR-MXP) | -
10x10G-LC | MXP-10x10G | 100G-LC-C, 100G-CK-C, 100GS-CK-C, 200G-CK-C | -
10x10G-LC | RGN-10G | - | 1–2, 3–4, 5–6, 7–8, 9–10
10x10G-LC | TXP-10G | - | 1–2, 3–4, 5–6, 7–8, 9–10
10x10G-LC | Low Latency | - | 1–2, 3–4, 5–6, 7–8, 9–10
10x10G-LC | Fanout-10X10G | - | -
10x10G-LC | TXPP-10G | - | 3–4–6, 7–8–10
200G-CK-LC | TXP-100G, RGN-100G | 200G-CK-C or 100GS-CK-C | -
200G-CK-LC | MXP-200G | MR-MXP (skip card is MR-MXP; the OPM-10x10G or OPM-2x40G-2x10G sub OpMode is required) | -
200G-CK-LC | MXP-CK-100G-CPAK | MR-MXP | CPAK
200G-CK-LC | MXP-CK-100G-SFP-QSFP | MR-MXP | 2xSFP+2xQSFP
200G-CK-LC | MXP-10x10G-100G | 10x10G-LC + MR-MXP | -
CFP-LC | CFP-TXP | One or two 100G-LC-C, 100G-CK-C | -
CFP-LC | CFP-MXP | Only one 100G-LC-C, 100G-CK-C | -
400G-XP | REGEN-200G | - | No slices
400G-XP | REGEN-100G | - | No slices
400G-XP | MXP-2x150G | - | Three slices
400G-XP | MXP | - | Four slices
1.2T-MXP | TXPMXP | - | Three slices

Before you begin

Procedure


Step 1

Click the Provisioning > Card Modes tabs.

Step 2

In the Card Modes area, click the + button.

The Create Card Mode dialog box appears.

Step 3

Choose the operating mode from the Card Mode drop-down list.

The operating mode options vary depending on the card.

The supported card operating modes for the 400G-XP are REGEN-200G, REGEN-100G, MXP-2x150G, and MXP. For the REGEN card modes on the 400G-XP, both trunk ports are configured with the same rate (100G or 200G); the trunk port configuration that is created for CFP2-11 is copied to CFP2-12. For the MXP-2x150G card mode on the 400G-XP, both trunk ports are configured at 150G.

Step 4

Choose the Sub Mode from the Slice drop-down lists.

These fields are visible only for the operating modes that are supported on the 400G-XP card.

Step 5

Choose the peer card(s) from the Peer drop-down list.

This field is visible only if a peer card or peer cards are required for the configuration.

Step 6

Choose the sub mode from the drop-down list.

This field is visible only for the MXP-200G operating mode.

Step 7

Choose the skip peer card from the Peer drop-down list.

This field is visible only if a skip card is required for the configuration. This field is applicable to the MXP-200G and MXP-10x10G-100G operating modes.

Step 8

Select the port pair from the drop-down list(s).

This field is visible only if a port pair is required for the configuration. The 10x10G-LC card supports a maximum of five TXP-10G modes, two TXPP-10G modes, five RGN-10G modes, five LOW LATENCY modes, or a combination of five TXP-10G, RGN-10G, and LOW LATENCY modes.

For the TXPP-10G mode configuration on the 10x10G-LC card, the client ports can be port 3, port 7, or both. You can select ports 4 and 6 as trunk ports when port 3 is selected as the client port. You can select ports 8 and 10 as trunk ports when port 7 is selected as the client port.

Step 9

Click Apply.

The selected operating mode is provisioned on the card.


What to do next

Complete the Provision Pluggable Ports task.

Provision an Operating Mode

Use this task to provision an operating mode on the 40E-MXP-C, 40EX-MXP-C, or 40ME-MXP-C card.

The following operating modes are supported on the cards.

  • XM-40G-OTU3E2—This is the default card mode. Overclock is enabled on the trunk port.

  • XM-40-OTU3—Overclock is disabled on the trunk port.

Before you begin

Procedure


Step 1

Click the Provisioning > Card Mode tabs.

Step 2

Choose the operating mode from the Card Mode drop-down list.

Step 3

Choose the timing source from the TimeSource drop-down list.

The values are:
  • System-clock—The cards synchronize to the control cards.

  • Internal-clock—The cards automatically synchronize to one of the input client interface clocks.

Step 4

Click Apply.

The selected operating mode is provisioned on the card.


What to do next

Complete the Provision Pluggable Ports task.

Provision an Operating Mode on the OTU2-XP Card

Use this task to provision an operating mode on the OTU2-XP card.


Note

Enhanced FEC and 10GE LAN to WAN operating modes are not supported by SVO.


Before you begin

Procedure


Step 1

Click the Provisioning > Card Mode tabs.

Step 2

Choose the operating mode from the Card Mode drop-down list.

Step 3

Click Apply.

The selected operating mode is provisioned on the card.


Provision Pluggable Ports

Use this task to provision the payloads supported on the card.

Before you begin

Procedure


Step 1

Click the Provisioning > Pluggable Ports tabs.

Step 2

In the Pluggable Ports area, click the + button.

The Create Port dialog box appears.

Step 3

Choose the port number from the Port ID drop-down list.

Step 4

Choose the supported payload from the Port Type drop-down list.

Note 

For the 1.2T-MXP card, if you choose a payload that is not supported by the suboperating mode of the pluggable, an error message appears.

Step 5

Choose the number of lanes from the drop-down list.

This field is visible only in specific configurations.

Step 6

Click Apply.

Step 7

Repeat Step 1 through Step 6 to configure the rest of the port rates as needed.


Enable Proactive Protection

Use this task to modify the proactive protection settings of the card.

Before you begin

Procedure


Step 1

Click the Provisioning > Proactive Protection tabs.

Step 2

Modify the required settings as described in the following table.

Table 22. Proactive Protection Regen Settings

Parameter

Description

Options

Port

(Display only) Displays the port name.

Trigger Threshold

Sets the maximum BER threshold to trigger proactive protection.

  • 1E-3

  • 9E-2 to 1E-2

  • 9E-3 to 1E-3

  • 9E-4 to 1E-4

  • 9E-5 to 1E-5

  • 9E-6 to 1E-6

  • 9E-7 to 1E-7

Trigger Window (ms)

Sets the duration when BER is monitored before triggering the proactive protection.

The trigger window value must be a multiple of:

  • 10 ms for trigger thresholds between 1E-3 and 6E-6

  • 100 ms for trigger thresholds between 5E-6 and 1E-7

The trigger window must be less than or equal to 500 ms for trigger thresholds between 1E-3 and 6E-6, and less than or equal to 3900 ms for trigger thresholds between 5E-6 and 1E-7 (see the sketch after this table).

Time in milliseconds.

Revert Threshold

Sets the revert threshold value of BER.

Note 

The revert threshold settings must be less than the trigger threshold values.

  • 1E-4

  • 9E-3 to 1E-3

  • 9E-4 to 1E-4

  • 9E-5 to 1E-5

  • 9E-6 to 1E-6

  • 9E-7 to 1E-7

  • 9E-8 to 5E-8

Revert Window (ms)

Sets the duration for which the BER is monitored below the revert threshold value before the proactive protection provided to the router is removed.

The revert window value must be at least 2000 ms and a multiple of:

  • 10 ms for a revert threshold of 1E-4 to 6E-7.

  • 100 ms for a revert threshold of 5E-7 to 5E-8.

The revert window must be less than or equal to 3900 ms.

Time in milliseconds.

Enable Proactive Protection

Enables proactive protection.

  • Disabled

  • FRR Proactive Protection

  • Pre-FEC PSM Proactive Protection
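The window constraints in Table 22 can be validated before the settings are applied. A minimal sketch, assuming the threshold is expressed as a numeric BER value (the helper is illustrative, not an SVO API):

```python
def trigger_window_valid(window_ms: int, threshold_ber: float) -> bool:
    """Table 22 rules: thresholds from 1E-3 down to 6E-6 need a multiple
    of 10 ms, capped at 500 ms; thresholds from 5E-6 down to 1E-7 need a
    multiple of 100 ms, capped at 3900 ms."""
    if threshold_ber >= 6e-6:
        return window_ms % 10 == 0 and window_ms <= 500
    return window_ms % 100 == 0 and window_ms <= 3900

print(trigger_window_valid(200, 1e-3))   # True
print(trigger_window_valid(600, 1e-3))   # False: exceeds 500 ms
print(trigger_window_valid(1500, 1e-7))  # True
```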

Step 3

Click Apply.


Provision ODU Interfaces

Use this task to modify the ODU settings of the card.

Before you begin

Procedure


Step 1

Click the Provisioning > ODU Interfaces tabs.

Step 2

Modify the required settings as described in the following table.

Table 23. ODU Interface Settings

Parameter

Description

Options

Port

(Display only) Displays the port name.

SF BER

Sets the signal fail (SF) bit error rate (BER).

Only 1E-5 is allowed.

SD BER

Sets the signal degrade (SD) bit error rate (BER).

  • 1E-5

  • 1E-6

  • 1E-7

  • 1E-8

  • 1E-9

Squelch Mode

When a LOS is detected on the near-end client input, the far-end client laser is turned off. It is said to be squelched.

Alternatively, an AIS can be invoked.

The OTU2-XP card supports Squelch Mode parameter when the card mode is set as Regenerator. The valid values are Squelch and AIS. When the card mode is set to Transponder or Mixed, the Squelch Mode cannot be changed and the parameter defaults to the Squelch value.

  • Squelch

  • AIS

SquelchHold Off Time

Sets the period in milliseconds that the client interface waits for resolution of issues on the trunk side. The client squelching starts after this period.

  • Disable

  • 50 ms

  • 100 ms

  • 250 ms

  • 500 ms

Framing Type (Only OTU2-XP)

(Display only) Contains details of the encapsulated payload inside the OTN framer.

ETHERNET

Step 3

Click Apply.


Provision OTU Interfaces

Use this task to modify the OTU settings of the card.

Before you begin

Procedure


Step 1

Click the Provisioning > OTU Interfaces tabs.

Step 2

Modify the required settings as described in the following table.

Table 24. OTU Interface Settings

Parameter

Description

Options

Port

(Display only) Displays the port name.

HD FEC

Sets the OTN lines to forward error correction (FEC).

Note 

When you change the FEC mode, a pop-up appears, alerting you that the change impacts traffic. Confirm whether you want to proceed.

  • DISABLE_FEC

  • EFEC

  • EFEC_14

  • EFEC_17

  • HG_FEC_20

  • HG_FEC_7

  • STANDARD_FEC

Note 

Only the FEC modes applicable for the card are displayed.

Interop Mode

Enables interoperability between line cards and other vendor interfaces.

  • InteropNone

  • InteropEnable

Supports Sync

(Display only) Displays the SupportsSync card parameter. If the value is true, the card is provisioned as an NE timing reference.

  • true

  • false

Sync Msg In

Sets the EnableSync card parameter. Enables synchronization status messages (S1 byte), which allow the node to choose the best timing source.

  • true

  • false

Admin SSM In

Overrides the synchronization status message (SSM) and the synchronization traceability unknown (STU) value. If the node does not receive an SSM signal, it defaults to STU.

  • G811

  • STU

  • G812T

  • G812L

  • SETS

  • DUS

  • PRS

  • ST2

  • ST3E

  • ST3

  • SMC

  • ST4

  • RES

  • STU_SDH

  • DUS_SDH

  • SSM_FAILED

  • RES_SDH

  • TNC

ODU Transparency (Only OTU2-XP)

Configures the ODU overhead bytes (EXP bytes and RES bytes 1 and 2). This parameter is supported only when the card is configured in Regenerator mode.

The two options available for this parameter are:

  • Transparent Standard Use—ODU overhead bytes are transparently passed through the card. This option allows the OTU2-XP card to act transparently between the two trunk ports.

  • Cisco Extended Use—ODU overhead bytes are terminated and regenerated on both the ports of the regenerator group.

  • Enabled (Transparent Standard Use)

  • Disabled (Cisco Extended Use)

Step 3

Click Apply.


Provision G.709 Thresholds

Use this task to provision the G.709 PM thresholds for the OTN ports.

Before you begin

Procedure


Step 1

Click the Provisioning > G.709 Thresholds tabs.

Step 2

Choose the value for the G.709 PM thresholds, and click Apply.

You can set the thresholds for Near End or Far End, for 15 minutes or 1 day intervals, or for SM (OTUk) or PM (ODUk).

Table 25. G.709 PM Thresholds

Parameter

Description

ES

Errored Seconds shows the number of errored seconds recorded during the PM time interval.

SES

Severely Errored Seconds shows the severely errored seconds recorded during the PM time interval.

UAS

Unavailable Seconds shows the unavailable seconds recorded during the PM time interval.

BBE

Background block error shows the number of background block errors that are recorded during the PM time interval.

FC

Failure Counter shows the number of failure counts recorded during the PM time interval.


Provision FEC Thresholds

Use this task to provision the FEC thresholds for the card.

Before you begin

Procedure


Step 1

Click the Provisioning > FEC Thresholds tabs.

Step 2

Choose the value for the FEC PMs and click Apply.

You can set the FEC thresholds for 15 minutes or one-day intervals.

The possible PM types are:

  • BIT-EC—Sets the value for bit errors corrected.

  • UNC-WORDS—Sets the value for uncorrectable words.


Provision Trail Trace Monitoring

This task provisions the trail trace monitoring parameters that are supported for both the OTU and ODU payloads. Trail trace monitoring is supported on all the cards except CFP-LC.

Before you begin

Procedure


Step 1

Click the Provisioning > Trail Trace Monitoring tabs.

Step 2

From the Level drop-down list, choose Section to list all the OTU interfaces and Path to list all the ODU interfaces.

Step 3

Modify required settings as described in the following table.

Table 26. Trail Trace Identifier Settings

Parameter

Description

Options

Port

Displays the port number.

Tx-SAPI

(All cards except 40E-MXP-C and OTU2-XP)

Displays the current Source Access Point Identifier (SAPI) transmit string of the TTI or sets a new transmit string.

0–15 bytes

Tx-DAPI

(All cards except 40E-MXP-C and OTU2-XP)

Displays the current Destination Access Point Identifier (DAPI) transmit string of the TTI or sets a new transmit string.

0–15 bytes

Tx-Operator

(All cards except 40E-MXP-C and OTU2-XP)

User operator data of the TTI.

0–32 bytes

Legacy Tx-TTI

(Only 40E-MXP-C and OTU2-XP)

Displays the current transmit string of the TTI or sets a new transmit string.

0-64 bytes

Expected-SAPI

(All cards except 40E-MXP-C and OTU2-XP)

Displays the current expected SAPI string or sets a new expected string.

0–15 bytes

Expected-DAPI

(All cards except 40E-MXP-C and OTU2-XP)

Displays the current expected DAPI string or sets a new expected string.

0–15 bytes

Legacy Expected-TTI

(Only 40E-MXP-C and OTU2-XP)

Displays the current expected string or sets a new expected string.

0-64 bytes

Rx-SAPI

(All cards except 40E-MXP-C and OTU2-XP)

(Display only) Displays the current received SAPI string.

Rx-DAPI

(All cards except 40E-MXP-C and OTU2-XP)

(Display only) Displays the current received DAPI string.

Rx-Operator

(All cards except 40E-MXP-C and OTU2-XP)

(Display only) User operator data of the TTI.

Legacy Rx-TTI

(Only 40E-MXP-C and OTU2-XP)

(Display only) Displays the current received string.

Alarm Propagation

If a discrepancy is detected between the expected and received trace, it raises an alarm. If set to True, the alarm is propagated downstream to the other nodes.

  • True

  • False

Detect Mode

Sets the mode for detecting the discrepancy between the expected and received trace.

  • Disabled

  • Enabled

  • SAPI

    (All cards except 40E-MXP-C and OTU2-XP)

  • DAPI

    (All cards except 40E-MXP-C and OTU2-XP)

  • SAPI-and-DAPI

    (All cards except 40E-MXP-C and OTU2-XP)

Step 4

Click Apply.


Provision ZR Plus Interfaces

Use this task to provision the parameters for the ZR Plus interfaces of the 1.2T-MXP card.

Before you begin

Procedure


Step 1

Click the Provisioning > ZR Plus Interfaces tabs.

Step 2

Modify any of the ZR Plus settings as described in the following table. These parameters depend on the card mode.

Table 27. Card ZR Plus Settings

Parameter

Description

Options

Port

(Display only) Displays the port number

Squelch Mode

(Display only) Displays the squelch mode

  • LF

Squelch Hold Off Time

Sets the period in milliseconds that the client interface waits for resolution of issues on the trunk side. The client squelching starts after this period

  • Disable

FEC

Sets the FEC mode

OFEC_15_DE_ON

GroupId

Sets the GroupId that uniquely identifies a group of physical ports in a ZR frame. This makes sure that noncompliant groups do not interoperate. When a mismatch in the group is identified, GIDM alarm is raised.

1–255

Step 3

Click Apply.


Provision ZR Plus Trail Trace Monitoring

This task provisions the trail trace monitoring parameters that are supported for the ZR plus payloads on the 1.2T-MXP card.

Before you begin

Procedure


Step 1

Click the Provisioning > ZR Plus Trail Trace Monitoring tabs.

Step 2

Modify any of the ZR Plus settings as described in the following table.

Table 28. ZR plus Trail Tracing Settings

Parameter

Description

Options

Port

(Display only) Displays the port number.

Send-Tti

Sets the transmit TTI String.

0–32 Bytes

Expected-Tti

Sets the expected TTI String.

0–32 Bytes

Received-Tti

(Display only) Displays the received TTI String.

0–32 Bytes

Note 
  • When the trunk port is in OOS-DSBL state, its received TTI is not displayed in the GUI.

Step 3

Click Apply.


Provision Optical Channels

Use this task to provision the parameters for the optical channels on the card.

Before you begin

Procedure


Step 1

Click the Provisioning > Optical Channel tabs.

Step 2

Modify the required settings as described in the following table.

Table 29. Optical Channel Settings

Parameter

Description

Options

Port

(Display only) Displays the port name.

Reach

Indicates the distance from one node to another node.

  • Auto Provision

  • List of reach values

SD FEC

Indicates the standard FEC.

  • SD_FEC_15_DE_OFF

  • SD_FEC_15_DE_ON

  • SD_FEC_20

  • SD_FEC_25_DE_OFF

  • SD_FEC_25_DE_ON

  • SD_FEC_7

Tx Power (dBm)

Sets the Tx power on the trunk port.

The range is –10.0 to 0.25 dBm.

PSM Info

When enabled on a TXP or MXP trunk port that is connected to a PSM card, it allows fast switching on the cards.

  • NA

  • Enable

  • Disable

Frequency (THz)

Sets the frequency in THz

-

Wavelength (nm)

(Display only) Wavelength is set based on the frequency.

-

Tx Shutdown

(Display only)

  • true

  • false

Width (GHz)

(Display only)

-

CD (Working Range) High (ps/nm)

Sets the threshold for maximum chromatic dispersion.

-

CD (Working Range) Low (ps/nm)

Sets the threshold for minimum chromatic dispersion.

-

Admin State

Sets the port service state unless network conditions prevent the change.

  • Unlocked (ETSI)/ IS (ANSI)

  • Locked, disabled (ETSI)/ OOS, DSBLD (ANSI)

  • Locked, maintenance (ETSI)/ OOS, MT (ANSI)

  • Unlocked, automaticInService (ETSI)/ IS, AINS (ANSI)

OTN Enabled (Only OTU2-XP)

Sets the OTN lines according to ITU-T G.709.

  • true

  • false

Step 3

Click Apply.


Provision Optics Thresholds

Use this task to provision the optics thresholds of all the payload ports of the card.

Before you begin

Procedure


Step 1

Click the Provisioning > Optics Thresholds tabs.

Step 2

Choose the types (Alarm or TCA) and 15-minute or one-day intervals, and click Apply.

Step 3

Modify the required settings as described in the following table.

Table 30. Optics Threshold Settings

Parameter

Description

Port

(Display only) Displays the port name

RX Power High (dBm)

Sets the maximum optical power received

RX Power Low (dBm)

Sets the minimum optical power received

TX Power High (dBm)

Sets the maximum optical power transmitted

TX Power Low (dBm)

Sets the minimum optical power transmitted

CD (Working Range) High (ps/nm)

Sets the threshold for maximum chromatic dispersion

CD (Working Range) Low (ps/nm)

Sets the threshold for minimum chromatic dispersion

Laser Bias High (%)

Sets the maximum laser bias

OSNR Power High (dBm)

Maximum Optical Signal to Noise Ratio (OSNR) during the PM time interval

OSNR Power Low (dBm)

Minimum OSNR during the PM time interval

PMD High (ps)

Maximum Polarization Mode Dispersion (PMD) during the PM time interval

PMD Low (ps)

Minimum PMD during the PM time interval

Step 4

Click Apply.


Provision Ethernet Interfaces

Use this task to provision the parameters for the Ethernet interfaces of the card.

Before you begin

Procedure


Step 1

Click the Provisioning > Ethernet Interfaces tabs.

Step 2

Modify any of the Ethernet settings as described in the following table. The parameters that appear depend on the card mode. A validation sketch follows the table.

Table 31. Card Ethernet Settings

Parameter

Description

Options

Port

(Display only) Displays the port number

Speed

Sets the expected port speed.

FEC

Sets the FEC mode. When set to On, FEC is enabled.

  • NA

  • Auto (default)

  • On

  • Off

MTU

Sets the maximum size of the Ethernet frames that are accepted by the port. The port must be in OOS/locked state.

Numeric. Default: 1548

Range 64–9700

Duplex

Sets the expected duplex capability of ports.

  • Full

  • Half

Mapping

Sets the mapping mode.

  • CBR

  • GFP

Autonegotiation

Enables or disables autonegotiation on the port.

  • Disabled

  • Enabled

Squelch Mode

Sets the squelch mode.

  • Disable

  • Squelch

  • LF

Squelch Hold Off time

Sets the period in milliseconds that the client interface waits for resolution of issues on the trunk side. After this period, client squelching starts or a local fault is sent.

  • Disable

  • 50 ms

  • 100 ms

  • 250 ms

  • 500 ms
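The numeric bounds in Table 31 are easy to sanity-check before clicking Apply. A minimal illustrative sketch (the names are hypothetical; port-state handling is out of scope):

```python
VALID_HOLD_OFF = {"Disable", "50 ms", "100 ms", "250 ms", "500 ms"}

def check_ethernet_settings(mtu: int, hold_off: str) -> None:
    """Validate against Table 31: MTU 64-9700 (default 1548); note the
    port must be in the OOS/locked state before the MTU is changed."""
    if not 64 <= mtu <= 9700:
        raise ValueError(f"MTU {mtu} is outside 64-9700")
    if hold_off not in VALID_HOLD_OFF:
        raise ValueError(f"unsupported squelch hold-off {hold_off!r}")

check_ethernet_settings(1548, "100 ms")  # passes silently
```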

Step 3

Click Apply.


Provision RMON Thresholds

Use this task to create and list the RMON thresholds of the Ethernet ports of the card.

Before you begin

Procedure


Step 1

Click the Provisioning > RMON Thresholds tabs.

Step 2

Click the + button.

The Create RMON Threshold dialog box appears.

Step 3

From the Port ID drop-down list, choose the Ethernet port.

Step 4

From the Variable drop-down list, choose a variable. The following tables list the available variables.

Table 32. Card Ethernet Variables

Variable

Description

ifInOctets

Number of bytes received since the last counter reset.

RxTotalPkts

Total number of received packets.

IfInUcastPkts (15)

Total number of packets, delivered by this sublayer to a higher sublayer, that are not addressed to a multicast or broadcast address.

IfInMulticastPkts (15)

Total number of packets, delivered by this sublayer to a higher sublayer, that are addressed to a multicast address. For a MAC layer protocol, this includes both group and functional addresses.

IfInBroadcastPkts (15)

Total number of packets, delivered by this sublayer to a higher sublayer, that are addressed to a broadcast address.

IfInErrors

Total number of received errors.

IfOutOctets

Total number of octets transmitted out of the interface, including framing characters.

TxTotalPkts

Total number of transmitted packets.

IfOutUcastPkts

Total count of packets that are transmitted to a unicast group destination address.

IfOutMulticastPkts

Total number of packets that higher-level protocols requested to be transmitted, which are addressed to a multicast address at this sublayer. These include packets that are discarded or not sent. For a MAC layer protocol, this includes both group and functional addresses.

IfOutBroadcastPkts

Total number of packets that higher-level protocols requested to be transmitted, which are addressed to a broadcast address at this sublayer. These include packets that are discarded or not sent.

Dot3StatsAlignmentErrors

Total number of frames received on a particular interface that are not an integral number of octets in length and do not pass the FCS check. This counter is only valid for FE modes of operation.

Dot3StatsFCSErrors

Total number of frames received on a particular interface that are an integral number of octets in length but do not pass the FCS check.

Dot3StatsFrameTooLong

Total number of frames received on a particular interface that exceed the maximum permitted frame size.

EtherStatsUndersizePkts

Total number of packets received that were less than 64 octets long (excluding framing bits, but including FCS octets) and are otherwise well formed.

EtherStatsFragments

Total number of packets received that are less than 64 octets in length (excluding framing bits, but including FCS octets) and had either a bad FCS with an integral number of octets (FCS error) or a bad FCS with a nonintegral number of octets (alignment error).

Note that it is entirely normal for etherStatsFragments to increment. This is because it counts both runts (which are normal occurrences due to collisions) and noise hits.

EtherStatsPkts

Total number of frames that are received on an interface in both Rx and Tx directions.

EtherStatsPkts64Octets

Total number of packets (including bad packets) received that are 64 octets in length (excluding framing bits, but including FCS octets).

EtherStatsPkts65to127Octets

Total number of packets (including bad packets) received that are from 65 through 127 octets in length inclusive (excluding framing bits, but including FCS octets).

EtherStatsPkts128to255Octets

Total number of packets (including bad packets) received that are from 128 through 255 octets in length inclusive (excluding framing bits, but including FCS octets).

EtherStatsPkts256to511Octets

Total number of packets (including bad packets) received that are from 256 through 511 octets in length inclusive (excluding framing bits, but including FCS octets).

EtherStatsPkts512to1023Octets

Total number of packets (including bad packets) received that are from 512 through 1023 octets in length inclusive (excluding framing bits, but including FCS octets).

EtherStatsPkts1024to1518Octets

Total number of packets (including bad packets) received that are from 1024 through 1518 octets in length inclusive (excluding framing bits, but including FCS octets).

EtherStatsBroadcastPkts

Total number of good packets received that are directed to the broadcast address. This total number does not include multicast packets.

EtherStatsMulticastPkts

Total number of good packets received that are directed to a multicast address. This total number does not include packets that are directed to the broadcast address.

EtherStatsOversizePkts

Total number of packets received that are longer than 1518 octets (excluding framing bits, but including FCS octets) and are otherwise well formed.

EtherStatsJabbers

Total number of packets received that are longer than 1518 octets (excluding framing bits, but including FCS octets), and had either a bad FCS with an integral number of octets (FCS error) or a bad FCS with a nonintegral number of octets (alignment error).

EtherStatsOctets

Total number of octets of data (including those data in bad packets) received on the network (excluding framing bits, but including FCS octets).

EtherStatsPkts1519toMaxOctets

Total number of packets (including bad packets) received that are from 1519 octets through the maximum length (excluding framing bits, but including FCS octets).

(15) The counter does not increment for traffic with incorrect Ethertype and packet size of more than 64 bytes on the 10x10G-LC and 100G-LC-C cards.
Table 33. 10x10G-LC FC/FICON Variables

Variable

Description

RxTotalPkts

Total number of received packets.

TxTotalPkts

Total number of transmitted packets.

MediaIndStatsRxFramesBadCRC

Total number of received data frames with payload CRC errors when an HDLC framing is used.

MediaIndStatsTxFramesBadCRC

Total number of transmitted data frames with payload CRC errors when the HDLC framing is used.

MediaIndStatsRxFramesTruncated

Total number of frames received that are less than 5 bytes. This value is a part of the High-Level Data Link Control (HDLC) and GFP port statistics.

MediaIndStatsTxFramesTruncated

Total number of transmitted data frames that exceed the MTU. This value is a part of the HDLC and GFP port statistics.

MediaIndStatsRxFramesTooLong

Total number of received frames that exceed the maximum transmission unit (MTU). This value is part of the HDLC and GFP port statistics.

MediaIndStatsTxFramesTooLong

Total number of transmitted data frames that are less than 5 bytes. This value is a part of the HDLC and GFP port statistics.

IfInOctets

Total number of octets received on the interface, including the framing octet.

IfOutOctets

Total number of octets transmitted out of the interface, including framing characters.

IfInErrors

Total number of inbound packets that contained errors preventing them from being delivered to a higher-layer protocol.

IfOutErrors

Total number of outbound packets or transmission units that could not be transmitted because of errors.
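Because IfInOctets and IfOutOctets are also cumulative, a common use is estimating link utilization over a poll interval. A minimal sketch, with hypothetical counter values and an assumed 10GE line rate:

def rx_utilization(octets_prev: int, octets_curr: int,
                   interval_s: float, line_rate_bps: float) -> float:
    """Fraction of line rate consumed between two polls of IfInOctets."""
    bits = (octets_curr - octets_prev) * 8
    return bits / (interval_s * line_rate_bps)

# Example: 1.5e9 octets received over a 15-second window on a 10GE client.
print(f"{rx_utilization(0, 1_500_000_000, 15.0, 10e9):.1%}")  # -> 8.0%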

Table 34. 10x10G-LC GFP RMON Variables

Variable

Description

GfpStatsRxFrame

Total number of received data frames.

GfpStatsTxFrame

Total number of transmitted data frames.

GfpStatsRxCRCErrors

Total number of CRC errors with the receive transparent GFP frame.

GfpStatsRxOctets

Total number of GFP data octets received.

GfpStatsTxOctets

Total number of GFP data octets transmitted.

GfpStatsRxSBitErrors

Received GFP frames with single bit errors in the core header (these errors can be corrected).

GfpStatsRxMBitErrors

Received GFP frames with multiple bit errors in the core header (these errors cannot be corrected).

GfpStatsRxTypeInvalid

Received GFP frames with an invalid type (these frames are discarded). For example, GFP frames containing Ethernet data are received when Fibre Channel data is expected.

GfpRxCmfFrame

Total number of received GFP client management frames (CMF).
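The single-bit/multiple-bit distinction behind GfpStatsRxSBitErrors and GfpStatsRxMBitErrors comes from the CRC-16 check (cHEC) over the GFP core header: a single flipped bit can be located and corrected, while multiple errors cannot. The following sketch reproduces that classification conceptually; it ignores the core-header scrambling that real GFP applies, so it is a model only:

def crc16_ccitt(data: bytes) -> int:
    """Bitwise CRC-16 with generator x^16 + x^12 + x^5 + 1 (0x1021), init 0."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def classify_core_header(header: bytes) -> str:
    """Classify a 4-byte core header (2-byte PLI + 2-byte cHEC).

    "ok"   -> cHEC matches, no error
    "sbit" -> one flipped bit restores a valid cHEC (correctable, SBitErrors)
    "mbit" -> no single flip helps (uncorrectable, MBitErrors)
    """
    if crc16_ccitt(header[:2]) == int.from_bytes(header[2:], "big"):
        return "ok"
    for bit in range(32):                       # try every single-bit fix
        cand = bytearray(header)
        cand[bit // 8] ^= 1 << (7 - bit % 8)
        if crc16_ccitt(bytes(cand[:2])) == int.from_bytes(cand[2:], "big"):
            return "sbit"
    return "mbit"

pli = (1518).to_bytes(2, "big")                 # hypothetical payload length
good = pli + crc16_ccitt(pli).to_bytes(2, "big")
one_bit_hit = bytes([good[0] ^ 0x01]) + good[1:]
print(classify_core_header(good), classify_core_header(one_bit_hit))  # ok sbit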

Step 5

From the Alarm Type drop-down list, indicate whether the event is triggered by the rising threshold, falling threshold, or both rising and falling thresholds.

The available options are Rising Threshold, Falling Threshold, and Rising and Falling Threshold.

Step 6

From the Sampling Type drop-down list, choose either Relative or Absolute.

Relative compares the threshold against the number of occurrences within the user-set sampling period. Absolute compares the threshold against the total number of occurrences, regardless of the time period.

Step 7

Enter the appropriate number of seconds in the Sampling Period field.

Step 8

Enter the appropriate number of occurrences in the Rising Threshold field.

For a rising type of alarm, the measured value must move from below the falling threshold to above the rising threshold. For example, if a network is running below a rising threshold of 1000 collisions every 15 seconds and a problem causes 1001 collisions in 15 seconds, the excess occurrences trigger an alarm.

Step 9

Enter the appropriate number of occurrences in the Falling Threshold field. In most cases, a falling threshold is set to a lower value than the value of the rising threshold.

A falling threshold is the counterpart to a rising threshold. When the number of occurrences is above the rising threshold and then drops below the falling threshold, the rising threshold is reset. For example, when the network problem that caused 1001 collisions in 15 seconds subsides and creates only 799 collisions in 15 seconds, occurrences fall below the falling threshold of 800 collisions. This resets the rising threshold so that if network collisions again spike over 1000 in a 15-second period, an event triggers again when the rising threshold is crossed. An event is triggered only the first time a rising threshold is exceeded; otherwise, a single network problem might cause the rising threshold to be exceeded multiple times and cause a flood of events.
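The rising/falling interaction described above amounts to a simple hysteresis state machine. The following sketch is a simplified software model of Steps 6 through 9 (RMON startup semantics are reduced to essentials, and the sample values are hypothetical):

class ThresholdAlarm:
    """Hysteresis model: fire once on rising, re-arm only after falling."""

    def __init__(self, rising: int, falling: int, sampling: str = "relative"):
        self.rising, self.falling, self.sampling = rising, falling, sampling
        self.armed = True          # a rising event may fire
        self.last_total = 0

    def sample(self, total: int) -> bool:
        """Feed one cumulative reading per period; True means a rising event fires."""
        value = total - self.last_total if self.sampling == "relative" else total
        self.last_total = total
        if self.armed and value > self.rising:
            self.armed = False     # fire once, then wait for the falling threshold
            return True
        if not self.armed and value < self.falling:
            self.armed = True      # dropped below the falling threshold: re-arm
        return False

alarm = ThresholdAlarm(rising=1000, falling=800)
for delta in (900, 1001, 1200, 799, 1050):      # collisions per 15-second period
    fired = alarm.sample(alarm.last_total + delta)
    print(delta, fired)            # 1001 and 1050 fire; 1200 does not (not re-armed)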

Step 10

Click Apply.


Provision Loopback

Use this task to provision loopback on the card.


Caution

This task is traffic-affecting.


Before you begin

Procedure


Step 1

Click the Maintenance > Loopback tabs.

From R12.1, the columns Admin State and Service State are added to the Loopback table.

Step 2

From the Loopback Type drop-down list, choose Terminal, Facility, Terminal-Drop, or Facility-Drop for each port required.

Step 3

Click Apply.


Provision Optical Safety

Use this task to provision the optical safety parameters for cards.

Before you begin

Procedure


Step 1

Click the Maintenance > Optical Safety tabs.

Step 2

Modify required settings described in the following table:

Table 35. Optical Safety Parameters for Cards

Parameter

Description

Options

Interface

(Display only) Displays the port name, port type, and direction.

Supported Safety

(Display only) Displays the supported safety mechanism.

  • ALS for line cards and control cards.

  • ALS-OSRI for amplifier cards.

ALS Mode

Automatic laser shutdown mode. The ALS mode is disabled for RX ALS interfaces.

From the drop-down list, choose one of the following:

  • ALS-Disabled—Deactivates ALS.

  • Automatic Restart—(Default) ALS is active. The power is automatically shut down when needed, and it automatically tries to restart using a probe pulse until the cause of the failure is repaired.

  • Manual Restart

OSRI

Optical safety remote interlock. The default value is OSRI-OFF. When set to OSRI-ON, the TX output power is shut down.

Note 

OSRI configuration is not supported on the transponder and muxponder cards.

From the drop-down list, choose one of the following:

  • OSRI-OFF

  • OSRI-ON

ALS Status

(Display only) ALS status of the device.

  • Working

  • Shutdown

Recovery Pulse Interval

Displays the interval between two optical power pulses.

60 to 300 seconds

Recovery Pulse Duration

Displays the duration of the optical power pulse that begins when an amplifier restarts.

2 to 100 seconds

Manual Restart

Triggers a manual restart action for the ALS interface. However, a manual restart does not happen if the ALS mode is set to Automatic Restart or ALS-Disabled.
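For reference, the Automatic Restart behavior controlled by the parameters above can be pictured as the following toy event loop. This is an illustration only; ALS runs in the card firmware, and the timings here are hypothetical values within the documented ranges:

def als_automatic_restart(pulse_interval_s, pulse_duration_s,
                          fibre_repaired_at_s, horizon_s=600):
    """Yield (time, action) events for the Automatic Restart mode."""
    t = 0
    yield t, "loss of signal: laser shut down"
    while t < horizon_s:
        t += pulse_interval_s                   # wait one recovery pulse interval
        yield t, f"probe pulse for {pulse_duration_s} s"
        if t >= fibre_repaired_at_s:            # far end answers during the pulse
            yield t, "signal restored: laser back on"
            return
    yield t, "still down: keep probing"

# Hypothetical failure repaired 250 s in; interval and duration within table ranges.
for event in als_automatic_restart(pulse_interval_s=100, pulse_duration_s=2,
                                   fibre_repaired_at_s=250):
    print(*event)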

Step 3

Click Apply to save the changes.


Provision PRBS

This task provisions the Pseudo Random Binary Sequence (PRBS) settings on the card.

PRBS supports the following cards:

  • 100G-LC-C, 100G-CK-C, and 200G-CK-C cards in TXP-100G operating mode

  • 200G-CK-C and MR-MXP card combination in MXP-CK-100G-CPAK operating mode

Before you begin

Procedure


Step 1

Change the admin state on the trunk port to Locked,disabled (ETSI)/OOS,DSBLD (ANSI). See Provision Optical Channels.

Step 2

Click the Maintenance > PRBS tabs.

From R12.1, the columns Admin State and Service State are added to the PRBS table.

Step 3

From the Generator Pattern drop-down list, choose a pattern for each port. The supported patterns are PRBS_NONE and PRBS_PN31.

Apply the same pattern in both source and destination trunk ports.

Step 4

Click Apply.

The Pattern Sync Status field displays one of the following values:

  • PATTERN_OK—When the port is receiving one of the recognized patterns.

  • PATTERN_ERROR—When the port is receiving a recognized pattern but the pattern contains errors. This error also occurs when there is a pattern mismatch.

  • PATTERN_NONE—When the port is not receiving a recognized PRBS pattern.

In case of pattern errors, the card provides a PRBS error counter. The counter zeroes itself when the PRBS is disabled.
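PRBS_PN31 corresponds to the PRBS-31 pattern generated by the polynomial x^31 + x^28 + 1. The following sketch models in software what the generator and the error counter do; a port implements this per bit at line rate, and the seed and the injected error here are illustrative:

def prbs31(seed: int = 0x7FFFFFFF):
    """Yield one PRBS-31 bit per iteration from a 31-bit LFSR (taps 31 and 28)."""
    state = seed & 0x7FFFFFFF
    while True:
        bit = ((state >> 30) ^ (state >> 27)) & 1
        state = ((state << 1) | bit) & 0x7FFFFFFF
        yield bit

tx = prbs31()
rx = prbs31()                          # same seed -> synchronized reference pattern
tx_bits = [next(tx) for _ in range(100)]
tx_bits[5] ^= 1                        # inject a single bit error on the line
errors = sum(b != next(rx) for b in tx_bits)
print(errors)                          # -> 1, what the PRBS error counter reports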

Step 5

Change the admin state on the trunk port to Unlocked (ETSI)/IS (ANSI). See Provision Optical Channels.


Retrieve MAC Addresses through LLDP

Use this task to retrieve the source MAC address of the host connected to the 100GE or 400GE ports of the 1.2T-MXP card, after a Link Layer Discovery Protocol (LLDP) packet is received on the client port.

Before you begin

Procedure


Step 1

Click the Maintenance > LLDP tabs.

Step 2

Click Refresh.

The table displays the following fields:

  • Port―Displays the port number.

  • Source MAC Address―Displays the MAC address of the node to which the port is connected.


Limitations of LLDP Support on the 1.2T-MXP Card

The LLDP support on the 1.2T-MXP card has the following limitations:

  • The 1.2T-MXP card can handle only one LLDP packet of up to 2000 bytes per client every four seconds.

  • An LLDP packet is not detected when the client port is moved from IS to OOS.

  • After the trunk port transitions from OOS to IS, there is a delay of 12 to 15 seconds before the LLDP packet is detected and displayed on the GUI.

  • LLDP capture does not happen when the CFP2 DCO associated with the client port is not plugged in.

  • The 1.2T-MXP card captures LLDP packets only when the following conditions are met (see the sketch after this list):

    1. The value of the Ethertype header is 0x88CC.

    2. The destination Multicast addresses are:

      • 01:80:C2:00:00:00

      • 01:80:C2:00:00:0e

      • 01:80:C2:00:00:03
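The capture rule in the list above, together with the source MAC retrieval described earlier, can be expressed as the following illustrative filter. The card implements this in hardware; the sample frame here is hypothetical:

LLDP_ETHERTYPE = 0x88CC
LLDP_DST_MACS = {
    bytes.fromhex("0180C2000000"),
    bytes.fromhex("0180C200000E"),
    bytes.fromhex("0180C2000003"),
}

def lldp_source_mac(frame: bytes) -> str | None:
    """Return the source MAC (what the LLDP table displays) if the frame
    passes the capture filter; otherwise None."""
    if len(frame) < 14 or frame[:6] not in LLDP_DST_MACS:
        return None                                  # wrong destination address
    if int.from_bytes(frame[12:14], "big") != LLDP_ETHERTYPE:
        return None                                  # wrong Ethertype
    return ":".join(f"{b:02x}" for b in frame[6:12])  # source MAC, bytes 6-11

frame = bytes.fromhex("0180C200000E") + bytes.fromhex("AABBCCDDEEFF") \
        + (0x88CC).to_bytes(2, "big") + b"\x00" * 50  # headers + dummy TLVs
print(lldp_source_mac(frame))   # -> aa:bb:cc:dd:ee:ff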

Provision FPD Upgrade for the Ports

When the firmware version on the DCO trunk port is earlier than the NCS 2000 package firmware version, the "FPD-UPG-REQUIRED" alarm is raised on that trunk port in the Alarms tab. Each trunk port has a separate upgrade alarm.

You can view the running DCO firmware version and the NCS 2000 package firmware version under the Maintenance > FPD Upgrade tabs.

Use this task to upgrade the DCO (trunk) ports on the 1.2T-MXP card with the latest firmware released as part of the NCS 2000 software release.
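The alarm condition reduces to a version comparison. A minimal sketch, assuming dotted numeric version strings (the actual version string format is an assumption):

def fpd_upgrade_required(dco_running: str, package: str) -> bool:
    """True when the DCO trunk-port firmware is older than the package firmware."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(dco_running) < as_tuple(package)

print(fpd_upgrade_required("1.12", "1.14"))   # -> True: alarm raised
print(fpd_upgrade_required("1.14", "1.14"))   # -> False: alarm clear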

Before you begin

Procedure


Step 1

Click the Maintenance > FPD Upgrade tabs.

Step 2

Click FPD Upgrade to perform the firmware upgrade for the chosen ports.

After the firmware upgrade is completed successfully, the "FPD-UPG-REQUIRED" alarm gets cleared in the Alarms tab and you can view the updated running firmware version in the FPD Upgrade table.

Note 

Traffic-affecting DCO upgrade is not supported in Release 12.2.