Configuring the Card Mode

This chapter lists the supported configurations and the procedures to configure the card mode on the line cards.


Note


Unless otherwise specified, “line cards” refers to 1.2T and 1.2TL line cards.

1.2T and 1.2TL Line Cards

The following section describes the supported configurations and procedures to configure the card modes on the line cards.

Card Modes

The line cards support module and slice configurations.

The line cards have two trunk ports (0 and 1) and 12 client ports (2 through 13) each. You can configure the line card in two modes:

  • Muxponder—In this mode, both trunk ports are configured with the same trunk rate. The client-to-trunk mapping is sequential.

  • Muxponder slice—In this mode, each trunk port is configured independently of the other and can have a different trunk rate. The client-to-trunk mapping is fixed. For Trunk 0, the client ports are 2 through 7. For Trunk 1, the client ports are 8 through 13. A configuration sketch for both modes follows this list.
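The following is a minimal sketch that contrasts the two modes using the hw-module commands described later in this chapter; the location (0/1) and the rates shown are placeholder values.

RP/0/RP0/CPU0:ios#configure
! Muxponder mode: one client rate and one trunk rate apply to both trunk ports
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder client-rate 100GE
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder trunk-rate 400G
RP/0/RP0/CPU0:ios(config)#commit

RP/0/RP0/CPU0:ios#configure
! Muxponder slice mode: each slice (trunk) is configured on its own and may use a different rate
! (a mixed OTU4/100GE slice configuration requires R7.2.0 or later; see Supported Data Rates)
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 0 client-rate 100GE
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 0 trunk-rate 400G
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 1 client-rate OTU4
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 1 trunk-rate 300G
RP/0/RP0/CPU0:ios(config)#commit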

Sub 50G Configuration

You can configure the sub 50G or coupled mode on the line card only in the muxponder mode. The following table displays the port configuration for the supported data rates; a sample configuration follows the table.

Trunk Data Rate (per trunk) | Total Configured Data Rate | Card Support | Trunk Ports | Client Ports for Trunk 0 (100G) | Shared Client Port (50G per trunk) | Client Ports for Trunk 1 (100G)
--------------------------- | -------------------------- | ------------ | ----------- | ------------------------------- | ---------------------------------- | -------------------------------
50G  | 100G | 1.2T, 1.2TL | 0, 1 | -             | 2 | -
150G | 300G | 1.2T, 1.2TL | 0, 1 | 2             | 3 | 4
250G | 500G | 1.2T, 1.2TL | 0, 1 | 2, 3          | 4 | 5, 6
350G | 700G | 1.2T, 1.2TL | 0, 1 | 2, 3, 4       | 5 | 6, 7, 8
450G | 900G | 1.2T        | 0, 1 | 2, 3, 4, 5    | 6 | 7, 8, 9, 10
550G | 1.1T | 1.2T        | 0, 1 | 2, 3, 4, 5, 6 | 7 | 8, 9, 10, 11, 12
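The following is a minimal sketch, assuming a card at location 0/1, that provisions one of the x50G coupled rates from the preceding table using the muxponder-mode commands described later in this chapter.

RP/0/RP0/CPU0:ios#configure
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder client-rate 100GE
! 150G per trunk: client 2 maps to Trunk 0, client 4 maps to Trunk 1, and client 3 is split 50/50
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder trunk-rate 150G
RP/0/RP0/CPU0:ios(config)#commit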


Note


In all x50G configurations, client traffic on the middle (shared) port is affected by ODUK-BDI and LF alarms after a power cycle or link flap on the trunk side. This issue occurs when the two network lanes operate in coupled mode and move from low to high power. To resolve this issue, create a new frame at either the near end or the far end by performing a shut and no shut of the trunk ports.


Coupled Mode Restrictions

The following restrictions apply to the coupled mode configuration:

  • Both trunk ports must be configured with the same bits-per-symbol or baud rate, and the signals must be sent over the same fiber and in the same direction.

  • The chromatic dispersion must be configured to the same value for both trunk ports.

  • When trunk internal loopback is configured, it must be done for both trunk ports. Configuring internal loopback on only one trunk results in traffic loss.

  • A fault on a trunk port of a coupled pair may cause errors on all clients, including those running only on the unaffected trunk port. A hedged sketch of matching the trunk settings follows this list.
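The following is a hedged sketch of keeping the trunk settings of a coupled pair aligned. The cd-min and cd-max optics settings and the coherent DSP internal loopback shown here are assumptions based on the show controllers output later in this chapter; the values and the location are placeholders, and applying internal loopback normally also requires placing the controller in maintenance state.

RP/0/RP0/CPU0:ios#configure
! Assumption: identical chromatic-dispersion settings on both trunk optics ports (placeholder values)
RP/0/RP0/CPU0:ios(config)#controller optics 0/1/0/0 cd-min -10000
RP/0/RP0/CPU0:ios(config)#controller optics 0/1/0/0 cd-max 10000
RP/0/RP0/CPU0:ios(config)#controller optics 0/1/0/1 cd-min -10000
RP/0/RP0/CPU0:ios(config)#controller optics 0/1/0/1 cd-max 10000
! Assumption: if internal loopback is used, it is applied to both trunk CoherentDSP controllers
RP/0/RP0/CPU0:ios(config)#controller coherentDSP 0/1/0/0 loopback internal
RP/0/RP0/CPU0:ios(config)#controller coherentDSP 0/1/0/1 loopback internal
RP/0/RP0/CPU0:ios(config)#commit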

Supported Data Rates

The following data rates are supported on the line card.

In R7.0.1, you can configure the client port to OTU4 only in the muxponder mode. In R7.1.1 and later releases, you can configure the client port to OTU4 in both the muxponder and muxponder slice modes. In R7.1.1, both slices in the muxponder slice mode must be configured with either OTU4 or 100GE client rates. From R7.2.0, a mixed configuration of OTU4 and 100GE is supported in the muxponder slice mode. LLDP drop, L1 encryption, and AINS are not supported with the OTU4 configuration.

The following table displays the client and trunk ports that are enabled for the muxponder configuration.

Trunk Data Rate | Card Support | Client Data Rate | Trunk Ports | Client Ports
--------------- | ------------ | ---------------- | ----------- | ------------
100 | 1.2T, 1.2TL | 100GE, OTU4 | 0    | 2
200 | 1.2T, 1.2TL | 100GE, OTU4 | 0, 1 | 2, 3, 4, 5
300 | 1.2T, 1.2TL | 100GE, OTU4 | 0, 1 | 2, 3, 4, 5, 6, 7
400 | 1.2T, 1.2TL | 100GE, OTU4 | 0, 1 | 2, 3, 4, 5, 6, 7, 8, 9
500 | 1.2T        | 100GE, OTU4 | 0, 1 | 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
600 | 1.2T        | 100GE, OTU4 | 0, 1 | 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13

The following table displays the client and trunk ports that are enabled for the muxponder slice 0 configuration.

Trunk Data Rate | Card Support | Client Data Rate | Trunk Ports | Client Ports
--------------- | ------------ | ---------------- | ----------- | ------------
100 | 1.2T, 1.2TL | 100, OTU4 | 0 | 2
200 | 1.2T, 1.2TL | 100, OTU4 | 0 | 2, 3
300 | 1.2T, 1.2TL | 100, OTU4 | 0 | 2, 3, 4
400 | 1.2T, 1.2TL | 100, OTU4 | 0 | 2, 3, 4, 5
500 | 1.2T        | 100, OTU4 | 0 | 2, 3, 4, 5, 6
600 | 1.2T        | 100, OTU4 | 0 | 2, 3, 4, 5, 6, 7

The following table displays the client and trunk ports that are enabled for the muxponder slice 1 configuration.

Trunk Data Rate | Card Support | Client Data Rate | Trunk Ports | Client Ports
--------------- | ------------ | ---------------- | ----------- | ------------
100 | 1.2T, 1.2TL | 100, OTU4 | 1 | 8
200 | 1.2T, 1.2TL | 100, OTU4 | 1 | 8, 9
300 | 1.2T, 1.2TL | 100, OTU4 | 1 | 8, 9, 10
400 | 1.2T, 1.2TL | 100, OTU4 | 1 | 8, 9, 10, 11
500 | 1.2T        | 100, OTU4 | 1 | 8, 9, 10, 11, 12
600 | 1.2T        | 100, OTU4 | 1 | 8, 9, 10, 11, 12, 13

All configurations can be accomplished by using appropriate values for the client-rate and trunk-rate parameters of the hw-module command, as shown in the sketch that follows.
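For example, the following minimal sketch (assuming a 1.2T card at location 0/3) pairs an OTU4 client rate with a 400G trunk rate from the muxponder table above.

RP/0/RP0/CPU0:ios#configure
RP/0/RP0/CPU0:ios(config)#hw-module location 0/3 mxponder client-rate OTU4
! A 400G trunk rate enables client ports 2 through 9 (see the muxponder table above)
RP/0/RP0/CPU0:ios(config)#hw-module location 0/3 mxponder trunk-rate 400G
RP/0/RP0/CPU0:ios(config)#commit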

The following table displays the trunk parameter ranges for the 1.2T card.

Trunk Payload | FEC | Min BPS   | Max BPS   | Min GBd    | Max GBd
------------- | --- | --------- | --------- | ---------- | ----------
50G  | 15% | 1         | 1.3125    | 24.0207911 | 31.5272884
50G  | 27% | 1         | 1.4453125 | 24.0207911 | 34.7175497
100G | 15% | 1         | 2.625     | 24.0207911 | 63.0545768
100G | 27% | 1         | 2.890625  | 24.0207911 | 69.4350994
150G | 15% | 1.3203125 | 3.9375    | 24.0207911 | 71.6359689
150G | 27% | 1.453125  | 4.3359375 | 24.0207911 | 71.6749413
200G | 15% | 1.7578125 | 5.25      | 24.0207911 | 71.7420962
200G | 27% | 2         | 4.40625   | 31.51      | 69.43
250G | 15% | 2.1953125 | 6         | 26.2727403 | 71.8059237
250G | 27% | 2.4140625 | 6         | 28.9312914 | 71.9068991
300G | 15% | 2.6328125 | 6         | 31.5272884 | 71.8485385
300G | 27% | 2.8984375 | 6         | 34.7175497 | 71.8681352
350G | 15% | 3.0703125 | 6         | 36.7818364 | 71.8790086
350G | 27% | 3.3828125 | 6         | 40.503808  | 71.8404724
400G | 15% | 3.5078125 | 6         | 42.0363845 | 71.9018782
400G | 27% | 3.8671875 | 6         | 46.2900663 | 71.8197392
450G | 15% | 3.9453125 | 6         | 47.2909326 | 71.9196757
450G | 27% | 4.34375   | 6         | 52.0763245 | 71.9327648
500G | 15% | 4.3828125 | 6         | 52.5454806 | 71.93392
500G | 27% | 4.8281250 | 6         | 57.8625828 | 71.9068991
550G | 15% | 4.8203125 | 6         | 57.8000287 | 71.9455787
550G | 27% | 5.3125    | 6         | 63.6488411 | 71.88575
600G | 15% | 5.2578125 | -         | -          | 71.9552971

The following table displays the trunk parameter ranges for the 1.2TL card.

Trunk Payload | FEC | Min BPS   | Max BPS   | Min GBd          | Max GBd
------------- | --- | --------- | --------- | ---------------- | -----------
100G | 15% | 1         | 2.625     | 24.0207911       | 63.0545768
100G | 27% | 1         | 2.890625  | 24.0207911       | 69.4350994
150G | 15% | 1.3203125 | 3.9375    | 24.0207911       | 71.6359689
150G | 27% | 1.453125  | 4.3359375 | 24.0207911       | 71.6749413
200G | 15% | 2         | 4         | 31.5272884       | 63.0545768
200G | 27% | 2         | 4.40625   | 31.51664088      | 69.43509943
250G | 15% | 2.1953125 | 4.5       | 35.0303204       | 71.8059237
250G | 27% | 2.4140625 | 4.5       | 38.5750552       | 71.9068991
300G | 15% | 2.6328125 | 4.5       | 42.0363845       | 71.8485385
300G | 27% | 2.8984375 | 4.5       | 46.2900662857142 | 71.86813526
350G | 15% | 3.0703125 | 4.5       | 49.0424486       | 71.8790086
350G | 27% | 3.3828125 | 4.5       | 54.0050773       | 71.8404724
400G | 15% | 3.5078125 | 4.5       | 56.0485127       | 71.9018782
400G | 27% | 3.8671875 | 4.5       | 61.72008838      | 71.81973921

To configure the BPS, see Configuring the BPS.

Configuring the Card Mode

You can configure the line card in the module (muxponder) or slice (muxponder slice) configuration.

To configure the card in the muxponder mode, use the following commands.

configure

hw-module location location mxponder client-rate {100GE | OTU4}

hw-module location location mxponder trunk-rate {50G | 100G | 150G | 200G | 250G | 300G | 350G | 400G | 450G | 500G | 550G | 600G}

commit

To configure the card in the muxponder slice mode, use the following commands.

configure

hw-module location location mxponder-slice mxponder-slice-number client-rate {100GE | OTU4}

hw-module location location mxponder-slice mxponder-slice-number trunk-rate {100G | 200G | 300G | 400G | 500G | 600G}

commit

Examples

The following is a sample in which the card is configured in the muxponder mode with a 550G trunk payload.


RP/0/RP0/CPU0:ios#config
Tue Oct 15 01:24:56.355 UTC
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder client-rate 100GE
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder trunk-rate 550G
RP/0/RP0/CPU0:ios(config)#commit

The following is a sample in which the card is configured in the muxponder mode with a 500G trunk payload.


RP/0/RP0/CPU0:ios#config
Sun Feb 24 14:09:33.989 UTC
RP/0/RP0/CPU0:ios(config)#hw-module location 0/2 mxponder client-rate OTU4
RP/0/RP0/CPU0:ios(config)#hw-module location 0/2 mxponder trunk-rate 500G
RP/0/RP0/CPU0:ios(config)#commit

The following is a sample in which the card is configured in the muxponder slice 0 mode with a 500G trunk payload.


RP/0/RP0/CPU0:ios#config
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 0 client-rate 100GE
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 0 trunk-rate 500G
RP/0/RP0/CPU0:ios(config)#commit

The following is a sample in which the card is configured in the muxponder slice 1 mode with a 400G trunk payload.


RP/0/RP0/CPU0:ios#config
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 1 client-rate 100GE
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 1 trunk-rate 400G
RP/0/RP0/CPU0:ios(config)#commit

The following is a sample in which the card is configured with mixed client rates in the muxponder slice mode.


RP/0/RP0/CPU0:ios#configure
Mon Mar 23 06:10:22.227 UTC
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 0 client-rate OTU4 trunk-rate 500G 
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 1 client-rate 100GE trunk-rate 500G
RP/0/RP0/CPU0:ios(config)#commit

Verifying the Card Configuration


RP/0/RP0/CPU0:ios#show hw-module location 0/2 mxponder
Fri Mar 15 11:48:48.344 IST

Location:             0/2
Client Bitrate:       100GE
Trunk  Bitrate:       500G
Status:               Provisioned
LLDP Drop Enabled:    FALSE
Client Port            Mapper/Trunk Port   CoherentDSP0/2/0/0  CoherentDSP0/2/0/1
                     Traffic Split Percentage

HundredGigECtrlr0/2/0/2  ODU40/2/0/0/1                100                   0
HundredGigECtrlr0/2/0/3  ODU40/2/0/0/2                100                   0
HundredGigECtrlr0/2/0/4  ODU40/2/0/0/3                100                   0
HundredGigECtrlr0/2/0/5  ODU40/2/0/0/4                100                   0
HundredGigECtrlr0/2/0/6  ODU40/2/0/0/5                100                   0
HundredGigECtrlr0/2/0/7  ODU40/2/0/1/1                  0                 100
HundredGigECtrlr0/2/0/8  ODU40/2/0/1/2                  0                 100
HundredGigECtrlr0/2/0/9  ODU40/2/0/1/3                  0                 100
HundredGigECtrlr0/2/0/10 ODU40/2/0/1/4                  0                 100
HundredGigECtrlr0/2/0/11 ODU40/2/0/1/5                  0                 100

The following is a sample output of the coupled mode configuration, in which the shared client port (port 7) carries a 50 percent traffic split toward each trunk.

RP/0/RP0/CPU0:ios#show hw-module location 0/1 mxponder
Tue Oct 15 01:25:57.358 UTC

Location:             0/1
Client Bitrate:       100GE
Trunk  Bitrate:       550G
Status:               Provisioned
LLDP Drop Enabled:    FALSE
Client Port           Mapper/Trunk Port    CoherentDSP0/1/0/0 CoherentDSP0/1/0/1
                   Traffic Split Percentage

HundredGigECtrlr0/1/0/2    ODU40/1/0/0/1             100                   0
HundredGigECtrlr0/1/0/3    ODU40/1/0/0/2             100                   0
HundredGigECtrlr0/1/0/4    ODU40/1/0/0/3             100                   0
HundredGigECtrlr0/1/0/5    ODU40/1/0/0/4             100                   0
HundredGigECtrlr0/1/0/6    ODU40/1/0/0/5             100                   0
HundredGigECtrlr0/1/0/7    ODU40/1/0/0/6              50                  50
HundredGigECtrlr0/1/0/8    ODU40/1/0/1/1               0                 100
HundredGigECtrlr0/1/0/9    ODU40/1/0/1/2               0                 100
HundredGigECtrlr0/1/0/10   ODU40/1/0/1/3               0                 100
HundredGigECtrlr0/1/0/11   ODU40/1/0/1/4               0                 100
HundredGigECtrlr0/1/0/12   ODU40/1/0/1/5               0                 100

The following is a sample output of the muxponder slice 0 configuration.


RP/0/RP0/CPU0:ios#show hw-module location 0/1 mxponder-slice  0
Fri Mar 15 06:04:18.348 UTC

Location:             0/1
Slice ID:             0
Client Bitrate:       100GE
Trunk  Bitrate:       500G
Status:               Provisioned
LLDP Drop Enabled:    FALSE
Client Port                     Mapper/Trunk Port          CoherentDSP0/1/0/0
                                Traffic Split Percentage

HundredGigECtrlr0/1/0/2         ODU40/1/0/0/1                      100
HundredGigECtrlr0/1/0/3         ODU40/1/0/0/2                      100
HundredGigECtrlr0/1/0/4         ODU40/1/0/0/3                      100
HundredGigECtrlr0/1/0/5         ODU40/1/0/0/4                      100
HundredGigECtrlr0/1/0/6         ODU40/1/0/0/5                      100

The following is a sample output of the muxponder slice 1 configuration.


RP/0/RP0/CPU0:ios#show hw-module location 0/1 mxponder-slice 1
Fri Mar 15 06:11:50.020 UTC

Location:             0/1
Slice ID:             1
Client Bitrate:       100GE
Trunk  Bitrate:       400G
Status:               Provisioned
LLDP Drop Enabled:    TRUE
Client Port                     Mapper/Trunk Port          CoherentDSP0/1/0/1
                                Traffic Split Percentage

HundredGigECtrlr0/1/0/8         ODU40/1/0/1/1                      100
HundredGigECtrlr0/1/0/9         ODU40/1/0/1/2                      100
HundredGigECtrlr0/1/0/10        ODU40/1/0/1/3                      100
HundredGigECtrlr0/1/0/11        ODU40/1/0/1/4                      100

The following is a sample output of the muxponder slice 1 configuration with the client configured as OTU4.

RP/0/RP0/CPU0:ios#sh hw-module location 0/0 mxponder-slice 1                                                            
Wed Mar 11 13:59:11.073 UTC 

Location:             0/0
Slice ID:             1  
Client Bitrate:       OTU4
Trunk  Bitrate:       200G
Status:               Provisioned
Client Port                     Peer/Trunk Port            CoherentDSP0/0/0/1  
                              Traffic Split Percentage
OTU40/0/0/8                     ODU40/0/0/1/1                      100
OTU40/0/0/9                     ODU40/0/0/1/2                      100

The following is a sample to verify the mixed client rate configuration in the muxponder slice mode.


RP/0/RP0/CPU0:ios#show hw-module location 0/1 mxponder
Mon Mar 23 06:20:22.227 UTC

Location:             0/1
Slice ID:             0
Client Bitrate:       OTU4
Trunk  Bitrate:       500G
Status:               Provisioned
Client Port                     Peer/Trunk Port            CoherentDSP0/1/0/0   
                                Traffic Split Percentage

OTU40/1/0/2                     ODU40/1/0/0/1                      100
OTU40/1/0/3                     ODU40/1/0/0/2                      100
OTU40/1/0/4                     ODU40/1/0/0/3                      100
OTU40/1/0/5                     ODU40/1/0/0/4                      100
OTU40/1/0/6                     ODU40/1/0/0/5                      100


Location:             0/1
Slice ID:             1
Client Bitrate:       100GE
Trunk  Bitrate:       500G
Status:               Provisioned
LLDP Drop Enabled:    FALSE
ARP Snoop Enabled:    FALSE
Client Port                     Mapper/Trunk Port          CoherentDSP0/1/0/1   
                                Traffic Split Percentage

HundredGigECtrlr0/1/0/8         ODU40/1/0/1/1                         100
HundredGigECtrlr0/1/0/9         ODU40/1/0/1/2                         100
HundredGigECtrlr0/1/0/10        ODU40/1/0/1/3                         100
HundredGigECtrlr0/1/0/11        ODU40/1/0/1/4                         100
HundredGigECtrlr0/1/0/12        ODU40/1/0/1/5                         100

Use the following command to clear alarm statistics on the optics or coherent DSP controller.

clear counters controller controllertype R/S/I/P

The following is a sample in which the alarm statistics are cleared on the coherent DSP controller.


RP/0/RP0/CPU0:ios#show controller coherentDSP 0/1/0/0
Tue Jun 11 05:15:12.540 UTC

Port                                            : CoherentDSP 0/1/0/0
Controller State                                : Up
Inherited Secondary State                       : Normal
Configured Secondary State                      : Normal
Derived State                                   : In Service
Loopback mode                                   : None
BER Thresholds                                  : SF = 1.0E-5  SD = 1.0E-7
Performance Monitoring                          : Enable

Alarm Information:
LOS = 1 LOF = 1 LOM = 0
OOF = 1 OOM = 1 AIS = 0
IAE = 0 BIAE = 0        SF_BER = 0
SD_BER = 2      BDI = 2 TIM = 0
FECMISMATCH = 0 FEC-UNC = 0
Detected Alarms                                 : None

Bit Error Rate Information
PREFEC  BER                                     : 8.8E-03
POSTFEC BER                                     : 0.0E+00

TTI :
        Remote hostname                         : P2B8
        Remote interface                        : CoherentDSP 0/1/0/0
        Remote IP addr                          : 0.0.0.0

FEC mode                                        : Soft-Decision 15

AINS Soak                                       : None
AINS Timer                                      : 0h, 0m
AINS remaining time                             : 0 seconds
RP/0/RP0/CPU0:ios#clear counters controller coherentDSP 0/1/0/0
Tue Jun 11 05:17:07.271 UTC
All counters are cleared
RP/0/RP0/CPU0:ios#show controllers coherentDSP 0/1/0/1
Tue Jun 11 05:20:55.199 UTC

Port                                            : CoherentDSP 0/1/0/1
Controller State                                : Up
Inherited Secondary State                       : Normal
Configured Secondary State                      : Normal
Derived State                                   : In Service
Loopback mode                                   : None
BER Thresholds                                  : SF = 1.0E-5  SD = 1.0E-7
Performance Monitoring                          : Enable

Alarm Information:
LOS = 0 LOF = 0 LOM = 0
OOF = 0 OOM = 0 AIS = 0
IAE = 0 BIAE = 0        SF_BER = 0
SD_BER = 0      BDI = 0 TIM = 0
FECMISMATCH = 0 FEC-UNC = 0
Detected Alarms                                 : None

Bit Error Rate Information
PREFEC  BER                                     : 1.2E-02
POSTFEC BER                                     : 0.0E+00

TTI :
        Remote hostname                         : P2B8
        Remote interface                        : CoherentDSP 0/1/0/1
        Remote IP addr                          : 0.0.0.0

FEC mode                                        : Soft-Decision 15

AINS Soak                                       : None
AINS Timer                                      : 0h, 0m
AINS remaining time                             : 0 seconds

Regeneration Mode

In an optical transmission system, 3R regeneration helps extend the reach of optical communication links by reamplifying, reshaping, and retiming the data pulses. Regeneration corrects distortion of the optical signal by converting it to an electrical signal, processing that electrical signal, and then retransmitting it as an optical signal.

In Regeneration (Regen) mode, the OTN signal received on one trunk port is regenerated and sent out on the other trunk port of the line card, and vice versa. In this mode, only the trunk optics and coherent DSP controllers are created.

Configuring the Card in Regen Mode

The supported trunk rates depend on the card type.

To configure regen mode on 1.2T, 1.2TL, and 2-QDD-C cards, use the following commands:

configure

hw-module location location

regen

trunk-rate trunk-rate

commit

exit

Example
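The following is a sample sketch in which the card at location 0/0 is configured in regen mode with a 400G trunk rate, matching the verification output below; the submode prompts shown are indicative.

RP/0/RP0/CPU0:ios#configure
RP/0/RP0/CPU0:ios(config)#hw-module location 0/0
RP/0/RP0/CPU0:ios(config-hwmod)#regen
RP/0/RP0/CPU0:ios(config-regen)#trunk-rate 400G
RP/0/RP0/CPU0:ios(config-regen)#commit
RP/0/RP0/CPU0:ios(config-regen)#exit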

Verifying the Regen Mode

The following is a sample to verify the regen mode.

show hw-module location location regen

RP/0/RP0/CPU0:ios#show hw-module location 0/0 regen
Mon Mar 25 09:50:42.936 UTC

Location:             0/0
Trunk  Bitrate:       400G
Status:               Provisioned
East Port 	            West Port
CoherentDSP0/0/0/0      CoherentDSP0/0/0/1

The terms East Port and West Port represent OTN signal regeneration at the same layer.

Configuring the BPS

You can configure the Bits per Symbol (BPS) to 3.4375 to support 300G trunk configurations on 75 GHz networks using the following commands:

configure

controller optics R/S/I/P bits-per-symbol 3.4375

commit

The following is a sample in which the BPS is configured to 3.4375.


RP/0/RP0/CPU0:ios#configure
Wed Mar 27 14:12:49.932 UTC
RP/0/RP0/CPU0:ios(config)#controller optics 0/3/0/0 bits-per-symbol 3.4375
RP/0/RP0/CPU0:ios(config)#commit

Viewing BPS and Baud Rate Ranges

To view the BPS for a specific range, use the following command:

show controller optics R/S/I/P bps-range bps-range | include data-rate | include fec-type

RP/0/RP0/CPU0:ios#show controllers optics 0/3/0/0 bps-range 3 3.05 | include 300G | include SD27
Thu Mar 28 03:01:39.751 UTC
300G            SD27            3.0000000       69.4350994
300G            SD27            3.0078125       69.2547485
300G            SD27            3.0156250       69.0753320
300G            SD27            3.0234375       68.8968428
300G            SD27            3.0312500       68.7192736
300G            SD27            3.0390625       68.5426174
300G            SD27            3.0468750       68.3668671

To view the baud rate for a specific range, use the following command:

show controller optics R/S/I/P baud-rate-range baud-range | include data-rate | include fec-type

RP/0/RP0/CPU0:ios#show controllers optics 0/3/0/0 baud-rate-range 43 43.4 | include 300G | include SD27
Thu Mar 28 03:12:36.521 UTC
300G            SD27            4.8046875       43.3545986
300G            SD27            4.8125000       43.2842178
300G            SD27            4.8203125       43.2140651
300G            SD27            4.8281250       43.1441394
300G            SD27            4.8359375       43.0744397
300G            SD27            4.8437500       43.0049648

Configuring the Trunk Rate for BPSK

From R7.2.1 onwards, you can configure trunk rates of 50G, 100G, and 150G to support Binary Phase-Shift Keying (BPSK) modulation. BPSK modulation enables information to be carried over the optical signal more efficiently.

You can configure trunk rates for BPSK using CLI, NetConf YANG, and OC models.

The following table lists the 50G, 100G, and 150G trunk rates with the supported BPSK modulation range (in bits per symbol):

Trunk Rate | BPSK Modulation (bits per symbol)
---------- | ---------------------------------
50G        | 1 to 1.4453125
100G       | 1 to 2.890625
150G       | 1.453125 to 4.3359375

To configure the trunk rate for BPSK modulation, enter the following commands:

configure

hw-module location location mxponder

trunk-rate {50G | 100G | 150G}

commit

The following example shows how to configure trunk rate to 50G:


RP/0/RP0/CPU0:(config)#hw-module location 0/0 mxponder
RP/0/RP0/CPU0:(config-hwmod-mxp)#trunk-rate 50G 
RP/0/RP0/CPU0:(config-hwmod-mxp)#commit    

Viewing the BPSK Trunk Rate Ranges

To view the trunk rate configured for the BPSK modulation, use the following show commands:


RP/0/RP0/CPU0:ios(hwmod-mxp)#show hw-module location 0/0 mxponder                                                                                
Tue Feb 25 11:13:41.934 UTC                                                                                                                                    

Location:             0/0
Client Bitrate:       100GE
Trunk  Bitrate:       50G  
Status:               Provisioned
LLDP Drop Enabled:    FALSE                   
ARP Snoop Enabled:    FALSE                   
Client Port                     Mapper/Trunk Port          CoherentDSP0/0/0/0   CoherentDSP0/0/0/1      
                                Traffic Split Percentage                                                

HundredGigECtrlr0/0/0/2         ODU40/0/0/0                            50                       50


RP/0/RP0/CPU0:ios#show controllers optics 0/0/0/0
Thu Mar  5 07:12:55.681 UTC                          

Controller State: Up 

Transport Admin State: In Service 

Laser State: On 

LED State: Green 
                  
 Optics Status    

         Optics Type:  DWDM optics
         DWDM carrier Info: C BAND, MSA ITU Channel=61, Frequency=193.10THz,
         Wavelength=1552.524nm                                              

         Alarm Status:
         -------------
         Detected Alarms: None


         LOS/LOL/Fault Status:

         Alarm Statistics:

         -------------
         HIGH-RX-PWR = 0            LOW-RX-PWR = 2          
         HIGH-TX-PWR = 0            LOW-TX-PWR = 0          
         HIGH-LBC = 0               HIGH-DGD = 0            
         OOR-CD = 0                 OSNR = 0                
         WVL-OOL = 0                MEA  = 0                
         IMPROPER-REM = 0                                   
         TX-POWER-PROV-MISMATCH = 0                         
         Laser Bias Current = 0.0 %                         
         Actual TX Power = 1.97 dBm                         
         RX Power = 1.58 dBm                                
         RX Signal Power = 0.60 dBm                         
         Frequency Offset = 386 MHz                         

         Performance Monitoring: Enable 

         THRESHOLD VALUES
         ----------------

         Parameter                 High Alarm  Low Alarm  High Warning  Low Warning
         ------------------------  ----------  ---------  ------------  -----------
         Rx Power Threshold(dBm)          4.9      -12.0           0.0          0.0
         Tx Power Threshold(dBm)          3.5      -10.1           0.0          0.0
         LBC Threshold(mA)                N/A        N/A          0.00         0.00

         Configured Tx Power = 2.00 dBm
         Configured CD High Threshold = 180000 ps/nm
         Configured CD lower Threshold = -180000 ps/nm
         Configured OSNR lower Threshold = 0.00 dB
         Configured DGD Higher Threshold = 180.00 ps
         Baud Rate =  34.7175521851 GBd
         Bits per Symbol = 1.0000000000  bits/symbol
         Modulation Type: BPSK
         Chromatic Dispersion -9 ps/nm
         Configured CD-MIN -180000 ps/nm  CD-MAX 180000 ps/nm
         Polarization Mode Dispersion = 0.0 ps
         Second Order Polarization Mode Dispersion = 125.00 ps^2
         Optical Signal to Noise Ratio = 34.60 dB
         SNR = 20.30 dB
         Polarization Dependent Loss = 0.20 dB
         Polarization Change Rate = 0.00 rad/s
         Differential Group Delay = 2.00 ps
         Filter Roll Off Factor : 0.100
         Rx VOA Fixed Ratio : 15.00 dB
         Enhanced Colorless Mode : 0
         Enhanced SOP Tolerance Mode : 0
         NLEQ Compensation Mode : 0
         Cross Polarization Gain Mode : 0
         Cross Polarization Weight Mode : 0
         Carrier Phase Recovery Window : 0
         Carrier Phase Recovery Extended Window : 0


AINS Soak                : None
AINS Timer               : 0h, 0m
AINS remaining time      : 0 seconds

OTN-XP Card

The following section describes the supported configurations and procedures to configure the card modes on the line card.

LC Mode on OTN-XP Card

When you install the OTN-XP card in the Cisco NCS 1004 chassis, it is in the POWERED_ON state. The "LCMODE is not configured for line card" alarm is present on the card, and the LED status is AMBER.

sysadmin-vm:0_RP0# show platform
Thu Mar  26 21:38:07.305 UTC+00:00
Location  Card Type               HW State      SW State      Config State  
----------------------------------------------------------------------------
0/0       NCS1K4-LC-FILLER        PRESENT       N/A           NSHUT         
0/1       NCS1K4-OTN-XP           POWERED_ON    N/A           NSHUT         
0/RP0     NCS1K4-CNTLR-K9         OPERATIONAL   OPERATIONAL   NSHUT         
0/FT0     NCS1K4-FAN              OPERATIONAL   N/A           NSHUT         
0/FT1     NCS1K4-FAN              OPERATIONAL   N/A           NSHUT         
0/FT2     NCS1K4-FAN              OPERATIONAL   N/A           NSHUT         
0/PM0     NCS1K4-AC-PSU           OPERATIONAL   N/A           NSHUT         
0/SC0     NCS1004                 OPERATIONAL   N/A           NSHUT         
sysadmin-vm:0_RP0# show alarms brief system active 
Thu Mar  26 21:38:34.394 UTC+00:00

-------------------------------------------------------------------------------
Active Alarms
-------------------------------------------------------------------------------
Location          Severity      Group           Set time            Description
-------------------------------------------------------------------------------
0                 major         environ         03/26/20 20:23:11   Power Module redundancy lost.
0                 critical      environ         03/26/20 20:23:29   Fan: One or more LCs missing, running fans at max speed.
0/1               not_alarmed   shelf           03/26/20 21:38:26   LCMODE is not configured for line card
sysadmin-vm:0_RP0# 
sysadmin-vm:0_RP0# show led location 0/1 
Thu Mar  26 21:39:05.101 UTC+00:00
=============================================================
Location  LED Name                      Mode         Color  
=============================================================
0/1        
          0/1-Status LED                WORKING     AMBER     
sysadmin-vm:0_RP0# 

You must select a datapath mode by configuring the LC mode; the OTN-XP card does not have a default LC mode. After the LC mode is configured using the CLI, the card transitions to the OPERATIONAL state, the alarm clears, and the LED status turns GREEN.

The LC modes supported on the OTN-XP card are:

  • 10G-GREY-MXP


Note


100G-TXP LC mode is not supported.


Only one LC mode can be configured on the OTN-XP card at a time. When the LC mode is changed using the CLI, the "LCMODE changed, delete the datapath config and reload line card" alarm is present on the card and the DP FPD is in the disabled state. To clear the alarm and enable the DP FPD, delete the existing datapath configuration and reload the line card; the new LC mode is then applied and the card becomes operational. A sketch of this procedure follows, and a complete example appears later in this section.
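The following is a minimal sketch of that procedure, assuming the card is at location 0/0; the same commands appear in the complete example later in this section.

RP/0/RP0/CPU0:ios#configure
! Delete the existing datapath configuration for the card
RP/0/RP0/CPU0:ios(config)#no hw-module location 0/0
RP/0/RP0/CPU0:ios(config)#commit
RP/0/RP0/CPU0:ios(config)#end
RP/0/RP0/CPU0:ios#admin
! Reload the line card from admin mode so that the new LC mode takes effect
sysadmin-vm:0_RP0# hw-module location 0/0 reload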

If an LC mode requires a different FPGA configuration and the package is not available, the "OTN_XP_DP_FPD_PKG is missing, please install the package to proceed" alarm is present on the card. To clear the alarm, install the OTN_XP_DP_FPD_PKG file. After the package installation is complete, the required FPGA image is copied from the OTN_XP_DP_FPD_PKG file to the card, the card automatically reloads, and the card becomes operational.


Note


The LC mode configuration is a shared plane configuration. The configuration does not enter the preconfigured state when the line card is not available.


Configuring the LC Mode


Note


  • Ensure the OTN_XP_DP_FPD_PKG file is installed before configuring the LC mode.

  • When you insert an OTN-XP line card that has a lower FPD version, you must configure an LC mode that is supported on the software release the line card is loaded with. You cannot upgrade the FPD of a line card if you configure an LC mode that is supported only in a higher software release.


To configure the LC mode on the OTN-XP card, use the following commands:

configure

lc-module location location lcmode mode

commit

Example

To view the LC modes available on the OTN-XP card, use the following command:


RP/0/RP0/CPU0:ios#sh lc-module location 0/0 lcmode all 
Wed Sep 29 14:41:51.487 UTC
States: A-Available     R-Running       C-Configured

Node    Lcmode_Supported    Owner    Options(State)                HW_Ver
--------------------------------------------------------------------------------
0/0     Yes                 None     10G-GREY-MXP (A)               3.0 
                                     4x100G-MXP-400G-TXP (A)        2.0 

The following is a sample in which the OTN-XP card is configured in the 10G-GREY-MXP mode.

RP/0/RP0/CPU0:ios#configure
Thu Mar 26 21:40:51.495 UTC
RP/0/RP0/CPU0:ios(config)#lc-module location 0/1 lcmode 10G-GREY-MXP 
RP/0/RP0/CPU0:ios(config)#commit
Verifying the LC Mode Configuration

The following is a sample output of a successful 10G-GREY-MXP LC mode configuration after which the card transitions to the OPERATIONAL state, the alarm clears, and the LED status turns to GREEN.

RP/0/RP0/CPU0:ios(config)#do show platform
Thu Mar 26 21:41:17.206 UTC
Node              Type                       State             Config state
--------------------------------------------------------------------------------
0/0               NCS1K4-LC-FILLER           PRESENT           NSHUT
0/1               NCS1K4-OTN-XP              OPERATIONAL       NSHUT
0/RP0/CPU0        NCS1K4-CNTLR-K9(Active)    IOS XR RUN        NSHUT
0/FT0             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/FT1             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/FT2             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/PM0             NCS1K4-AC-PSU              OPERATIONAL       NSHUT
0/SC0             NCS1004                    OPERATIONAL       NSHUT
RP/0/RP0/CPU0:ios(config)#do show alarms brief system active 
Thu Mar 26 21:41:29.641 UTC

------------------------------------------------------------------------------------
Active Alarms 
------------------------------------------------------------------------------------
Location        Severity     Group            Set Time                   Description                                                                                                                                                                                                                                                
------------------------------------------------------------------------------------
0               Major        Environ          03/26/2020 20:23:11 UTC    Power Module redundancy lost.                                                                                                                                                                                                                              
0               Critical     Environ          03/26/2020 20:23:29 UTC    Fan: One or more LCs missing, running fans at max speed.                                                                                                                          
RP/0/RP0/CPU0:ios(config)#end
RP/0/RP0/CPU0:ios#show lc-module location 0/1 lcmode all 
Thu Mar 26 21:41:58.780 UTC
States: A-Available     R-Running       C-Configured

Node    Lcmode_Supported    Owner    Options(State)                HW_Ver
--------------------------------------------------------------------------------
0/1     Yes                 CLI      10G-GREY-MXP (R/C)             3.0 
                                     4x100G-MXP-400G-TXP (A)        2.0 
RP/0/RP0/CPU0:ios#show lc-module location 0/1 lcmode     
Thu Mar 26 21:42:18.997 UTC

Node    Lcmode_Supported    Owner    Running                Configured
--------------------------------------------------------------------------------
0/1     Yes                 CLI      10G-GREY-MXP          10G-GREY-MXP 
RP/0/RP0/CPU0:ios#admin
Thu Mar 26 21:42:38.525 UTC

root connected from 192.0.2.3 using ssh on sysadmin-vm:0_RP0
sysadmin-vm:0_RP0# show led location 0/1
Thu Mar  26 21:42:45.337 UTC+00:00
=============================================================
Location  LED Name                      Mode         Color  
=============================================================
0/1        
          0/1-Status LED                WORKING     GREEN     

 
Example

The following is a sample in which the LC mode is changed from 10G-GREY-MXP to the 4x100G-MXP-400G-TXP mode. In this sample, the datapath configuration is deleted and the card is reloaded to apply the new LC mode.


RP/0/RP0/CPU0:ios#show lc-module location all lcmode 
Thu Sep 30 10:19:29.853 UTC

Node    Lcmode_Supported    Owner    Running                Configured
--------------------------------------------------------------------------------
0/0     Yes                 CLI      10G-GREY-MXP           10G-GREY-MXP 
0/1     No                  N/A      N/A                    N/A
0/2     No                  N/A      N/A                    N/A
0/3     No                  N/A      N/A                    N/A


RP/0/RP0/CPU0:ios#configure 
Thu Sep 30 10:19:32.818 UTC
Current Configuration Session  Line       User     Date                     Lock
00001000-000051f7-00000000     vty1       root     Wed Sep 29 15:26:00 2021 
RP/0/RP0/CPU0:ios(config)#no lc-module location 0/0 lcmode 10g-GREY-MXP 
RP/0/RP0/CPU0:ios(config)#commit 
Thu Sep 30 10:20:34.086 UTC
RP/0/RP0/CPU0:ios(config)#do show alarms brief system active 
Thu Sep 30 10:20:52.950 UTC

------------------------------------------------------------------------------------
Active Alarms 
------------------------------------------------------------------------------------
Location        Severity     Group            Set Time                   Description                                                                                                                                                                                                                                                
------------------------------------------------------------------------------------
0/PM0           Major        Environ          09/29/2021 14:41:59 UTC    Power Module Output Disabled                                                                                                                                                                                                                               
0               Major        Environ          09/29/2021 14:42:15 UTC    Power Module redundancy lost.                                                                                                                                                                                                                              
0               Critical     Environ          09/29/2021 14:42:25 UTC    Fan: One or more LCs missing, running fans at max speed.                                                                                                                                                                                                   
0/0             NotAlarmed   Shelf            09/30/2021 10:20:34 UTC    LCMODE changed, delete the datapath config and reload line card                                                                                                                                                                                            


RP/0/RP0/CPU0:ios#configure 
Thu Sep 30 10:21:41.281 UTC
Current Configuration Session  Line       User     Date                     Lock
00001000-000051f7-00000000     vty1       root     Wed Sep 29 15:26:00 2021 
RP/0/RP0/CPU0:ios(config)#no hw-module location 0/0
RP/0/RP0/CPU0:ios(config)#commit 
Thu Sep 30 10:21:49.982 UTC
RP/0/RP0/CPU0:ios(config)#

RP/0/RP0/CPU0:ios#show platform
Thu Sep 30 10:22:08.482 UTC
Node              Type                       State             Config state
--------------------------------------------------------------------------------
0/0               NCS1K4-OTN-XP              OPERATIONAL       NSHUT
0/2               NCS1K4-LC-FILLER           PRESENT           NSHUT
0/3               NCS1K4-LC-FILLER           PRESENT           NSHUT
0/RP0/CPU0        NCS1K4-CNTLR-K9(Active)    IOS XR RUN        NSHUT
0/FT0             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/FT1             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/FT2             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/PM0             NCS1K4-AC-PSU              OPERATIONAL       NSHUT
0/SC0             NCS1004                    OPERATIONAL       NSHUT
RP/0/RP0/CPU0:ios#
RP/0/RP0/CPU0:ios#admin
Thu Sep 30 10:23:55.937 UTC
Last login: Thu Sep 30 04:32:57 2021 from 192.0.2.3
root connected from 192.0.2.3 using ssh on sysadmin-vm:0_RP0
sysadmin-vm:0_RP0# hw-module location 0/0 reload 
Thu Sep  30 10:24:17.938 UTC+00:00
Reloading the module will be traffic impacting if not properly drained. Continue to Reload hardware module ? [no,yes] yes
result Card graceful reload request on 0/0 succeeded.
sysadmin-vm:0_RP0#show platform 
Thu Sep  30 10:25:16.876 UTC+00:00
Location  Card Type               HW State      SW State      Config State  
----------------------------------------------------------------------------
0/0       NCS1K4-OTN-XP           POWERED_ON    N/A           NSHUT         
0/2       NCS1K4-LC-FILLER        PRESENT       N/A           NSHUT         
0/3       NCS1K4-LC-FILLER        PRESENT       N/A           NSHUT         
0/RP0     NCS1K4-CNTLR-K9         OPERATIONAL   OPERATIONAL   NSHUT         
0/FT0     NCS1K4-FAN              OPERATIONAL   N/A           NSHUT         
0/FT1     NCS1K4-FAN              OPERATIONAL   N/A           NSHUT         
0/FT2     NCS1K4-FAN              OPERATIONAL   N/A           NSHUT         
0/PM0     NCS1K4-2KW-AC           OPERATIONAL   N/A           NSHUT         
0/SC0     NCS1004-K9              OPERATIONAL   N/A           NSHUT         

sysadmin-vm:0_RP0#exit
RP/0/RP0/CPU0:ios#show lc-module location all lcmode 
Thu Sep 30 10:29:08.183 UTC

Node    Lcmode_Supported    Owner    Running                Configured
--------------------------------------------------------------------------------
0/0     Yes                 None     Not running            Not configured
0/1     No                  N/A      N/A                    N/A
0/2     No                  N/A      N/A                    N/A
0/3     No                  N/A      N/A                    N/A

RP/0/RP0/CPU0:ios#show platform
Thu Sep 30 10:29:36.075 UTC
Node              Type                       State             Config state
--------------------------------------------------------------------------------
0/0               NCS1K4-OTN-XP              POWERED_ON        NSHUT
0/2               NCS1K4-LC-FILLER           PRESENT           NSHUT
0/3               NCS1K4-LC-FILLER           PRESENT           NSHUT
0/RP0/CPU0        NCS1K4-CNTLR-K9(Active)    IOS XR RUN        NSHUT
0/FT0             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/FT1             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/FT2             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/PM0             NCS1K4-AC-PSU              OPERATIONAL       NSHUT
0/SC0             NCS1004                    OPERATIONAL       NSHUT
RP/0/RP0/CPU0:ios#
RP/0/RP0/CPU0:ios#configure 
Thu Sep 30 10:29:57.997 UTC
Current Configuration Session  Line       User     Date                     Lock
00001000-000051f7-00000000     vty1       root     Wed Sep 29 15:26:00 2021 
RP/0/RP0/CPU0:ios(config)#lc-module location 0/0 lcmode 4x100G-MXP-400G-TXP 
RP/0/RP0/CPU0:ios(config)#commit 
Thu Sep 30 10:30:11.312 UTC
RP/0/RP0/CPU0:ios(config)#end
RP/0/RP0/CPU0:ios#show lc-module location all lcmode 
Thu Sep 30 10:40:56.480 UTC

Node    Lcmode_Supported    Owner    Running                Configured
--------------------------------------------------------------------------------
0/0     Yes                 CLI      4x100G-MXP-400G-TXP    4x100G-MXP-400G-TXP 
0/1     No                  N/A      N/A                    N/A
0/2     No                  N/A      N/A                    N/A
0/3     No                  N/A      N/A                    N/A

RP/0/RP0/CPU0:ios# RP/0/RP0/CPU0:ios#show  platform
Thu Sep 30 10:41:25.093 UTC
Node              Type                       State             Config state
--------------------------------------------------------------------------------
0/0               NCS1K4-OTN-XP              OPERATIONAL       NSHUT
0/2               NCS1K4-LC-FILLER           PRESENT           NSHUT
0/3               NCS1K4-LC-FILLER           PRESENT           NSHUT
0/RP0/CPU0        NCS1K4-CNTLR-K9(Active)    IOS XR RUN        NSHUT
0/FT0             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/FT1             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/FT2             NCS1K4-FAN                 OPERATIONAL       NSHUT
0/PM0             NCS1K4-AC-PSU              OPERATIONAL       NSHUT
0/SC0             NCS1004                    OPERATIONAL       NSHUT
RP/0/RP0/CPU0:ios#

Muxponder Configuration on OTN-XP Card

The OTN-XP card has two trunk ports and 12 client ports. The muxponder configuration supports two slices, 0 and 1. You can configure mxponder-slice 0, mxponder-slice 1, or both. Each mxponder-slice supports 10 client interfaces.

Table 1. Feature History

Feature Name: 400G TXP or MXP modes with CFP2 DCO for OTN-XP Card

Release Information: Cisco IOS XR Release 7.3.1

Description: On the OTN-XP card, you can configure a single 400GE or 4x100G payload that is received over the client port as a 400G signal over DWDM on the line side. The card improves efficiency, performance, and flexibility for customer networks, allowing 400GE or 4x100G client transport over a 400G WDM wavelength. Commands modified: controller coherentDSP, show controller coherentDSP.

Table 2. Hardware Module Configuration with Client to Trunk Mapping

Hardware Module Configuration: 10G Grey Muxponder

Line Card Mode: 10G-GREY-MXP

Client Port Rate: OTU2, OTU2e, or 10GE

Trunk Rate: 100G

Client to Trunk Mapping:

  • Mxponder-slice 0—Client ports 4, 5, and 2 are mapped to trunk port 0.

  • Mxponder-slice 1—Client ports 7, 6, and 11 are mapped to trunk port 1.

  • Each client port consists of four lanes, 1, 2, 3, and 4. The lanes 3 and 4 can only be configured for ports 2 and 11. It is not mandatory to configure all 10 client lanes for a slice.

Configuring the Muxponder Mode for 10G Grey Muxponder


Note


The LC mode must be configured to 10G-GREY-MXP on the OTN-XP card before you perform this configuration.


To configure the OTN-XP card in the muxponder mode, use the following commands:

configure

hw-module location location mxponder-slice mxponder-slice-number

trunk-rate 100G

client-port-rate client-port-number lane lane-number client-type {10GE | OTU2 | OTU2e}

commit

Example

The following is a sample in which the OTN-XP card is configured with mixed client rates in the mxponder-slice 0 mode.

RP/0/RP0/CPU0:ios#config
Tue Apr 21 09:21:44.460 UTC
RP/0/RP0/CPU0:ios(config)#hw-module location 0/1 mxponder-slice 0
RP/0/RP0/CPU0:ios(config-hwmod-mxp)#trunk-rate 100G
RP/0/RP0/CPU0:ios(config-hwmod-mxp)#client-port-rate 2 lane 3 client-type OTU2
RP/0/RP0/CPU0:ios(config-hwmod-mxp)#client-port-rate 2 lane 4 client-type OTU2
RP/0/RP0/CPU0:ios(config-hwmod-mxp)#client-port-rate 4 lane 1 client-type 10GE
RP/0/RP0/CPU0:ios(config-hwmod-mxp)#commit
Verifying the Muxponder Configuration

The following is a sample to verify the muxponder configuration in the OTN-XP card.


RP/0/RP0/CPU0:ios#show hw-module location 0/1 mxponder
Tue Apr 21 09:26:12.308 UTC

Location:             0/1
Slice ID:             0
Client Bitrate:       MIXED
Trunk  Bitrate:       100G
Status:               Provisioned
LLDP Drop Enabled:    FALSE
ARP Snoop Enabled:    FALSE
Client Port   Mapper/Trunk Port    Peer/Trunk Port    OTU40/0/0/0
              Traffic Split Percentage
OTU20/0/0/2/3            NONE          ODU20/0/0/0/2/3       100
OTU20/0/0/2/4            NONE          ODU20/0/0/0/2/4       100
TenGigECtrlr0/0/0/4/1 ODU2E0/0/0/0/4/1       NONE            100