Introduction
This document describes how to troubleshoot the throughput limitation observed on ASR9912 and ASR9922 chassis.
Prerequisites
Requirements
Cisco recommends that you have knowledge of these topics:
- ASR 9900 series
- SFC1 series fabric cards
Components Used
The information in this document is based on these software and hardware versions:
- ASR9912 with SFC1 series fabric cards installed
- ASR9922 with SFC1 series fabric cards installed
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.
Background Information
On ASR 9900 series chassis (ASR9912, ASR9922) that have SFC1 series fabric cards installed along with Tomahawk 100GE line cards (A9K* PIDs), you can observe a rate limit of approximately 60 Gbps on individual HundredGigE interfaces.
Problem
The SFC1 fabric card provides a limit of approximately 100 Gbps per line card. This issue is mainly observed on Tomahawk line cards with A9K* PIDs, for example A9K-8X100GE-TR. Because these line cards connect to only 5 fabric cards, the total available fabric bandwidth per individual line card is approximately 500 Gbps. Therefore, even if 7 SFC1 series fabric cards are installed in the chassis, an A9K* PID card uses only the first 5 fabric cards on the ASR9K.
The available fabric capacity of approximately 500 Gbps is divided equally among the NPs, that is, 500/4 = 125 Gbps available per NP. Each NP feeds 2 individual HundredGigE interfaces on the line card and shares its bandwidth equally between them.
When both interfaces on an NP are UP, the aggregate bandwidth of 125 Gbps is divided equally between the two ports, that is, the maximum bandwidth available per port is 125/2 = ~62.5 Gbps. Similarly, when all interfaces on the Tomahawk line card are UP, each individual interface receives only ~62.5 Gbps of throughput.
Tip: Fabric type and line card compatibility is explained in the ASR9K Chassis Fabric Modes Cisco article.
Solution
The line card shares its fabric bandwidth equally among the NPs; however, each NP can reassign resources per port based on interface status.
Hence, as a temporary workaround, keep only one port per NP (Network Processor) in the no shut state while the other port remains in the shutdown state.
Note: If the other port is simply in the down state (for example, the interface is unplugged) and not in the admin-down state, this workaround does not work.
This allows the NP to redirect the second port's fabric capacity to the first port. In this scenario, the maximum available bandwidth per port is 125 Gbps, so the individual HundredGigE port can deliver the required 100 Gbps of bandwidth while SFC1 fabric cards are in use.
This workaround can be applied either to an individual NP or across the entire line card if 100 Gbps throughput is required on all production interfaces.
The individual port to NP (Network Processor) mapping can be seen with the show controllers np ports all location X/Y/CPUZ command, for example:
show controllers np ports all location 0/0/CPU0
Thu Sep 22 16:47:23.338 UTC
Node: 0/0/CPU0:
----------------------------------------------------------------
NP Bridge Fia Ports
-- ------ --- ---------------------------------------------------
0 -- 0 HundredGigE0/0/0/0 - HundredGigE0/0/0/1
1 -- 1 HundredGigE0/0/0/2 - HundredGigE0/0/0/3
2 -- 2 HundredGigE0/0/0/4 - HundredGigE0/0/0/5
3 -- 3 HundredGigE0/0/0/6 - HundredGigE0/0/0/7
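Based on the mapping in this output, a minimal example of the temporary workaround is shown here. It assumes that HundredGigE0/0/0/0 is the production port on NP 0 and that HundredGigE0/0/0/1 can be administratively shut down; the prompt is illustrative, and the interface names must be adjusted to match the NP mapping on your device.

RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# interface HundredGigE0/0/0/1
RP/0/RP0/CPU0:router(config-if)# shutdown
RP/0/RP0/CPU0:router(config-if)# commit

With HundredGigE0/0/0/1 in the admin-down state, NP 0 can redirect its full ~125 Gbps of fabric capacity to HundredGigE0/0/0/0.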
However, the permanent and recommended solution is to upgrade the device to SFC2 series fabric cards, which provide ~1 Tbps per line card; therefore, 125 Gbps is available per interface even when all HundredGigE interfaces are in the UP/UP state.
Moreover, when you use A99* PID line cards with RP2/SFC2 modules, there are 3 different fabric modes that can be configured on ASR9K (9912, 9910, and 9922 only) devices, as described here:
Fabric Modes
ASR99XX chassis (ASR9912, ASR9910, ASR9922) can be used in three different fabric modes.
Default Mode
In this mode, both Typhoon and Tomahawk LCs (as well as RP/FC) can be intermixed in the chassis. The number of VQIs is limited to 1024, and multicast traffic uses only the first 5 FCs.
Note: No explicit admin configuration is required to enable this mode.
HighBandWidth Mode
In this mode, only Tomahawk LCs (and only RP2/SFC2) can be used in the chassis. The number of VQIs is up to 2048, and multicast traffic uses only the first 5 FCs. Both Tomahawk 5-FAB (9K LC PID) and 7-FAB (99 LC PID) LCs can be used in the chassis. Typhoon LCs are not supported in this mode. It is recommended that the chassis has all 7 FCs. This mode is enabled with this admin config CLI:
fabric enable mode highbandwidth
Note: This CLI is rejected if the chassis has an unsupported card; the card must be removed before the config commit.
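For reference, a typical admin configuration sequence for this mode is shown here as a sketch; the exact prompts and steps can vary with the IOS XR release. The same sequence applies to the A99-HighBandWidth mode with the fabric enable mode A99-highbandwidth command instead.

RP/0/RP0/CPU0:router# admin
RP/0/RP0/CPU0:router(admin)# configure
RP/0/RP0/CPU0:router(admin-config)# fabric enable mode highbandwidth
RP/0/RP0/CPU0:router(admin-config)# commit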
A99-HighBandWidth Mode
In this mode, only Tomahawk 7-FAB (99 LC PID) LCs (and only RP2/SFC2) can be used in the chassis. The number of VQIs is up to 2048, and multicast traffic uses all 7 FCs. Tomahawk 5-FAB (9K LC PID) and Typhoon LCs cannot be used in the chassis. It is recommended that the chassis has all 7 FCs. This mode is enabled with this admin config CLI:
fabric enable mode A99-highbandwidth
Note: This CLI is rejected if the chassis has an unsupported card; the card must be removed before the config commit.