Configuring QoS
This chapter describes how to configure the quality of service (QoS) features on the Cisco Nexus 5000 Series switch.
Information About QoS
The Cisco Nexus 5000 Series switch provides QoS capabilities such as traffic prioritization and egress bandwidth allocation.
The default QoS configuration on the switch provides lossless service for Fibre Channel and Fibre Channel over Ethernet (FCoE) traffic and best-effort service for Ethernet traffic. QoS can be configured to provide additional classes of service for Ethernet traffic. Cisco Nexus 5000 Series QoS features are configured using Cisco Modular QoS CLI (MQC).
This section includes the following topics:
- MQC
- System Classes
- Default System Classes
- Link-Level Flow Control
- Priority Flow Control
- MTU
- Trust Boundaries
- Ingress Policies
- Egress Policies
- QoS for Multicast Traffic
- Policy for Fibre Channel Interfaces
- QoS for Traffic Directed to the CPU
MQC
The Cisco Modular QoS CLI (MQC) provides a standard set of commands for configuring QoS.
You can use MQC to define additional traffic classes and to configure QoS policies for the whole system and for individual Ethernet interfaces. Configuring a QoS policy with MQC consists of the following steps:
1. Define traffic classes using the class-map command.
The class map classifies incoming or outgoing packets based on matching criteria, such as the IEEE 802.1p CoS value. Unicast and multicast packets are classified.
2. Associate policies or actions with each class of traffic using the policy-map command.
The policy map defines a set of actions to take on the associated traffic class, such as limiting the bandwidth or dropping packets.
3. Attach policies to MQC targets using the service-policy command.
An MQC target is an entity (such as an Ethernet interface) that represents a flow of packets. A service policy associates a policy map with an MQC target, and specifies whether to apply the policy on incoming or outgoing packets. This enables the configuration of interface-specific QoS policies such as policing and bandwidth allocation.
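As a minimal end-to-end sketch (class, policy, and interface names and values are illustrative), the three steps fit together as follows:

```
switch# configure terminal
! Step 1: define a traffic class (illustrative CoS value)
switch(config)# class-map my_class1
switch(config-cmap)# match cos 5
switch(config-cmap)# exit
! Step 2: associate an action with the class (illustrative bandwidth)
switch(config)# policy-map my_policy1
switch(config-pmap)# class my_class1
switch(config-pmap-c)# bandwidth percent 20
switch(config-pmap-c)# exit
switch(config-pmap)# exit
! Step 3: attach the policy to an MQC target, here an Ethernet interface
switch(config)# interface ethernet 1/1
switch(config-if)# service-policy output my_policy1
```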
System Classes
The system class is a new type of MQC target. A service policy can associate a policy map with a system class, which enables application of a QoS policy across the whole switch.
Parameters in system classes need to be configured consistently across the switch and the whole network to ensure that packets in a specific traffic class receive consistent treatment as they are transported across the network.
To ensure QoS consistency (and for ease of configuration), the switch distributes the system class parameter values to all its attached network adapters using the DCBX protocol. For additional information about communication between the switch and adapters, see the “DCE Bridging Capability Exchange Protocol” section.
If service policies are configured at the interface level, the interface-level policy always takes precedence over system class configuration or defaults.
The following QoS parameters can be specified for a system class:
- Drop type: No drop specifies lossless service for the system class; drop specifies that tail drop is used when a queue for the system class is full.
- MTU: The system class MTU defines the maximum packet size for any packet classified into the system class. Each system class has a default MTU, and the system class MTU is configurable.
- CoS value: The match CoS value specifies the IEEE 802.1p CoS value to associate with the system class.
- Bandwidth and priority: Sets the bandwidth and priority configuration values for the system class. The system class values are used as the default values for all interfaces.
Default System Classes
The Cisco Nexus 5000 Series switch provides the following default system classes:
- FCoE system class: All Fibre Channel and FCoE control and data traffic is automatically classified into the FCoE system class, which provides no-drop service. This class is created automatically when the system starts up (the class is named class-fcoe in the CLI). You cannot delete this class, and the only parameter you can modify is the IEEE 802.1p CoS value associated with it. The switch classifies packets into the FCoE system class as follows:
  – FCoE traffic is classified based on EtherType.
  – Native Fibre Channel traffic is classified based on the physical interface type.
- Default drop system class: By default, all unicast and multicast Ethernet traffic is classified into the default drop system class. This class is created automatically when the system starts up (the class is named class-default in the CLI). You cannot delete this class, and you cannot change the CoS value associated with it.
- Two reserved system classes exist for internal system use.
Link-Level Flow Control
The IEEE 802.3x link-level flow control capability allows a congested receiver to signal the far end to pause its data transmission for a short period of time. The link-level flow control feature applies to all the traffic on the link.
The transmit and receive directions are separately configurable. By default, link-level flow control is disabled for both directions.
On the Cisco Nexus 5000 Series switch, Ethernet interfaces do not auto-detect the link-level flow control capability. You must configure the capability explicitly on the Ethernet interfaces.
On each Ethernet interface, the switch can enable either priority flow control or link-level flow control (but not both).
Priority Flow Control
The priority flow control (PFC) capability allows you to apply pause functionality to specific classes of traffic on a link (instead of to all traffic on the link). PFC applies pause based on the IEEE 802.1p CoS value. When PFC is enabled, the switch communicates to the adapter the CoS values to which pause should be applied.
Ethernet interfaces use PFC to provide lossless service to no-drop system classes. PFC implements Pause frames on a per-class basis and uses the IEEE 802.1p CoS value to identify the classes that require lossless service.
In the switch, each system class has an associated IEEE 802.1p CoS value (assigned by default or configured on the system class). If PFC is enabled, the switch sends the no-drop CoS values to the adapter, which then applies PFC to these CoS values.
The default CoS value for the FCoE system class is 3 and this value is configurable. The default CoS value for the default drop system class is 0 and this value is configurable.
If PFC is not enabled on an interface, you can enable IEEE 802.3x link-level pause. By default, link-level pause is disabled.
MTU
The Cisco Nexus 5000 Series switch is a Layer 2 switch, and it does not support packet fragmentation. MTU configuration mismatch between ingress and egress interfaces may result in packets being truncated.
When configuring MTU, follow these guidelines:
- MTU is specified per system class. You cannot configure MTU on the interfaces.
- Fibre Channel and FCoE payload MTU is 2112 bytes across the switch. As a result, the rxbufsize for Fibre Channel interfaces is fixed at 2112 bytes. If the Cisco Nexus 5000 Series switch receives an rxbufsize value other than 2112 bytes from a peer, ELP negotiation fails and the link is not brought up.
- The system jumbomtu command defines the upper bound of any MTU in the system. System jumbo MTU has a default value of 9216 bytes. The minimum MTU is 2240 bytes and the maximum MTU is 9216 bytes.
- The system class MTU sets the MTU for all packets in the class. The system class MTU cannot be configured larger than the global jumbo MTU.
- The FCoE system class (for Fibre Channel and FCoE traffic) has a default MTU of 2240 bytes. This value cannot be modified.
- The default drop system class has a default MTU of 1500 bytes. You can configure this value.
- The switch sends the MTU configuration to network adapters that support DCBXP.
Trust Boundaries
The trust boundary is enforced by the incoming interface as follows:
- All Fibre Channel and virtual Fibre Channel interfaces are automatically classified into the FCoE system class.
- By default, all Ethernet interfaces are trusted interfaces. A packet tagged with an 802.1p CoS value is classified into a system class using the value in the packet.
- Any packet not tagged with an 802.1p CoS value is classified into the default drop system class. If the untagged packet is sent over a trunk, it is tagged with the default untagged CoS value, which is zero.
- You can override the default untagged CoS value for an Ethernet interface or port channel.
After the system applies the untagged CoS value, QoS functions the same as for a packet that entered the system tagged with the CoS value.
Ingress Policies
You can associate an ingress policy map with an Ethernet interface, to guarantee bandwidth for the specified traffic class or to specify a priority queue.
The ingress policy is applied in the adapter to all outgoing traffic that matches the specified CoS value.
When you configure an ingress policy for an interface, the switch sends the configuration data to the adapter. If the adapter does not support DCBX protocol (or the ingress policy TLVs), the ingress policy configuration is ignored.
Egress Policies
You can associate an egress policy map with an Ethernet interface, to guarantee the bandwidth for the specified traffic class or to configure the egress queues.
The bandwidth allocation limit applies to all traffic on the interface (including any FCoE traffic).
Each Ethernet interface supports up to eight queues (one for each system class). The queues have the following default configuration:
- Queue zero is configured as a strict priority queue. Control traffic destined for the CPU uses this queue.
- FCoE traffic (traffic that maps to the FCoE system class) is assigned a queue. This queue uses WRR scheduling with 50 percent of the bandwidth.
- Standard Ethernet traffic (in the default drop system class) is assigned a queue. This queue uses WRR scheduling with 50 percent of the bandwidth.
If you add a system class, a queue is assigned to the class. You must reconfigure the bandwidth allocation on all affected interfaces. Bandwidth is not dedicated automatically to user-defined system classes.
You can configure an additional strict priority queue. This queue is serviced before all other queues except queue zero (which carries control traffic, not data traffic).
QoS for Multicast Traffic
The system provides six multicast queues per interface and allocates one queue for each system class. By default, all multicast Ethernet traffic is classified into the default drop system class. This traffic is serviced by one multicast queue.
The optimized multicast feature allows use of the unused multicast queues, to achieve better throughput for multicast frames. If optimized multicast is enabled for the default drop system class, the system will use all six queues to service the multicast traffic (all six queues are given equal priority).
If you define a new system class, a dedicated multicast queue is assigned to that class. This queue is removed from the set of queues available for optimized multicast.
The optimized multicast feature achieves better throughput for multicast frames and improves performance for multicast frames that are less than 256 bytes long.
Note Optimized multicast is supported on the BF and later versions of the Cisco Nexus 5020 switch. To verify the model version, enter the show module 1 command. The model version is the last two characters of the model number. Optimized multicast is supported on all versions of the Cisco Nexus 5010 switch.
The system provides two predefined class maps for matching broadcast or multicast traffic. These class maps are convenient for creating separate policy maps for unicast and multicast traffic. The predefined class maps are as follows:
The class-all-flood class map matches all broadcast, multicast and unknown unicast traffic (across all CoS values). If you configure a policy map with the class-all-flood class map, the system automatically utilizes all available multicast queues for this traffic.
The class-ip-multicast class map matches all IP multicast traffic. Policy options configured in this class map apply to traffic across all Ethernet CoS values. For example, if you enable optimized multicast for this class, the IP multicast traffic for all CoS values is optimized.
If you configure this class as a no-drop class, the priority flow control capability is applied across all Ethernet CoS values. In this configuration, pause will be applied to unicast and multicast traffic.
Note Only one of these predefined classes can be configured in the system QoS policy.
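As a sketch, configuring one of these predefined classes in a policy map might look like the following (the policy map name is illustrative, and multicast-optimize is an assumed name for the optimized-multicast action):

```
switch(config)# policy-map mcast_policy
switch(config-pmap)# class class-ip-multicast
! multicast-optimize is assumed here as the optimized-multicast action
switch(config-pmap-c)# multicast-optimize
```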
Policy for Fibre Channel Interfaces
The egress queues are not configurable for native Fibre Channel interfaces. Two fixed queues are provided.
QoS for Traffic Directed to the CPU
The switch automatically applies QoS policies to traffic that is directed to the CPU to ensure that the CPU is not flooded with packets. Control traffic, such as BPDU frames, is given higher priority to ensure delivery.
Configuration Guidelines and Limitations
Switch resources (such as buffers, virtual output queues, and egress queues) are partitioned based on the default and user-defined system classes. The switch software automatically adjusts the resource allocation to accommodate the configured system classes.
To maintain optimal switch performance, follow these guidelines when configuring system classes and policies:
- If fewer than four Ethernet classes are defined, up to two of these classes can be configured as no-drop classes. If more than four Ethernet classes are defined, only one of these classes can be configured as a no-drop class. The default drop class is counted as an Ethernet class.
- If priority flow control is enabled on an Ethernet interface, pause will never be applied to traffic with a drop system class. PFC does not apply pause to drop classes and the link-level pause feature is never enabled on an interface with PFC.
- All FCoE traffic on an Ethernet interface is mapped to one no-drop system class. By default, this class is associated with CoS value 3, although you can configure a different value. If you configure standard Ethernet traffic to use the same CoS value as FCoE, the switch does not apply priority flow control to the standard Ethernet traffic. This traffic is mapped to the default drop system class.
- The CoS value 0 is reserved for the default drop system class. This value cannot be mapped to any other class.
When configuring Ethernet port channels, note the following guidelines:
- Service policies configured on port channel interfaces are applied to all members of the port channel. Service policies configured on individual member interfaces are ignored.
- Priority flow control is configured on the individual member interfaces of a port channel. The PFC configuration must be consistent across all members of the port channel for the port channel to become operational.
Configuring PFC and LLC
Cisco Nexus 5000 Series switches support PFC and LLC on Ethernet interfaces. The Ethernet interface can operate in two different modes: FCoE mode or standard Ethernet mode.
If the interface is operating in FCoE mode, the Ethernet link is connected at the server port using a converged network adapter (CNA). Refer to Chapter 1, “Configuring FCoE” for information about configuring PFC and LLC when the interface is operating in FCoE mode.
If the interface is operating in standard Ethernet mode, the Ethernet link is connected at the server port with a standard Ethernet network adapter (NIC). The network adapter must support DCBX protocol for PFC or ingress policing to be supported on the interface.
Note You must configure a no-drop Ethernet system class for PFC to operate on Ethernet traffic (PFC will be applied to traffic that matches the CoS value configured for this class).
Configuring PFC and LLC for standard Ethernet is covered in the following topics:
- Configuring Priority Flow Control
- Configuring IEEE 802.3x Link-Level Flow Control
Configuring Priority Flow Control
By default, Ethernet interfaces negotiate PFC capability with the network adapter using DCBX protocol. When PFC is enabled, PFC is applied to traffic that matches the CoS value configured for the no-drop Ethernet class.
You can override the negotiation result by force-enabling the PFC capability. To force-enable the PFC capability, perform this task:
| Step | Command | Purpose |
| --- | --- | --- |
| 1 | switch(config)# interface ethernet slot/port | Enters configuration mode for the specified interface. |
| 2 | switch(config-if)# priority-flow-control mode on | Sets PFC mode for the selected interface. |
Note Priority flow control is configured on the individual member interfaces of a port channel. The PFC configuration must be consistent across all members of the port channel for the port channel to become operational.
The following example shows how to force-enable PFC on an interface:
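A minimal sketch (the interface number is illustrative):

```
switch# configure terminal
switch(config)# interface ethernet 1/2
switch(config-if)# priority-flow-control mode on
```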
To disable PFC capability for an interface, perform this task:
| Command | Purpose |
| --- | --- |
| switch(config-if)# no priority-flow-control mode on | Removes the forced PFC setting from the selected interface and returns it to the negotiated (auto) mode. |
Configuring IEEE 802.3x Link-Level Flow Control
By default, link-level flow control capability on Ethernet interfaces is disabled. You can enable link-level flow control separately for the transmit and receive directions by using the flowcontrol send and flowcontrol receive interface commands.
The following example enables link-level flow control frames on an interface:
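A minimal sketch (the interface number is illustrative):

```
switch# configure terminal
switch(config)# interface ethernet 1/2
switch(config-if)# flowcontrol receive on
switch(config-if)# flowcontrol send on
```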
To disable link-level flow control, perform this task:
| Command | Purpose |
| --- | --- |
| switch(config-if)# flowcontrol receive off | Disables 802.3x link-level flow control for the selected interface in the receive direction. |
| switch(config-if)# flowcontrol send off | Disables 802.3x link-level flow control for the selected interface in the send direction. |
Configuring System Classes
This section describes how to configure system classes on the switch. The steps to configure a system class are described in the following topics:
- Configuring Class Maps
- Configuring Policy Maps
- Creating the System Service Policy
- System Class Example
- Enable Link Level Flow Control Example
- Enabling Jumbo MTU
- Verifying Jumbo MTU
Configuring Class Maps
The class-map command creates a named object that represents a class of traffic. In the class map, you specify a set of match criteria for classifying packets. For system classes, the only match criterion supported is match cos.
If a system class is configured with no-drop function, the match cos command serves an additional purpose. The switch sends the CoS value to the adapter, so that the adapter will apply PFC Pause for this CoS value.
The FCoE system class has a default CoS value of 3. You can add a match cos configuration to the FCoE system class to set a different CoS value. PFC Pause will be applied to traffic that matches the new value.
To configure a class map for a system class, perform this task:
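A minimal sketch (the class name and CoS value are illustrative):

```
switch# configure terminal
switch(config)# class-map my_class1
switch(config-cmap)# match cos 5
```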
Configuring Policy Maps
The policy-map command is used to create a named object representing a set of policies that are to be applied to a set of traffic classes.
The switch provides two default system classes: a no-drop class for lossless service and a drop class for best-effort service. You can define up to four additional system classes for Ethernet traffic.
You need to create a policy map to specify the policies for any user-defined class. In the policy map, you configure the QoS parameters for each class. You can use the same policy map to modify the configuration of the default classes.
Note Before creating the policy map, define a class map for each new system class.
To configure a policy map, perform this task:
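A minimal sketch (names and the bandwidth value are illustrative):

```
switch# configure terminal
switch(config)# policy-map my_policy1
switch(config-pmap)# class my_class1
switch(config-pmap-c)# bandwidth percent 20
```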
Note The switch distributes all the policy map configuration values to the attached network adapters.
Note Policy maps can also be configured for interface service policies. However, different parameters are supported in these policy maps. See the “Configuring QoS on Interfaces” section.
The following example shows how to enable optimized multicast for the default Ethernet class:
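A minimal sketch (the policy map name is illustrative; multicast-optimize is an assumed action name):

```
switch(config)# policy-map my_policy1
switch(config-pmap)# class class-default
! multicast-optimize is assumed here as the optimized-multicast action
switch(config-pmap-c)# multicast-optimize
```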
The following example shows how to create a policy map with a no-drop Ethernet class:
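A minimal sketch (names and the CoS value are illustrative):

```
switch(config)# class-map my_no_drop_class
switch(config-cmap)# match cos 5
switch(config-cmap)# exit
switch(config)# policy-map my_policy1
switch(config-pmap)# class my_no_drop_class
switch(config-pmap-c)# pause no-drop
```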
Creating the System Service Policy
The service-policy command is used to associate the system class policy-map as the service policy for the system.
| Step | Command | Purpose |
| --- | --- | --- |
| 1 | switch(config)# system qos | Enters system QoS configuration mode. |
| 2 | switch(config-sys-qos)# service-policy policy-name | Specifies the policy map to use as the service policy for the system. |
The following example sets a no-drop Ethernet policy map as the system service policy (the policy map name is illustrative):
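```
switch(config)# system qos
switch(config-sys-qos)# service-policy my_no_drop_policy
```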
System Class Example
In the following example, a new Ethernet no-drop system class is created, and the CoS values of the default system classes are changed from their default values:
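A minimal sketch of such a configuration (class and policy names, and the new CoS value for the default no-drop class, are illustrative):

```
switch# configure terminal
! First class map: a new Ethernet system class matching CoS 5
switch(config)# class-map my_new_class
switch(config-cmap)# match cos 5
switch(config-cmap)# exit
! Second class map: change the match value of the default no-drop class
switch(config)# class-map class-fcoe
switch(config-cmap)# match cos 4
switch(config-cmap)# exit
! Policy map: make the new class no-drop with an MTU of 2000 bytes
switch(config)# policy-map my_policy
switch(config-pmap)# class my_new_class
switch(config-pmap-c)# pause no-drop
switch(config-pmap-c)# mtu 2000
switch(config-pmap-c)# exit
switch(config-pmap)# exit
! Apply the policy map as the system service policy
switch(config)# system qos
switch(config-sys-qos)# service-policy my_policy
```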
In this example, the first class-map command defines a new Ethernet system class. Packets anywhere in the system that carry an 802.1p CoS value of 5 are classified into the new system class.
The second class-map command changes the match value of the default no-drop system class.
The policy-map command defines a QoS policy for each traffic class. The new Ethernet class is configured as a no-drop class, with an MTU of 2000 bytes. The pause no-drop command causes PFC to apply pause functionality for packets with IEEE 802.1p priority value 5.
The service-policy command sets the specified policy map as the system service policy.
Enable Link Level Flow Control Example
The following example shows how to enable link-level flow control and add a new policy map with a no-drop class (a minimal sketch; names and the CoS value are illustrative).
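```
switch(config)# class-map my_llfc_class
switch(config-cmap)# match cos 2
switch(config-cmap)# exit
switch(config)# policy-map my_llfc_policy
switch(config-pmap)# class my_llfc_class
switch(config-pmap-c)# pause no-drop
switch(config-pmap-c)# exit
switch(config-pmap)# exit
switch(config)# system qos
switch(config-sys-qos)# service-policy my_llfc_policy
```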
See the Configuring IEEE 802.3x Link-Level Flow Control section for details on enabling the flow-control send/receive on the interfaces.
Enabling Jumbo MTU
To enable jumbo MTU for the whole switch, set the MTU to its maximum size (9216 bytes) in the policy map for the default Ethernet system class (class-default).
In the following example, the default Ethernet system class is configured to support the jumbo MTU:
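A minimal sketch (the policy map name is illustrative):

```
switch(config)# policy-map jumbo_mtu_policy
switch(config-pmap)# class class-default
switch(config-pmap-c)# mtu 9216
switch(config-pmap-c)# exit
switch(config-pmap)# exit
switch(config)# system qos
switch(config-sys-qos)# service-policy jumbo_mtu_policy
```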
Note The system jumbomtu command defines the maximum MTU size for the switch. However, jumbo MTU is only supported for system classes that have mtu configured.
Verifying Jumbo MTU
To verify that jumbo MTU is enabled, enter the show interface ethernet slot/port command for an Ethernet interface that carries traffic with jumbo MTU.
The following example shows how to display jumbo MTU information for Ethernet 2/1; the MTU value appears in the output of the show interface command.
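A sketch of the command (output abbreviated and illustrative; exact fields vary by software release):

```
switch# show interface ethernet 2/1
Ethernet2/1 is up
  ...
  MTU 9216 bytes, BW 10000000 Kbit, DLY 10 usec
  ...
```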
Configuring QoS on Interfaces
QoS parameters that can be configured on Ethernet and port channel interfaces are described in the following topics:
- Configuring Untagged CoS
- Configuring Ingress Policies
- Configuring Egress Policies
Configuring Untagged CoS
Any incoming packet not tagged with an 802.1p CoS value is assigned the default untagged CoS value of zero (which maps to the default Ethernet drop system class). You can override the default untagged CoS value for an Ethernet interface or port channel.
To configure the untagged CoS value, perform this task:
| Step | Command | Purpose |
| --- | --- | --- |
| 1 | switch(config)# interface {ethernet slot/port \| port-channel channel-number} | Enters configuration mode for the specified interface or port channel. |
| 2 | switch(config-if)# untagged cos cos-value | Configures the untagged CoS value for the selected interface or port channel. |
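A minimal sketch (the interface and CoS value are illustrative):

```
switch(config)# interface ethernet 1/2
switch(config-if)# untagged cos 4
```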
Configuring Ingress Policies
An ingress policy is a service policy applied to incoming traffic on an Ethernet interface. The ingress policy is applied in the adapter to all outgoing traffic that matches the specified class. When you configure an ingress policy on an interface or port channel, the switch sends the configuration data to the adapter.
To configure an ingress policy, attach a policy map as the input service policy on an Ethernet interface or port channel.
The following example shows that the system class best-effort-drop-class is guaranteed 20 percent of the bandwidth on interface eth1/1:
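A minimal sketch (the policy map name is illustrative; best-effort-drop-class is assumed to be a previously defined system class):

```
switch(config)# policy-map my_ingress_policy
switch(config-pmap)# class best-effort-drop-class
switch(config-pmap-c)# bandwidth percent 20
switch(config-pmap-c)# exit
switch(config-pmap)# exit
switch(config)# interface ethernet 1/1
switch(config-if)# service-policy input my_ingress_policy
```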
Configuring Egress Policies
An egress policy is a service policy applied to the outgoing traffic on an Ethernet interface. You can configure an egress policy to guarantee the bandwidth for the specified traffic class or to configure the egress queues.
To configure an egress policy, attach a policy map as the output service policy on an Ethernet interface or port channel.
The following example shows that the system class best-effort-drop-class is guaranteed 20 percent of the bandwidth on interface eth1/1 (a minimal sketch; the policy map name is illustrative):
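```
switch(config)# policy-map my_egress_policy
switch(config-pmap)# class best-effort-drop-class
switch(config-pmap-c)# bandwidth percent 20
switch(config-pmap-c)# exit
switch(config-pmap)# exit
switch(config)# interface ethernet 1/1
switch(config-if)# service-policy output my_egress_policy
```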