The ACI fabric provides several load balancing options for balancing traffic among the available uplinks. This topic
describes load balancing for leaf-to-spine switch traffic.
Static hash load balancing is the traditional load balancing mechanism used in networks, where each flow is allocated to an
uplink based on a hash of its 5-tuple. This approach gives a roughly even distribution of flows across the available links.
Usually, with a large number of flows, the even distribution of flows results in an even distribution of bandwidth
as well. However, if a few flows are much larger than the rest, static load balancing might give suboptimal results.
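As a rough illustration of the idea (not ACI's actual hardware hash), the sketch below hashes a hypothetical flow's 5-tuple and reduces it modulo the number of uplinks, so every packet of a flow is pinned to one uplink regardless of how large the flow is:

```python
# Minimal sketch of static 5-tuple hash load balancing (illustrative only;
# not the ACI hardware hash). Every packet of a flow maps to one fixed uplink.
import hashlib

def pick_uplink_static(src_ip, dst_ip, proto, sport, dport, num_uplinks):
    five_tuple = f"{src_ip},{dst_ip},{proto},{sport},{dport}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    # Reduce the hash to an uplink index. A large "elephant" flow still
    # occupies a single uplink, which is why static hashing can be uneven.
    return int.from_bytes(digest[:4], "big") % num_uplinks

# Example: one flow, four uplinks.
print(pick_uplink_static("10.0.0.1", "10.0.0.2", 6, 40000, 443, 4))
```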
ACI fabric Dynamic Load Balancing (DLB) adjusts the traffic allocations according to congestion levels. It measures the congestion
across the available paths and places the flows on the least congested paths, which results in an optimal or near optimal
placement of the data.
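Conceptually, DLB replaces the fixed hash decision with a congestion-aware one. The sketch below is a simplified model, not an ACI API; the per-path congestion metric is a hypothetical stand-in for what the switch hardware measures:

```python
# Conceptual sketch of congestion-aware path selection (not an ACI API).
# 'congestion' is a hypothetical per-path metric the switch would measure.

def pick_path_dlb(paths, congestion):
    """Return the eligible path whose measured congestion is currently lowest."""
    return min(paths, key=lambda p: congestion[p])

paths = ["uplink0", "uplink1", "uplink2"]
congestion = {"uplink0": 0.7, "uplink1": 0.2, "uplink2": 0.5}
print(pick_path_dlb(paths, congestion))  # -> uplink1, the least congested path
```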
DLB can be configured to place traffic on the available uplinks using the granularity of flows or flowlets. Flowlets are bursts
of packets from a flow that are separated by suitably large gaps in time. If the idle interval between two bursts of packets
is larger than the maximum difference in latency among available paths, the second burst (or flowlet) can be sent along a
different path than the first without reordering packets. This idle interval is measured with a timer called the flowlet timer.
Flowlets provide a more granular alternative to flows for load balancing without causing packet reordering.
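A minimal sketch of flowlet detection, assuming a per-flow record of the previous packet's arrival time (the names and data structures are illustrative, not an ACI implementation):

```python
# Sketch of flowlet-boundary detection (illustrative only).
# last_seen maps a flow key to the arrival time of its previous packet.
last_seen = {}

def starts_new_flowlet(flow_key, now, flowlet_timeout):
    """Return True if this packet begins a new flowlet, that is, the idle
    gap since the flow's previous packet exceeds the flowlet timer."""
    prev = last_seen.get(flow_key)
    last_seen[flow_key] = now
    return prev is None or (now - prev) > flowlet_timeout
```

A new flowlet is an opportunity to re-run the least-congested-path selection; packets within the same flowlet continue on the path already chosen, which is what keeps them in order.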
DLB has two modes of operation: aggressive and conservative. These modes pertain to the timeout value used for the flowlet timer.
The aggressive mode flowlet timeout is a relatively small value. This very fine-grained load balancing is optimal for the
distribution of traffic, but some packet reordering might occur. However, the overall benefit to application performance is
equal to or better than the conservative mode. The conservative mode flowlet timeout is a larger value that guarantees that
packets are not reordered. The tradeoff is less granular load balancing, because new flowlet opportunities are less frequent.
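The conservative-mode guarantee follows from the condition stated earlier: if the flowlet timeout is at least the maximum latency difference among the eligible paths, a burst sent on a new path cannot overtake the previous burst. A small sketch of that check, with example numbers only:

```python
# Why a large (conservative) flowlet timeout prevents reordering (sketch only).

def reorder_safe(flowlet_timeout_us, path_latencies_us):
    """True if switching paths at a flowlet boundary cannot reorder packets,
    because the idle gap already exceeds the worst-case latency skew."""
    skew = max(path_latencies_us) - min(path_latencies_us)
    return flowlet_timeout_us >= skew

latencies = [12, 15, 20]              # example per-path latencies, microseconds
print(reorder_safe(50, latencies))    # conservative-style timeout -> True
print(reorder_safe(5, latencies))     # aggressive-style timeout   -> False
```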
While DLB is not always able to provide the most optimal load balancing, it is never worse than static hash load balancing.
Note
Although all Nexus 9000 Series switches have hardware support for DLB, the DLB feature is not enabled in the current software
releases for second-generation platforms (switches with EX, FX, and FX2 suffixes).
The ACI fabric adjusts traffic when the number of available links changes due to a link going off-line or coming on-line.
The fabric redistributes the traffic across the new set of links.
In all modes of load balancing, static or dynamic, the traffic is sent only on those uplinks or paths that meet the criteria
for equal cost multipath (ECMP); these paths are equal and the lowest cost from a routing perspective.
Dynamic Packet Prioritization (DPP), while not a load balancing technology, uses some of the same mechanisms as DLB in the
switch. DPP configuration is exclusive of DLB. DPP prioritizes short flows higher than long flows; a short flow is less than
approximately 15 packets. Because short flows are more sensitive to latency than long ones, DPP can improve overall application
performance.
For intra-leaf switch traffic, all DPP-prioritized traffic is marked CoS 0 regardless of a custom QoS configuration. For inter-leaf
switch traffic, all DPP-prioritized traffic is marked CoS 3 regardless of a custom QoS configuration.
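The DPP behavior described above can be sketched as a simple per-flow packet counter; the 15-packet threshold and the CoS values follow the text, while the function and data structures are purely illustrative:

```python
# Illustrative sketch of Dynamic Packet Prioritization (not an ACI API).
SHORT_FLOW_THRESHOLD = 15   # flows of fewer than ~15 packets count as "short"
packet_count = {}           # hypothetical per-flow packet counters

def dpp_cos(flow_key, inter_leaf):
    """Return the CoS value DPP would mark on this packet: short flows are
    prioritized (CoS 0 intra-leaf, CoS 3 inter-leaf); once a flow exceeds
    the threshold it is no longer prioritized (None here)."""
    packet_count[flow_key] = packet_count.get(flow_key, 0) + 1
    if packet_count[flow_key] < SHORT_FLOW_THRESHOLD:
        return 3 if inter_leaf else 0
    return None
```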
GPRS tunneling protocol (GTP) is used mainly to deliver data on wireless networks, and Cisco Nexus switches are deployed in
telecom data centers. When packets are sent through Cisco Nexus 9000 switches in a data center, traffic needs to be load balanced
based on the GTP header. When the fabric is connected with an external router through link bundling, the traffic must be
distributed evenly among all bundle members (for example, Layer 2 port channels, Layer 3 ECMP links, Layer 3 port channels,
and L3Outs on port channels). GTP traffic load balancing is performed within the fabric as well.
To achieve GTP load balancing, Cisco Nexus 9000 Series switches use a 5-tuple load balancing mechanism that takes into account
the source IP address, destination IP address, protocol, and Layer 4 source and destination ports (if the traffic is TCP or UDP).
For GTP traffic, however, the limited number of unique values in these fields restricts the even distribution of the tunneled
traffic load.
To avoid polarization for GTP traffic in load balancing, a tunnel endpoint identifier (TEID) in the GTP header is used instead
of a UDP port number. Because the TEID is unique per tunnel, traffic can be evenly load balanced across multiple links in
the bundle.
The GTP load balancing feature overrides the source and destination port information with the 32-bit TEID value that is present
in GTPU packets.
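A sketch of that substitution, assuming GTP-U traffic is recognized by its standard UDP destination port 2152 and that the 32-bit TEID has already been parsed from the GTP header (the helper and its parameters are hypothetical):

```python
# Sketch of TEID-based GTP load balancing (illustrative only).
import hashlib

GTPU_UDP_PORT = 2152  # standard GTP-U destination port

def pick_uplink_gtp(src_ip, dst_ip, proto, sport, dport, teid, num_uplinks):
    """Hash on the 5-tuple, but substitute the 32-bit TEID for the Layer 4
    ports when the packet is GTP-U, so each tunnel hashes independently."""
    if proto == 17 and dport == GTPU_UDP_PORT and teid is not None:
        l4_part = f"{teid:08x}"          # TEID replaces source/destination ports
    else:
        l4_part = f"{sport},{dport}"
    key = f"{src_ip},{dst_ip},{proto},{l4_part}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % num_uplinks
```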
The GTP tunnel load balancing feature adds support for:
The ACI fabric default configuration uses a traditional static hash. A static hashing function distributes the traffic between
uplinks from the leaf switch to the spine switch. When a link goes down or comes up, traffic on all links is redistributed based
on the new number of uplinks.
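Because the uplink chosen by the hash depends on the current uplink count, a change in that count remaps flows across all links, not only the failed or recovered one. A minimal, self-contained sketch of that effect (hypothetical helper, not ACI code):

```python
# Illustration of redistribution when the number of uplinks changes (sketch only).
import hashlib

def uplink_index(flow_key, num_uplinks):
    digest = hashlib.sha256(flow_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_uplinks

flow = "10.0.0.1,10.0.0.2,6,40000,443"
print(uplink_index(flow, 4))  # with four uplinks available
print(uplink_index(flow, 3))  # after one uplink goes down, the modulo changes,
                              # so flows on all links are remapped, not just
                              # those that used the failed uplink.
```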
Leaf/Spine Switch Dynamic Load Balancing Algorithms
The following table provides the default non-configurable algorithms used in leaf/spine switch dynamic load balancing:
Table 1. ACI Leaf/Spine Switch Dynamic Load Balancing

Traffic Type | Hashing Data Points
Leaf/Spine IP unicast |
Leaf/Spine Layer 2 |