Information About ePBR L2
Enhanced Policy-Based Redirect Layer 2 (ePBR) in Elastic Services Re-direction (ESR) provides transparent service redirection and service chaining of Layer 1/Layer 2 service appliances by leveraging Port ACLs and VLAN translation. This approach achieves service chaining and load-balancing capabilities without adding extra headers, and therefore avoids the latency that extra headers would introduce.
ePBR enables application-based routing and provides a flexible, device-agnostic policy-based redirect solution without impacting application performance. The ePBR service flow includes the following tasks:
Configuring ePBR Service and Policy
You must first create an ePBR service, which defines the attributes of the service end points. Service end points are the service appliances, such as firewalls and IPS devices, that can be associated with switches. You can also define probes to monitor the health of the service end points, and you can define the forward and reverse interfaces where the traffic policies are applied. ePBR supports load balancing along with service chaining, and it allows you to configure multiple service end points as part of the service configuration.
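The following is a minimal sketch of a Layer 2 service definition. The service name, interface names, and probe timers are illustrative, and the exact keyword forms can vary by NX-OS release; see the ePBR command reference for your release.

  feature epbr
  epbr service FW_SVC
    service-end-point interface ethernet 1/1
      reverse-interface ethernet 1/2
      probe ctp frequency 4 timeout 2 retry-up-count 1 retry-down-count 1

Here ethernet 1/1 is the forward interface toward the appliance, ethernet 1/2 is the reverse interface, and a CTP probe monitors the health of the end point.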
After creating the ePBR service, you must create an ePBR policy. The ePBR policy allows you to define traffic selection, redirection of traffic to the service end point, and the fail-action mechanisms to apply on end-point health failure. You may use IP access lists with permit access control entries (ACEs) to define the traffic of interest to match and take the appropriate action.
The ePBR policy supports multiple ACL match definitions. A match can have multiple services in a chain, sequenced by sequence numbers, which gives you the flexibility to add, insert, and modify elements of a chain within a single service policy. For every service sequence, you can define a fail-action method: drop, forward, or bypass. The ePBR policy also allows you to specify source- or destination-based load balancing and bucket counts in order to load balance traffic at a granular level.
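A sketch of a policy that matches traffic of interest with a permit ACE and chains two services. The ACL name, VLAN, sequence numbers, and the second service IPS_SVC (assumed to be defined the same way as FW_SVC) are all illustrative:

  ip access-list APP_TRAFFIC
    10 permit tcp any any eq 8080
  epbr policy PROD_POLICY
    match ip address APP_TRAFFIC vlan 100
      load-balance method src-ip
      10 set service FW_SVC fail-action bypass
      20 set service IPS_SVC fail-action drop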
Applying ePBR to an L2 Interface
After creating the ePBR policy, you need to apply the policy to an interface. This allows you to define the interface at which traffic ingresses the NX-OS switch and the interface through which traffic exits the switch after redirection or service chaining. You can also apply the policy in both the forward and reverse directions on the NX-OS switch.
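A sketch of attaching the policy, assuming the Layer 2 policy is applied under the ingress interface with the egress interface named in the same command and a reverse keyword for the return direction; the exact keyword forms may differ by release:

  interface ethernet 1/10
    switchport mode trunk
    epbr layer2 policy PROD_POLICY egress-interface ethernet 1/20
  interface ethernet 1/20
    switchport mode trunk
    epbr layer2 policy PROD_POLICY egress-interface ethernet 1/10 reverse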
Enabling Production Interfaces as Access Port
If the service-chaining switch is inserted between two L3 routers for traffic redirection, the production interfaces are enabled as access ports with the following limitations (see the example after this list):
- You must use the VLAN of the port as part of the match configuration.
- It is limited to mac-learn disable mode.
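For example, if the production access port carries VLAN 50, the same VLAN must appear in the match configuration (a sketch; interface, ACL, and policy names are illustrative):

  interface ethernet 1/10
    switchport mode access
    switchport access vlan 50
  epbr policy PROD_POLICY
    match ip address APP_TRAFFIC vlan 50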
Enabling Production Interfaces as Trunk Ports
Production interfaces may be configured as trunk ports. The VLANs that carry the incoming traffic to be service-chained, and that are trunked by these interfaces, must be configured as part of the match configuration.
Alternatively, specifying 'vlan all' in the match configuration allows traffic from any incoming VLAN on the interface to be matched and service-chained.
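A sketch of both forms of the match, assuming the vlan keyword of the match statement accepts either a specific VLAN or all (policy and ACL names are illustrative):

  epbr policy PROD_POLICY
    match ip address APP_TRAFFIC vlan 100
  epbr policy ANY_VLAN_POLICY
    match ip address APP_TRAFFIC vlan all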
Creating Bucket and Load Balancing
ePBR computes the number of traffic buckets based on the service that has the maximum number of service end points in the chain. If you configure the load-balance buckets explicitly, your configuration takes precedence. ePBR supports the source-IP and destination-IP load-balancing methods, but does not support L4-based source or destination load-balancing methods.
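For example, if a chain contains one service with two end points and another with four, ePBR derives the bucket count from the four-end-point service. To override the computed count, supply it with the load-balance configuration (a sketch; the bucket count and names are illustrative):

  epbr policy PROD_POLICY
    match ip address APP_TRAFFIC vlan 100
      load-balance buckets 16 method src-ip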
ePBR Object Tracking, Health Monitoring, and Fail-Action
Layer 2 ePBR performs link-state monitoring of the service end points by default. You may additionally enable CTP (Configuration Testing Protocol) probes if the service supports them.
You can configure the ePBR probe options for a service or for each of the forward or reverse end points, and you can configure the frequency, timeout, and retry up and down counts. The same track object is reused for all policies that use the same ePBR service.
If no probe method is defined at the end point level, the probe method configured for the service level will be used.
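A sketch of a service-level CTP probe with a per-end-point override; the timer values and names are illustrative:

  epbr service FW_SVC
    probe ctp frequency 4 timeout 2 retry-up-count 1 retry-down-count 1
    service-end-point interface ethernet 1/1
      probe ctp frequency 2 timeout 1 retry-up-count 1 retry-down-count 1
      reverse-interface ethernet 1/2

The end-point-level probe overrides the service-level probe for that end point; end points without their own probe inherit the service-level setting.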
ePBR supports the following fail-action mechanisms for its service chain sequences:
- Bypass
- Drop on fail
- Forward
Bypass of a service sequence indicates that the traffic must be redirected to the next service sequence when there is a failure of the current sequence.
Drop on fail of a service sequence indicates that the traffic must be dropped when all the service-end-points of the service become unreachable.
Forward is the default fail-action mechanism; it indicates that, upon failure of the current service, traffic is forwarded to the egress interface.
Note: Symmetry is maintained when fail-action bypass is configured for all the services in the service chain. In other fail-action scenarios, when one or more services have failed, symmetry is not maintained between the forward and reverse flows.
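A sketch showing each fail-action mechanism applied to a sequence in a chain; the service names and sequence numbers are illustrative:

  epbr policy PROD_POLICY
    match ip address APP_TRAFFIC vlan 100
      10 set service FW_SVC fail-action bypass
      20 set service IPS_SVC fail-action drop
      30 set service OPT_SVC fail-action forward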
ePBR Session-based Configuration
ePBR sessions allow addition, deletion, or modification of the following aspects of in-service services or policies. In-service refers to a service associated with a policy that has been applied to an active interface, or to a policy that is being modified while currently configured on an active interface.
- Service end points with their interfaces and probes
- Reverse end points and probes
- Matches under policies
- Load-balance methods for matches
- Match sequences and fail-action
Note: In ePBR sessions, you cannot move interfaces from one service to another service in the same session. To move an interface from one service to another service, first remove the interface from its current service in one session, and then add it to the other service in a separate session.
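A sketch of a session that adds a new end point to an in-service service, assuming the epbr session sub-mode accepts the same service syntax as the base configuration (names are illustrative):

  epbr session
    epbr service FW_SVC
      service-end-point interface ethernet 1/3
        reverse-interface ethernet 1/4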
ACL Refresh
ePBR session ACL refresh allows you to update the policy-generated ACLs when ACEs in the user-provided ACL are added, deleted, or modified. On the refresh trigger, ePBR identifies the policies impacted by the change and creates, deletes, or modifies the generated ACLs of the buckets for those policies.
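For example, after editing the ACEs of the user-provided ACL, the refresh can be triggered as follows (the ACL name and the new ACE are illustrative):

  ip access-list APP_TRAFFIC
    20 permit tcp any any eq 9090
  epbr session access-list APP_TRAFFIC refresh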
For ePBR scale values, see Cisco Nexus 9000 Series NX-OS Verified Scalability Guide.