Overview
The Cisco Cloud APIC enables you to deploy Layer 4 to Layer 7 service devices to the public cloud. The initial release, 4.2(x), supports Azure Application Gateway (Application Load Balancer) deployments in Azure. Beginning with release 5.0(2), Azure Load Balancer (Network Load Balancer) and third-party firewall deployments in Azure are supported. Beginning with release 5.1(2), third-party load balancer deployments in Azure are supported.
Four types of Layer 4 to Layer 7 services are supported for deployments in Azure:
- ALB refers to the Azure Application Gateway (Application Load Balancer).
- NLB refers to the Azure Load Balancer (Network Load Balancer).
- Third-party firewall
- Third-party load balancer
About Service Graphs
A service graph is used to represent a set of Layer 4 to Layer 7 service devices inserted between two or more pairs of EPGs. EPGs can represent your applications running within a cloud (for example, a cloud EPG), on the internet (cloudExtEPG), or at other sites (for example, on-premises or remote cloud sites). Layer 4 to Layer 7 service devices can be an NLB, an ALB, a cluster of third-party firewalls, or a third-party load balancer.
A service graph, in conjunction with contracts (and filters), is used to specify communication between two EPGs. A Cloud APIC automatically derives security rules (network security groups/NSGs and ASGs) and forwarding routes (UDRs) based on the policy specified in the contract and service graph.
Multiple service graphs can be specified to represent different traffic flows or topologies.
The following combinations are possible with service graphs:
- The same device can be used in multiple service graphs.
- The same service graph can be used between multiple consumer and provider EPGs.
By using a service graph, the user can specify the policy once and deploy the service chain within regions or inter-regions. Each time the graph is deployed, Cisco ACI takes care of changing the network configuration to enable the forwarding in the new logical topology.
For third-party firewalls, the configuration inside the device is not managed by the Cloud APIC.
A service graph represents the network using the following elements:
- Service Graph Nodes—A node represents a function that is applied to the traffic, such as a load balancer. A function within the service graph might require one or more parameters and have one or more connectors.
- Connector—A connector enables input and output from a node.
After the graph is configured, the Cisco APIC automatically configures the services according to the service function requirements that are specified in the service graph. The Cisco APIC also automatically configures the network according to the needs of the service function that is specified in the service graph, which does not require any change in the service device.
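The node-and-connector structure described above can be sketched as a small data model. This is purely illustrative, assuming hypothetical class and field names; it is not the Cloud APIC managed-object model.

```python
# Purely illustrative data model for a service graph; class and field
# names are hypothetical, not Cloud APIC managed-object names.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Connector:
    name: str   # e.g. "consumer" or "provider"

@dataclass
class ServiceNode:
    function: str                  # e.g. "load-balancer", "firewall"
    connectors: List[Connector] = field(default_factory=list)

@dataclass
class ServiceGraph:
    name: str
    nodes: List[ServiceNode] = field(default_factory=list)

# A one-node graph: an ALB inserted between consumer and provider EPGs.
alb = ServiceNode("load-balancer", [Connector("consumer"), Connector("provider")])
graph = ServiceGraph("web-graph", [alb])
```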
Using Service Graphs with Cloud Native and Third-Party Services
Beginning with Release 5.1(2), you can now use service graphs with cloud native and third-party services. You can use service graphs in these situations either with or without redirect. See Example Use Cases for Service Graphs with Cloud Native and Third-Party Services for example use cases, with or without redirect.
You will use the cloud service endpoint group (service EPG), also introduced in Release 5.1(2), with this type of service graph. See Cloud Service Endpoint Groups for more information about the service EPG, and the deployment types and access types that are available for service EPGs.
The following deployment types and access types are supported with service graphs used with service EPGs for this purpose.
| Deployment Types | Access Types |
|---|---|
| Cloud Native | Private |
| Cloud Native Managed | Public and Private |
| Third-Party | Private |

| Deployment Types | Access Types |
|---|---|
| Cloud Native Managed | Public and Private |
Guidelines and Limitations
- You must have the newer NSG-per-subnet configuration enabled in order to use service graphs with cloud native and third-party services, using the service EPGs. See Security Groups for more information on the NSG-per-subnet configuration.
- Any restrictions that apply for cloud EPG and service graph combinations also apply to service EPG and service graph combinations. For example, the cloud EPG/service graph restriction that a consumer and provider that is tag-based cannot be in the same VRF in the same region would also apply for service EPGs and service graphs.
- For two-node graphs that don't perform redirect, SNAT and DNAT are enabled. It is assumed that the DNATed address is a device that is equivalent to a load balancer, which can take care of spraying traffic across different targets that may be in different subnets.
  Note that if those targets are in different subnets, the service graph doesn't provide route reachability rules for those targets. It is assumed that the service EPG will take care of the reachability in this case.
- For cases involving AKS and service graphs, the service graph will only establish route reachability to the load balancer's subnet of the AKS cluster.
About Application Load Balancers
An Application Load Balancer (also called the Azure Application Gateway or ALB) is a Layer 7 load balancer that balances web traffic based on attributes such as the HTTP request and URL filtering. For more details, refer to the Microsoft documentation.
In Cisco ACI, there are two ways to deploy an Application Load Balancer:
- Internet-facing: inserts the Application Load Balancer as a service between the consumer external EPG and the provider cloud EPG.
- Internal-facing: inserts the Application Load Balancer as a service between the consumer cloud EPG and the provider cloud EPG.
You can consume an Application Load Balancer using a service graph. A typical configuration involves:
- Creating a Layer 4 to Layer 7 services device as an Application Load Balancer
- Consuming the ALB as a node in the service graph
- Creating one or more listeners in EPG communication when a service graph is associated with a contract
Listeners enable you to specify the ports and protocols (HTTP or HTTPS) that the Application Load Balancer accepts traffic on. When specifying HTTPS, you also choose a security policy and an SSL certificate.
Note: A listener can have multiple certificates.
All listeners require you to configure at least one rule (a default rule, which does not have a condition). Rules enable you to specify the action that the load balancer takes when a condition is met. For example, you can create a rule that redirects traffic to a specified URL when a request is made to a specified hostname or path.
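The default-rule behavior described above can be sketched as follows. The rule structure, field names, and actions are assumptions made for illustration; this is not a Cloud APIC or Azure API.

```python
# Illustrative sketch: a listener's conditional rules are evaluated first;
# the default rule (the one with no condition) is the fallback.
# Rule fields ("condition", "action", etc.) are hypothetical names.
def match_rule(rules, host, path):
    for rule in rules:
        cond = rule.get("condition")
        if cond is None:
            continue  # the default rule is checked last, as a fallback
        if cond.get("host") in (None, host) and path.startswith(cond.get("path", "/")):
            return rule["action"]
    # No conditional rule matched: apply the default rule.
    default = next(r for r in rules if r.get("condition") is None)
    return default["action"]

rules = [
    # Conditional rule: redirect requests for a specific hostname.
    {"condition": {"host": "img.example.com", "path": "/"},
     "action": "redirect:https://cdn.example.com"},
    # Default rule (no condition): forward everything else to the pool.
    {"condition": None, "action": "forward:web-pool"},
]
```

For example, a request to `img.example.com/logo.png` would hit the redirect action, while any other hostname falls through to the default forward action.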
An Application Load Balancer (ALB) should be in a separate subnet that should not be used to deploy other applications. The Cloud APIC creates the ALB's NSG and attaches it to the subnet associated with the ALB. The Cloud APIC supports the Standard and Standard_v2 SKUs of the Azure Application Gateway.
About Network Load Balancer
A Network Load Balancer (Azure Load Balancer or NLB) is a Layer 4 device that distributes inbound flow packets based on Layer 4 ports. For more details, refer to the Microsoft documentation.
Similar to ALB, NLB can be deployed using a service graph. You can specify these actions by configuring one or more listeners.
Listeners enable you to specify the ports and protocols (TCP or UDP) that the load balancer accepts and forwards traffic on. All listeners require you to configure at least one rule (a default rule, which does not have a condition). Rules enable you to specify the action that the load balancer takes when a condition is met. Unlike the application gateway, a rule here can only forward traffic to a specific port of the backend pool. Like the ALB, the NLB should be in a separate subnet. There are two modes of operation in the Network Load Balancer:
- Forward mode: Traffic is forwarded from a specific listener port to the specified backend port.
- HA Port mode: The Network Load Balancer load balances TCP and UDP flows on all ports simultaneously.
Cloud APIC supports Standard SKU Network Load Balancer only.
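The two NLB modes above can be contrasted with a small sketch of the forwarding decision. The mode names and function are illustrative only, not an Azure API.

```python
# Illustrative sketch of the two NLB operating modes described above;
# this is a conceptual model of the forwarding decision, not an Azure API.
def backend_port(mode, packet_port, forward_map=None):
    if mode == "ha_ports":
        # HA Port mode: all TCP/UDP ports are load balanced simultaneously,
        # so the packet keeps its destination port.
        return packet_port
    if mode == "forward":
        # Forward mode: only ports with a configured listener are forwarded,
        # each to its specified backend port.
        return (forward_map or {}).get(packet_port)
    raise ValueError(f"unknown mode: {mode}")

assert backend_port("forward", 80, {80: 8080}) == 8080   # listener on port 80
assert backend_port("forward", 22, {80: 8080}) is None   # no listener: not forwarded
assert backend_port("ha_ports", 5000) == 5000            # every port passes through
```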
In Figure 1, the frontend load balancer (ALB/NLB), a VM or firewall, and the backend load balancer (ALB/NLB) are inserted as a service between the consumer external EPG and the provider cloud EPG.
Understanding Configurations for Multiple Frontend IP Addresses on the Azure Network Load Balancer
The following sections provide information on support for multiple frontend IP addresses on the Azure Network Load Balancer that is available beginning with Cisco Cloud APIC release 25.0(3).
About Multiple Frontend IP Addresses on the Azure Network Load Balancer
When configuring an internet-facing Network Load Balancer, the number of public IP addresses that you are allowed to assign to front the internet traffic varies, depending on the release:
- Prior to Cisco Cloud APIC release 25.0(3), an internet-facing Network Load Balancer has a single public IP address assigned to front the internet traffic. The following graphic shows an example of a multi-node service graph configuration, where an internet-facing Network Load Balancer is shown at the top of the graphic, followed by VMs or a firewall, then an internal-facing Network Load Balancer as parts of this multi-node service graph.
  In this example, the internet-facing Network Load Balancer has a single public IP address assigned to front the internet traffic.
  However, with this configuration, an issue might arise if you have a service graph and you need to expose multiple HTTPS services. The restriction of a single public IP address assigned to front the internet traffic for an internet-facing Network Load Balancer means that you cannot add more frontend IP addresses to that Network Load Balancer. In addition, you cannot add more Network Load Balancers in this situation due to a restriction from Azure, which does not allow multiple Network Load Balancers to share the same backend device (the firewalls in this example).
- Beginning with Cisco Cloud APIC release 25.0(3), support is available for configuring multiple frontend IP addresses for an internet-facing Network Load Balancer. With this update, each frontend IP address is attached through one or more rules to a specific backend pool.
The following graphic provides an example configuration where multiple frontend IP addresses are configured for an internet-facing Network Load Balancer.
This example configuration depicts the packet flow for the following listener rules:
| Listener Rule (Frontend Configuration) | Rule Action (Backend Configuration) |
|---|---|
| Rule1 | Port: 80 |
| Rule2 | Port: 81 |
The service graph allows you to configure the settings for the listener rule and the rule action on services devices. When defined on a Network Load Balancer, the listener rule and rule action settings construct a mapping from a frontend configuration of the load balancer to a backend pool. Prior to Cisco Cloud APIC release 25.0(3), an internet-facing Network Load Balancer provided the ability to configure listeners with a single frontend IP address but with different port and protocol combinations. Beginning with Cisco Cloud APIC release 25.0(3), with the support for configuring multiple frontend IP addresses for an internet-facing Network Load Balancer, that ability is extended to configure listener rules with multiple frontends where each frontend is represented as a tuple combination of frontend IP addresses, port, and protocol.
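The tuple-keyed listener rules described above can be modeled as a simple lookup. The IP addresses, ports, and pool names below are made-up examples mirroring Rule1 and Rule2 in the table; they are not real configuration values.

```python
# Illustrative sketch: with multiple frontend IPs, each listener rule is
# keyed by a (frontend IP, port, protocol) tuple and maps to a backend
# pool and backend port. All values here are hypothetical examples.
listener_rules = {
    ("20.0.0.1", 443, "tcp"): ("firewall-pool", 80),  # Rule1 -> backend port 80
    ("20.0.0.2", 443, "tcp"): ("firewall-pool", 81),  # Rule2 -> backend port 81
}

def lookup(frontend_ip, port, proto):
    # Returns (backend_pool, backend_port), or None when no rule matches.
    return listener_rules.get((frontend_ip, port, proto))

assert lookup("20.0.0.1", 443, "tcp") == ("firewall-pool", 80)
```

Note that both rules share the same backend pool but use distinct backend ports, consistent with the limitation below that backend port reuse across listener rules is not supported.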
Guidelines and Limitations
The following are the guidelines and limitations with the support for configuring multiple frontend IP addresses for an internet-facing Network Load Balancer:
- Support for multiple frontend IP addresses is available only for an internet-facing Network Load Balancer.
- Backend port reuse across multiple listener rules is not supported.
About Third-Party Load Balancers
A third-party load balancer is a non-cloud-native Layer 4 to Layer 7 load balancer. The Cloud APIC does not manage the configuration of third-party load balancers. However, the Cloud APIC automates the network stitching for connectivity to a third-party load balancer.
You can configure VIPs for a third-party load balancer from the external interface subnet. You can also configure additional VIPs for third-party load balancers as secondary IP addresses on the external interface.
Cloud APIC supports third-party load balancers that are deployed in a two-arm mode (external and internal interfaces) with source NAT enabled.
Limitations for third-party load balancers:
- Cloud APIC does not support Direct Server Return (DSR) configurations on third-party load balancers.
- Third-party load balancers are not supported in active/standby high availability configurations.
  For details about third-party load balancer VMs in active/active mode, see Example Use Cases.
- Alien VIP range is not supported for third-party load balancers.
About Allow All Traffic Option
Beginning with release 5.1(2g), the Allow All Traffic option is available for third-party firewalls and Azure network load balancers deployed as pass-through devices on a redirect-enabled service graph.
Note: This option allows all inbound and outbound access to the subnet on which the interface belongs. Ensure that this does not present a security risk before enabling this option.
The following sections provide instructions for enabling the Allow All Traffic option.
Third-Party Firewall
- To enable this option when creating a new service graph type:
  1. From the Application Management list in the Intent menu, click .
  2. Choose Third party firewall as the Service Type.
  3. Click Add Interface, then locate the Allow All Traffic area.
  4. Click the box next to the Enabled field in the Allow All Traffic area to allow all inbound and outbound access to the subnet on which the interface belongs.
  5. Click Save when finished.
- To enable this option when editing an existing service graph type:
  1. From the Application Management list in the Intent menu, click Services, then click an existing service device with Third-Party Firewall shown as the Device Type.
     A panel showing details for this service device type slides in from the right side of the window.
  2. Click the Details icon.
     Another window appears that provides more detailed information for this service device type.
  3. Locate the Interfaces area in the window and click the necessary interface selector under the Interface Selectors column.
     A panel showing details for this interface slides in from the right side of the window.
  4. Click the Details icon.
     Another window appears that provides more detailed information for this interface.
  5. Click the pencil icon to edit the configuration settings for this interface.
  6. Locate the Allow All Traffic area, then click the box next to the Enabled field to allow all inbound and outbound access to the subnet on which the interface belongs.
  7. Click Save when finished.
Azure Network Load Balancer
- To enable this option when creating a new service graph type:
  1. From the Application Management list in the Intent menu, click .
  2. Choose Network Load Balancer as the Service Type.
  3. In the Settings area, click the box next to the Enabled field in the Allow All Traffic area to allow all inbound and outbound access to the subnet on which the interface belongs.
  4. Click Save when finished.
- To enable this option when editing an existing service graph type:
  1. From the Application Management list in the Intent menu, click Services, then click an existing service device with Network Load Balancer shown as the Device Type.
     A panel showing details for this service device type slides in from the right side of the window.
  2. Click the Details icon.
     Another window appears that provides more detailed information for this service device type.
  3. Click the pencil icon to edit the configuration settings for this service device.
  4. In the Settings area, locate the Allow All Traffic area, then click the box next to the Enabled field to allow all inbound and outbound access to the subnet on which the interface belongs.
  5. Click Save when finished.
Dynamic Server Attachment to Server Pool
Servers in a provider EPG or service devices such as third-party firewalls behind ALB/NLB are dynamically added to the target groups. In Azure, the target groups are referenced as the backend pool. Listeners and rule configuration that define the frontend and backend protocol and port number, and load balancing action are provided by the user. When configuring a listener rule on ALB/NLB that is the last node as part of service graph configuration, you can select provider EPG for a given rule. The endpoints from that EPG are dynamically added to the target group of the load balancer. If another node, such as a third-party firewall, exists between ALB/NLB and the provider EPG, the firewall endpoints are dynamically added to the target group of the load balancer. You do not need to specify the endpoints or FQDN for the targets.
Prior to Cisco Cloud APIC for Azure Release 25.0(2), VM scale sets were not supported as backend targets for load balancers. Cisco Cloud APIC for Azure Release 25.0(2) adds support for VM scale sets as backend targets.
Note: When using a VM scale set for a firewall, use only a subnet-based EP selector for the firewall interfaces. Azure does not support per-NIC tagging for a VM scale set with multiple interfaces.
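The dynamic-attachment behavior described above can be sketched conceptually: the backend pool is populated either from an intermediate firewall's endpoints or from the provider EPG's endpoints. The structures and names are hypothetical; the real selection is performed automatically by the Cloud APIC.

```python
# Illustrative sketch of dynamic server attachment: if a firewall sits
# between the load balancer and the provider EPG, the firewall endpoints
# become the targets; otherwise the provider EPG endpoints do.
# All structures and field names here are hypothetical.
def build_backend_pool(graph_nodes, provider_epg_endpoints):
    for node in graph_nodes:
        if node["type"] == "firewall":
            return list(node["endpoints"])
    return list(provider_epg_endpoints)

# Multi-node graph: NLB -> firewall -> provider EPG.
pool = build_backend_pool(
    [{"type": "nlb"}, {"type": "firewall", "endpoints": ["10.1.0.4", "10.1.0.5"]}],
    ["10.2.0.10", "10.2.0.11"])
assert pool == ["10.1.0.4", "10.1.0.5"]   # firewall endpoints become targets
```

With no intermediate firewall, the same function would return the provider EPG endpoints, matching the single-node case. The user never lists targets or FQDNs explicitly in either case.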
About Inter-VNet Services
Beginning with Release 5.0(2), support is available for the deployment and automation of inter-VNet services, for both the east-west and north-south use cases within the cloud.
Note the following considerations for this support:
- VNet peering needs to be configured for the hub-spoke topology. For more information, refer to Configuring VNet Peering for Cloud APIC for Azure.
- For multi-node services with redirect: The service device has to be present in the infra VNet. Service devices such as an ALB fronting the provider can be present in the provider VNet.
- For multi-node services without redirect: The service device can be in the provider VNet or spread across the hub VNet and the provider VNet.
- Inter-VNet traffic is supported with an Application Load Balancer or Network Load Balancer in the infra VNet and the provider in a non-infra VNet. The VNets should be peered together, and the load balancer and the provider should be from the same region.
About Multinodes
Beginning with release 5.0(2), multinode service graphs are supported. Multinodes enable multiple deployment scenarios with service graphs.
Service devices that can be deployed are the Application Load Balancer, Network Load Balancer, and third-party firewall.
Two types of nodes are admitted in a graph:
- Non-redirect: Traffic is destined to the service device (load balancers, third-party firewalls with DNAT and SNAT, Network Load Balancers).
- Redirect: The service device is a passthrough device (Network Load Balancer or firewall).
About Layer 4 to Layer 7 Service Redirect
Beginning with Release 5.0(2), the Layer 4 to Layer 7 Service Redirect feature is available for Cisco Cloud APIC, similar to the policy-based redirect (PBR) feature available for Cisco APIC. The Layer 4 to Layer 7 Service Redirect feature is configured using the Redirect option in the Cisco Cloud APIC.
Note: Throughout this section, the term "consumer-to-provider" is sometimes used as a blanket term to describe traffic going from point A to point B, where a redirect service device might be inserted between those two points. However, this does not mean that only consumer-to-provider traffic is supported for redirect; traffic might also be from provider-to-consumer, such as in the use case described in Spoke to Spoke.
With redirect, policies are used to redirect traffic through specific service devices, where service devices can be deployed as a Network Load Balancer or a third-party firewall. This traffic isn't necessarily destined for the service device as part of the standard consumer-to-provider configuration; rather, you would configure the consumer-to-provider traffic as you normally would, and you would then configure service graphs to redirect that consumer-to-provider traffic to a specific service device.
Support for redirect for Cisco Cloud APIC is only available in conjunction with the VNet peering feature, taking advantage of the hub-and-spoke topology used in VNet peering. For more information on the VNet peering feature, see the Configuring VNet Peering for Cloud APIC for Azure document.
Passthrough Rules
When redirect is enabled, the rules in the NSGs (Network Security Groups) attached to the service devices are updated to permit traffic from consumer to provider. These rules are called "passthrough rules." In general, a passthrough rule permits traffic from the consumer IP to the provider IP. If the destination IP is an Application Load Balancer (ALB) VIP, the rule permits traffic from the consumer IP to the ALB VIP.
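A minimal sketch of the passthrough-rule derivation, assuming hypothetical field names and example addresses (this is a conceptual model, not the NSG rule schema):

```python
# Illustrative sketch: a passthrough rule permits consumer IP -> provider IP,
# except when the destination is an ALB VIP, in which case the rule permits
# consumer IP -> ALB VIP. Field names and IPs are hypothetical examples.
def passthrough_rule(consumer_ip, provider_ip, alb_vip=None):
    dest = alb_vip if alb_vip is not None else provider_ip
    return {"action": "permit", "src": consumer_ip, "dst": dest}

# Plain consumer-to-provider case:
assert passthrough_rule("10.1.0.5", "10.2.0.7")["dst"] == "10.2.0.7"
# Destination is an ALB VIP:
assert passthrough_rule("10.1.0.5", "10.2.0.7", alb_vip="10.3.0.100")["dst"] == "10.3.0.100"
```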
Redirect Programming
Redirect programming depends on the classification of the destination EPG (tag-based or subnet-based):
- For a subnet-based EPG, the subnets of the destination EPG are used to program redirects.
- For tag-based EPGs, the CIDRs of the destination VNet are used to program redirects.
As a result, the redirect affects traffic from other EPGs going to the same destination in the redirect, even if those EPGs are not part of the service graph with the redirect. Traffic from EPGs that are not part of the redirect will also get redirected to the service device.
The following table describes how redirect is programmed in different scenarios.
| Consumer | Provider | Redirect on Consumer VNet | Redirect on Provider VNet |
|---|---|---|---|
| Tag-based | Tag-based | Redirect for the provider uses the CIDRs of the provider's VNet | Redirect for the consumer uses the CIDRs of the consumer's VNet |
| Tag-based | Subnet-based | Redirect for the provider uses the subnets of the provider | Redirect for the consumer uses the CIDRs of the consumer's VNet |
| Subnet-based | Tag-based | Redirect for the provider uses the CIDRs of the provider's VNet | Redirect for the consumer uses the subnets of the consumer |
| Subnet-based | Subnet-based | Redirect for the provider uses the subnets of the provider | Redirect for the consumer uses the subnets of the consumer |
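The programming rule in the table above can be condensed into a small sketch: a subnet-based EPG contributes its subnets, while a tag-based EPG contributes its VNet's CIDRs. The structures, names, and example CIDRs are illustrative only, not Cloud APIC objects.

```python
# Illustrative sketch of the redirect-programming rule described above.
# A subnet-based EPG contributes its own subnets; a tag-based EPG
# contributes the CIDRs of its VNet. All names/CIDRs are hypothetical.
def redirect_prefixes(epg):
    if epg["classification"] == "subnet-based":
        return epg["subnets"]
    if epg["classification"] == "tag-based":
        return epg["vnet_cidrs"]
    raise ValueError("unknown classification")

provider = {"classification": "tag-based",
            "subnets": ["10.2.1.0/24"], "vnet_cidrs": ["10.2.0.0/16"]}
# On the consumer VNet, redirect toward a tag-based provider uses the
# provider VNet's CIDRs, not individual subnets:
assert redirect_prefixes(provider) == ["10.2.0.0/16"]
```

This also makes the caveat above concrete: because a tag-based EPG is programmed at VNet-CIDR granularity, traffic from EPGs outside the service graph that targets the same CIDRs is redirected as well.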
Redirect Policy
To support the Layer 4 to Layer 7 Service Redirect feature, a new redirect flag is now available for service device connectors. The following table provides information on the existing and new flags for the service device connectors.
| ConnType | Description |
|---|---|
| redir | The service node is in redirect mode for that connection. This value is only available or valid for third-party firewalls and Network Load Balancers. |
| snat | Tells the service graph that the service node is performing source NAT on traffic. This value is only available or valid on the provider connector of third-party firewalls. |
| snat_dnat | Tells the service graph that the service node is performing both source NAT and destination NAT on traffic. This value is only available or valid on the provider connector of third-party firewalls. |
| none | Default value. |
Workflow for Configuring Redirect
Following is the typical workflow for configuring redirect:
1. Create one or more service devices to use with the service graph:
   - Network load balancer (NLB)
   - Application load balancer (ALB)
   - Third-party firewall
2. Create a service graph and select the appropriate service devices for this particular service graph.
   You will configure redirect at this point in the procedure:
   1. Drag and drop a network load balancer, application load balancer, or firewall icon to the Drop Device area to select that service device for the service graph.
   2. To enable the redirect feature, in the Service Node window that appears, check the box next to the Redirect option under the Consumer Connector Type and/or the Provider Connector Type areas, depending on where you want to enable the redirect function.
      Note: Even though you might have an application load balancer in the service graph, you cannot enable redirect on an application load balancer service device.
   3. Complete the remaining configurations in the Service Node window, then click Add.
3. Configure the EPG communication, where you create a contract between the consumer and the provider EPGs.
4. Attach the service graph to the contract.
5. Configure the service device parameters.