Overview
The Cisco Cloud APIC enables you to deploy Layer 4 to Layer 7 service devices to the public cloud. The initial release, 4.2(x), supports Azure Application Gateway (Application Load Balancer) deployments in Azure. Beginning with release 5.0(2), Azure Load Balancer (Network Load Balancer) and third-party firewall deployments in Azure are also supported.
Two types of load balancers are supported for Layer 4 to Layer 7 deployments in Azure:
- ALB refers to the Azure Application Gateway, or Application Load Balancer.
- NLB refers to the Azure Load Balancer, or Network Load Balancer.
About Service Graphs
A service graph is used to represent a set of Layer 4 to Layer 7 service devices inserted between two or more pairs of EPGs. EPGs can represent your applications running within a cloud (cloud EPGs), the internet (cloudExtEPG), or other sites (for example, on-premises or remote cloud sites). The Layer 4 to Layer 7 devices can be an NLB, an ALB, or a cluster of third-party firewalls.
A service graph, in conjunction with contracts (and filters), is used to specify communication between two EPGs. The Cloud APIC automatically derives security rules (network security groups/NSGs and ASGs) and forwarding routes (UDRs) based on the policy specified in the contract and service graph.
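As a rough illustration, the sketch below models the kind of security rule and route that get derived; the field names echo common Azure constructs, and all values are invented for the example rather than taken from the Cloud APIC's actual object model or output.

```python
# Illustrative sketch only: models the artifacts the Cloud APIC derives
# from a contract plus service graph. All names and values are examples.

# A contract permitting HTTPS between a consumer and provider EPG might
# yield an NSG security rule along these lines:
derived_nsg_rule = {
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "destination_port_range": "443",
    "source": "consumer EPG endpoints (ASG or subnet)",
    "destination": "provider EPG endpoints (ASG or subnet)",
}

# A forwarding route (UDR) steers traffic toward the provider side; with
# redirect, the next hop is the service device rather than the default route:
derived_udr = {
    "address_prefix": "10.20.0.0/16",     # example provider-side prefix
    "next_hop_type": "VirtualAppliance",
    "next_hop_ip_address": "10.0.1.10",   # example service device frontend IP
}

print(derived_nsg_rule)
print(derived_udr)
```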
Multiple service graphs can be specified to represent different traffic flows or topologies.
The following combinations are possible with service graphs:
- The same device can be used in multiple service graphs.
- The same service graph can be used between multiple consumer and provider EPGs.
By using a service graph, you can specify the policy once and deploy the service chain within a region or across regions. Each time the graph is deployed, Cisco ACI takes care of changing the network configuration to enable forwarding in the new logical topology.
For third-party firewalls, the configuration inside the device is not managed by the Cloud APIC.
A service graph represents the network using the following elements:
- Service Graph Nodes: A node represents a function that is applied to the traffic, such as a load balancer. A function within the service graph might require one or more parameters and have one or more connectors.
- Connector: A connector enables input and output from a node.
After the graph is configured, the Cisco APIC automatically configures the services according to the service function requirements that are specified in the service graph. The Cisco APIC also automatically configures the network according to the needs of the service function that is specified in the service graph, which does not require any change in the service device.
About Application Load Balancers
The Application Load Balancer (also called the Azure Application Gateway or ALB) is a Layer 7 load balancer that balances web traffic based on attributes such as the HTTP request and URL filtering. For more details, refer to the Microsoft documentation.
In Cisco ACI, there are two ways to deploy an Application Load Balancer:
- Internet-facing: inserts the Application Load Balancer as a service between the consumer external EPG and the provider cloud EPG.
- Internal-facing: inserts the Application Load Balancer as a service between the consumer cloud EPG and the provider cloud EPG.
You can consume an Application Load Balancer using a service graph. A typical configuration involves:
- Creating an L4-L7 device as an Application Load Balancer
- Consuming the ALB as a node in the service graph
- Creating one or more listeners in the EPG communication when a service graph is associated with a contract
Listeners enable you to specify the ports and protocols (HTTP or HTTPS) that the Application Load Balancer accepts traffic on. When specifying HTTPS, you also choose a security policy and an SSL certificate.
Note: A listener can have multiple certificates.
All listeners require you to configure at least one rule (a default rule, which does not have a condition). Rules enable you to specify the action that the load balancer takes when a condition is met. For example, you can create a rule that redirects traffic to a specified URL when a request is made to a specified hostname or path.
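As a rough sketch, a listener and its rules can be pictured as follows; the structure and key names below are assumptions made for the example, not the Cloud APIC configuration schema.

```python
# Illustrative sketch only: a hypothetical model of an ALB listener.
listener = {
    "port": 443,
    "protocol": "HTTPS",              # HTTP or HTTPS
    "security_policy": "default",     # chosen when the protocol is HTTPS
    "certificates": ["app-cert-1"],   # a listener can have multiple certificates
    "rules": [
        {
            # Every listener requires a default rule with no condition.
            "name": "default",
            "action": {"type": "forward", "pool": "provider-epg-pool", "port": 80},
        },
        {
            # A conditional rule: redirect requests for a path to a URL.
            "name": "docs-redirect",
            "condition": {"path": "/docs/*"},
            "action": {"type": "redirect", "url": "https://docs.example.com"},
        },
    ],
}
print(listener["rules"][0]["name"])  # "default"
```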
An Application Load Balancer (ALB) should be in a separate subnet that is not used to deploy other applications. The Cloud APIC creates and attaches the ALB's NSG to the subnet associated with the ALB. The Cloud APIC supports the Standard and Standard_v2 SKUs of the Azure Application Gateway.
About Network Load Balancer
A Network Load Balancer (Azure Load Balancer or NLB) is a Layer 4 device that distributes inbound flow packets based on Layer 4 ports. For more details, refer to the Microsoft documentation.
Similar to the ALB, the NLB can be deployed using a service graph. You specify its actions by configuring one or more listeners.
Listeners enable you to specify the ports and protocols (TCP or UDP) that the load balancer accepts and forwards traffic on. All listeners require you to configure at least one rule (a default rule, which does not have a condition). Rules enable you to specify the action that the load balancer takes when a condition is met. Unlike the Application Gateway, a rule here can only forward traffic to a specific port of the backend pool. Like the ALB, the NLB should be in a separate subnet. There are two modes of operation in a Network Load Balancer (see the sketch after this list):
- Forward mode: Traffic is forwarded from a specific listener port to the specified backend port.
- HA Port mode: The Network Load Balancer load balances TCP and UDP flows on all ports simultaneously.
The Cloud APIC supports the Standard SKU Network Load Balancer only.
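The two modes can be sketched as load-balancing rules. In Azure, HA Ports is expressed as protocol All with frontend and backend port 0; the dictionary shape below is an assumption for illustration.

```python
# Illustrative sketch only: NLB modes expressed as Azure-style rules.
forward_mode_rule = {
    "protocol": "Tcp",       # TCP or UDP
    "frontend_port": 8080,   # listener port traffic arrives on
    "backend_port": 80,      # specific backend port the rule forwards to
}

ha_port_mode_rule = {
    "protocol": "All",       # load balance TCP and UDP flows together
    "frontend_port": 0,      # 0 means all ports simultaneously
    "backend_port": 0,
}
print(forward_mode_rule, ha_port_mode_rule, sep="\n")
```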
In Figure 1, a frontend load balancer (ALB/NLB), a VM or firewall, and a backend load balancer (ALB/NLB) are inserted as a service between the consumer external EPG and the provider cloud EPG.
Dynamic Server Attachment to Server Pool
Servers in the provider EPG are dynamically added to the target groups. In Azure, target groups are referred to as backend pools. You provide the listener and rule configuration, which defines the frontend and backend protocol and port number and the load-balancing action. When configuring a listener rule as part of the service graph configuration, you can select the provider EPG for a given rule. The endpoints from that EPG are then dynamically added to the target group of the load balancer; you do not need to specify the endpoints or FQDNs for the targets.
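A minimal sketch of this behavior, assuming a hypothetical get_epg_endpoints helper that stands in for the endpoint discovery the Cloud APIC performs internally:

```python
# Illustrative sketch only: dynamic server attachment to a backend pool.

def get_epg_endpoints(epg_name):
    """Hypothetical stand-in for the Cloud APIC's endpoint discovery."""
    return {"web-epg": ["10.20.1.4", "10.20.1.5"]}.get(epg_name, [])

def sync_backend_pool(pool, provider_epg):
    """Resolve pool membership from the provider EPG at sync time.

    Endpoints joining or leaving the EPG are reflected in the pool;
    the user never lists target IPs or FQDNs explicitly.
    """
    pool["targets"] = get_epg_endpoints(provider_epg)
    return pool

pool = sync_backend_pool({"name": "web-pool", "targets": []}, "web-epg")
print(pool)  # {'name': 'web-pool', 'targets': ['10.20.1.4', '10.20.1.5']}
```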
About Inter-VNet Services
Beginning with Release 5.0(2), support is available for the deployment and automation of inter-VNet services, for both the East-West and North-South use cases within the cloud.
Note the following considerations for this support:
- VNet peering needs to be configured for the hub-spoke topology. For more information, refer to Configuring VNet Peering for Cloud APIC for Azure.
- For multi-node services with redirect: The service device has to be present in the infra VNet. Service devices such as an ALB fronting the provider can be present in the provider VNet.
- For multi-node services without redirect: The service device can be in the provider VNet or spread across the hub VNet and the provider VNet.
- Inter-VNet traffic is supported with an Application Load Balancer or Network Load Balancer in the infra VNet and the provider in a non-infra VNet. The VNets should be peered together, and the load balancer and the provider should be from the same region.
About Multinodes
Beginning with release 5.0(2), multinode service graphs are supported. Multinodes enable multiple deployment scenarios with service graphs.
The service devices that can be deployed are the Application Load Balancer, the Network Load Balancer, and third-party firewalls.
Two types of nodes are admitted in a graph:
- Non-redirect: Traffic is destined to the service device (a load balancer, a third-party firewall with DNAT and SNAT, or a Network Load Balancer).
- Redirect: The service device is a passthrough device (a Network Load Balancer or firewall).
About Layer 4 to Layer 7 Service Redirect
Beginning with Release 5.0(2), the Layer 4 to Layer 7 Service Redirect feature is available for Cisco Cloud APIC, similar to the policy-based redirect (PBR) feature available for Cisco APIC. The Layer 4 to Layer 7 Service Redirect feature is configured using the Redirect option in the Cisco Cloud APIC.
Note: Throughout this section, the term "consumer-to-provider" is sometimes used as a blanket term to describe traffic going from point A to point B, where a redirect service device might be inserted between those two points. However, this does not mean that only consumer-to-provider traffic is supported for redirect; traffic might also be from provider-to-consumer, such as in the use case described in Spoke to Spoke.
With redirect, policies are used to redirect traffic through specific service devices, where service devices can be deployed as a Network Load Balancer or a third-party firewall. This traffic isn't necessarily destined for the service device as part of the standard consumer-to-provider configuration; rather, you would configure the consumer-to-provider traffic as you normally would, and you would then configure service graphs to redirect that consumer-to-provider traffic to a specific service device.
Support for redirect for Cisco Cloud APIC is only available in conjunction with the VNet peering feature, taking advantage of the hub-and-spoke topology used in VNet peering. For more information on the VNet peering feature, see the Configuring VNet Peering for Cloud APIC for Azure document.
About the Overlay-1 and Overlay-2 VRFs
The overlay-1 and overlay-2 VRFs are automatically created in the infra tenant for Cloud APIC. In the Azure portal, CIDRs and subnets from the overlay-1 and overlay-2 VRFs are deployed in the Azure cloud on the overlay-1 VNet. The overlay-2 VRF is used to hold additional CIDRs. You shouldn't consider overlay-2 as a separate VNet.
The following sections provide more information on the overlay-1 and overlay-2 VRFs.
Requirement for Separate VRFs in the Infra Hub
Prior to Release 5.0(2), the infra hub VNet was used to achieve transit routing functionality for inter-spoke communications within the site through CSRs in the hub, and to send VxLAN packets for EPG communication across sites.
There are situations where you might want to deploy a certain number of EPGs configured with shared services and layer 4 to layer 7 service graphs in a common hub that can be shared across spokes. In some situations, you might have multiple hub networks deployed separately (for example, for production, pre-production, and core services). You might want to deploy all of these hub networks in the same infra hub VNet (in the same infra cloud context profile), along with the existing cloud CSRs.
Thus, for these kinds of requirements, you might need to split the hub VNet into multiple VRFs for network segmentation while keeping the security intact.
About the Infra Hub Services VRF (Overlay-2 VRF in the Infra VNet)
Beginning with Release 5.0(2), the overlay-2 VRF is now created in the infra tenant implicitly during the Cisco Cloud APIC bringup. In order to keep the network segmentation intact between the infra subnets used by the cloud site (for CSRs and network load balancers) and the user subnets deployed for shared services, different VRFs are used for infra subnets and user-deployed subnets:
- Overlay-1: Used for infra CIDRs for the cloud infra, along with the Cisco Cloud Services Routers (CSRs), the infra network load balancer, and the Cisco Cloud APIC.
- Overlay-2: Used for user CIDRs to deploy shared services, along with Layer 4 to Layer 7 service devices in the infra VNet (the overlay-1 VNet in the Azure cloud).
All the user-created EPGs in the infra tenant can only be mapped to the overlay-2 VRF in the infra VNet. You can add additional CIDRs and subnets to the existing infra VNet (the existing infra cloud context profile). They are implicitly mapped to overlay-2 VRF in the infra VNet, and are deployed in the overlay-1 VNet in the Azure cloud.
Prior to Release 5.0(2), any given cloud context profile would be mapped to a cloud resource of a specific VNet. All the subnets and associated route tables of the VNet would have a one-to-one mapping with a single VRF. Beginning with Release 5.0(2), the cloud context profile of the infra VNet can be mapped to multiple VRFs (the overlay-1 and overlay-2 VRFs in the infra VNet). In the cloud, the subnet's route table is the most granular entity for achieving network isolation, so all system-created cloud subnets of the overlay-1 VRF and the user-created subnets of the overlay-2 VRF are mapped to separate route tables in the cloud to achieve the network segmentation.
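A minimal sketch of that mapping, with invented subnet prefixes and route table names:

```python
# Illustrative sketch only: segmentation inside the single overlay-1 VNet.
# Each VRF's subnets map to their own route tables; all names are examples.
overlay1_vnet_route_tables = {
    "overlay-1": {  # system-created infra subnets (CSRs, NLB, Cloud APIC)
        "10.0.1.0/24": "rt-overlay1-infra-a",
        "10.0.2.0/24": "rt-overlay1-infra-b",
    },
    "overlay-2": {  # user-created subnets for shared services
        "10.1.1.0/24": "rt-overlay2-shared-a",
    },
}
print(overlay1_vnet_route_tables["overlay-2"])
```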
Note: On the Azure cloud, you cannot add or delete CIDRs in a VNet when it has active peering with other VNets. Therefore, when you need to add more CIDRs to the infra VNet, you need to first disable VNet peering in it, which removes all the VNet peerings associated with the infra VNet. After adding new CIDRs to the infra VNet, you need to enable VNet peering again in the infra VNet. You do not have to disable VNet peering if you are adding a new subnet in an existing CIDR in the hub VNet. See Adding a New CIDR to Overlay-2 Using the Cloud APIC GUI for more information.
Passthrough Rules
When redirect is enabled, the rules in the NSGs (Network Security Groups) attached to the service devices are updated to permit traffic from consumer to provider. These rules are called "passthrough rules". In general, a passthrough rule permits traffic from the consumer IP to the provider IP. If the destination IP is an Application Load Balancer (ALB) VIP, the rule permits traffic from the consumer IP to the ALB VIP.
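A minimal sketch of that logic, with invented example addresses:

```python
# Illustrative sketch only: the shape of a passthrough rule added to a
# service device's NSG when redirect is enabled.

def passthrough_rule(consumer_ip, provider_ip, alb_vip=None):
    # Permit consumer-to-provider traffic through the redirect device;
    # if the destination is an ALB VIP, permit consumer -> VIP instead.
    destination = alb_vip if alb_vip is not None else provider_ip
    return {"access": "Allow", "source": consumer_ip, "destination": destination}

print(passthrough_rule("10.10.1.0/24", "10.20.1.0/24"))
print(passthrough_rule("10.10.1.0/24", "10.20.1.0/24", alb_vip="10.30.0.5"))
```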
Redirect Programming
Redirect programming depends on the classification of the destination EPG (tag-based or subnet-based):
- For a subnet-based EPG, the subnets of the destination EPG are used to program redirects.
- For a tag-based EPG, the CIDRs of the destination VNet are used to program redirects.
As a result, the redirect can affect traffic from other EPGs going to the same destination, even if those EPGs are not part of the service graph with the redirect: their traffic will also get redirected to the service device.
The following table describes how redirect is programmed in different scenarios.
| Consumer | Provider | Redirect on Consumer VNet | Redirect on Provider VNet |
|---|---|---|---|
| Tag-based | Tag-based | Redirect for the provider: the CIDRs of the provider's VNet | Redirect for the consumer: the CIDRs of the consumer's VNet |
| Tag-based | Subnet-based | Redirect for the provider: the subnets of the provider | Redirect for the consumer: the CIDRs of the consumer's VNet |
| Subnet-based | Tag-based | Redirect for the provider: the CIDRs of the provider's VNet | Redirect for the consumer: the subnets of the consumer |
| Subnet-based | Subnet-based | Redirect for the provider: the subnets of the provider | Redirect for the consumer: the subnets of the consumer |
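The prefix selection summarized in the table can be sketched as follows; the dictionary layout is an assumption made for the example.

```python
# Illustrative sketch only: which prefixes are used to program redirects,
# based on the classification of the destination EPG.

def redirect_prefixes(dest_epg):
    if dest_epg["classification"] == "subnet-based":
        # Subnet-based EPG: redirect only the EPG's own subnets.
        return dest_epg["subnets"]
    # Tag-based EPG: redirect the destination VNet's CIDRs. This is why
    # traffic from EPGs outside the service graph can also be redirected
    # when it targets the same VNet.
    return dest_epg["vnet_cidrs"]

provider = {
    "classification": "tag-based",
    "subnets": ["10.20.1.0/24"],
    "vnet_cidrs": ["10.20.0.0/16"],
}
print(redirect_prefixes(provider))  # ['10.20.0.0/16']
```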
Redirect Policy
To support the Layer 4 to Layer 7 Service Redirect feature, a new redirect flag is now available for service device connectors. The following table provides information on the existing and new flags for the service device connectors.
| ConnType | Description |
|---|---|
| redir | The service node is in redirect mode for that connection. This value is only available or valid for third-party firewalls and Network Load Balancers. |
| snat | Tells the service graph that the service node is performing source NAT on traffic. This value is only available or valid on the provider connector of third-party firewalls. |
| snat_dnat | Tells the service graph that the service node is performing both source NAT and destination NAT on traffic. This value is only available or valid on the provider connector of third-party firewalls. |
| none | Default value. |
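As a rough sketch, two hypothetical firewall nodes using these flags might be described as follows; only the connType values come from the table above, and the surrounding structure is invented for illustration.

```python
# Illustrative sketch only: service-node connectors with connType flags.
redirect_firewall = {
    "device": "third-party-firewall",
    "consumer_connector": {"connType": "redir"},  # passthrough/redirect mode
    "provider_connector": {"connType": "redir"},
}

nat_firewall = {
    "device": "third-party-firewall",
    "consumer_connector": {"connType": "none"},       # default
    "provider_connector": {"connType": "snat_dnat"},  # NAT flags are provider-side only
}
print(redirect_firewall, nat_firewall, sep="\n")
```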
Workflow for Configuring Redirect
Following is the typical workflow for configuring redirect:
- Create one or more service devices to use with the service graph:
  - Network Load Balancer (NLB)
  - Application Load Balancer (ALB)
  - Third-party firewall
- Create a service graph and select the appropriate service devices for this particular service graph. You configure redirect at this point in the procedure:
  - Drag and drop a Network Load Balancer, Application Load Balancer, or firewall icon to the Drop Device area to select that service device for the service graph.
  - To enable the redirect feature, in the Service Node window that appears, check the box next to the Redirect option under the Consumer Connector Type and/or the Provider Connector Type areas, depending on where you want to enable the redirect function.
    Note: Even though you might have an Application Load Balancer in the service graph, you cannot enable redirect on an Application Load Balancer service device.
  - Complete the remaining configurations in the Service Node window, then click Add.
- Configure the EPG communication, where you create a contract between the consumer and the provider EPGs.
- Attach the service graph to the contract.
- Configure the service device parameters.