Neutron SVI Integration

This chapter contains the following sections:

Neutron SVI Integration Overview

A Switched Virtual Interface (SVI) represents a virtual local area network (VLAN) of switch ports as a single interface to a routing or bridging system. In the context of a Layer 3 Out (L3Out) configuration, you configure an SVI to provide connectivity between the Cisco Application Centric Infrastructure (ACI) leaf switch and a router.

By default, when you configure a single L3Out with SVI interfaces, the VLAN encapsulation spans multiple nodes within the fabric. The spanning occurs because the Cisco ACI fabric configures the same bridge domain (VXLAN VNI) across all the nodes in the fabric where the L3Out SVI is deployed, as long as all SVI interfaces use the same external encapsulation (VLAN). However, when different L3Outs are deployed, the Cisco ACI fabric uses different bridge domains even if they use the same external encapsulation (VLAN).


Note

Beginning with Cisco Application Policy Infrastructure Controller (APIC) Release 5.1(1), Cisco APIC supports dual stack on SVI networks; that is, it now supports IPv6 and IPv4 connections. This feature enables Border Gateway Protocol (BGP) peering over L3Outs for IPv6 routes and IPv4 routes if you enable BGP. See the section Configure SVI Integration with Dual-Stack IP Addressing.

Neutron SVI

Beginning with Cisco APIC Release 5.1(1), you can enable the Neutron SVI feature for VMs on OpenStack compute nodes using the OpFlex agent as well as the community Open vSwitch (OVS) agent. This feature is available only in unified mode with the AIM-based plug-in.

When an OpenStack cluster is integrated with Cisco ACI through the Cisco ACI Modular Layer 2 (ML2) plug-in, you can create a floating L3Out that dynamically instantiates SVIs on Cisco ACI border leaf switches to peer with the OpenStack workload. In the Cisco ACI naming convention, this function is called Neutron SVI. The Neutron SVI feature allows you to configure, in OpenStack, Neutron networks that automatically create a Cisco ACI L3Out, which can optionally enable BGP peering.

The OpenStack administrator can bind OpenStack virtual network functions (VNFs) directly to those Neutron networks. The Cisco ACI ML2 plug-in for OpenStack dynamically creates and deletes the SVI configuration on the L3Out to establish BGP peering with the VNFs as they are created or destroyed. You can create an SVI network without specifying the L3Out; in that case, the plug-in automatically creates the L3Out and establishes the mapping.

Reasons for Configuring SVIs

SVIs are configured for a VLAN for the following reasons:

  • To allow BGP peering between virtual machines (VMs) and the L3Out

  • To use the upstream OpenStack API to control L3Out node profiles

  • To enable the OpenStack API to create an L3Out configuration on Cisco ACI

SVI advantages include:

  • Configuration of the dynamic routing protocol between fabric switch and VNFs

  • Support for dynamic and distributed VNFs even across multiple Cisco ACI pods

  • Equal-cost multipath (ECMP) traffic distribution among VNFs

  • Optimal performance with VNFs

  • Distributed route peering between the switches and OpenStack VNFs

    The Cisco ACI plug-in for OpenStack enables the route peering based on the creation or destruction of VNFs. The Neutron SVI feature dynamically and automatically creates and destroys the SVI on the underlay. The feature also enables line rate routing capabilities and up to 64-way ECMP to the VNFs.

    Neutron SVI supports up to six pairs of switches under the same L3Out. It also supports VNFs across distributed sites (Cisco ACI Multi-Pod) and bonding with virtual port channel (vPC) to the fabric, with bidirectional forwarding detection (BFD) for fast VM failure detection.

Configuring SVI

This section describes how to configure the Switched Virtual Interface (SVI).

Procedure


Create a Neutron network with "--apic:svi True":

Example:

#######
# Create the LB SVI network and its subnet, which will be used for BGP peering between
# the ACI leaf and the LB. --no-dhcp is required initially so that a random IP address
# is not assigned to the SVI.

neutron net-create LBSVI --provider:network_type vlan --provider:physical_network physnet1 \
--apic:svi True --apic:bgp_enable True --apic:bgp_asn 2010 
openstack subnet create --ip-version 4 --subnet-range 172.168.0.0/24 --gateway 172.168.0.1 \
--network LBSVI LBSUBNET --no-dhcp   

# Define the static leaf IP address for the SVI (optional, but useful so that the LB
# knows which neighbor to peer with).
openstack port create apic-svi-port:node-101 --network LBSVI --device-owner apic:svi \ 
--fixed-ip subnet=LBSUBNET,ip-address=172.168.0.11
openstack port create apic-svi-port:node-102 --network LBSVI --device-owner apic:svi \
--fixed-ip subnet=LBSUBNET,ip-address=172.168.0.12

# Now that the static ports are set, DHCP can be enabled.
openstack subnet set LBSUBNET --dhcp

# Create ports for 2 LB VMs with static IPv4 addresses 172.168.0.21 and 172.168.0.22.
# LBSUBNET6 is the IPv6 subnet that is added in Configure SVI Integration with
# Dual-Stack IP Addressing.
openstack port create LB1PORT --network LBSVI --fixed-ip subnet=LBSUBNET,ip-address=172.168.0.21 \
--fixed-ip subnet=LBSUBNET6,ip-address=2001:db8::11
openstack port create LB2PORT --network LBSVI --fixed-ip subnet=LBSUBNET,ip-address=172.168.0.22 \
--fixed-ip subnet=LBSUBNET6,ip-address=2001:db8::12

LB1=$(openstack port list | awk '/LB1/ {print $2}')
LB2=$(openstack port list | awk '/LB2/ {print $2}')

nova boot --flavor m1.tiny --image LB1 --nic port-id=$LB1 vLB1
nova boot --flavor m1.tiny --image LB2 --nic port-id=$LB2 vLB2
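
After the VMs boot, you can optionally confirm that the load-balancer ports are bound; the bound ports are what trigger the SVI and interface profile creation on Cisco APIC. The following is a quick check with the standard OpenStack client (column names can vary slightly between client versions):

openstack port show $LB1 -c status -c binding_vif_type
openstack port show $LB2 -c status -c binding_vif_type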

Dual Stack with SVI Integration

OpenStack supports dual-stack IP addressing, that is, the ability to configure IPv4 and IPv6 traffic at the same time, on Switched Virtual Interface (SVI) networks. Using dual-stack IP addressing provides the advantages of IPv6 (increased network efficiency and a larger address space) while supporting hosts or applications that do not support IPv6.

With dual-stack IP addressing, the Cisco Application Centric Infrastructure (ACI) ML2 plug-in adds support for Border Gateway Protocol (BGP) peering for IPv6 connections as well as IPv4 connections if you enable BGP. Previously, BGP peering was supported for IPv4 connections only.

Configuring SVI integration with dual-stack IP addressing is similar to configuring SVI integration without dual-stack. With IPv4 only, you create an SVI-type network, define a static IP address for the leaf switch, and enable DHCP. However, for dual-stack, you add both IPv4 and IPv6 subnets to the SVI-type network. You can have only one IPv4 subnet and one IPv6 subnet on the SVI network.

Adding subnets creates a bound DHCP port on Cisco Application Policy Infrastructure Controller (APIC). Bringing up a virtual machine (VM) also creates a bound port on Cisco APIC. The presence of a bound port, regardless of origin, prompts the OpenStack plug-in to create the required interface profiles. The profiles enable traffic to flow through the leaf switch.

To use BGP peering, you must enable BGP in the Neutron network creation command. For example:

neutron net-create LBSVI --provider:network_type vlan \
--provider:physical_network physnet1 --apic:svi True --apic:bgp_enable True \
--apic:bgp_asn 2010

The option --apic:bgp_asn 2010 prepares the Layer 3 outside connection (L3Out) definition for BGP peering to the OpenStack workload. BGP configuration on the L3Out is dynamically configured only when the first OpenStack virtual machine (VM) is attached to this Neutron network. Until VMs exist, the SVI and related BGP configuration do not exist in Cisco Application Centric Infrastructure (ACI).


Note

You must manually define the BGP configuration on the OpenStack workload because it is outside the control of the ACI ML2 plug-in. For an example, see the sketch in the section Configure BGP.

Prerequisites for Configuring SVI Integration with Dual Stack IP Addressing

This section lists the tasks that you must complete before you configure dual-stack Switched Virtual Interface (SVI) integration and Border Gateway Protocol (BGP) peering.

  1. Create a tenant.

  2. If you plan to specify IP addresses for the IPv4 or IPv6 connections, determine and reserve the addresses.

    Alternatively, you can let the OpenStack plug-in assign IP addresses automatically.

  3. If you plan to specify a Neutron port—referred to as an SVI port in OpenStack—create one.

    Alternatively, you can let the OpenStack plug-in create one automatically.

  4. If you want to use BGP peering, configure BGP routing on the virtual machine (VM) that you plan to bring up in OpenStack.

  5. Create at least one virtual machine (VM).

Configure SVI Integration with Dual-Stack IP Addressing

To configure Switched Virtual Interface (SVI) integration with dual-stack IP addressing, you enter a series of OpenStack commands and verify the configuration in Cisco Application Policy Infrastructure Controller (APIC).


Note

OpenStack supports only one subnet from each address type. That is, you can configure only one IPv4 subnet and only one IPv6 subnet. If you configure more than one IPv4 subnet or more than one IPv6 subnet, the OpenStack plug-in raises an error.

Before you begin

Fulfill the conditions and tasks in the section Prerequisites for Configuring SVI Integration with Dual Stack IP Addressing.

Procedure


Step 1

Create an SVI-type network.

Example:

neutron net-create LBSVI --provider:network_type vlan --apic:svi True \
--apic:bgp_enable True --apic:bgp_asn 2010 \
--apic:distinguished_names type=dict ExternalNetwork=uni/tn-common/out-Access-Out/instP-data_ext_pol

The command also enables Border Gateway Protocol (BGP) on the network. If you configure an endpoint on the network, the OpenStack plug-in creates a BGP peer connectivity profile on Cisco APIC.

The option --apic:distinguished_names is optional. You need it only if you want to use an existing Layer 3 outside (L3Out) connection that is already defined on Cisco Application Centric Infrastructure (ACI). If you do not include this parameter, the OpenStack plug-in creates a new L3Out on the Cisco ACI fabric.

Step 2

Add an IPv4 subnet to the network.

Example:

openstack subnet create --ip-version 4 --subnet-range 172.168.0.0/24 \
--gateway 172.168.0.1 --network LBSVI LBSUBNET

The OpenStack plug-in creates a DHCP port on Cisco APIC. The DHCP port is a bound port, and its creation triggers the creation of the static path.

Step 3

Create a Neutron port, which the interface profile requires.

You can skip this step and allow the OpenStack plug-in to create the port automatically.

In OpenStack, the Neutron port is referred to as the SVI port.

Example:

openstack port create apic-svi-port:node-103 --network LBSVI --device-owner apic:svi \
--fixed-ip subnet=LBSUBNET,ip-address=172.168.0.11

Creation of the DHCP and SVI ports automatically triggers the creation of an interface profile on Cisco APIC, which enables traffic to flow through the leaf switch.

Step 4

Add an IPv6 subnet.

Example:

openstack subnet create --ip-version 6 --subnet-range 2001:db8::/64 --gateway 2001:db8::1 \
--ipv6-ra-mode slaac --ipv6-address-mode slaac --network LBSVI LBSUBNET6

Neutron updates the DHCP port with the new subnet information, and the OpenStack plug-in automatically creates an interface profile for the new subnet. You created Neutron (SVI) ports with IPv4 and IPv6 addresses in the section Configuring SVI.
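
To confirm that the network now carries exactly one IPv4 subnet and one IPv6 subnet, you can list the subnets on the network with the standard OpenStack client:

openstack subnet list --network LBSVI

The output should show LBSUBNET (IPv4) and LBSUBNET6 (IPv6).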


What to do next

Verify the SVI integration in Cisco APIC:

Go to Tenants > tenant > Networking > L3Outs > Access-Out > Logical Node Profiles > IfProfile or IfProfile6, and then click the SVI tab. The central work pane displays the path, IP address, and other information about the interface profile.

Configure BGP

After you configure a Switched Virtual Interface (SVI) with Border Gateway Protocol (BGP) enabled, you can configure BGP for IPv4. Beginning with Cisco Application Policy Infrastructure Controller (APIC) Release 5.1(1), you can configure BGP for IPv6 routes as well as IPv4 routes.

Before you begin

You must have enabled BGP when you configured an SVI. See Configure SVI Integration with Dual-Stack IP Addressing.

Procedure


Step 1

Configure BGP on OpenStack virtual machines (VMs) on the SVI network with the appropriate export routes.

The BGP configuration is separate from OpenStack or Cisco Application Centric Infrastructure (ACI) and depends on the VM type.
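
For example, the following is a minimal sketch of an IPv4 BGP configuration for a VM that runs FRRouting (FRR). The addresses and AS numbers are assumptions taken from the earlier examples: AS 2010 matches --apic:bgp_asn, 172.168.0.11 and 172.168.0.12 are the static leaf SVI addresses, 65000 is a hypothetical fabric AS number, and 10.10.10.0/24 is a hypothetical service prefix to export. Adapt the sketch to your VM type and deployment.

! /etc/frr/frr.conf on the VM (sketch; the fabric AS number 65000 is an assumption)
router bgp 2010
 bgp router-id 172.168.0.21
 ! Peer with the leaf SVI addresses that were defined as static SVI ports
 neighbor 172.168.0.11 remote-as 65000
 neighbor 172.168.0.12 remote-as 65000
 address-family ipv4 unicast
  ! Export the hypothetical service prefix toward the fabric
  network 10.10.10.0/24
 exit-address-family
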
Step 2

Verify the BGP configuration.

  1. Log in to Cisco APIC.

  2. Go to Tenants > tenant > L3Outs > Access-Out > Logical Node Profiles > NodeProfile > Configured Nodes > topology/pod/node > BGP for VRF-common: > Neighbors.

  3. Open the Neighbors folder.

  4. The folder displays the IP address for the IPv4 route and, if you configured it, the IP address for the IPv6 route.
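
You can also verify the BGP sessions from the border leaf switch CLI. The following is a quick check, assuming that the VRF is named common:<VRF-name> as shown in the navigation path above:

leaf-101# show bgp sessions vrf common:<VRF-name>

The output lists the IPv4 and, if configured, IPv6 neighbors and their session states.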


Troubleshooting SVI Networks

This section describes how to troubleshoot SVI networks.

  • Make sure that the Layer 3 domain DN is configured properly: the DN in the Neutron configuration file must point to a working external routed domain in Cisco APIC, and neutron-server must have been restarted on all the controller nodes.

  • Make sure that there are no faults in either the pre-existing L3Outs or the L3Outs that the mechanism driver creates automatically.

  • Make sure that the SVI interface is created properly, with the right path and the VLAN ID of the SVI network, when DHCP or VM endpoints are created.

    • In a virtual port channel (vPC) setup, the mechanism driver allocates the primary IP addresses of side A and side B from the SVI subnet, and they are consistent per vPC pair across the SVI interfaces. The secondary IP address is always the gateway IP of the SVI subnet.

  • Make sure that nodes are created properly under the node profile in the L3Out.

    • When the SVI interface is created, the mechanism driver also creates the corresponding nodes. The node information is in the SVI path itself. In a vPC setup, each SVI interface has two nodes in its path.

  • The bgp_asn parameter that is specified at network create or update time should be the same as the AS number that the guest machine acting as a BGP peer uses for peering, and it should not match the AS number that Cisco ACI uses for internal fabric BGP peering. Also, ensure that the provider and consumer BGP peers have different AS numbers so that routes are redistributed from one peer to the other.


    Note

    Directly connected subnets are never redistributed, as is the common use case with eBGP. If needed, you can export these subnets with an explicit permit route map outside of OpenStack by posting to Cisco APIC.
  • The OpenStack integration exposes only the minimal configuration that is needed to establish a BGP session and to import and export prefixes that are learned by peering with other peers. For all advanced route-map, community, or password configuration, use the Cisco APIC API directly.

  • Use the aimctl manager command to debug when you think that objects should be created under an L3Out but you do not see them in Cisco APIC, or vice versa.

    • Enter aimctl manager | grep out and aimctl manager | grep external to list all the L3Out-related and external-network-related CLI commands that are available, as shown in the example after this list.

    • If sync_status shows a failure, check the /var/log/aim-aid.log file for more details.
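
For example, from a controller node (see the note below for OSP 16 and later, where the commands must be run inside the ciscoaci_aim container):

aimctl manager | grep out
aimctl manager | grep external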


Note

If you use Red Hat OpenStack Platform (OSP) 16 or later, you must execute the aimctl commands from inside the ciscoaci_aim container. To enter the container from a controller node, run the following command: docker exec -itu root ciscoaci_aim bash. To exit the container, type exit.