Cisco VIM supports installation on two different types of pods: B-series and C-series. Both offerings support NICs from Cisco (called Cisco VIC). You can choose the C-series pod to run in a pure Intel NIC environment and thereby obtain SRIOV support on the C-series pod. This section describes the differences in networking between the Intel NIC and Cisco VIC installations.
To achieve network-level security and isolation of tenant traffic, Cisco VIM segments the various
OpenStack networks. The Cisco NFVI network includes six different segments in
the physical infrastructure (underlay). These segments are presented as VLANs
on the Top-of-Rack (ToR) Nexus switches (except for the provider network) and
as vNIC VLANs on Cisco UCS servers. You must allocate subnets and IP addresses
to each segment. Cisco NFVI network segments include: API, external, management
and provisioning, storage, tenant and provider.
API Segment
The API segment needs one VLAN and two IPv4 addresses (four if you are installing Cisco VTS) in an externally accessible subnet that is different from the subnets assigned to the other Cisco NFVI segments; a full subnet is not required. These IP addresses are used for:
- OpenStack API endpoints. These are configured within the control node HAProxy load balancer.
- Management node external connectivity.
- Cisco Virtual Topology Services (VTS), if included in your Cisco NFVI package.
- Virtual Topology Controller (VTC), which is optional for VTS.
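As an illustration only, the API segment information supplied at install time might look like the following setup_data.yaml sketch; the VLAN, subnet, and key layout are placeholders and can differ by Cisco VIM release.

```yaml
# Hypothetical setup_data.yaml excerpt for the API segment (values are placeholders).
# In practice, all segments appear as entries of a single networks list.
NETWORKING:
  networks:
    - segments:
        - api
      vlan_id: 3000            # example VLAN presented on the ToR for the API segment
      subnet: 172.26.233.0/28  # externally accessible range; only a few addresses are consumed
```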
External Segment
The external segment
needs one VLAN to configure the OpenStack external network. Provide the VLAN
during installation in the Cisco NFVI setup_data.yaml file, but configure
the actual subnet using the OpenStack API after the installation. Then use the
external network to assign OpenStack floating IP addresses to VMs running on
Cisco NFVI.
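A hedged sketch of how the external VLAN might be declared in setup_data.yaml; only the VLAN is given at install time, and the subnet itself is created later through the OpenStack API (values and key layout are illustrative).

```yaml
# Hypothetical setup_data.yaml excerpt for the external segment (VLAN only).
NETWORKING:
  networks:
    - segments:
        - external
      vlan_id: 3001   # example VLAN; the floating IP subnet is configured post-install in OpenStack
```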
Management and Provisioning Segment
The management and provisioning segment needs one VLAN and one subnet with an address pool large enough to accommodate all the current and future servers planned for the pod, for initial provisioning (PXE boot Linux) and, thereafter, for all OpenStack internal communication. This VLAN and subnet can be local to Cisco NFVI for C-series deployments. For B-series pods, the UCS Manager IP and management network must be routable. You must statically configure the management IP addresses of the Nexus switches and the Cisco IMC IP addresses of the Cisco UCS servers, not assign them through DHCP, and they must be reachable through the API segment. The management/provisioning subnet can be either internal to Cisco NFVI (that is, in a lab it can be a non-routable subnet limited to Cisco NFVI only for C-series pods) or an externally accessible and routable subnet. All Cisco NFVI nodes (including the Cisco VTC node) need an IP address from this subnet.
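For illustration, a management/provisioning entry with an address pool sized for current and future servers might look like this sketch (all values are placeholders; key names can vary by release).

```yaml
# Hypothetical setup_data.yaml excerpt for the management/provisioning segment.
NETWORKING:
  networks:
    - segments:
        - management
        - provision
      vlan_id: 3100                 # example VLAN; can be local to Cisco NFVI on C-series pods
      subnet: 10.30.117.0/25        # sized for all current and future servers in the pod
      gateway: 10.30.117.1
      pool:
        - 10.30.117.10 to 10.30.117.100   # addresses assigned during PXE boot and provisioning
```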
Storage Segment
Cisco VIM has a
dedicated storage network used for Ceph monitoring between controllers, data
replication between storage nodes, and data transfer between compute and
storage nodes. The storage segment needs one VLAN and a /29 or larger subnet, internal to Cisco NFVI, to carry all Ceph replication traffic. All the participating nodes in the pod have IP addresses on this subnet.
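An illustrative storage segment entry, assuming a /29 or larger internal subnet (placeholder values):

```yaml
# Hypothetical setup_data.yaml excerpt for the storage segment (internal to Cisco NFVI).
NETWORKING:
  networks:
    - segments:
        - storage
      vlan_id: 3200              # example VLAN for Ceph monitoring and replication traffic
      subnet: 192.168.20.0/27    # /29 or larger; non-routable, internal to the pod
```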
Tenant Segment
The tenant segment needs one VLAN and a subnet, internal to Cisco NFVI and sized to the pod's tenant capacity, to carry all tenant virtual network traffic. Only Cisco NFVI control and compute nodes have IP addresses on this subnet. The VLAN/subnet can be local to Cisco NFVI.
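A sketch of a tenant segment entry, again with placeholder values; only control and compute nodes draw addresses from this subnet.

```yaml
# Hypothetical setup_data.yaml excerpt for the tenant segment (local to Cisco NFVI).
NETWORKING:
  networks:
    - segments:
        - tenant
      vlan_id: 3300              # example VLAN carrying tenant virtual network traffic
      subnet: 192.168.30.0/25    # sized to the pod's tenant capacity
```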
Provider Segment
Provider networks
are optional for Cisco NFVI operations but are often used for real VNF traffic.
You can allocate one or more VLANs for provider networks from OpenStack after the installation is completed.
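Provider VLANs are typically expressed as a range rather than a single VLAN and subnet; the following is a hedged sketch of how this might appear in setup_data.yaml (key names reflect common Cisco VIM usage and values are illustrative).

```yaml
# Hypothetical setup_data.yaml excerpt for provider networking (optional).
PROVIDER_VLAN_RANGES: "2500:2600"   # example range of VLANs trunked to the pod for VNF traffic
NETWORKING:
  networks:
    - segments:
        - provider
      vlan_id: None   # no single VLAN/subnet is reserved; provider networks are created from OpenStack
```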
Cisco NFVI renames interfaces based on the network type that each interface serves. The segment Virtual IP (VIP)
name is the first letter of the segment name. Combined segments use the first
character from each segment for the VIP, with the exception of provisioning
whose interface VIP name is mx instead of mp to avoid ambiguity with the
provider network. The following table shows Cisco NFVI network segments, usage,
and network and VIP names.
Table 2 Cisco NFVI Networks

| Network | Usage | Network Name | VIP Name |
|---|---|---|---|
| Management/Provisioning | OpenStack control plane traffic; application package downloads; server management (the management node connects to servers on this network); host default route; PXE booting servers during bare metal installations | Management and provisioning | mx |
| API | Clients connect to the API network to interface with OpenStack APIs; OpenStack Horizon dashboard; default gateway for the HAProxy container; integration with endpoints served by the SwiftStack cluster for native object storage, Cinder backup service, or Identity service with LDAP/AD | api | a |
| Tenant | VM-to-VM traffic, for example, VXLAN traffic | tenant | t |
| External | Access to VMs using floating IP addresses | external | e |
| Storage | Transit network for the storage back end; storage traffic between VMs and Ceph nodes | storage | s |
| Provider Network | Direct access to existing network infrastructure | provider | p |
| ACIINFRA | Internal ACI network for policy management (only allowed when deployed with ACI) | aciinfra | o |
| Installer API | VIM installer API | | br_api |
For each C-series
pod node, two vNICs are created using different ports and bonded for redundancy
for each network. Each network is defined in setup_data.yaml using the naming
conventions listed in the preceding table. The VIP Name column provides the
bonded interface name (for example, mx or a) while each vNIC name has a 0 or 1
appended to the bonded interface name (for example, mx0, mx1, a0, a1).
The Cisco NFVI
installer creates the required vNICs, host interfaces, bonds, and bridges with
mappings created between all elements. The number and type of created vNICs,
interfaces, bonds, and bridges depend on the Cisco NFVI role assigned to the
UCS server. For example, the controller node has more interfaces than the
compute or storage nodes. The following table shows the networks that are
associated with each Cisco NFVI server role.
Table 3 Cisco NFVI Network-to-Server Role Mapping

| Network | Management Node | Controller Node | Compute Node | Storage Node |
|---|---|---|---|---|
| Management/Provisioning | + | + | + | + |
| ACIINFRA* | | + | + | |
| API | | + | | |
| Tenant | | + | + | |
| Storage | | + | + | + |
| Provider | | | + | |
| External | | + | | |

Note: *ACIINFRA is only applicable when using ACI as a mechanism driver.
In the initial Cisco NFVI deployment, two bridges are created on the controller nodes, and interfaces and bonds are attached to these bridges. The br_api bridge connects the API (a) interface to HAProxy. The HAProxy and Keepalived container runs VIPs for each OpenStack API endpoint. The br_mgmt bridge connects the Management and Provisioning (mx) interface to the HAProxy container as well.
The following
diagram shows the connectivity between Cisco NFVI nodes and networks.
Figure 5. Cisco NFVI
Network Connectivity
Supported Layer 2 networking protocols include:
- Virtual extensible LAN (VXLAN) over a Linux bridge.
- VLAN over Open vSwitch (SRIOV with the Intel X710 NIC).
- VLAN over VPP/VLAN, for C-series only.
- For UCS B-series pods, Single Root Input/Output Virtualization (SRIOV). SRIOV allows a single physical PCI Express device to be shared across different virtual environments, offering different virtual functions to different virtual components, for example, network adapters, on a physical server.
Any connection
protocol can be used unless you install UCS B200 blades with the UCS Manager
plugin, in which case, only OVS over VLAN can be used. The following table
shows the available Cisco NFVI data path deployment combinations.
Table 4 Cisco NFVI Data Path Deployment Combinations

| NFVI Pod Type | Pod Type | Mechanism Driver | Tenant Virtual Network Encapsulation (VLAN) | Tenant Virtual Network Encapsulation (VxLAN) | Provider Virtual Network Encapsulation (VLAN) | SRIOV for VM | PCI Passthrough Ports | MTU 1500 | MTU 9000 |
|---|---|---|---|---|---|---|---|---|---|
| UCS C-series | Full on | LinuxBridge | No | Yes | Yes | No | No | Yes | No |
| UCS C-series | Full on, micro, HC | Openvswitch | Yes | No | Yes | Yes* | No | Yes | Yes |
| UCS C-series | Full on, micro | VPP | Yes | No | Yes | No | No | Yes | Yes |
| UCS C-series | Full on, micro | ACI | Yes | No | Yes | No | No | Yes | Yes |
| UCS C-series | Full on | VTF with VTC*** | No | Yes | Yes | No | No (except through DPDK) | Yes | Yes |
| UCS B | Full on | Openvswitch | Yes | No | Yes | Yes | No | Yes | Yes |

Note: Full on indicates dedicated control, compute, and ceph nodes. Micro indicates converged control, compute, and ceph nodes with expandable computes. HC (Hyperconverged) indicates dedicated control and compute nodes, but all ceph nodes are also compute nodes.
Note: ***VTF with VTC is only supported on C-series Cisco VIC.
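The mechanism driver and tenant encapsulation combinations in Table 4 map to top-level choices in setup_data.yaml. The following sketch uses key names as commonly seen in Cisco VIM setup data; treat it as illustrative rather than authoritative.

```yaml
# Hypothetical setup_data.yaml excerpt selecting one row of Table 4.
MECHANISM_DRIVERS: openvswitch    # alternatives per Table 4: linuxbridge, vpp, aci, vtf
TENANT_NETWORK_TYPES: "VLAN"      # OVS/VPP/ACI rows use VLAN; LinuxBridge and VTF rows use VxLAN
TENANT_VLAN_RANGES: "1500:1600"   # example tenant VLAN pool when VLAN encapsulation is chosen
```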
Pod with Intel NICs
In the case of a pod with Intel NICs (X710), the networking is slightly different. Each server requires at least two NICs (4x10G) so that NIC-level redundancy can be supported, and each NIC is connected to each ToR (the connections are explained later in the chapter). Because vNICs are not supported on the Intel card, the physical interfaces are bonded at the host and sub-interfaces are then created based on the segment VLAN. Call the two NIC cards NIC_1 and NIC_2, and call their four ports A, B, C, and D. Unlike a Cisco VIC-based pod, the traffic here is classified into the following:
- Control plane.
- Data plane (external, tenant, and non-SRIOV provider network).
- SRIOV (optional for the provider network); if SRIOV is used, the data plane network carries only external and tenant network traffic.
Control Plane
The control plane is responsible for carrying all the control and management traffic of the cloud. The traffic that flows through the control plane includes:
- Management/Provision
- Storage
- API
The control plane interface is created by bonding the NIC_1 A port with the NIC_2 A port. The bonded interface is named samx, indicating that it carries Storage, API, and Management/Provision traffic (the naming convention is similar to the Cisco VIC pod). The slave interfaces (physical interfaces) of the bond are renamed samx0 and samx1; samx0 belongs to NIC_1 and samx1 belongs to NIC_2. Sub-interfaces are then carved out of the samx interface based on the Storage and API VLANs. The management/provision traffic is untagged (native VLAN) in order to support PXE booting.
Data Plane
The data plane is responsible for carrying all the VM data traffic. The traffic that flows through the data plane comprises the provider, external, and tenant networks.
The data plane is created by bonding the NIC_1 B port with the NIC_2 B port. The bonded interface is named pet, indicating that it carries Provider, External, and Tenant traffic. The slave interfaces of this bond are visible as pet0 and pet1; pet0 belongs to NIC_1 and pet1 belongs to NIC_2.
In the case of OVS/VLAN, the pet interface is used as is (trunked to carry all the data VLANs) toward the OpenStack cloud, as all the tagging and untagging happens at the OpenStack level. In the case of Linux Bridge/VXLAN, there is a sub-interface for the tenant VLAN that acts as the VXLAN tunnel endpoint.
SRIOV
In the case of an Intel NIC pod, the third (and optionally the fourth) port from each NIC can be used for SRIOV traffic. This is optional and is set or unset through a setup_data.yaml parameter. Unlike the control and data plane interfaces, these interfaces are not bonded, so there is no redundancy. Each SRIOV port can have a maximum of 32 Virtual Functions, and the number of virtual functions to be created is configurable through setup_data.yaml. The SRIOV interfaces show up as sriov0 and sriov1 on each host, where sriov0 belongs to the NIC_1 C port and sriov1 belongs to the NIC_2 C port.
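For illustration, the number of virtual functions is typically controlled by a single setup_data.yaml knob; the key name below reflects common Cisco VIM usage and should be verified against your release.

```yaml
# Hypothetical setup_data.yaml excerpt enabling SRIOV on an Intel NIC pod.
INTEL_SRIOV_VFS: 32     # virtual functions created per SRIOV port (maximum of 32)
```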
In the case of Intel NIC testbeds, the following table summarizes the preceding discussion.
| Network | Usage | Type of traffic | Interface name |
|---|---|---|---|
| Control Plane | To carry control/management traffic | Storage, API, Management/Provision | samx |
| Data Plane | To carry data traffic | Provider, External, Tenant | pet |
| SRIOV | To carry SRIOV traffic | SRIOV | sriov0, sriov1 |
The following table
shows the interfaces that are present on each type of server (role based).
| Interface | Management Node | Controller Node | Compute Node | Storage Node |
|---|---|---|---|---|
| Installer API | + | | | |
| Control plane | + | + | + | + |
| Data plane | | + | + | |
| SRIOV | | | + | |
Note: On an Intel testbed, all kinds of OpenStack networks should be created using physnet1 as the physnet name.