Guidelines and Limitations
You can create and deploy multiple instances of the ASAv on an ESXi server. The specific hardware used for ASAv deployments can vary, depending on the number of instances deployed and usage requirements. Each virtual appliance you create requires a minimum resource allocation—memory, number of CPUs, and disk space—on the host machine.
Review the following guidelines and limitations before you deploy the ASAv.
ASAv on VMware ESXi System Requirements
Make sure to conform to the specifications below to ensure optimal performance. The ASAv has the following requirements:
- The host CPU must be a server-class x86-based Intel or AMD CPU with virtualization extensions. For example, ASAv performance test labs use, as a minimum, a Cisco Unified Computing System™ (Cisco UCS®) C-Series M4 server with Intel® Xeon® CPU E5-2690v4 processors running at 2.6 GHz.
- The ASAv supports ESXi versions 6.0, 6.5, and 6.7.
Recommended vNICs
The following vNICs are recommended in order of optimum performance.
- i40e in PCI passthrough—Dedicates the server's physical NIC to the VM and transfers packet data between the NIC and the VM via DMA (Direct Memory Access). No CPU cycles are required for moving packets.
- i40evf/ixgbe-vf—Effectively the same as above (DMAs packets between the NIC and the VM), but allows the NIC to be shared across multiple VMs. SR-IOV is generally preferred because it has more deployment flexibility. See Guidelines and Limitations.
- vmxnet3—This is a para-virtualized network driver that supports 10 Gbps operation but also requires CPU cycles. This is the VMware default. When using vmxnet3, you need to disable Large Receive Offload (LRO) to avoid poor TCP performance.
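On the host side, LRO can be disabled with esxcli. The following is a sketch only: the Net.Vmxnet3SwLRO and Net.Vmxnet3HwLRO advanced option names are taken from VMware's LRO documentation, not from this guide, so verify them against your ESXi version before use.

```shell
# Run on the ESXi host (not inside the VM). Disables software and
# hardware LRO for vmxnet3 adapters host-wide.
esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0
esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
# Restart the affected VMs for the change to take effect.
```

Alternatively, LRO can be disabled per VM in the .vmx file; see the VMware documentation for details.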
Performance Optimizations
To achieve the best performance from the ASAv, you can make adjustments to both the VM and the host. See Performance Tuning for more information.
- NUMA—You can improve the performance of the ASAv by isolating the CPU resources of the guest VM to a single non-uniform memory access (NUMA) node. See NUMA Guidelines for more information.
- Receive Side Scaling—The ASAv supports Receive Side Scaling (RSS), a technology used by network adapters to distribute network receive traffic across multiple processor cores. Supported on Version 9.13(1) and later. See Multiple RX Queues for Receive Side Scaling (RSS) for more information.
- VPN Optimization—See VPN Optimization for additional considerations for optimizing VPN performance with the ASAv.
OVF File Guidelines
The selection of the asav-vi.ovf or asav-esxi.ovf file is based on the deployment target:
- asav-vi—For deployment on vCenter
- asav-esxi—For deployment on ESXi (no vCenter)
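For a standalone ESXi host, the asav-esxi package can be deployed from the command line with VMware's OVF Tool. This is a sketch only: it assumes ovftool is installed, and the VM name, datastore name, hostname, and credentials below are placeholders, not values from this guide.

```shell
# Deploy the ESXi-targeted OVF directly to a host (no vCenter).
ovftool \
  --acceptAllEulas \
  --name=asav-example \
  --datastore=datastore1 \
  asav-esxi.ovf \
  'vi://root@esxi-host.example.com/'
```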
- The ASAv OVF deployment does not support localization (installing the components in non-English mode). Be sure that the VMware vCenter and the LDAP servers in your environment are installed in an ASCII-compatible mode.
- You must set your keyboard to United States English before installing the ASAv and when using the VM console.
- When the ASAv is deployed, two different ISO images are mounted on the ESXi hypervisor:
    - The first drive mounted has the OVF environment variables generated by vSphere.
    - The second drive mounted is the day0.iso.
  Attention: You can unmount both drives after the ASAv machine has booted. However, Drive 1 (with the OVF environment variables) will always be mounted every time the ASAv is powered off/on, even if Connect at Power On is unchecked.
- The Export OVF Template function in vSphere helps you export an existing ASAv instance package as an OVF template. You can use an exported OVF template to deploy the ASAv instance in the same or a different environment. Before deploying the ASAv instance on vSphere using an exported OVF template, you must modify the configuration details in the OVF file to prevent deployment failure. To modify the exported OVF file of the ASAv:
- Log in to the local machine where you exported the OVF template.
- Browse to the OVF file and open it in a text editor.
- Ensure that the tag <vmw:ExtraConfig vmw:key="monitor_control.pseudo_perfctr" vmw:value="TRUE"></vmw:ExtraConfig> is present.
- Delete the tag <rasd:ResourceSubType>vmware.cdrom.iso</rasd:ResourceSubType>, or replace it with <rasd:ResourceSubType>vmware.cdrom.remotepassthrough</rasd:ResourceSubType>. See Deploying an OVF fails on vCenter Server 5.1/5.5 when VMware tools are installed (2034422), published by VMware, for more information.
- Enter the property values for UserPrivilege, OvfDeployment, and ControllerType. For example:
    - <Property ovf:qualifiers="ValueMap{"ovf", "ignore", "installer"}" ovf:type="string" ovf:key="OvfDeployment">
    + <Property ovf:qualifiers="ValueMap{"ovf", "ignore", "installer"}" ovf:type="string" ovf:key="OvfDeployment" ovf:value="ovf">
    - <Property ovf:type="string" ovf:key="ControllerType">
    + <Property ovf:type="string" ovf:key="ControllerType" ovf:value="ASAv">
    - <Property ovf:qualifiers="MinValue(0) MaxValue(255)" ovf:type="uint8" ovf:key="UserPrivilege">
    + <Property ovf:qualifiers="MinValue(0) MaxValue(255)" ovf:type="uint8" ovf:key="UserPrivilege" ovf:value="15">
- Save the OVF file.
- Deploy the ASAv using the OVF template. See Deploy the ASAv Using the VMware vSphere Web Client.
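The manual edits above can also be applied with text substitutions. The following is a sketch only, assuming GNU sed; it is shown here against a generated two-line sample for illustration (the file name and sample content are placeholders), so point the same substitutions at your exported .ovf file instead.

```shell
# Create a tiny sample standing in for an exported OVF file.
cat > sample.ovf <<'EOF'
<rasd:ResourceSubType>vmware.cdrom.iso</rasd:ResourceSubType>
<Property ovf:type="string" ovf:key="ControllerType">
EOF

# Delete the ISO CD-ROM resource subtype tag.
sed -i 's|<rasd:ResourceSubType>vmware.cdrom.iso</rasd:ResourceSubType>||' sample.ovf
# Pre-seed the deployment properties with values.
sed -i 's|ovf:key="OvfDeployment">|ovf:key="OvfDeployment" ovf:value="ovf">|' sample.ovf
sed -i 's|ovf:key="ControllerType">|ovf:key="ControllerType" ovf:value="ASAv">|' sample.ovf
sed -i 's|ovf:key="UserPrivilege">|ovf:key="UserPrivilege" ovf:value="15">|' sample.ovf
```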
Failover for High Availability Guidelines
For failover deployments, make sure that the standby unit has the same license entitlement; for example, both units should have the 2Gbps entitlement.
Important: When creating a high availability pair using the ASAv, you must add the data interfaces to each ASAv in the same order. If the same interfaces are added to each ASAv but in a different order, errors may appear at the ASAv console, and failover functionality may also be affected.
For the ESX port group used for the ASAv inside interface or the ASAv failover high availability link, configure the ESX port group failover order with two virtual NICs: one as the active uplink and the other as the standby uplink. This is necessary for the two VMs to ping each other and for the ASAv high availability link to come up.
vMotion Guidelines
- VMware requires that you use only shared storage if you plan to use vMotion. During ASAv deployment, if you have a host cluster you can provision storage either locally (on a specific host) or on a shared host. However, if you try to vMotion the ASAv to another host, using local storage will produce an error.
Memory and vCPU Allocation for Throughput and Licensing
- The memory allocated to the ASAv is sized specifically for the throughput level. Do not change the memory setting or any vCPU hardware settings in the Edit Settings dialog box unless you are requesting a license for a different throughput level. Under-provisioning can affect performance.
Note: If you need to change the memory or vCPU hardware settings, use only the values documented in Licensing for the ASAv. Do not use the VMware-recommended memory configuration minimum, default, and maximum values.
CPU Reservation
- By default, the CPU reservation for the ASAv is 1000 MHz. You can change the amount of CPU resources allocated to the ASAv by using the shares, reservations, and limits settings (Edit Settings > Resources > CPU). You can lower the CPU reservation below 1000 MHz if the ASAv can perform its required purpose under the required traffic load with the lower setting. The amount of CPU used by an ASAv depends on the hardware platform it is running on as well as the type and amount of work it is doing.
You can view the host’s perspective of CPU usage for all of your virtual machines from the CPU Usage (MHz) chart, located in the Home view of the Virtual Machine Performance tab. Once you establish a benchmark for CPU usage when the ASAv is handling typical traffic volume, you can use that information as input when adjusting the CPU reservation.
See the CPU Performance Enhancement Advice published by VMware for more information.
- You can use the ASAv show vm and show cpu commands, or the corresponding ASDM panes, to view the resource allocation and any resources that are over- or under-provisioned.
Transparent Mode on UCS B Series Hardware Guidelines
MAC flaps have been observed in some ASAv configurations running in transparent mode on Cisco UCS B-Series hardware. When a MAC address appears to originate from different locations, packets are dropped.
The following guidelines help prevent MAC flaps when you deploy the ASAv in transparent mode in VMware environments:
- VMware NIC teaming—If you deploy the ASAv in transparent mode on UCS B-Series hardware, the port groups used for the inside and outside interfaces must have only one active uplink, and that uplink must be the same for both port groups. You configure VMware NIC teaming in vCenter.
See the VMware documentation for complete information on how to configure NIC teaming.
- ARP inspection—Enable ARP inspection on the ASAv and statically configure the MAC address and ARP entry on the interface on which you expect to receive traffic. See the Cisco ASA Series General Operations Configuration Guide for information about ARP inspection and how to enable it.
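The ASA-side configuration can be sketched as follows; the interface name, IP address, and MAC address below are placeholders, and the full command options are covered in the configuration guide cited above.

```
! Add a static ARP entry for the peer you expect on the inside interface,
! then enable ARP inspection on that interface.
arp inside 10.1.1.10 aaaa.bbbb.cccc
arp-inspection inside enable
```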
Additional Guidelines and Limitations
- The ASAv boots without the two CD/DVD IDE drives if you are running ESXi 6.7, vCenter 6.7, and ASAv 9.12 or later.
- The vSphere Web Client is not supported for ASAv OVF deployment; use the vSphere Client instead.