Guidelines and Limitations
You can create and deploy multiple instances of the ASAv on Cisco HyperFlex on a VMware vCenter server. The specific hardware used for ASAv deployments can vary, depending on the number of instances deployed and usage requirements. Each virtual appliance you create requires a minimum resource allocation—memory, number of CPUs, and disk space—on the host machine.
Important: The ASAv deploys with a disk storage size of 8 GB. It is not possible to change the resource allocation of the disk space.
Review the following guidelines and limitations before you deploy the ASAv.
Recommended vNICs
For optimal performance, we recommend that you use the vmxnet3 vNIC. This vNIC is a para-virtualized network driver that supports 10 Gbps operation, but it also requires CPU cycles. In addition, when using vmxnet3, disable Large Receive Offload (LRO) to avoid poor TCP performance.
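One way to disable LRO is host-wide on the ESXi host. The following PowerCLI sketch assumes a vCenter at vcenter.example.com and a host named esxi-host.example.com (both illustrative) and uses the Net.Vmxnet3HwLRO and Net.Vmxnet3SwLRO advanced settings; it affects every VM on that host, and VMs must be power-cycled for the change to take effect. LRO can also be disabled per-VM; check the VMware documentation for your environment.

```
# Minimal PowerCLI sketch: disable vmxnet3 LRO host-wide (names are illustrative)
Connect-VIServer -Server vcenter.example.com

$vmhost = Get-VMHost -Name "esxi-host.example.com"

# Turn off both hardware and software LRO for vmxnet3 vNICs on this host
Get-AdvancedSetting -Entity $vmhost -Name "Net.Vmxnet3HwLRO" |
    Set-AdvancedSetting -Value 0 -Confirm:$false
Get-AdvancedSetting -Entity $vmhost -Name "Net.Vmxnet3SwLRO" |
    Set-AdvancedSetting -Value 0 -Confirm:$false
```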
OVF File Guidelines
- asav-vi.ovf—For deployment on vCenter (see the deployment sketch after this list).
- The ASAv OVF deployment does not support localization (installing the components in non-English mode). Be sure that the VMware vCenter and the LDAP servers in your environment are installed in an ASCII-compatible mode.
- You must set your keyboard to United States English before installing the ASAv and for using the VM console.
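OVF deployment is normally done through the vSphere Client wizard; the following PowerCLI sketch shows one scripted alternative. The server, host, datastore, and VM names are illustrative, and the day-0 properties exposed by Get-OvfConfiguration vary by package, so treat this as a starting point rather than the documented procedure.

```
# Hypothetical PowerCLI deployment of the asav-vi.ovf package
Connect-VIServer -Server vcenter.example.com

# Inspect the OVF's configurable (day-0) properties before deploying
$ovfConfig = Get-OvfConfiguration -Ovf .\asav-vi.ovf
$ovfConfig.ToHashTable()

# Deploy to a specific host and datastore (names are illustrative)
Import-VApp -Source .\asav-vi.ovf -OvfConfiguration $ovfConfig -Name "asav-1" `
    -VMHost (Get-VMHost "esxi-host.example.com") `
    -Datastore (Get-Datastore "datastore1") -DiskStorageFormat Thin
```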
Failover for High Availability Guidelines
For failover deployments, make sure that the standby unit has the same license entitlement; for example, both units should have the 2 Gbps entitlement.
Important: When creating a high availability pair using ASAv, you must add the data interfaces to each ASAv in the same order. If the same interfaces are added to each ASAv but in a different order, you might see errors at the ASAv console, and failover functionality may be affected.
IPv6 Guidelines
You cannot specify IPv6 addresses for the management interface when you first deploy the ASAv OVF file using the VMware vSphere Web Client; you can later add IPv6 addressing using ASDM or the CLI.
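As a sketch of adding IPv6 addressing from the CLI after deployment (the Management0/0 interface name and the documentation prefix 2001:db8::/64 are illustrative; your interface naming and addressing will differ):

```
ciscoasa# configure terminal
ciscoasa(config)# interface Management0/0
ciscoasa(config-if)# ipv6 enable
ciscoasa(config-if)# ipv6 address 2001:db8::10/64
ciscoasa(config-if)# end
```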
vMotion Guidelines
- VMware requires that you use only shared storage if you plan to use vMotion. During ASAv deployment, if you have a host cluster you can either provision storage locally (on a specific host) or on a shared host. However, if you later try to vMotion the ASAv to another host, using local storage will produce an error.
Memory and vCPU Allocation for Throughput and Licensing
- The memory allocated to the ASAv is sized specifically for the throughput level. Do not change the memory setting or any vCPU hardware settings in the Edit Settings dialog box unless you are requesting a license for a different throughput level. Under-provisioning can affect performance.
Note: If you need to change the memory or vCPU hardware settings, use only the values documented in Licensing for the ASA Virtual. Do not use the VMware-recommended minimum, default, and maximum memory values.
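If a license change does require new values, one way to apply them is with PowerCLI while the VM is powered off. The sketch below is illustrative only; the VM name asav-1 is hypothetical, and the 8 GB / 4 vCPU figures are placeholders—the actual values must come from Licensing for the ASA Virtual.

```
# Hypothetical example: apply documented memory/vCPU values
# (the VM must be powered off; 8 GB / 4 vCPU are placeholders)
Set-VM -VM (Get-VM -Name "asav-1") -MemoryGB 8 -NumCpu 4 -Confirm:$false
```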
CPU Reservation
- By default, the CPU reservation for the ASAv is 1000 MHz. You can change the amount of CPU resources allocated to the ASAv by using the shares, reservations, and limits settings. You can lower the CPU reservation setting from 1000 MHz if the ASAv can perform its required purpose under the required traffic load with the lower setting. The amount of CPU used by an ASAv depends on the hardware platform it is running on as well as the type and amount of work it is doing.

  You can view the host's perspective of CPU usage for all of your virtual machines from the CPU Usage (MHz) chart, located in the Home view of the Virtual Machine Performance tab. Once you establish a benchmark for CPU usage when the ASAv is handling typical traffic volume, you can use that information as input when adjusting the CPU reservation. For more information, see CPU Performance Enhancement Advice.
- You can view the resource allocation and any resources that are over- or under-provisioned using the ASAv show vm and show cpu commands, or the ASDM Home > Device Dashboard > Device Information > Virtual Resources tab, or the Monitoring > Properties > System Resources Graphs > CPU pane.
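As one way to adjust the reservation once you have a benchmark, the following PowerCLI sketch lowers the CPU reservation of a hypothetical VM named asav-1; the 800 MHz value is purely illustrative and should come from your own measurements.

```
# Hypothetical example: adjust the ASAv CPU reservation after benchmarking
Get-VM -Name "asav-1" | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -CpuReservationMhz 800
```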
Transparent Mode on UCS B and C Series Hardware Guidelines
MAC flaps have been observed in some ASAv configurations running in transparent mode on Cisco UCS B Series (compute nodes) and C Series (converged nodes) hardware. When MAC addresses appear from different locations, packets are dropped.
The following guidelines help to prevent MAC flaps when you deploy the ASAv in transparent mode in VMware environments:
- VMware NIC teaming—If you deploy the ASAv in transparent mode on UCS B or C Series hardware, the port groups used for the inside and outside interfaces must have only one active uplink, and that uplink must be the same for both port groups. Configure VMware NIC teaming in vCenter (see the first sketch after this list).
- ARP inspection—Enable ARP inspection on the ASAv and statically configure the MAC address and ARP entry on the interface on which you expect to receive the traffic (see the second sketch after this list). See the Cisco ASA Series General Operations Configuration Guide for information about ARP inspection and how to enable it.
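The following PowerCLI sketch illustrates the NIC teaming guideline for port groups on a standard vSwitch; the host, port group, and vmnic names are all illustrative, and distributed switches use different cmdlets.

```
# Hypothetical example: give the inside and outside port groups the same
# single active uplink (vmnic1); other uplinks are marked unused
$vmhost = Get-VMHost -Name "esxi-host.example.com"
foreach ($pgName in "inside-pg", "outside-pg") {
    Get-VirtualPortGroup -VMHost $vmhost -Name $pgName |
        Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicUnused vmnic2
}
```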
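And a minimal ASA CLI sketch for the ARP inspection guideline, assuming an interface named inside and illustrative IP/MAC values; the no-flood keyword drops non-matching ARP packets instead of flooding them. See the configuration guide for the authoritative procedure.

```
ciscoasa(config)# arp inside 10.1.1.10 aaaa.bbbb.cccc
ciscoasa(config)# mac-address-table static inside aaaa.bbbb.cccc
ciscoasa(config)# arp-inspection inside enable no-flood
```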
System Requirements
Configurations and Clusters for HyperFlex HX-Series
| Configurations | Clusters |
|---|---|
| HX220c converged nodes | |
| HX240c converged nodes | |
| HX220C and Edge (VDI, VSI, ROBO); HX240C (VDI, VSI, Test/Dev) | |
| B200 + C240/C220 | Compute bound apps/VDI |
Deployment options for the HyperFlex HX-Series:
- Hybrid Cluster
- Flash Cluster
- HyperFlex HX Edge
- SED drives
- NVMe Cache
- GPUs
For the HyperFlex HX cloud-powered management option, refer to the Deploying HyperFlex Fabric Interconnect-attached Clusters section in the Cisco HyperFlex Systems Installation Guide.
HyperFlex Components and Versions
| Component | Version |
|---|---|
| VMware vSphere | 7.0.2-18426014 |
| HyperFlex Data Platform | 4.5.2a-39429 |
Supported Features
- Deployment Modes—Routed (Standalone), Routed (HA), and Transparent
- ASAv native HA
- Jumbo frames
- VirtIO
- HyperFlex Data Center Clusters (excluding Stretched Clusters)
- HyperFlex Edge Clusters
- HyperFlex All NVMe, All Flash, and Hybrid converged nodes
- HyperFlex Compute-only Nodes
Unsupported Features
ASAv running with SR-IOV has not been qualified with HyperFlex.
Note: HyperFlex supports SR-IOV, but it requires a PCI-e NIC in addition to the MLOM VIC.