vHBA Template
This template is a policy that defines how a vHBA on a server connects to the SAN. It is also referred to as a vHBA SAN connectivity template.
You must include this policy in a service profile for it to take effect.
About vHBA Templates
This policy requires that one or more of the following resources already exist in the system:
Named VSAN
WWNN pool or WWPN pool
SAN pin group
Statistics threshold policy
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Expand the node for the organization where you want to create the policy. If the system does not include multi-tenancy, expand the root node. |
Step 4 |
Right-click the vHBA Templates node and choose Create vHBA Template. |
Step 5 |
In the Create vHBA Template dialog box, complete the following fields:
Step 6 |
Click OK. |
Include the vHBA template in a service profile.
You can bind a vHBA associated with a service profile to a vHBA template. When you bind the vHBA to a vHBA template, Cisco UCS Manager configures the vHBA with the values defined in the vHBA template. If the existing vHBA configuration does not match the vHBA template, Cisco UCS Manager reconfigures the vHBA. You can only change the configuration of a bound vHBA through the associated vHBA template. You cannot bind a vHBA to a vHBA template if the service profile that includes the vHBA is already bound to a service profile template.
Important |
If the vHBA is reconfigured when you bind it to a template, Cisco UCS Manager reboots the server associated with the service profile. |
Step 1 |
In the Navigation pane, click Servers. |
Step 2 |
Expand . |
Step 3 |
Expand the node for the organization that includes the service profile with the vHBA you want to bind. If the system does not include multi-tenancy, expand the root node. |
Step 4 |
Expand . |
Step 5 |
Click the vHBA you want to bind to a template. |
Step 6 |
In the Work pane, click the General tab. |
Step 7 |
In the Actions area, click Bind to a Template. |
Step 8 |
In the Bind to a vHBA Template dialog box, do the following:
Step 9 |
In the warning dialog box, click Yes to acknowledge that Cisco UCS Manager may need to reboot the server if the binding causes the vHBA to be reconfigured. |
Step 1 |
In the Navigation pane, click Servers. |
Step 2 |
Expand . |
Step 3 |
Expand the node for the organization that includes the service profile with the vHBA you want to unbind. If the system does not include multi-tenancy, expand the root node. |
Step 4 |
Expand . |
Step 5 |
Click the vHBA you want to unbind from a template. |
Step 6 |
In the Work pane, click the General tab. |
Step 7 |
In the Actions area, click Unbind from a Template. |
Step 8 |
If a confirmation dialog box displays, click Yes. |
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Expand the vHBA Templates node. |
Step 4 |
Right-click the vHBA template that you want to delete and choose Delete. |
Step 5 |
If a confirmation dialog box displays, click Yes. |
Fibre Channel Adapter Policies
These policies govern the host-side behavior of the adapter, including how the adapter handles traffic. For example, you can use these policies to change default settings for the following:
Queues
Interrupt handling
Performance enhancement
RSS hash
Failover in a cluster configuration with two fabric interconnects
Note |
For Fibre Channel adapter policies, the values displayed by Cisco UCS Manager may not match those displayed by applications such as QLogic SANsurfer. For example, the following values may result in an apparent mismatch between SANsurfer and Cisco UCS Manager:
|
By default, Cisco UCS provides a set of Ethernet adapter policies and Fibre Channel adapter policies. These policies include the recommended settings for each supported server operating system. Operating systems are sensitive to the settings in these policies. Storage vendors typically require non-default adapter settings. You can find the details of these required settings on the support list provided by those vendors.
Important |
We recommend that you use the values in these policies for the applicable operating system. Do not modify any of the values in the default policies unless directed to do so by Cisco Technical Support. However, if you are creating an Ethernet adapter policy for an OS instead of using the default adapter policy, you must use the following formulas to calculate values that work for that OS. The Interrupt Count calculation depends on the UCS firmware and driver version: on later Linux driver releases, the Interrupt Count is the maximum of the Transmit Queue count or the Receive Queue count, plus 2. |
Drivers on Linux operating systems use differing formulas to calculate the Interrupt Count, depending on the eNIC driver version. The UCS 3.2 release increased the number of Tx and Rx queues for the eNIC driver from 8 to 256 each.
Use one of the following strategies, according to your driver version.
For Linux drivers before the UCS 3.2 firmware release, use the following formula to calculate the Interrupt Count.
For example, if Transmit Queues = 1 and Receive Queues = 8 then:
On drivers for UCS firmware release 3.2 and higher, the Linux eNIC drivers use the following formula to calculate the Interrupt Count.
Interrupt Count = (#Tx or Rx Queues) + 2
For Windows, the recommended adapter policy in UCS Manager for VIC 1400 series and later adapters is Win-HPN; if RDMA is used, the recommended policy is Win-HPN-SMB. For VIC 1400 series and later adapters, the recommended interrupt value setting is 512, and the Windows VIC driver takes care of allocating the required number of interrupts.
For VIC 1300 and VIC 1200 series adapters, the recommended UCS Manager adapter policy is Windows, and the Interrupt Count is Tx + Rx + 2, rounded up to the closest power of 2. Windows supports a maximum of 8 Rx Queues and 1 Tx Queue.
Example for VIC 1200 and VIC 1300 series adapters:
Tx = 1, Rx = 4, CQ = 5, Interrupt = 8 (1 + 4 + 2 = 7, rounded up to the nearest power of 2), Enable RSS
Example for VIC 1400 series and above adapters:
Tx = 1, Rx = 4, CQ = 5, Interrupt = 512, Enable RSS
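As a quick check of the formulas above, the following Python sketch (illustrative only; the function names are not part of any Cisco tool) computes the Interrupt Count for the Windows VIC 1200/1300 policy and for Linux eNIC drivers on UCS 3.2 and later:

```python
def windows_vic12xx_13xx_interrupts(tx_queues: int, rx_queues: int) -> int:
    """VIC 1200/1300 Windows policy: Tx + Rx + 2, rounded up to a power of 2."""
    count = tx_queues + rx_queues + 2
    power = 1
    while power < count:
        power *= 2
    return power

def linux_enic_3_2_interrupts(tx_queues: int, rx_queues: int) -> int:
    """UCS 3.2+ Linux eNIC drivers: max(Tx, Rx) + 2."""
    return max(tx_queues, rx_queues) + 2

# Doc example for VIC 1200/1300: Tx = 1, Rx = 4 -> 1 + 4 + 2 = 7 -> 8
print(windows_vic12xx_13xx_interrupts(1, 4))  # 8
```

The round-up step matters: Tx = 1 and Rx = 8 gives 11 interrupts, which rounds up to 16.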
The NVM Express (NVMe) interface allows host software to communicate with a non-volatile memory subsystem. This interface is optimized for Enterprise non-volatile storage, which is typically attached as a register level interface to the PCI Express (PCIe) interface.
NVMe over Fabrics using Fibre Channel (FC-NVMe) defines a mapping protocol for applying the NVMe interface to Fibre Channel. This protocol defines how Fibre Channel services and specified Information Units (IUs) are used to perform the services defined by NVMe over a Fibre Channel fabric. NVMe initiators can access and transfer information to NVMe targets over Fibre Channel.
FC-NVMe combines the advantages of Fibre Channel and NVMe. You get the improved performance of NVMe along with the flexibility and the scalability of the shared storage architecture. Cisco UCS Manager Release 4.0(2) supports NVMe over Fabrics using Fibre Channel on UCS VIC 1400 Series adapters.
Starting with UCS Manager release 4.3(2b), NVMeoF using RDMA is supported on Cisco UCS VIC 14000 series adapters.
Starting with UCS Manager release 4.2(2), NVMeoF using Fibre Channel is supported on Cisco UCS VIC 15000 series adapters.
Cisco UCS Manager provides the recommended FC NVME Initiator adapter policies in the list of pre-configured adapter policies. To create a new FC-NVMe adapter policy, follow the steps in the Creating a Fibre Channel Adapter Policy section.
NVMe over Fabrics (NVMeoF) is a communication protocol that allows one computer to access NVMe namespaces available on another computer. NVMeoF is similar to NVMe, but differs in the network-related steps involved in using NVMeoF storage devices. The commands for discovering, connecting to, and disconnecting from an NVMeoF storage device are integrated into the nvme utility provided in Linux.
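For illustration, a minimal Python wrapper around the Linux nvme utility might build the discover, connect, and disconnect invocations like this. The transport address, port, and NQN below are placeholder values; the flag spellings follow common nvme-cli usage, so verify them against your installed version:

```python
import subprocess

def nvme_cmd(verb: str, **opts) -> list:
    """Build an nvme-cli argument list from keyword options."""
    flags = {"transport": "-t", "traddr": "-a", "trsvcid": "-s", "nqn": "-n"}
    cmd = ["nvme", verb]
    for key, value in opts.items():
        cmd += [flags[key], value]
    return cmd

# Placeholder fabric parameters; substitute your own values.
target = dict(transport="rdma", traddr="192.0.2.10", trsvcid="4420")
subsys = "nqn.2014-08.org.example:subsystem1"

discover = nvme_cmd("discover", **target)
connect = nvme_cmd("connect", nqn=subsys, **target)
disconnect = nvme_cmd("disconnect", nqn=subsys)

# To actually run one: subprocess.run(discover, check=True)
```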
The NVMeoF fabric that Cisco supports is RDMA over Converged Ethernet version 2 (RoCEv2). RoCEv2 is a fabric protocol that runs over UDP. It requires a no-drop policy.
The eNIC RDMA driver works in conjunction with the eNIC driver, which must be loaded first when configuring NVMeoF.
Cisco UCS Manager provides the default Linux-NVMe-RoCE adapter policy for creating NVMe RoCEv2 interfaces. Do not use the default Linux adapter policy. For complete information on configuring RoCEv2 over NVMeoF, refer to the Cisco UCS Manager Configuration Guide for RDMA over Converged Ethernet (RoCE) v2.
NVMeoF using RDMA is supported on M5 B-Series or C-Series Servers with Cisco UCS VIC 1400 Series adapters.
Starting with UCS Manager release 4.3(2b), NVMeoF using RDMA is supported on Cisco UCS VIC 14000 series adapters.
Starting with UCS Manager release 4.2(2), NVMeoF using RDMA is supported on Cisco UCS VIC 15000 series adapters.
Tip |
If the fields in an area do not display, click the Expand icon to the right of the heading. |
Step 1 |
In the Navigation pane, click Servers. |
Step 2 |
Expand . |
Step 3 |
Expand the node for the organization where you want to create the policy. If the system does not include multi-tenancy, expand the root node. |
Step 4 |
Right-click Adapter Policies and choose Create Fibre Channel Adapter Policy. |
Step 5 |
Enter a name and description for the policy in the following fields:
Step 6 |
(Optional) In the Resources area, adjust the following values:
Step 7 |
(Optional) In the Options area, adjust the following values:
Step 8 |
Click OK. |
Step 9 |
If a confirmation dialog box displays, click Yes. |
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Expand the Fibre Channel Policies node. |
Step 4 |
Right-click the policy you want to delete and choose Delete. |
Step 5 |
If a confirmation dialog box displays, click Yes. |
About the Default vHBA Behavior Policy
The default vHBA behavior policy allows you to configure how vHBAs are created for a service profile. You can choose to create vHBAs manually, or you can allow them to be created automatically.
You can configure the default vHBA behavior policy to define how vHBAs are created. This can be one of the following:
None—Cisco UCS Manager does not create default vHBAs for a service profile. All vHBAs must be explicitly created.
HW Inherit—If a service profile requires vHBAs and none have been explicitly defined, Cisco UCS Manager creates the required vHBAs based on the adapter installed in the server associated with the service profile.
Note |
If you do not specify a default behavior policy for vHBAs, none is used by default. |
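The two settings above amount to a simple selection rule, sketched here in Python (illustrative only, not UCS Manager code):

```python
def default_vhbas(policy: str, adapter_vhbas: list) -> list:
    """Resolve default vHBAs for a service profile with none explicitly defined."""
    if policy == "hw-inherit":
        # HW Inherit: derive vHBAs from the adapter installed in the server
        return list(adapter_vhbas)
    # "none": no default vHBAs (also the behavior when no policy is specified)
    return []
```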
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Expand the root node. You can configure only the default vHBA behavior policy in the root organization. You cannot configure the default vHBA behavior policy in a sub-organization. |
Step 4 |
Click Default vHBA Behavior. |
Step 5 |
On the General Tab, in the Properties area, click one of the following radio buttons in the Action field:
Step 6 |
Click Save Changes. |
SPDM Security Policy
Cisco UCS M6 Servers can contain mutable components that could provide vectors for attack against a device itself or for use of a device to attack another device within the system. To defend against these attacks, the Security Protocol and Data Model (SPDM) Specification enables a secure transport implementation that challenges a device to prove its identity and the correctness of its mutable component configuration. This feature is supported on Cisco UCS C220 and C240 M6 Servers starting with Cisco UCS Manager Release 4.2(1d).
Note |
SPDM is currently not supported on the Cisco UCS C225 M6 Server and Cisco UCS C245 M6 Server. |
SPDM defines messages, data objects, and sequences for performing message exchanges between devices over a variety of transport and physical media. It orchestrates message exchanges between Baseboard Management Controllers (BMC) and end-point devices over the Management Component Transport Protocol (MCTP). Message exchanges include authentication of hardware identities accessing the BMC. SPDM enables access to low-level security capabilities and operations by specifying a managed level for device authentication, firmware measurement, and certificate management. Endpoint devices are challenged to provide authentication, and the BMC authenticates the endpoints, allowing access only for trusted entities.
UCS Manager optionally allows uploads of external security certificates to the BMC. A maximum of 40 SPDM certificates is allowed, including native internal certificates. Once the limit is reached, no more certificates can be uploaded. User-uploaded certificates can be deleted, but internal/default certificates cannot.
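The certificate-store rules just described can be modeled with a short sketch (the class and method names are hypothetical, not a Cisco API):

```python
MAX_SPDM_CERTS = 40  # limit includes native internal certificates

class SpdmCertStore:
    """Toy model of the BMC SPDM certificate store rules."""
    def __init__(self, internal_certs):
        # internal/default certificates are present from the start
        self._origin = {name: "internal" for name in internal_certs}

    def upload(self, name, pem_body):
        if len(self._origin) >= MAX_SPDM_CERTS:
            raise ValueError("certificate limit of 40 reached")
        if "BEGIN CERTIFICATE" not in pem_body:
            raise ValueError("only PEM certificates are supported")
        self._origin[name] = "user"

    def delete(self, name):
        if self._origin.get(name) == "internal":
            raise ValueError("internal/default certificates cannot be deleted")
        del self._origin[name]
```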
An SPDM security policy allows you to specify one of the three security levels listed below:
Full Security:
This is the highest MCTP security setting. When you select this setting, a fault is generated when any endpoint authentication failure or firmware measurement failure is detected. A fault will also be generated if any of the endpoints do not support either endpoint authentication or firmware measurements.
Partial Security (default):
When you select this setting, a fault is generated when any endpoint authentication failure or firmware measurement failure is detected. There will NOT be a fault generated when the endpoint doesn’t support endpoint authentication or firmware measurements.
No Security:
When you select this setting, there will NOT be a fault generated for any failure (either endpoint measurement or firmware measurement failures).
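The three levels reduce to a simple fault decision for each endpoint check, sketched here (illustrative only):

```python
def spdm_fault_raised(level: str, endpoint_supported: bool, check_passed: bool) -> bool:
    """Decide whether a fault is generated for one endpoint check
    (authentication or firmware measurement) under each security level."""
    if level == "No Security":
        return False                      # never fault
    if not endpoint_supported:
        return level == "Full Security"   # only Full faults on unsupported endpoints
    return not check_passed               # Full and Partial fault on failed checks
```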
You can also upload the content of one or more external/device certificates into BMC. Using a SPDM policy allows you to change or delete security certificates or settings as desired. Certificates can be deleted or replaced when no longer needed.
Certificates are listed in all user interfaces on a system.
This step creates a SPDM policy.
Note |
You can upload up to 40 SPDM certificates (including native certificates). |
Step 1 |
In the Navigation pane, click Servers. |
Step 2 |
Go to Policies. Expand the root node. |
Step 3 |
Right-click SPDM Certificate Policies and select Create SPDM Policy. |
Step 4 |
Enter a name for this policy and select a Fault Alert Setting for the security level: Disabled, Partial, or Full.
Full—A fault is generated when there is any endpoint authentication failure, for both supported and unsupported endpoints.
Partial—A fault is generated when there is any endpoint authentication failure for supported endpoints only; no fault is generated when the endpoint does not support authentication.
Disabled—No fault is generated for endpoint authentication failures, for either supported or unsupported endpoints.
The default is Partial. |
Step 5 |
Click on Add in the Create Policy window. The Add SPDM Certificate window will open. |
Step 6 |
Name the certificate. UCS Manager supports only PEM certificates. |
Step 7 |
Paste the contents of the certificate into the Certificate field. |
Step 8 |
Click OK to add the certificate and return to the Create SPDM Policy window. You can add up to 40 certificates. |
Step 9 |
In the Create SPDM Policy menu, click OK. After the SPDM policy is created, it is listed immediately, along with its Alert setting, when you select SPDM Certificate Policy under the Server root Policies. |
Assign the Certificate to a Service Profile. The Service Profile must be associated with a server for it to take effect.
Create the SPDM security policy.
Step 1 |
In the Navigation pane, click Servers. |
Step 2 |
Go to Service Profiles. Expand the root node. |
Step 3 |
Select the Service Profile you want to associate with the Policy you created.
Step 4 |
Click OK. |
Check the fault alert level to make sure it is set to the desired setting.
You can view the Fault Alert setting associated with a specific chassis.
Create a policy and associate it with a Service Profile.
Step 1 |
In the Navigation pane, click Equipment. |
Step 2 |
Select a Rack-Mount Server. |
Step 3 |
On the Inventory tab, select CIMC. User-uploaded certificates are listed, and information for specific certificates can be selected and viewed. |
SAN Connectivity Policies
Connectivity policies determine the connections and the network communication resources between the server and the LAN or SAN on the network. These policies use pools to assign MAC addresses, WWNs, and WWPNs to servers and to identify the vNICs and vHBAs that the servers use to communicate with the network.
Note |
We do not recommend that you use static IDs in connectivity policies, because these policies are included in service profiles and service profile templates and can be used to configure multiple servers. |
Connectivity policies enable users without network or storage privileges to create and modify service profiles and service profile templates with network and storage connections. However, users must have the appropriate network and storage privileges to create connectivity policies.
Connectivity policies require the same privileges as other network and storage configurations. For example, you must have at least one of the following privileges to create connectivity policies:
admin—Can create LAN and SAN connectivity policies
ls-server—Can create LAN and SAN connectivity policies
ls-network—Can create LAN connectivity policies
ls-storage—Can create SAN connectivity policies
After the connectivity policies have been created, a user with ls-compute privileges can include them in a service profile or service profile template. However, a user with only ls-compute privileges cannot create connectivity policies.
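The privilege rules above can be summarized in a small table-driven check. The privilege names are as listed in the text; the mapping structure itself is an illustrative sketch, not UCS Manager code:

```python
# Which connectivity-policy types each privilege allows a user to create.
PRIVILEGE_GRANTS = {
    "admin":      {"lan", "san"},
    "ls-server":  {"lan", "san"},
    "ls-network": {"lan"},
    "ls-storage": {"san"},
    "ls-compute": set(),  # may include policies in profiles, cannot create them
}

def can_create_policy(user_privileges, policy_type):
    """True if any held privilege allows creating the given connectivity policy."""
    return any(policy_type in PRIVILEGE_GRANTS.get(p, set())
               for p in user_privileges)
```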
You can configure the LAN and SAN connectivity for a service profile through either of the following methods:
LAN and SAN connectivity policies that are referenced in the service profile
Local vNICs and vHBAs that are created in the service profile
Local vNICs and a SAN connectivity policy
Local vHBAs and a LAN connectivity policy
Cisco UCS maintains mutual exclusivity between connectivity policies and local vNIC and vHBA configuration in the service profile. You cannot have a combination of connectivity policies and locally created vNICs or vHBAs. When you include a LAN connectivity policy in a service profile, all existing vNIC configuration is erased, and when you include a SAN connectivity policy, all existing vHBA configuration in that service profile is erased.
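The mutual-exclusivity rule can be sketched as follows (a hypothetical class, not UCS Manager code); note how attaching a connectivity policy erases the corresponding local configuration:

```python
class ServiceProfile:
    """Toy model of the connectivity-policy vs. local-config exclusivity."""
    def __init__(self):
        self.vnics, self.vhbas = [], []
        self.lan_policy = self.san_policy = None

    def attach_lan_connectivity_policy(self, policy):
        self.vnics.clear()        # existing vNIC configuration is erased
        self.lan_policy = policy

    def attach_san_connectivity_policy(self, policy):
        self.vhbas.clear()        # existing vHBA configuration is erased
        self.san_policy = policy
```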
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Expand the node for the organization where you want to create the policy. If the system does not include multi-tenancy, expand the root node. |
Step 4 |
Right-click SAN Connectivity Policies and choose Create SAN Connectivity Policy. |
Step 5 |
In the Create SAN Connectivity Policy dialog box, enter a name and optional description. |
Step 6 |
From the WWNN Assignment drop-down list in the World Wide Node Name area, choose one of the following:
Step 7 |
In the vHBAs table, click Add. |
Step 8 |
In the Create vHBAs dialog box, enter the name and optional description. |
Step 9 |
Choose the Fabric ID, Select VSAN, Pin Group, Persistent Binding, and Max Data Field Size. You can also create a VSAN or SAN pin group from this area. |
Step 10 |
In the Operational Parameters area, choose the Stats Threshold Policy. |
Step 11 |
In the Adapter Performance Profile area, choose the Adapter Policy and QoS Policy. You can also create a Fibre Channel adapter policy or QoS policy from this area. |
Step 12 |
After you have created all the vHBAs you need for the policy, click OK. |
Include the policy in a service profile or service profile template.
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
On the SAN tab, expand . |
Step 3 |
Choose the policy for which you want to create a vHBA. |
Step 4 |
In the Work pane, click the General tab. |
Step 5 |
In the table icon bar, click the + button. |
Step 6 |
In the Create vHBAs dialog box, enter the name and optional description. |
Step 7 |
Choose the Fabric ID, Select VSAN, Pin Group, Persistent Binding, and Max Data Field Size. You can also create a VSAN or SAN pin group from this area. |
Step 8 |
In the Operational Parameters area, choose the Stats Threshold Policy. |
Step 9 |
In the Adapter Performance Profile area, choose the Adapter Policy and QoS Policy. You can also create a Fibre Channel adapter policy or QoS policy from this area. |
Step 10 |
Click Save Changes. |
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Choose the policy from which you want to delete the vHBA. |
Step 4 |
In the Work pane, click the General tab. |
Step 5 |
In the vHBAs table, do the following:
Step 6 |
If a confirmation dialog box displays, click Yes. |
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Choose the policy for which you want to create an initiator group. |
Step 4 |
In the Work pane, click the vHBA Initiator Groups tab. |
Step 5 |
In the table icon bar, click the + button. |
Step 6 |
In the Create vHBA Initiator Group dialog box, complete the following fields:
Step 7 |
Click OK. |
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Choose the policy from which you want to delete the initiator group. |
Step 4 |
In the Work pane, click the vHBA Initiator Groups tab. |
Step 5 |
In the table, do the following:
Step 6 |
If a confirmation dialog box displays, click Yes. |
If you delete a SAN connectivity policy that is included in a service profile, it also deletes all vHBAs from that service profile and disrupts SAN data traffic for the server associated with the service profile.
Step 1 |
In the Navigation pane, click SAN. |
Step 2 |
Expand . |
Step 3 |
Expand the SAN Connectivity Policies node. |
Step 4 |
Right-click the policy that you want to delete and choose Delete. |
Step 5 |
If a confirmation dialog box displays, click Yes. |
The Intel® Volume Management Device (VMD) is a tool that provides NVMe drivers to manage PCIe Solid State Drives attached to VMD-enabled domains. This includes surprise hot plug of PCIe drives and configurable LED blinking patterns for reporting status. PCIe Solid State Drive (SSD) storage lacks a standardized method of blinking LEDs to represent the status of the device. With VMD, you can control LED indicators on both direct-attached and switch-attached PCIe storage using a simple command-line tool.
To use VMD, you must first enable VMD through a UCS Manager BIOS policy and set the UEFI boot options. Enabling VMD provides Surprise hot plug and optional LED status management for PCIe SSD storage that is attached to the root port. VMD Passthrough mode provides the ability to manage drives on guest VMs.
Enabling VMD also allows configuration of Intel® Virtual RAID on CPU (VRoC), a hybrid RAID architecture on Intel® Xeon® Scalable Processors. Documentation on the use and configuration of VRoC can be found at the Intel website.
Important |
VMD must be enabled in the UCS Manager BIOS settings before the operating system is installed. If it is enabled after OS installation, the server will fail to boot. This restriction applies to both standard VMD and VMD Passthrough. Likewise, once enabled, you cannot disable VMD without a loss of system function. |
To configure a BIOS and local boot Policy for VMD in UCS Manager, use the following procedure. The VMD platform default is disabled.
Note |
VMD must be enabled before OS installation. |
Step 1 |
In the Navigation pane, click Servers. |
Step 2 |
Expand the node for the organization where you want to create the policy. If the system does not include multi-tenancy, expand the root node. |
Step 3 |
Configure the BIOS policy for VMD: select a service profile and go to the Policies tab. In the Policies section, right-click the BIOS Policy section and select Create BIOS Policy from the popup. In the BIOS Policy form, enter a name and optional description. Click OK to create the policy. |
Step 4 |
Go to Policies > Root > BIOS Policies and select the new policy. |
Step 5 |
Expand BIOS Policies and select Advanced and LOM and PCIe Slots from the submenus. |
Step 6 |
Scroll down to VMD Enable and select Enable. |
Step 7 |
Click Save Changes to enable VMD functions. |
Step 8 |
In the Boot Policy tab, create a local boot policy. Select Uefi for the Boot Mode and Add NVMe from the Local Devices menu. Click Save Changes to create the policy. |
The Intel® Volume Management Device (VMD) driver release package for Direct Device Assignment contains the Intel VMD UEFI Driver version for Direct Assign (PCIe PassThru) in VMware ESXi Hypervisor. The Intel VMD NVMe driver assists in the management of CPU-attached Intel PCIe NVMe SSDs.
The Intel VMD driver is required to enable the Direct Assign and discovery of the VMD physical addresses from a supported guest VM. Drivers are only provided for Passthrough mode for ESXi support of Red Hat Linux or Ubuntu. VMD Passthrough is enabled by configuring a UCS Manager BIOS policy before loading the Operating System. Once the Operating System has been loaded, you cannot enable or disable the VMD Passthrough option.
Note |
Passthrough mode is enabled by default, but you should always confirm that it is enabled before proceeding. |
Passthrough mode is only supported on ESXi drivers for Red Hat Linux or Ubuntu guest operating systems.
Step 1 |
In the Navigation pane, click Servers. |
Step 2 |
Expand the node for the organization where you want to create the policy. If the system does not include multi-tenancy, expand the root node. |
Step 3 |
Configure the BIOS policy for VMD: select a service profile and go to the Policies tab. In the Policies section, right-click the BIOS Policy section and select Create BIOS Policy from the popup. In the BIOS Policy form, enter a name and optional description. Click OK to create the policy. |
Step 4 |
Go to Policies > Root > BIOS Policies and select the new policy. |
Step 5 |
Expand BIOS Policies and select Advanced and LOM and PCIe Slots from the submenus. |
Step 6 |
Scroll down to VMD Enable and select Enable. |
Step 7 |
Click Save Changes to enable VMD functions. |
Step 8 |
To finish enabling VMD Passthrough mode, select Advanced and Intel Directed IO from the submenus and scroll down to Intel VT Directed IO. Verify that the dropdown is set to Enabled. If not, set it. |
Step 9 |
Click Save Changes to enable the VMD Passthrough policy. |
Step 10 |
In the Boot Policy tab, create a local boot policy. Select Uefi for the Boot Mode. Click OK to create the policy. |
Intel® Volume Management Device (VMD) for NVMe enables drive management options using hardware logic inside the Intel Xeon processor. Specific drivers are available for the following operating systems:
Linux
Windows 2016, 2019
VMware
Note |
The latest VMware drivers are available directly from the VMware site. Following links in the VMware driver download on the Cisco download site will take you directly to the VMware login page. |
For guest Operating Systems on ESXi, use VMD Passthrough mode. Supported Operating Systems for VMD Passthrough are:
Red Hat Linux
Ubuntu
To use the features of Intel VMD, you must:
Enable VMD by creating a BIOS policy in the UCS Manager.
Note |
The system will fail to boot if VMD is enabled or disabled after OS installation. Do not change the BIOS setting after OS installation. |
Install the appropriate VMD NVMe driver.
Install the appropriate management tools for the driver package.
Boot from UEFI.
Intel® Virtual RAID on CPU (VRoC) allows you to create and manage RAID volumes within the BIOS of VMD-enabled Intel NVMe SSD drives using hardware logic inside the Intel Xeon processor. More information on Intel VRoC can be found at: https://www.intel.com/content/www/us/en/support/products/122484/memory-and-storage/ssd-software/intel-virtual-raid-on-cpu-intel-vroc.html.
The User Guides for Intel VRoC can be accessed at the direct link at: https://www.intel.com/content/www/us/en/support/articles/000030445/memory-and-storage/ssd-software.html?productId=122484&localeCode=us_en
The Windows and Linux user documentation also contains information on how to configure Intel VRoC in the pre-boot environment. Creation of RAID volumes in VRoC is through the HII interface. The Windows documentation provides information on using the BIOS HII option to set up and configure RAID volumes in VRoC.
To use Intel VRoC, you must:
Enable VMD in the BIOS settings
Use UEFI boot mode
Have sufficient drive resources to create the volume
Use the BIOS HII option to set up and configure VRoC.
The Cisco implementation of Intel VRoC supports RAID 0 (striping), RAID 1 (mirroring), RAID 5 (striping with parity) and RAID 10 (combined mirroring and striping).
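As a quick reference for the supported levels, usable capacity follows the standard RAID arithmetic. This is a generic sketch assuming equal-size drives, not an Intel or Cisco tool:

```python
def usable_capacity(level: int, n_drives: int, drive_size_gb: float) -> float:
    """Usable capacity for equal-size drives at each supported RAID level."""
    if level == 0:                              # striping
        return n_drives * drive_size_gb
    if level == 1 and n_drives == 2:            # mirroring
        return drive_size_gb
    if level == 5 and n_drives >= 3:            # striping with parity
        return (n_drives - 1) * drive_size_gb
    if level == 10 and n_drives >= 4 and n_drives % 2 == 0:
        return (n_drives // 2) * drive_size_gb  # mirrored stripes
    raise ValueError("unsupported level/drive-count combination")
```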
Complete these steps to download and install the driver bundle:
Make sure that VMD is enabled in the BIOS settings.
Note |
The system will fail to boot if VMD is enabled or disabled after OS installation. Do not change the BIOS setting after OS installation. |
Step 1 |
In a web browser, navigate to https://software.cisco.com/download/home. |
Step 2 |
Search on UCS B-Series Blade Server Software or UCS C-Series Rack-Mount UCS-Managed Server Software, depending on your platform. |
Step 3 |
Choose the UCS drivers from the Software Type selections: Unified Computing System (UCS) Drivers. |
Step 4 |
Click on the latest release in the left panel.
Step 5 |
Click the ISO image of UCS-related Linux drivers only and download the driver bundle. |
Step 6 |
When the driver bundle is downloaded, open it and select x.x. |
Step 7 |
Click on the version of Red Hat Linux that you wish to install. |
Step 8 |
Extract the contents of the folder. The folder contains both the driver package and associated documentation. Follow the installation procedure packaged with the drivers. |
The Intel® Virtual RAID on CPU (VRoC) Linux Software User Guide can be found with the user documentation at: https://www.intel.com/content/www/us/en/support/articles/000030445/memory-and-storage/ssd-software.html?productId=122484&localeCode=us_en. It provides information on performing BIOS HII VRoC setup in the pre-boot environment, as well as how to install and use the programmable LED utility.
Complete these steps to download the driver bundle:
Make sure that VMD is enabled in the BIOS settings.
Note |
The system will fail to boot if VMD is enabled or disabled after OS installation. Do not change the BIOS setting after OS installation. |
Step 1 |
In a web browser, navigate to https://software.cisco.com/download/home. |
Step 2 |
Search on UCS B-Series Blade Server Software or UCS C-Series Rack-Mount UCS-Managed Server Software, depending on your platform. |
Step 3 |
Choose the UCS drivers from the Software Type selections: Unified Computing System (UCS) Drivers. |
Step 4 |
Click on the latest release in the left panel. The ISO image for VMD is available from the 4.0(4f) release onward. |
Step 5 |
Click on ISO image of UCS-related windows drivers only and download the driver bundle. |
Step 6 |
When the driver bundle is downloaded, open it and select . |
Step 7 |
Extract the contents of the folder. |
Step 8 |
Click on the entry for the kit and . |
Step 9 |
The folder contains both the driver package and associated documentation. Expand the zip file for VROC_x_x_x_xxxxInstall. |
Step 10 |
Follow the installation procedure packaged with the drivers. |
For setting up Intel® Virtual RAID on CPU (VRoC), refer to the online instructions at https://www.intel.com/content/www/us/en/support/products/122484/memory-and-storage/ssd-software/intel-virtual-raid-on-cpu-intel-vroc.html.
Information on VRoC RAID features and management can be found in the Windows Intel Virtual RAID on CPU Software User's Guide at https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/ssd-software/Windows_VROC_User_Guide.pdf.
Complete these steps to download and install the driver bundle for VMD Passthrough mode:
Note |
The VMD Passthrough driver bundle includes packages for both ESXi and Ubuntu. |
Note |
The system will fail to boot if VMD is enabled or disabled after OS installation. Do not change the BIOS setting after OS installation. |
Step 1 |
In a web browser, navigate to https://software.cisco.com/download/home. |
Step 2 |
Search on Servers - Unified Computing. |
Step 3 |
Search on UCS B-Series Blade Server Software or UCS C-Series Rack-Mount UCS-Managed Server Software, depending on your platform. |
Step 4 |
Choose the UCS utilities from the Software Type selections: Unified Computing System (UCS) Utilities. |
Step 5 |
Click on the latest release in the left panel. |
Step 6 |
Click on ISO image of UCS-related vmware utilities only and download the utilities bundle. |
Step 7 |
When the driver bundle is downloaded, open it and select . The bundle provides the driver installation package for the desired version of ESXi or for VMD Direct Assign with Ubuntu (passthrough mode), as well as the Signed LED Offline bundle. Also included is a PDF that provides steps to configure an Ubuntu virtual machine in ESXi. |
Step 8 |
Click on either the version of ESXi that you wish to install or the zip file for Ubuntu. For ESXi versions, click ESXi_x > Direct Assign and choose the desired zip file. |
Step 9 |
Extract the contents of the folder. Follow the installation procedure packaged with the driver software. |
Extract the contents of the LED management tools zip file. Install the management tools according to the instructions included with the driver package.
Before using the command line tools, enable the ESXi command line shell from either the vSphere Client or from the direct console of the ESXi host system.
Once you have set up VMD, you can customize LED blinking patterns on PCIe NVMe drives. Information on LED customization can be found in the User Guides included in the driver packages.
PCIe SSD drives lack a standard way to manage the LEDs that indicate drive status and health. Without one, there is a risk of removing the wrong drive and losing data. SSD drives have two indicators: a green activity LED whose signals come directly from the SSD, and a status LED whose signals come from the backplane. VMD manages only the status LEDs, not the activity LEDs.
LED management applies only to NVMe and SATA drives. It does not support drives that are connected by an I/O cable or a PCIe add-in card, or that are plugged directly into the motherboard.
VMD with NVMe supports surprise hot-plugging. When a disk is hot-removed and then re-inserted into the same slot, the fault LED blinks for 10 seconds. This is expected behavior. The fail state is imposed on a slot's LEDs when the drive is removed, but the backplane requires a drive to be present in the slot for an LED to blink. Thus, the fail state exists as soon as the drive is removed, but an LED blinks only when the new drive is inserted and discovered. The LED returns to normal once the hot-plug event is handled.
VRoC with VMD allows you to perform basic LED management configuration of the status LEDs on compatible backplanes. Once the VMD NVMe driver is installed, you can install the VMD LED Management Tool, which lets you manage the LED through a command line interface. VMD allows you to customize LED blinking patterns on PCIe NVMe drives to better identify failing drives.
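As a concrete illustration for the Linux tool: the ledmon package's `ledctl` utility drives these status-LED patterns from the command line. A minimal locate/identify session might look like the following (the device path is an example, not a value from this document; run as root on a system with the VMD NVMe driver and LED management tool installed):

```shell
# Device path is an example -- substitute your own NVMe drive.
DRIVE=/dev/nvme0n1

# Blink the drive's status LED in the Locate pattern
# so the physical drive can be identified in the enclosure.
ledctl locate=$DRIVE

# ... physically locate the drive ...

# Turn the Locate pattern back off for the same drive.
ledctl locate_off=$DRIVE
```

The Windows and ESXi LED management kits expose equivalent operations through their own tools; see the user guides packaged with the drivers.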
The tables below provide some brief guidelines for customized blinking on the various platforms. As individualized patterns are programmable, these tables provide only representative guidelines.
| Status LED | Behavior | Options |
|---|---|---|
| "Activate LED" | Identifies a specific device in an enclosure by blinking the status LED of that drive in a designated pattern. | 1-3600 seconds. Values outside this range default to 12 seconds. Default = 12 seconds |
| Drive Failure | Indicates a drive that is in a degraded or failed state by lighting the status LED of that device in a defined failure pattern. | The failure pattern is displayed until: Default = Option 1 |
| RAID volume Initialization or Verify and Repair Process | When a RAID volume is in Rebuild state, the status LEDs blink in the defined Rebuild pattern on either the specific drive being rebuilt or on the entire RAID volume that is being rebuilt. | Default = Enabled. Can be: 1. Disabled (only on one drive) 2. Enabled (on all drives) |
| Managed unplug | During a managed hot unplug, the status LED of the managed drive blinks in the defined Locate pattern until the drive is physically ejected. | None. Enabled by default. |
| RAID volume is migrating | During RAID volume migration, the status LEDs blink in the defined Rebuild pattern on all drives until the process is complete. | Default = Enabled. Can be: 1. Disabled (No Status LED Blinking) 2. Enabled (Blinks Status LEDs) |
| Rebuild | Only the migrating drive blinks. | Default = Disabled |
| Status LED | Behavior | Options |
|---|---|---|
| Skip/exclude controller BLACKLIST | | Exclude controllers on the blacklist. Default = Support all controllers |
| RAID volume is initializing, verifying, or verifying and fixing BLINK_ON_INIT | Rebuild pattern on all drives in the RAID volume (until initialization, verify, or verify and fix finishes). | 1. True/Enabled (on all drives) 2. False/Disabled (no drives) Default = True/Enabled |
| Set INTERVAL | Defines the time interval between ledmon sysfs scans. The value is given in seconds. | 10s (5s minimum) Default = 10s |
| RAID volume is rebuilding REBUILD_BLINK_ON_ALL | Rebuild pattern on the single drive to which the RAID volume rebuilds. | 1. False/Disabled (on one drive) 2. True/Enabled (on all drives) Default = False/Disabled |
| RAID volume is migrating BLINK_ON_MIGR | Rebuild pattern on all drives in the RAID volume (until migration finishes). | 1. True/Enabled (on all drives) 2. False/Disabled (no drives) Default = True/Enabled |
| Set ledmon debug level LOG_LEVEL | Corresponds with the --log-level flag from ledmon. | Acceptable values are: quiet, error, warning, info, debug, all; 0 means 'quiet' and 5 means 'all'. Default = 2 |
| Manage one RAID member or all RAID members RAID_MEMBERS_ONLY | If the flag is set to true, ledmon limits monitoring to drives that are RAID members. | 1. False (all RAID members and PT) 2. True (RAID members only) Default = False |
| Limit scans only to specific controllers WHITELIST | | Limit LED state changes to controllers on the whitelist. Default = No limit. |
| Status LED | Behavior | Options |
|---|---|---|
| "Identify" | The ability to identify a specific device in an enclosure by blinking the status LED of that drive in the defined Locate pattern. | None. Default is Off. |
| "Off" | The ability to turn off the "Identify" LED once a specific device in an enclosure has been located. | None. Default is Off. |