Post Cluster Configuration Guidelines
Important |
The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language. |
Passthrough devices provide the means to more efficiently use resources and improve performance in your environment. Enabling PCI passthrough allows a VM to use a host device as if the device were directly attached to the VM.
The following procedure describes how to configure a network device (such as NVIDIA GPUs) for PCI passthrough on an ESXi host.
Step 1 |
In vSphere Client, browse to the ESXi host in the Navigator panel. |
Step 2 |
Enter HX maintenance mode on the node that has the GPUs installed. To enter maintenance mode, right-click the node and select Cisco HX Maintenance Mode > Enter HX Maintenance Mode. |
Step 3 |
In a new browser window, log in directly to the ESXi node. |
Step 4 |
Click Manage. |
Step 5 |
Under the Hardware tab, click PCI Devices. A list of available passthrough devices appears. |
Step 6 |
Select the PCI device you want to enable for passthrough, then click Toggle passthrough. |
Step 7 |
Reboot the host to make the PCI device available for use. |
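Steps 5 through 7 can also be performed from the ESXi shell instead of the vSphere Client. The following is a hedged sketch: the `esxcli hardware pci pcipassthru` namespace assumes ESXi 7.0 or later, and the device address 0000:3b:00.0 is a placeholder for your GPU's actual PCI address.

```
# List PCI devices and their current passthrough state (ESXi 7.0+)
esxcli hardware pci pcipassthru list
# Enable passthrough for the GPU at a placeholder address
esxcli hardware pci pcipassthru set -d 0000:3b:00.0 -e true
# Reboot the host for the change to take effect
reboot
```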
Step 8 |
When the reboot completes, ensure that the node is not in maintenance mode. |
Step 9 |
Log in to vCenter Server. |
Step 10 |
Locate the VM, right-click it, and select Edit Settings. |
Step 11 |
From the New device drop-down, select PCI Device, and click Add. |
Step 12 |
Click the passthrough device to use (example: NVIDIA GPU) and click OK. |
Step 13 |
Log in to the ESXi host and open the virtual machine configuration file (.vmx) in a text editor. |
Step 14 |
Add the following lines to the file, then save and exit the text editor. |
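The exact lines belong in the product's own listing, which was not captured here. As a hedged example, GPU passthrough for cards with large memory BARs commonly requires the 64-bit MMIO settings below; the 64 GB size is an assumption and depends on the total BAR size of your GPU.

```
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
```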
To complete the post-installation tasks, you can run a post-installation script on the Installer VM.
Use an SSH client to connect to the shell on the installer VM.
Log in with installer VM root credentials.
Type post_install and press Enter.
Set the post-install script parameters as specified in the following table:
Note |
If you run into any post-install script issues, set the post-install script parameters manually. |
Parameter |
Description |
---|---|
Enable HA/DRS on cluster? |
Enables the vSphere High Availability (HA) and Distributed Resource Scheduler (DRS) features per best practice. |
Disable SSH warning? |
Suppresses the SSH and shell warnings in vCenter. |
Add vMotion interfaces |
Configure vMotion interfaces per best practice. Requires IP address and VLAN ID input. |
Add VM network VLANs |
Add additional guest VLANs to Cisco UCS Manager and within ESXi on all cluster hosts. |
Correct network errors reported, if any.
root@Cisco-HX-Data-Platform-Installer:~# post_install
Select post_install workflow-
1. New/Existing Cluster
2. Expanded Cluster
3. Generate Certificate
Note: Workflow No.3 is mandatory to have unique SSL certificate in the cluster.
By Generating this certificate, it will replace your current certificate.
If you're performing cluster expansion, then this option is not required.
Selection: 3
Certificate generation workflow selected
Logging in to controller 10.20.1.64
HX CVM admin password:
Getting ESX hosts from HX cluster...
Select Certificate Generation Workflow-
1. With vCenter
2. Without vCenter
Selection: 1
vCenter URL: 10.33.16.40
Enter vCenter username (user@domain): administrator@vsphere.local
vCenter Password:
Starting certificate generation and re-registration.
Trying to retrieve vCenterDatacenter information ....
Trying to retrieve vCenterCluster information ....
Certificate generated successfully.
Cluster re-registration in progress ....
Cluster re-registered successfully.
root@HyperFlex-Installer:~#
Host: esx-hx-5.cpoc-rtp.cisco.com
No errors found
Host: esx-hx-6.cpoc-rtp.cisco.com
No errors found
Host: esx-hx-1.cpoc-rtp.cisco.com
No errors found
Host: esx-hx-2.cpoc-rtp.cisco.com
No errors found
controller VM clocks:
stCtlVM-FCH1946V34Y - 2016-09-16 22:34:04
stCtlVM-FCH1946V23M - 2016-09-16 22:34:04
stCtlVM-FCH1951V2TT - 2016-09-16 22:34:04
stCtlVM-FCH2004VINS - 2016-09-16 22:34:04
Cluster:
Version - 1.8.1a-19499
Model - HX220C-M4S
Health - HEALTHY
Access policy - LENIENT
ASUP enabled - False
SMTP server - smtp.cisco.com
You can change the default ESXi password for the following scenarios:
During creation of a standard and stretch cluster (supports only converged nodes)
During expansion of a standard cluster (supports both converged and compute node expansion)
During Edge cluster creation
Note |
In the above cases, the ESXi root password is secured as soon as installation is complete. In the event a subsequent password change is required, the procedure outlined below may be used after installation to manually change the root password. |
Because ESXi ships with a factory default password, you should change the password for security reasons. To change the default ESXi root password post-installation, do the following.
Note |
If you have forgotten the ESXi root password, for password recovery please contact Cisco TAC. |
Step 1 |
Log in to the ESXi host using SSH. |
Step 2 |
Acquire root privileges. |
Step 3 |
Enter the current root password. |
Step 4 |
Change the root password. |
Step 5 |
Enter the new password, and press Enter. Enter the password a second time for confirmation. |
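The steps above can be sketched as a single console session; the host name is illustrative:

```
ssh root@<esxi-host>    # Step 1: log in to the ESXi host over SSH
passwd root             # Step 4: change the root password; enter the
                        # current password, then the new password twice
```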
To reset the HyperFlex storage controller password post-installation, do the following.
Step 1 |
Log in to a storage controller VM. |
Step 2 |
Change the Cisco HyperFlex storage controller password: # stcli security password set. This command applies the change to all the controller VMs in the storage cluster.
To change the password on compute nodes:
|
Step 3 |
Type in the new password. |
Step 4 |
Press Enter. |
To manage your storage cluster through a GUI, launch the vSphere Web Client. You access your storage cluster through the vSphere Web Client and HX Data Platform plug-in.
Step 1 |
From the HX Data Platform installer, after installation is completed, on the Summary page, click Launch vSphere Web Client. |
Step 2 |
On the login page, click Login to vSphere Web Client and enter your vSphere credentials. |
Step 3 |
View the HX Data Platform plug-in. From the vSphere Web Client Navigator, select vCenter Inventory Lists > Cisco HyperFlex Systems > Cisco HX Data Platform. |
Note |
A minimum of two datastores is recommended for high availability. |
Step 1 |
From the vSphere Web Client Navigator, expand Global Inventory Lists > cluster > Manage > Datastores. |
Step 2 |
Click the Create Datastore icon. |
Step 3 |
Enter a Name for the datastore. The vSphere Web Client enforces a 42 character limit for the datastore name. Assign each datastore a unique name. |
Step 4 |
Specify the Size for the datastore. Choose GB or TB from the drop-down list. Click OK. |
Step 5 |
Click the Refresh button to display your new datastore. |
Step 6 |
Click the Hosts tab to see the Mount Status of the new datastore. |
Under the vSphere HA settings, ensure that you set the Datastore for Heartbeating option to allow selecting any datastore from the list of available datastores.
Step 1 |
Log in to vSphere. |
Step 2 |
Verify that DRS is enabled: from vSphere Services, click vSphere DRS. |
Step 3 |
Click vSphere HA, and then click Edit. |
Step 4 |
Select Turn on vSphere HA if it is not selected. |
Step 5 |
Expand Admission Control from the drop-down menu. You may use the default value or enable Override calculated failover capacity and enter a percentage. |
Step 6 |
Expand Heartbeat Datastores and select Use datastore only from the specified list. Select which datastores to include. |
Step 7 |
Click OK. |
You can configure the HX storage cluster to send automated email notifications regarding documented events. You can use the data collected in the notifications to help troubleshoot issues in your HX storage cluster.
Note |
Auto Support (ASUP) and Smart Call Home (SCH) support the use of a proxy server. You can enable the use of a proxy server and configure proxy settings for both using HX Connect. |
Auto Support is the alert notification service provided through HX Data Platform. If you enable Auto Support, notifications are sent from HX Data Platform to designated email addresses or email aliases that you want to receive the notifications. Typically, Auto Support is configured during HX storage cluster creation by configuring the SMTP mail server and adding email recipients.
Note |
Only unauthenticated SMTP is supported for ASUP. |
If the Enable Auto Support check box was not selected during configuration, Auto Support can be enabled post-cluster creation using the following methods:
Post-Cluster ASUP Configuration Method |
Associated Topic |
---|---|
HX Connect user interface |
Configuring Auto Support Using HX Connect |
Command Line Interface (CLI) |
|
REST APIs |
Cisco HyperFlex Support REST APIs on Cisco DevNet. |
Auto Support can also be used to connect your HX storage cluster to monitoring tools.
Smart Call Home is an automated support capability that monitors your HX storage clusters and then flags issues and initiates resolution before your business operations are affected. This results in higher network availability and increased operational efficiency.
Call Home is a product feature embedded in the operating system of Cisco devices that detects and notifies the user of a variety of fault conditions and critical system events. Smart Call Home adds automation and convenience features to enhance basic Call Home functionality. After Smart Call Home is enabled, Call Home messages/alerts are sent to Smart Call Home.
Smart Call Home is included with many Cisco service contracts and includes:
Automated, around-the-clock device monitoring, proactive diagnostics, real-time email alerts, service ticket notifications, and remediation recommendations.
Proactive messaging sent to your designated contacts by capturing and processing Call Home diagnostics and inventory alarms. These email messages contain links to the Smart Call Home portal and the TAC case if one was automatically created.
Expedited support from the Cisco Technical Assistance Center (TAC). With Smart Call Home, if an alert is critical enough, a TAC case is automatically generated and routed to the appropriate support team through HTTPS, with debug and other CLI output attached.
Customized status reports and performance analysis.
Web-based access to: all Call Home messages, diagnostics, and recommendations for remediation in one place; TAC case status; and up-to-date inventory and configuration information for all Call Home devices.
To ensure automatic communication among your HX storage cluster, you, and Support, see Configuring Smart Call Home for Data Collection.
Typically, Auto Support (ASUP) is configured during HX storage cluster creation. If it was not, you can enable it post cluster creation using the HX Connect user interface.
Step 1 |
Log in to HX Connect. |
Step 2 |
In the banner, click and fill in the following fields. |
Step 3 |
Click OK. |
Step 4 |
In the banner, click and fill in the following fields. |
Step 5 |
Click OK. |
Use the following procedure to configure and verify that you are set up to receive alarm notifications from your HX storage cluster.
Note |
Only unauthenticated SMTP is supported for ASUP. |
Step 1 |
Log in to a storage controller VM in your HX storage cluster using SSH. |
Step 2 |
Configure the SMTP mail server, then verify the configuration. Specify the email address the SMTP mail server uses to send email notifications to designated recipients. Syntax: Example: |
Step 3 |
Enable ASUP notifications. |
Step 4 |
Add recipient email addresses, then verify the configuration. Provide the list of email addresses or email aliases to receive email notifications. Separate multiple emails with a space. Syntax: Example: |
Step 5 |
From the controller VM that owns the eth1:0 IP address for the HX storage cluster, send a test ASUP notification to your email. To determine the node that owns the eth1:0 IP address, log in to each storage controller VM in your HX storage cluster using SSH. |
Step 6 |
Configure your email server to allow email to be sent from the IP address of all the storage controller VMs. |
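Putting steps 2 through 5 together, the session might look like the following hedged sketch; the stcli option names reflect common HX Data Platform releases and may vary by version, and all addresses are placeholders.

```
# Step 2: configure and verify the SMTP mail server
stcli services smtp set --smtp mailhost.example.com --fromaddress hx-cluster@example.com
stcli services smtp show
# Step 3: enable ASUP notifications
stcli services asup enable
# Step 4: add recipients, then verify
stcli services asup recipients add --recipients admin1@example.com admin2@example.com
stcli services asup show
# Step 5: from the controller VM that owns eth1:0, send a test notification
sendasup -t
```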
Data collection is enabled by default, but you can opt out (disable) during installation. You can also enable data collection post cluster creation. During an upgrade, Smart Call Home is set up based on your legacy configuration. For example, if stcli services asup show reports ASUP as enabled, Smart Call Home is enabled on upgrade.
Data collection about your HX storage cluster is forwarded to Cisco TAC through HTTPS. If you have a firewall installed, configuring a proxy server for Smart Call Home is completed post cluster creation.
Note |
In HyperFlex Data Platform release 2.5(1.a), Smart Call Home Service Request (SR) generation does not use a proxy server. |
Using Smart Call Home requires the following:
A Cisco.com ID associated with a corresponding Cisco Unified Computing Support Service or Cisco Unified Computing Mission Critical Support Service contract for your company.
Cisco Unified Computing Support Service or Cisco Unified Computing Mission Critical Support Service for the device to be registered.
Step 1 |
Log in to a storage controller VM in your HX storage cluster. |
Step 2 |
Register your HX storage cluster with Support. Registering your HX storage cluster adds identification to the collected data and automatically enables Smart Call Home. To register your HX storage cluster, you need to specify an email address. After registration, this email address receives support notifications whenever there is an issue and a TAC service request is generated.
Example: |
Step 3 |
Verify that data flow from your HX storage cluster to Support is operational. Operational data flow ensures that pertinent information is readily available to help Support troubleshoot any issues that might arise. If you upgraded your HX storage cluster from HyperFlex 1.7.1 to 2.1.1b, also run the following command: |
Step 4 |
(Optional) Configure a proxy server to enable Smart Call Home access through port 443. If your HX storage cluster is behind a firewall, after cluster creation, you must configure the Smart Call Home proxy server. Support collects data at the |
Step 5 |
Verify that Smart Call Home is enabled. When the Smart Call Home configuration is set, it is automatically enabled. |
Step 6 |
Enable Auto Support (ASUP) notifications. Typically, Auto Support (ASUP) is configured during HX storage cluster creation. If it was not, you can enable it post cluster creation using HX Connect or CLI. |
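As a hedged sketch of steps 2 and 5, registration and verification from a controller VM might look like the following; the email address is a placeholder and option names may vary by release.

```
# Step 2: register the cluster with Support (this also enables Smart Call Home)
stcli services sch set --email support-contact@example.com
# Step 5: confirm that Smart Call Home is enabled
stcli services sch show
```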
Creating a replication cluster pair is a pre-requisite for setting up VMs for replication. The replication network and at least one datastore must be configured prior to creating the replication pair.
By pairing cluster 1 with cluster 2, you are specifying that all VMs on cluster 1 that are explicitly set up for replication can replicate to cluster 2, and that all VMs on cluster 2 that are explicitly set up for replication can replicate to cluster 1.
By pairing a datastore A on cluster 1 with a datastore B on cluster 2, you are specifying that for any VM on cluster 1 that is set up for replication, if it has files in datastore A, those files will be replicated to datastore B on cluster 2. Similarly, for any VM on cluster 2 that is set up for replication, if it has files in datastore B, those files will be replicated to datastore A on cluster 1.
Pairing is strictly 1-to-1. A cluster can be paired with no more than one other cluster. A datastore on a paired cluster can be paired with no more than one datastore on the other cluster.
For the detailed procedure on creating, editing, and deleting replication pairs, see the Cisco HyperFlex Systems Administration Guide.
Adding Private VLAN
A private VLAN partitions the Layer 2 broadcast domain of a VLAN into subdomains, allowing you to isolate the ports on the switch from each other. A subdomain consists of a primary VLAN and one or more secondary VLANs. A private VLAN domain has only one primary VLAN. Each port in a private VLAN domain is a member of the primary VLAN, and the primary VLAN is the entire private VLAN domain.
VLAN Port |
Description |
---|---|
Promiscuous Primary VLAN |
Belongs to the primary VLAN. Can communicate with all interfaces that belong to those secondary VLANs that are associated to the promiscuous port and associated with the primary VLAN. Those interfaces include the community and isolated host ports. All packets from the secondary VLANs go through this VLAN. |
Isolated Secondary VLAN |
Host port that belongs to an isolated secondary VLAN. This port has complete isolation from other ports within the same private VLAN domain, except that it can communicate with associated promiscuous ports. |
Community Secondary VLAN |
Host port that belongs to a community secondary VLAN. Community ports communicate with other ports in the same community VLAN and with associated promiscuous ports. |
Following HX deployment, a VM network uses a regular VLAN by default. To use a Private VLAN for the VM network, see the following sections:
Step 1 |
To configure a private VLAN on Cisco UCS Manager, see the Cisco UCS Manager Network Management Guide. |
Step 2 |
To configure a private VLAN on the upstream switch, see the Cisco Nexus 9000 Series NX-OS Layer 2 Switching Configuration Guide. |
Step 3 |
To configure a private VLAN on ESX hosts, see Configuring Private VLAN on ESX Hosts. |
To configure private VLANs on the ESX hosts, do the following:
Step 1 |
Delete VMNICs on the vSphere Standard Switches from the VMware vSphere Client. |
Step 2 |
Create new vSphere Distributed Switch with the VMNICs deleted from the previous step. |
Step 3 |
Create promiscuous, isolated, and community VLANs. |
Step 1 |
To configure a private VLAN on Cisco UCS Manager, see the Cisco UCS Manager Network Management Guide. |
Step 2 |
To configure a private VLAN on the upstream switch, see the Cisco Nexus 9000 Series NX-OS Layer 2 Switching Configuration Guide. |
Step 3 |
To configure a private VLAN on ESX hosts, see Configuring Private VLAN on ESX Hosts |
Step 4 |
Migrate VMs from the vSphere standard switch to the newly created vSphere distributed switch. |
Step 5 |
Change the network connection of the network adapter on the VMs to the private VLAN. |
Step 1 |
Log on to VMware vSphere Client. |
Step 2 |
Select Home > Hosts and Clusters. |
Step 3 |
Select the ESX host from which you want to delete the VMNIC. |
Step 4 |
Open the Configuration tab. |
Step 5 |
Click Networking. |
Step 6 |
Select the switch you wish to remove a VMNIC from. |
Step 7 |
Click the Manage the physical adapters connected to the selected switch button. |
Step 8 |
Select the VMNIC you want to delete and click Remove. |
Step 9 |
Confirm your selection by clicking Yes. |
Step 10 |
Click Close. |
Step 1 |
Log on to the VMware vSphere Client. |
Step 2 |
Select Home > Networking. |
Step 3 |
Right-click the cluster and select Distributed Switch > New Distributed Switch. |
Step 4 |
In the Name and Location dialog box, enter a name for the distributed switch. |
Step 5 |
In the Select Version dialog box, select the distributed switch version that corresponds to your environment and configuration requirements. |
Step 6 |
Click Next. |
Step 7 |
In the Edit Settings dialog box, specify the following:
|
Step 8 |
Click Next. |
Step 9 |
Review the settings in the Ready to complete dialog box. |
Step 10 |
Click Finish. |
Step 1 |
From the VMware vSphere Client, select Inventory > Networking. |
Step 2 |
Right-click on the dvSwitch. |
Step 3 |
Click Edit Settings. |
Step 4 |
Select the Private VLAN tab. |
Step 5 |
On the Primary private VLAN ID tab, enter a private VLAN ID. |
Step 6 |
On the Secondary private VLAN ID tab, enter a private VLAN ID. |
Step 7 |
Select the type of VLAN from the Type drop-down list. Valid values include:
|
Step 8 |
Click OK. |
Create a private VLAN on the vSphere Distributed Switch.
Step 1 |
Right-click dvPortGroup under dvSwitch, and click Edit Settings. |
Step 2 |
Click Policies > VLAN. |
Step 3 |
From the VLAN type drop-down list, select Private VLAN. |
Step 4 |
From the Private VLAN Entry drop-down list, select the type of private VLAN. It can be one of the following: |
Step 5 |
Click OK. |
Considerations when Deploying Distributed Switches
Distributed switches ensure that each node uses the same configuration. They help prioritize traffic and allow other network streams to use available bandwidth when no vMotion traffic is active.
The HyperFlex (HX) Data Platform can use Distributed Virtual Switch (DVS) Networks for non-HyperFlex dependent networks.
These non-HX dependent networks include:
VMware vMotion networks
VMware applications networks
The HX Data Platform requires that the following networks use standard vSwitches:
vswitch-hx-inband-mgmt: Storage Controller Management Network
vswitch-hx-inband-mgmt: Management Network
vswitch-hx-storage-data: Storage Hypervisor Data Network
vswitch-hx-storage-data: Storage Controller Data Network
During HX Data Platform installation, all the networks are configured with standard vSwitch networks. After the storage cluster is configured, the non-HX dependent networks can be migrated to DVS networks. For example:
vswitch-hx-vm-network: VM Network
vmotion: vmotion pg
For further details on how to migrate the vMotion network to Distributed Virtual Switches, see Migrating vMotion Networks to Distributed Virtual Switches (DVS) or Cisco Nexus 1000v (N1Kv) in the Network and Storage Management Guide.
Deployment of vCenter on the HyperFlex cluster is supported with some constraints. See the How to Deploy vCenter on the HX Data Platform TechNote for more details.
AMD FirePro S7150 series GPUs are supported in HX240c M5 nodes. These graphic accelerators enable highly secure, high performance, and cost-effective VDI deployments. Follow the steps below to deploy AMD GPUs in HyperFlex.
Step |
Action |
Step Instructions |
---|---|---|
1 |
Modify the BIOS policy for the service profiles attached to the servers. |
Requirement For All Supported GPUs: Memory-Mapped I/O Greater than 4 GB |
2 |
Install the GPU card in the servers. |
|
3 |
Power on the servers, and ensure that the GPUs are visible in the Cisco UCS Manager inventory for the servers. |
— |
4 |
Install the vSphere Installation Bundle (VIB) for the AMD GPU card and reboot. |
Download the inventory list from Cisco Software Downloads that includes the latest driver ISO for the C-series standalone firmware/software version bundle 3.1(3) for AMD on VMware ESXi. |
5 |
Create a Win10 VM on the cluster with the VM configuration. |
|
6 |
On each ESXi host, run the MxGPU.sh script to configure the GPUs and create virtual functions from the GPU. |
|
7 |
Assign the virtual functions (VFs) created in the previous step to the Win10 VMs. |
— |