Step 1 |
Launch the Cisco HX Data Platform Installer.
-
In your web browser, enter the IP address or the node name for the HX Data Platform Installer VM. Click Accept or Continue to bypass any SSL certificate errors. The Cisco HX Data Platform Installer login page appears. Verify the HX Data Platform Installer Build ID in the lower right corner of the login screen.
-
In the login page, enter the following credentials:
Username: root
Password (Default): Cisco123
Note
|
Systems ship with a default password of Cisco123 that must be changed during installation. You cannot continue installation unless you specify a new, user-supplied password.
|
-
Read the EULA, check the I accept the terms and conditions checkbox, and click Login.
|
Step 2 |
On the Workflow page, select Cluster Expansion.
|
Step 3 |
On the Credentials page, complete the following fields.
To perform cluster expansion, you can import a JSON configuration file with the required configuration data. The following two steps apply only if you are importing a JSON file; otherwise, enter the data into the required fields manually.
Note
|
For a first-time installation, contact your Cisco representative to procure the factory preinstallation JSON file.
-
Click Select a file and choose your JSON file to load the configuration. Select Use Configuration.
-
An Overwrite Imported Values dialog box appears if your imported values differ from the discovered Cisco UCS Manager values. Select Use Discovered Values.
|
Field
|
Description
|
UCS Manager Credentials
|
UCS Manager Host Name
|
UCS Manager FQDN or IP address.
For example, 10.193.211.120.
|
User Name
|
<admin> username.
|
Password
|
<admin> password.
|
vCenter Credentials
|
vCenter Server
|
vCenter server FQDN or IP address.
For example, 10.193.211.120.
Note
|
-
A vCenter server is required before the cluster can be made operational.
-
The vCenter address and credentials must have root level administrator permissions to the vCenter.
-
vCenter server input is optional if you are building a nested vCenter. See the Nested vCenter TechNote for more details.
|
|
User Name
|
<admin> username.
For example, administrator@vsphere.local.
|
Admin Password
|
<root> password.
|
Hypervisor Credentials
|
Admin User Name
|
<admin> username.
This is root for factory nodes.
|
Admin Password
|
<root> password.
Default password is Cisco123 for factory nodes.
Note
|
Systems ship with a default password of Cisco123 that must be changed during installation. You cannot continue installation unless you specify a new, user-supplied password.
|
|
|
Step 4 |
Click Continue. The Cluster Expand Configuration page appears. Select the HX Cluster that you want to expand.
If the HX cluster to be expanded is not found, or if loading the cluster takes too long, enter the cluster management IP address in the Management IP Address field.
|
Step 5 |
The Server Selection page displays a list of unassociated HX servers under the Unassociated tab, and the list of discovered servers under the Associated tab. Select the servers under the Unassociated tab to include in the HyperFlex cluster.
If HX servers do not appear in this list, check Cisco UCS Manager and ensure that they have been discovered.
For each server, you can use the Actions drop-down list to set the following:
Note
|
If there are no unassociated servers, the following error message is displayed:
No unassociated servers found. Please login to UCS Manager and ensure server ports are enabled.
|
The Configure Server Ports button allows you to discover any new HX nodes. Typically, the server ports are configured in Cisco UCS Manager before you start the configuration.
|
Step 6 |
Click Continue. The UCSM Configuration page appears.
Note
|
If you imported a JSON file at the beginning, the Credentials page should be populated with the required configuration data from the preexisting HX cluster. This information must match
your existing cluster configuration.
|
|
Step 7 |
In the UCSM Configuration page, complete the following fields for each network.
Field
|
Description
|
VLAN Configuration
Note
|
Use separate subnet and VLANs for each of the following networks.
|
|
VLAN for Hypervisor and HyperFlex management
|
VLAN Name
VLAN ID
|
Name: hx-inband-mgmt
Default VLAN ID: 3091
|
VLAN for HyperFlex storage traffic
|
VLAN Name
VLAN ID
|
Name: hx-storage-data
Default VLAN ID: 3092
|
VLAN for VM vMotion
|
VLAN Name
VLAN ID
|
Name: hx-vmotion
Default VLAN ID: 3093
|
VLAN for VM Network
|
VLAN Name
VLAN ID(s)
|
Name: vm-network
Default VLAN ID: 3094
A comma-separated list of guest VLANs.
|
MAC Pool
|
MAC Pool Prefix
|
Configure the MAC Pool prefix by adding two more hex characters (0-F).
For example, 00:25:B5:A0.
|
'hx-ext-mgmt' IP Pool for Out-of-Band CIMC
|
IP Blocks
|
The range of IP addresses designated for the HyperFlex nodes. This can be a comma-separated list of ranges.
For example, 10.193.211.124-127, 10.193.211.158-163
|
Subnet Mask
|
Set the subnet mask to the appropriate level to limit and control IP addresses.
For example, 255.255.0.0.
|
Gateway
|
IP address.
For example, 10.193.0.1.
|
iSCSI Storage
Note
|
This must be configured upfront if you want to use external storage at any point in the future.
|
|
Enable iSCSI Storage checkbox
|
Check to configure iSCSI storage. |
VLAN A Name
|
Name of the VLAN associated with the iSCSI vNIC, on the primary fabric interconnect (FI-A).
|
VLAN A ID
|
ID of the VLAN associated with the iSCSI vNIC, on the primary fabric interconnect (FI-A).
|
VLAN B Name
|
Name of the VLAN associated with the iSCSI vNIC, on the subordinate fabric interconnect (FI-B).
|
VLAN B ID
|
ID of the VLAN associated with the iSCSI vNIC, on the subordinate fabric interconnect (FI-B).
|
FC Storage
Note
|
This must be configured upfront if you want to use external storage at any point in the future.
|
|
Enable FC Storage checkbox
|
Check to enable FC Storage.
|
WWxN Pool
|
A WWN pool that contains both WW node names and WW port names. For each fabric interconnect, a WWxN pool is created for WWPN
and WWNN.
|
VSAN A Name
|
The name of the VSAN for the primary fabric interconnect (FI-A). By default, this is set to hx-ext-storage-fc-a.
|
VSAN A ID
|
The unique identifier assigned to the network for the primary fabric interconnect (FI-A).
Caution
|
Do not enter VSAN IDs that are currently used on the UCS or HyperFlex system. If you enter an existing VSAN ID in the installer
which utilizes UCS zoning, zoning will be disabled in your existing environment for that VSAN ID.
|
|
VSAN B Name
|
The name of the VSAN for the subordinate fabric interconnect (FI-B). By default, this is set to hx-ext-storage-fc-b.
|
VSAN B ID
|
The unique identifier assigned to the network for the subordinate fabric interconnect (FI-B).
Caution
|
Do not enter VSAN IDs that are currently used on the UCS or HyperFlex system. If you enter an existing VSAN ID in the installer
which utilizes UCS zoning, zoning will be disabled in your existing environment for that VSAN ID.
|
|
Advanced
|
UCS Firmware Version
|
Select the UCS firmware version to associate with the HX servers from the drop-down list. The UCS firmware version must match
the UCSM version. See the latest Cisco HX Data Platform Release Notes for more details.
For example, 3.2(1d).
|
HyperFlex Cluster Name
|
The name applied to a group of HX Servers in a given cluster. This is a user-defined name. The HyperFlex cluster name adds
a label to service profiles for easier identification.
|
Org Name
|
Displays a unique Org Name for the cluster, which ensures isolation of the HyperFlex environment from the rest of the UCS domain.
|
Note
|
Review the VLAN, MAC pool, and IP address pool information in the Configuration pane. These VLAN IDs might need to be changed to match your environment. By default, the Installer sets the VLANs as non-native. You must configure the upstream switches to accommodate the non-native VLANs by applying an appropriate trunk configuration.
|
|
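Before submitting the UCSM Configuration page, the VLAN, MAC pool, and CIMC IP pool values above can be sanity-checked offline. The following is a minimal illustrative sketch, not installer code; it assumes the default VLAN names and IDs from the table and the example MAC prefix and IP blocks (substitute your own values):

```python
import re
from ipaddress import IPv4Address

# Default VLAN assignments from the table above; each network needs its own
# separate VLAN (and subnet).
vlans = {
    "hx-inband-mgmt": 3091,
    "hx-storage-data": 3092,
    "hx-vmotion": 3093,
    "vm-network": 3094,
}
assert len(set(vlans.values())) == len(vlans), "VLAN IDs must be unique per network"
assert all(1 <= v <= 4094 for v in vlans.values()), "802.1Q VLAN ID range"

# MAC pool prefix: 00:25:B5 plus two more hex characters, e.g. 00:25:B5:A0.
mac_prefix = "00:25:B5:A0"
assert re.fullmatch(r"00:25:B5:[0-9A-Fa-f]{2}", mac_prefix), "invalid MAC pool prefix"

def expand_ip_blocks(blocks: str) -> list[str]:
    """Expand a comma-separated list of last-octet ranges, such as
    '10.193.211.124-127, 10.193.211.158-163', into individual addresses."""
    addresses = []
    for block in blocks.split(","):
        start, _, last_octet = block.strip().partition("-")
        first = IPv4Address(start)
        last = (
            IPv4Address(".".join(start.split(".")[:3] + [last_octet]))
            if last_octet
            else first
        )
        addresses += [str(IPv4Address(i)) for i in range(int(first), int(last) + 1)]
    return addresses

cimc_ips = expand_ip_blocks("10.193.211.124-127, 10.193.211.158-163")
print(len(cimc_ips))  # 4 + 6 = 10 addresses for the hx-ext-mgmt pool
```

A check like this helps confirm the out-of-band CIMC pool is large enough for the number of nodes being added before the service profiles are created.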
Step 8 |
Click Continue. The Hypervisor Configuration page appears. Complete the following fields:
Attention
|
For a reinstall, you can skip the fields described in this step if ESXi networking has already been completed.
|
Field
|
Description
|
Configure Common Hypervisor Settings
|
Subnet Mask
|
Set the subnet mask to the appropriate level to limit and control IP addresses.
For example, 255.255.0.0.
|
Gateway
|
IP address of gateway.
For example, 10.193.0.1.
|
DNS Server(s)
|
IP address for the DNS Server.
If you do not have a DNS server, do not enter a hostname in any of the fields on the Cluster Configuration page of the HX Data Platform installer. Use only static IP addresses and hostnames for all ESXi hosts.
Note
|
If you are providing more than one DNS server, check carefully to ensure that both DNS servers are correctly entered, separated
by a comma.
|
|
Hypervisor Settings
Select Make IP Addresses and Hostnames Sequential to assign sequential IP addresses and hostnames.
Note
|
You can rearrange the servers using drag and drop.
|
|
Name
|
Server name.
|
Serial
|
Serial number of the server.
|
Static IP Address
|
Enter static IP addresses and hostnames for all ESXi hosts.
|
Hostname
|
Do not leave the hostname fields empty.
|
|
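The effect of Make IP Addresses and Hostnames Sequential can be sketched as follows. This is an illustration of sequential assignment only, not installer code; the hx-esxi prefix and starting address are assumed example values:

```python
from ipaddress import IPv4Address

def sequential_hosts(first_ip: str, hostname_prefix: str, count: int):
    """Sketch of sequential assignment: starting from the first row's
    address, each subsequent host receives the next IP address and the
    next numeric hostname suffix."""
    base = int(IPv4Address(first_ip))
    return [
        (f"{hostname_prefix}-{n + 1:02d}", str(IPv4Address(base + n)))
        for n in range(count)
    ]

for name, ip in sequential_hosts("10.193.211.81", "hx-esxi", 3):
    print(name, ip)
# hx-esxi-01 10.193.211.81
# hx-esxi-02 10.193.211.82
# hx-esxi-03 10.193.211.83
```

If you rearrange servers by drag and drop, the sequence is reapplied in the new order.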
Step 9 |
Click Continue. The IP Addresses page appears. You can add more compute or converged servers by clicking Add Compute Server or Add Converged Server.
Select Make IP Addresses Sequential to assign sequential IP addresses. For each IP address, specify whether it belongs to the Data Network or the Management Network.
For each HX node, complete the following fields for Hypervisor Management and Data IP addresses.
Field
|
Description
|
Management Hypervisor
|
Enter the static IP address that handles the Hypervisor management network connection between the ESXi host and the storage
cluster.
|
Management Storage Controller
|
Enter the static IP address that handles the HX Data Platform storage controller VM management network connection between
the storage controller VM and the storage cluster.
|
Data Hypervisor
|
Enter the static IP address that handles the Hypervisor data network connection between the ESXi host and the storage cluster.
|
Data Storage Controller
|
Enter the static IP address that handles the HX Data Platform storage controller VM data network connection between the storage
controller VM and the storage cluster.
|
When you enter IP addresses in the first row for the Hypervisor (Management), Storage Controller VM (Management), Hypervisor (Data), and Storage Controller VM (Data) columns, the HX Data Platform Installer applies an incremental auto-fill to the node information for the rest of the nodes. The minimum number of nodes in the storage cluster is three. If you have more nodes, use the Add button to provide the address information.
Note
|
Compute-only nodes can be added only after the storage cluster is created.
|
|
Controller VM Password
|
A default administrator username and password are applied to the controller VMs. The VMs are installed on all converged and compute-only nodes.
Important
|
-
You cannot change the name of the controller VM or the controller VM’s datastore.
-
Use the same password for all controller VMs. The use of different passwords is not supported.
-
Provide a complex password that includes 1 uppercase character, 1 digit, 1 special character, and a minimum of 10 characters
in total.
-
You can provide a user-defined password for the controller VMs and for the HX cluster to be created. For password character
and format limitations, see the section on Guidelines for HX Data Platform Special Characters in the Cisco HX Data Platform Management Guide.
|
|
Advanced Configuration
|
Jumbo frames
Enable Jumbo Frames checkbox
|
Check to set the MTU size for the storage data network on the host vSwitches and vNICs, and each storage controller VM.
The default value is 9000.
Note
|
To set your MTU size to a value other than 9000, contact Cisco TAC.
|
|
Disk Partitions
Clean up Disk Partitions checkbox
|
Check to remove all existing data and partitions from all nodes added to the storage cluster. Back up any data that should be retained before you select this option.
Important
|
Do not select this option for factory prepared systems. The disk partitions on factory prepared systems are properly configured.
For manually prepared servers, select this option to delete existing data and partitions.
|
|
|
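The controller VM password rules from the Important note above can be expressed as a small check. This is a minimal sketch covering only the rules quoted in the note; the character-set restrictions described in the Guidelines for HX Data Platform Special Characters are not modeled here:

```python
import re

def meets_complexity(password: str) -> bool:
    """Check the controller VM password rules stated above: a minimum of
    10 characters, with at least one uppercase letter, one digit, and one
    special character."""
    return (
        len(password) >= 10
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_complexity("Cisco123"))       # False: too short, no special character
print(meets_complexity("HxCluster#2024")) # True
```

Remember that the same password must be used for all controller VMs; differing passwords are not supported.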
Step 10 |
Click Start. A Progress page displays the progress of the various configuration tasks.
Note
|
If the vCenter cluster has EVC enabled, the deploy process fails with the message: The host needs to be manually added to vCenter. To complete the deploy action successfully, do the following:
-
Log in to the ESXi host to be added in vSphere Client.
-
Power off the controller VM.
-
Add the host to the vCenter cluster in vSphere Web Client.
-
In the HX Data Platform Installer, click Retry Deploy.
|
|
Step 11 |
When cluster expansion is complete, start managing your storage cluster by clicking Launch HyperFlex Connect.
Note
|
When you add a node to an existing storage cluster, the cluster continues to have the same HA resiliency as the original storage
cluster until auto-rebalancing takes place at the scheduled time.
Rebalancing is typically scheduled during a 24-hour period, either 2 hours after a node fails or if the storage cluster is
out of space.
|
If you need to rebalance the storage cluster before the scheduled time, initiate a rebalance manually.
From a storage cluster controller VM command line, run the following commands:
-
To start the rebalance:
# stcli rebalance start --force
-
To monitor rebalance status:
# stcli rebalance status
|
Step 12 |
After the new nodes are added to the storage cluster, HA services are reset so that HA is able to recognize the added nodes.
-
Log in to vSphere.
-
In the vSphere Web Client, navigate to .
-
Select the new node.
-
Right-click and select Reconfigure for vSphere HA.
|