Install Cisco HyperFlex Systems Servers

This chapter describes how to install the physical components for setting up a HyperFlex cluster.

Rack Cisco HyperFlex Nodes

For details on the HyperFlex cluster and node limits, see Cisco HX Data Platform Storage Cluster Specifications in the latest release of the Release Notes for Cisco HX Data Platform.

For UCS C-Series integration guidelines, see the Cisco UCS C-Series Server Integration with Cisco UCS Manager Configuration Guide for your release.

For details on the installation of Cisco HyperFlex nodes, refer to the respective links in the following table:

Converged Nodes

  • HyperFlex HX245c M5/M6 Nodes: Cisco HyperFlex HX245c Node Installation Guides

  • HyperFlex HX240c M5/M6 Nodes: Cisco HyperFlex HX240c Node Installation Guides

  • HyperFlex HX225c M5/M6 Nodes: Cisco HyperFlex HX225c Node Installation Guides

  • HyperFlex HX220c M5/M6 Nodes: Cisco HyperFlex HX220c Node Installation Guides

Compute-only Nodes

  • Cisco UCS B200 M5 Nodes: Cisco UCS B200 M3/M4/M5 Blade Server Installation and Service Note

  • Cisco UCS B480 M5 Nodes: Cisco UCS B480 M5 Blade Server Installation and Service Note

  • Cisco UCS C240 M5/M6 Rack Nodes: Cisco UCS C240 Server Installation and Service Guide

  • Cisco UCS C220 M5/M6 Rack Nodes: Cisco UCS C220 Server Installation and Service Guide

  • Cisco UCS C480 M5 Nodes: Cisco UCS C480 M5 Server Installation and Service Guide

Setting Up the Fabric Interconnects

Configure a redundant pair of fabric interconnects for high availability as follows:

  1. Connect the two fabric interconnects directly using Ethernet cables between the L1 and L2 high availability ports.

  2. Connect Port L1 on fabric interconnect A to port L1 on fabric interconnect B, and Port L2 on fabric interconnect A to port L2 on fabric interconnect B.

This allows both the fabric interconnects to continuously monitor the status of each other.

Verify and obtain the following information before connecting the fabric interconnects.


Verify the physical connections of the fabric interconnects.

  • Console port for the first fabric interconnect must be physically connected to a computer or console server.

  • Management Ethernet port (mgmt0) must be connected to an external hub, switch, or router.

  • L1 ports on both the fabric interconnects must be directly connected to each other.

  • L2 ports on both the fabric interconnects must be directly connected to each other.

Verify console port parameters on the computer terminal (a terminal emulator example follows this table).

  • 9600 baud

  • 8 data bits

  • No parity

  • 1 stop bit

Obtain the following information for initial setup:

  • System name

  • Password for admin account

  • Three static IP addresses

  • Subnet mask for three static IP addresses

  • Default gateway IP address

  • DNS server IP address

  • Domain name for the system
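
For example, on a Linux or macOS workstation you can open the console session with a terminal emulator such as screen, using the serial parameters listed in the table above. This is only a sketch; the device path /dev/ttyUSB0 is a placeholder for your own USB-to-serial adapter or console server connection.

# 9600 baud, 8 data bits, no parity, 1 stop bit (8N1)
screen /dev/ttyUSB0 9600,cs8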

Both fabric interconnects must go through the same setup process. Set up the primary fabric interconnect first and enable it for a cluster configuration. When you use the same process to set up the secondary fabric interconnect, it detects the first fabric interconnect as a peer.

Configuring the Primary Fabric Interconnect Using Cisco UCS Manager GUI

Specify the following three IP addresses in the same subnet before you begin the configuration.

  • Management Port IP address for the primary fabric interconnect, FI A.

  • Management Port IP address for the secondary fabric interconnect, FI B.

  • IP address of the HyperFlex Cluster.

Configure the Primary Fabric Interconnect using the Cisco UCS Manager GUI as follows:

Procedure


Step 1

Connect to the console port. See Cisco 6300 Series Fabric Interconnect Hardware Installation Guide for more details.

Step 2

Power on the fabric interconnect. You will see the power-on self-test messages as the fabric interconnect boots.

Step 3

At the installation method prompt, enter gui.

Step 4

If the system cannot access the DHCP server, you will be prompted to enter the following information:

  • IPv4 address for the management port on the fabric interconnect.

  • IPv4 subnet mask for the management port on the fabric interconnect.

  • IPv4 address for the default gateway assigned to the fabric interconnect.

Important

 

All IP addresses must be IPv4. HyperFlex does not support IPv6 addresses.

Step 5

Copy the web link from the prompt into a web browser and navigate to the Cisco UCS Manager launch page.

Step 6

Select Express Setup.

Step 7

Select Initial Setup and click Submit.

Step 8

In the Cluster and Fabric Setup area, complete the following fields:

  • Enable Cluster option: Select the Enable Cluster option.

  • Fabric Setup option: Select Fabric A.

  • Cluster IP Address field: Enter the IPv4 address that Cisco UCS Manager will use.

Step 9

In the System Setup area, complete the following fields:

  • System Name field: The name assigned to the Cisco UCS domain.

  • Admin Password field: The password used for the admin account on the fabric interconnect. Choose a strong password that meets the guidelines for Cisco UCS Manager passwords. This password cannot be blank.

  • Confirm Admin Password field: Re-enter the password for the admin account to confirm it.

  • Mgmt IP Address field: The static IP address for the management port on the fabric interconnect.

  • Mgmt IP Netmask field: The IP subnet mask for the management port on the fabric interconnect.

  • Default Gateway field: The IP address for the default gateway assigned to the management port on the fabric interconnect.

  • DNS Server IP field: The IP address for the DNS server assigned to the management port on the fabric interconnect.

  • Domain Name field: The name of the domain in which the fabric interconnect resides.

Step 10

Click Submit.

A page displays the results of your setup operations.

Configuring the Secondary Fabric Interconnect Using Cisco UCS Manager GUI

Make sure that the console port of the secondary fabric interconnect is physically connected to a computer or a console server. Ensure that you know the password for the admin account on the primary fabric interconnect that you configured earlier.

Procedure


Step 1

Connect to the console port. See Cisco 6300 Series Fabric Interconnect Hardware Installation Guide for more details.

Step 2

Power on the fabric interconnect. You will see the power-on self-test messages as the fabric interconnect boots.

Step 3

At the installation method prompt, enter gui.

Step 4

If the system cannot access the DHCP server, you will be prompted to enter the following information:

  • IPv4 address for the management port on the fabric interconnect.

  • IPv4 subnet mask for the management port on the fabric interconnect.

  • IPv4 address for the default gateway assigned to the fabric interconnect.

Note

 

Both the fabric interconnects must be assigned the same management interface address type during setup.

Step 5

Copy the web link from the prompt into a web browser and go to the Cisco UCS Manager GUI launch page.


Step 6

Select Express Setup.

Step 7

Select Initial Setup and click Submit.

The fabric interconnect should detect the configuration information for the first fabric interconnect.

Step 8

In the Cluster and Fabric Setup area, complete the following fields:

  • Enable Cluster option: Select the Enable Cluster option.

  • Fabric Setup option: Select Fabric B.

Step 9

In the System Setup area, enter the password for the Admin account into the Admin Password of Master field. The Manager Initial Setup Area is displayed.

Step 10

In the Manager Initial Setup area, the field that is displayed depends on whether you configured the first fabric interconnect with an IPv4 management address. Complete the field that is appropriate for your configuration as follows:

  • Peer FI is IPv4 Cluster enabled. Please provide local FI Mgmt0 IPv4 address field: Enter an IPv4 address for the Mgmt0 interface on the local fabric interconnect.

Step 11

Click Submit.

A page displays the results of your setup operations.

Configure the Primary Fabric Interconnect Using CLI

Procedure


Step 1

Connect to the console port.

Step 2

Power on the fabric interconnect.

You will see the power-on self-test messages as the fabric interconnect boots.

Step 3

When the unconfigured system boots, it prompts you for the setup method to be used. Enter console to continue the initial setup using the console CLI.

Step 4

Enter setup to continue with the initial system setup.

Step 5

Enter y to confirm that you want to continue the initial setup.

Step 6

Enter the password for the admin account.

Step 7

To confirm, re-enter the password for the admin account.

Step 8

Enter yes to continue the initial setup for a cluster configuration.

Step 9

Enter the switch fabric you are setting up (either A or B).

Step 10

Enter the system name.

Step 11

Enter the IPv4 address for the management port of the fabric interconnect.

You will be prompted to enter an IPv4 subnet mask.

Step 12

Enter the IPv4 subnet mask, then press Enter.

You are prompted for an IPv4 address for the default gateway, depending on the address type you entered for the management port of the fabric interconnect.

Step 13

Enter the IPv4 address of the default gateway.

Step 14

Enter yes if you want to specify the IP address for the DNS server, or no if you do not.

Step 15

(Optional) Enter the IPv4 address for the DNS server.

The address type must be the same as the address type of the management port of the fabric interconnect.

Step 16

Enter yes if you want to specify the default domain name, or no if you do not.

Step 17

(Optional) Enter the default domain name.

Step 18

Review the setup summary and enter yes to save and apply the settings, or enter no to go through the Setup wizard again to change some of the settings.

If you choose to go through the Setup wizard again, it provides the values you previously entered, and the values appear in brackets. To accept previously entered values, press Enter.


Example

The following example sets up the first fabric interconnect for a cluster configuration using the console and IPv4 management addresses:


Enter the installation method (console/gui)?  console
Enter the setup mode (restore from backup or initial setup) [restore/setup]? setup
You have chosen to setup a new switch.  Continue? (y/n): y
Enter the password for "admin": adminpassword%958
Confirm the password for "admin": adminpassword%958
Do you want to create a new cluster on this switch (select 'no' for standalone setup 
or if you want this switch to be added to an existing cluster)? (yes/no) [n]: yes
Enter the switch fabric (A/B): A
Enter the system name: foo
Mgmt0 IPv4 address: 192.168.10.10
Mgmt0 IPv4 netmask: 255.255.255.0
IPv4 address of the default gateway: 192.168.10.1
Virtual IPv4 address: 192.168.10.12
Configure the DNS Server IPv4 address? (yes/no) [n]: yes
  DNS IPv4 address: 20.10.20.10
Configure the default domain name? (yes/no) [n]: yes
  Default domain name: domainname.com
Join centralized management environment (UCS Central)? (yes/no) [n]: no
Following configurations will be applied:
  Switch Fabric=A
  System Name=foo
  Management IP Address=192.168.10.10
  Management IP Netmask=255.255.255.0
  Default Gateway=192.168.10.1
  Cluster Enabled=yes  
  Virtual Ip Address=192.168.10.12
  DNS Server=20.10.20.10
  Domain Name=domainname.com
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes

Configure the Subordinate Fabric Interconnect Using CLI

This procedure describes setting up the second fabric interconnect using the IPv4 address for the management port.


Note


When adding a new Fabric Interconnect to an existing High Availability cluster, for example, during a new install or when replacing a Fabric Interconnect, the new device will not be able to log into the cluster as long as the authentication method is set to remote. To successfully add a new Fabric Interconnect to the cluster, the authentication method must be temporarily set to local and the local admin credentials of the primary Fabric Interconnect must be used.


Procedure


Step 1

Connect to the console port.

Step 2

Power up the fabric interconnect.

You will see the power-on self-test messages as the fabric interconnect boots.

Step 3

When the unconfigured system boots, it prompts you for the setup method to be used. Enter console to continue the initial setup using the console CLI.

Note

 
The fabric interconnect should detect the peer fabric interconnect in the cluster. If it does not, check the physical connections between the L1 and L2 ports, and verify that the peer fabric interconnect has been enabled for a cluster configuration.

Step 4

Enter y to add the subordinate fabric interconnect to the cluster.

Step 5

Enter the admin password of the peer fabric interconnect.

Step 6

Enter the IP address for the management port on the subordinate fabric interconnect.

Step 7

Review the setup summary and enter yes to save and apply the settings, or enter no to go through the Setup wizard again to change some of the settings.

If you choose to go through the Setup wizard again, it provides the values you previously entered, and the values appear in brackets. To accept previously entered values, press Enter.


Example

The following example sets up the second fabric interconnect for a cluster configuration using the console and the IPv4 address of the peer:

Enter the installation method (console/gui)? console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect
 will be added to the cluster. Continue (y/n) ? y
Enter the admin password of the peer Fabric Interconnect: adminpassword%958
Peer Fabric interconnect Mgmt0 IPv4 Address: 192.168.10.11
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes

Verify Console Setup

You can verify that both fabric interconnect configurations are complete by logging in to the fabric interconnect through SSH.
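
For example, assuming the cluster IP address 192.168.10.12 used in the earlier primary fabric interconnect example (substitute your own cluster IP), a quick verification session might look like this:

ssh admin@192.168.10.12
UCS-A# show cluster state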

Use the following commands to verify the cluster status using Cisco UCS Manager CLI.

show cluster state

Displays the operational state and leadership role for both fabric interconnects in a high availability cluster.

The following example displays that both fabric interconnects are in the Up state, HA is in the Ready state, fabric interconnect A has the primary role, and fabric interconnect B has the subordinate role.

UCS-A# show cluster state 
Cluster Id: 0x4432f72a371511de-0xb97c000de1b1ada4 

A: UP, PRIMARY
B: UP, SUBORDINATE

HA READY

show cluster extended-state

Displays extended details about the cluster state; it is typically used when troubleshooting issues.

The following example shows how to view the extended state of a cluster.

UCSC# show cluster extended-state
Cluster Id: 0x2e95deacbd0f11e2-0x8ff35147e84f3de2

Start time: Thu May 16 06:54:22 2013
Last election time: Thu May 16 16:29:28 2015

A: UP, PRIMARY
B: UP, SUBORDINATE

A: memb state UP, lead state PRIMARY, mgmt services state: UP
B: memb state UP, lead state SUBORDINATE, mgmt services state: UP
heartbeat state PRIMARY_OK

HA READY
Detailed state of the device selected for HA quorum data:
Device 1007, serial: a66b4c20-8692-11df-bd63-1b72ef3ac801, state: active
Device 1010, serial: 00e3e6d0-8693-11df-9e10-0f4428357744, state: active
Device 1012, serial: 1d8922c8-8693-11df-9133-89fa154e3fa1, state: active

Connecting HX-Series Servers to Cisco UCS Fabric Interconnects

Overview

The Cisco HX220c and HX240c Servers connect directly to the fabric interconnects. The direct connection enables Cisco UCS Manager to manage the HX-Series servers using a single cable for both management traffic and data traffic.


Note


After connecting the server with the fabric interconnect, when the server is discovered, update the C-Series software bundle available for Cisco UCS Manager using the UCS Manager configuration form.


When you use direct connect mode, all Cisco UCS managed adapters must be connected to the server ports on the fabric interconnects. Make sure that the HX servers have the recommended firmware as listed in the Cisco HyperFlex Software Requirements and Recommendations document. If not, use Cisco UCS Manager to update the firmware.

For information about general Cisco UCS configuration limits, see the Cisco UCS 6200, 6332, 6324 and 6400 Configuration Limits for Cisco UCS Manager.

Connecting Converged Nodes to the Fabric Interconnect

This topic describes how to physically add converged nodes when creating an HX cluster or adding nodes to an existing HX cluster.

Before you begin

  • Set the CIMC server to factory default settings before integrating with Cisco UCS Manager. A CIMC CLI sketch for this step follows this list.

  • Do not connect dedicated CIMC ports to the network for integrated nodes. Doing so causes the server to not be discovered in Cisco UCS Manager. If the server is not discovered, reset CIMC to factory settings for each server.

  • If there is no foreseeable future need to connect FC Storage, only use ports 1-16.

  • Cisco UCS FI 6200/6300/6400 Fabric Interconnects only support configuring ports 1-6 as FC ports. If there is a future need to connect FC Storage, convert ports 1-6 to FC.


    Note


    The conversion may disrupt the HX deployment.


  • Before you connect the CIMC server, make sure a Cisco VIC 1227 is installed in PCIe slot 2 of an HX240c node, or in Riser 1 slot 1 of an HX220c node, for integration with Cisco UCS Manager. If the card is not installed in the correct slot, you cannot enable direct connect management for the server.

  • Complete the physical cabling of servers to the fabric interconnects, and configure the ports as server ports.
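
As a rough sketch of the first item in this list, resetting CIMC to factory defaults from the CIMC CLI might look like the following; treat the exact scope and command as an assumption and verify against the CIMC CLI configuration guide for your release.

Server# scope cimc
Server /cimc # factory-default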

Procedure


Step 1

Install the HX server in the rack. See Rack Cisco HyperFlex Nodes for more details.

Step 2

Configure the server ports on the fabric interconnect.

  1. Connect a 10-Gb SFP+ cable from one port on the server to fabric interconnect A. You can use any port on fabric interconnect A, but the port must be enabled for server traffic.

    Connect one cable from the VIC to the fabric interconnect for one card. Do not connect both ports to the same fabric interconnect.

  2. Configure that port on FI-A as a server port. For the detailed steps, refer to the Configuring Port Modes for a 6248 Fabric Interconnect section of the Cisco UCS Manager Network Management Guide.

  3. Connect a 10-Gb SFP+ cable from the other port on the server to FI B. You can use any port on FI B, but the port must be enabled for server traffic.

    Note

     

    Do not mix SFP+ types on an uplink. If you do, you will get Discovery Failed errors.

  4. Configure that port on FI-B as a server port. For the detailed steps, refer to the Configuring Port Modes for a 6248 Fabric Interconnect section of the Cisco UCS Manager Network Management Guide. An equivalent Cisco UCS Manager CLI sketch follows this list.
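
As referenced above, a rough Cisco UCS Manager CLI equivalent of configuring a server port is sketched below. The slot and port numbers (1 and 17) are placeholders; repeat the same sequence under fabric b for the port connected to FI-B.

UCS-A# scope eth-server
UCS-A /eth-server # scope fabric a
UCS-A /eth-server/fabric # create interface 1 17
UCS-A /eth-server/fabric* # commit-buffer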

Step 3

Attach a power cord to each power supply in your node, and to a grounded AC power outlet. During initial boot up, wait for approximately two minutes to let the node boot in standby power.

  • When powered up, the server is discovered by the fabric interconnects. You can monitor node discovery in UCS Manager; a CLI-based check is sketched after this list.

  • Verify the node's power status by looking at the node Power Status LED on the front panel. A node is in the standby power mode when the LED is amber.
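
As noted above, a quick way to check discovery from the Cisco UCS Manager CLI is sketched below; the exact columns in the output vary by release, so treat this as an approximation.

UCS-A# show server status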

Step 4

Repeat steps one through three to connect the remaining HX-Series servers to the HyperFlex cluster.


Physical Connectivity Illustrations for Direct Connect Mode Cluster Setup

The following images show sample direct connect mode physical connectivity for a C-Series Rack-Mount Server in a Cisco UCS domain, with Cisco UCS Manager release 3.1 and later. They show the cabling configuration for Cisco UCS Manager integration with a C-Series Rack-Mount Server. The paths shown in gold carry both management traffic and data traffic.

Figure 1. Direct Connect Cabling Configuration


Figure 2. Direct Connect Cabling Configuration with Cisco VIC 1455

1: Cisco UCS 6454 Fabric Interconnect, or Cisco UCS 6200 or 6300 Series FI (Fabric A)

2: Cisco UCS 6454 Fabric Interconnect, or Cisco UCS 6200 or 6300 Series FI (Fabric B)

3: C-Series Rack-Mount Server

4: Cisco UCS VIC in supported PCIe slot

XGb represents a 40 Gigabit Ethernet connection or a 10 Gigabit Ethernet connection. For the 10 Gigabit Ethernet, the following cables are used:

  • 4x10 Breakout Small Form-Factor Pluggable (SFP) cables

  • 4x10 Active Optical Cables (AOC)

  • 10G Small Form-Factor Pluggable (SFP) cable that uses the QSA (QSFP to SFP+ Adapter) module