Maintaining the Node

This chapter describes how to diagnose node problems using LEDs. It also provides information about how to install or replace hardware components, and it includes the following sections:

Status LEDs and Buttons

This section describes the location and meaning of LEDs and buttons and includes the following topics:

Front Panel LEDs

Figure 3-1 shows the front panel LEDs. Table 3-1 defines the LED states.

Figure 3-1 Front Panel LEDs

 

1 Drive fault LED (on each drive tray)
2 Drive activity LED (on each drive tray)
3 Power button/power status LED
4 Unit Identification button/LED
5 Node status LED
6 Fan status LED
7 Temperature status LED
8 Power supply status LED
9 Network link activity LED

Table 3-1 Front Panel LEDs, Definitions of States

LED Name
State

1

Drive fault

  • Off—The hard drive is operating properly.
  • Amber—Drive fault detected.
  • Amber, blinking—The device is rebuilding.
  • Amber, blinking with one-second interval—Drive locate function activated.

2

Drive activity

  • Off—There is no hard drive in the hard drive tray (no access, no fault).
  • Green—The hard drive is ready.
  • Green, blinking—The hard drive is reading or writing data.

3

Power button/LED

  • Off—There is no AC power to the node.
  • Amber—The node is in standby power mode. Power is supplied only to the Cisco IMC and some motherboard functions.
  • Green—The node is in main power mode. Power is supplied to all node components.

4

Unit Identification

  • Off—The unit identification function is not in use.
  • Blue—The unit identification function is activated.

5

Node status

  • Green—The node is running in a normal operating condition.
  • Green, blinking—The node is performing node initialization and memory check.
  • Amber, steady—The node is in a degraded operational state. For example:

    - Power supply redundancy is lost.

    - CPUs are mismatched.

    - At least one CPU is faulty.

    - At least one DIMM is faulty.

    - At least one drive in a RAID configuration failed.

  • Amber, blinking—The node is in a critical fault state. For example:

    - Boot failed.

    - Fatal CPU and/or bus error is detected.

    - Node is in an over-temperature condition.

6

Fan status

  • Green—All fan modules are operating properly.
  • Amber, steady—One or more fan modules breached the critical threshold.
  • Amber, blinking—One or more fan modules breached the non-recoverable threshold.

7

Temperature status

  • Green—The node is operating at normal temperature.
  • Amber, steady—One or more temperature sensors breached the critical threshold.
  • Amber, blinking—One or more temperature sensors breached the non-recoverable threshold.

8

Power supply status

  • Green—All power supplies are operating normally.
  • Amber, steady—One or more power supplies are in a degraded operational state.
  • Amber, blinking—One or more power supplies are in a critical fault state.

9

Network link activity

  • Off—The Ethernet link is idle.
  • Green—One or more Ethernet LOM ports are link-active, but there is no activity.
  • Green, blinking—One or more Ethernet LOM ports are link-active, with activity.

Rear Panel LEDs and Buttons

Figure 3-2 shows the rear panel LEDs and buttons. Table 3-2 defines the LED states.

Figure 3-2 Rear Panel LEDs and Buttons

 

1 Power supply fault LED
2 Power supply status LED
3 mLOM card LEDs (Cisco VIC 1227) (not shown; see Table 3-2)
4 1-Gb Ethernet dedicated management link speed LED
5 1-Gb Ethernet dedicated management link status LED
6 1-Gb Ethernet link speed LED
7 1-Gb Ethernet link status LED
8 Unit Identification button/LED

 

Table 3-2 Rear Panel LEDs, Definitions of States

LED Name
State

1

Power supply fault

This is a summary; for advanced power supply LED information, see Table 3-3.

  • Off—The power supply is operating normally.
  • Amber, blinking—An event warning threshold has been reached, but the power supply continues to operate.
  • Amber, solid—A critical fault threshold has been reached, causing the power supply to shut down (for example, a fan failure or an over-temperature condition).

2

Power supply status

This is a summary; for advanced power supply LED information, see Table 3-3.

AC power supplies:

  • Off—There is no AC power to the power supply.
  • Green, blinking—AC power OK; DC output not enabled.
  • Green, solid—AC power OK; DC outputs OK.

DC power supplies:

  • Off—There is no DC power to the power supply.
  • Green, blinking—DC power OK; DC output not enabled.
  • Green, solid—DC power OK; DC outputs OK.

3

mLOM 10-Gb SFP+
(Cisco VIC 1227)

(single status LED)

  • Off—No link is present.
  • Green, steady—Link is active.
  • Green, blinking—Traffic is present on the active link.

4

1-Gb Ethernet dedicated management link speed

  • Off—Link speed is 10 Mbps.
  • Amber—Link speed is 100 Mbps.
  • Green—Link speed is 1 Gbps.

5

1-Gb Ethernet dedicated management link status

  • Off—No link is present.
  • Green—Link is active.
  • Green, blinking—Traffic is present on the active link.

6

1-Gb Ethernet link speed

  • Off—Link speed is 10 Mbps.
  • Amber—Link speed is 100 Mbps.
  • Green—Link speed is 1 Gbps.

7

1-Gb Ethernet link status

  • Off—No link is present.
  • Green—Link is active.
  • Green, blinking—Traffic is present on the active link.

8

Unit Identification

  • Off—The unit identification function is not in use.
  • Blue—The unit identification function is activated.
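
The link state and speed indicated by the rear-panel Ethernet LEDs can also be cross-checked from the hypervisor. A minimal sketch, assuming SSH access to the ESXi host (interface names vary by configuration):

# esxcli network nic list

The output lists each vmnic with its Link Status, Speed, and Duplex, which should agree with the LED states described above.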

In Table 3-3, read the status and fault LED states together in each row to determine the event that caused this combination.

 

Table 3-3 Rear Power Supply LED States

Green PSU status LED state / Amber PSU fault LED state: Event

  • Solid on / Off: 12V main on (main power mode)
  • Blinking / Off: 12V main off (standby power mode)
  • Off / Off: No AC power input (all PSUs present)
  • Off / On: No AC power input (redundant supply active)
  • Blinking / Solid on: 12V over-voltage protection (OVP)
  • Blinking / Solid on: 12V under-voltage protection (UVP)
  • Blinking / Solid on: 12V over-current protection (OCP)
  • Blinking / Solid on: 12V short-circuit protection (SCP)
  • Solid on / Solid on: PSU fan fault/lock (before OTP)
  • Blinking / Solid on: PSU fan fault/lock (after OTP)
  • Blinking / Solid on: Over-temperature protection (OTP)
  • Solid on / Blinking: OTP warning
  • Solid on / Blinking: OCP warning
  • Blinking / Off: 12V main off (CR slave PSU is in sleep mode)
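
If the node is reachable over its management network, the power supply health summarized by these LEDs can also be read from the node's sensors. A hedged sketch using standard IPMI against the Cisco IMC, assuming IPMI over LAN has been enabled; the address and credentials are placeholders:

# ipmitool -I lanplus -H <cimc-mgmt-ip> -U <username> -P <password> sdr type "Power Supply"

Each power supply sensor is listed with its current state, which should be consistent with the LED combination observed on the rear panel.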

Internal Diagnostic LEDs

The node is equipped with a supercap voltage source that can activate internal component fault LEDs up to 30 minutes after AC power is removed. The node has internal fault LEDs for CPUs, DIMMs, fan modules, SD cards, the RTC battery, and the mLOM card.

To use these LEDs to identify a failed component, press the front or rear Unit Identification button (see Figure 3-1 or Figure 3-2) with AC power removed. An LED lights amber to indicate a faulty component.

See Figure 3-3 for the locations of these internal LEDs.

Figure 3-3 Internal Diagnostic LED Locations

 

1 Fan module fault LEDs (one on each fan module)
2 DIMM fault LEDs (one directly in front of each DIMM socket on the motherboard)
3 CPU fault LEDs
4 SD card fault LEDs
5 RTC battery fault LED (under PCIe riser 1)
6 mLOM card fault LED (under PCIe riser 1)

 

Table 3-4 Internal Diagnostic LEDs, Definition of States

LED Name
State

Internal diagnostic LEDs (all)

  • Off—Component is functioning normally.
  • Amber—Component has a fault.

Preparing for Component Installation

This section describes how to prepare for component installation, and it includes the following topics:

Required Equipment

The following equipment is used to perform the procedures in this chapter:

  • Number 2 Phillips-head screwdriver
  • Electrostatic discharge (ESD) strap or other grounding equipment such as a grounded mat

Shutting Down the Node

The node can run in two power modes:

  • Main power mode—Power is supplied to all node components and any operating system on your drives can run.
  • Standby power mode—Power is supplied only to the service processor and the cooling fans, and it is safe to power off the node from this mode.
Caution: After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node as directed in the service procedures.

This section contains the following procedures, which are referenced from component replacement procedures. Alternate shutdown procedures are included.

Shutting Down the Node From the Equipment Tab in Cisco UCS Manager

When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.

Note: If the Shutdown Server link is dimmed in the Actions area, the node is not running.



Step 1  In the Navigation pane, click Equipment.

Step 2  Expand Equipment > Rack Mounts > Servers.

Step 3  Choose the node that you want to shut down.

Step 4  In the Work pane, click the General tab.

Step 5  In the Actions area, click Shutdown Server.

Step 6  If a confirmation dialog displays, click Yes.


 

After the node has been successfully shut down, the Overall Status field on the General tab displays a power-off status.

Shutting Down the Node From the Service Profile in Cisco UCS Manager

When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.

Note: If the Shutdown Server link is dimmed in the Actions area, the node is not running.



Step 1  In the Navigation pane, click Servers.

Step 2  Expand Servers > Service Profiles.

Step 3  Expand the node for the organization that contains the service profile of the server node you are shutting down.

Step 4  Choose the service profile of the server node that you are shutting down.

Step 5  In the Work pane, click the General tab.

Step 6  In the Actions area, click Shutdown Server.

Step 7  If a confirmation dialog displays, click Yes.


 

After the node has been successfully shut down, the Overall Status field on the General tab displays a power-off status.

Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode

Some procedures directly place the node into Cisco HX Maintenance mode. This procedure migrates all VMs to other nodes before the node is shut down and decommissioned from Cisco UCS Manager.


Step 1  Put the node in Cisco HX Maintenance mode by using the vSphere interface:

  • Using the vSphere web client:

a. Log in to the vSphere web client.

b. Go to Home > Hosts and Clusters.

c. Expand the Datacenter that contains the HX Cluster.

d. Expand the HX Cluster and select the node.

e. Right-click the node and select Cisco HX Maintenance Mode > Enter HX Maintenance Mode.

  • Using the command line interface:

a. Log in to the storage controller cluster command line as a user with root privileges.

b. Move the node into HX Maintenance Mode:

1. Identify the node ID and IP address:

# stcli node list --summary

2. Enter the node into HX Maintenance Mode:

# stcli node maintenanceMode (--id ID | --ip IP Address) --mode enter

(See also stcli node maintenanceMode --help.)

c. Log in to the ESXi command line of this node as a user with root privileges.

d. Verify that the node has entered HX Maintenance Mode:

# esxcli system maintenanceMode get
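
The command prints the host's maintenance state as a single word. A minimal check, assuming the node entered HX Maintenance Mode successfully:

# esxcli system maintenanceMode get
Enabled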

Step 2  Shut down the node using UCS Manager as described in Shutting Down the Node.


 

Shutting Down the Node with the Node Power Button

Note: This method is not recommended for a HyperFlex node, but the operation of the physical power button is explained here in case an emergency shutdown is required.



Step 1  Check the color of the Power Status LED (see the “Front Panel LEDs” section).

  • Green—The node is in main power mode and must be shut down before it can be safely powered off. Go to Step 2.
  • Amber—The node is already in standby mode and can be safely powered off.

Step 2  Invoke either a graceful shutdown or a hard shutdown:

Caution: To avoid data loss or damage to your operating node, you should always invoke a graceful shutdown of the operating system.

  • Graceful shutdown—Press and release the Power button. The operating system performs a graceful shutdown and the node goes to standby mode, which is indicated by an amber Power Status LED.
  • Emergency shutdown—Press and hold the Power button for 4 seconds to force the main power off and immediately enter standby mode.


 

Decommissioning the Node Using Cisco UCS Manager

Before replacing an internal component of a node, you must decommission the node to remove it from the Cisco UCS configuration. When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.


Step 1  In the Navigation pane, click Equipment.

Step 2  Expand Equipment > Rack Mounts > Servers.

Step 3  Choose the node that you want to decommission.

Step 4  In the Work pane, click the General tab.

Step 5  In the Actions area, click Server Maintenance.

Step 6  In the Maintenance dialog box, click Decommission, then click OK.

The node is removed from the Cisco UCS configuration.


 

Post-Maintenance Procedures

This section contains the following procedures, which are referenced from component replacement procedures:

Recommissioning the Node Using Cisco UCS Manager

After replacing an internal component of a node, you must recommission the node to add it back into the Cisco UCS configuration.


Step 1  In the Navigation pane, click Equipment.

Step 2  Under Equipment, click the Rack Mounts node.

Step 3  In the Work pane, click the Decommissioned tab.

Step 4  On the row for each rack-mount server that you want to recommission, do the following:

a. In the Recommission column, check the check box.

b. Click Save Changes.

Step 5  If a confirmation dialog box displays, click Yes.

Step 6  (Optional) Monitor the progress of the server recommission and discovery on the FSM tab for the server.


 

Associating a Service Profile With an HX Node

Use this procedure to associate an HX node to its service profile after recommissioning.


Step 1  In the Navigation pane, click Servers.

Step 2  Expand Servers > Service Profiles.

Step 3  Expand the node for the organization that contains the service profile that you want to associate with the HX node.

Step 4  Right-click the service profile that you want to associate with the HX node and then select Associate Service Profile.

Step 5  In the Associate Service Profile dialog box, select the Server option.

Step 6  Navigate through the navigation tree and select the HX node to which you are associating the service profile.

Step 7  Click OK.


 

Exiting HX Maintenance Mode

Use this procedure to exit HX Maintenance Mode after performing a service procedure.


Step 1  Exit the node from Cisco HX Maintenance mode by using the vSphere interface:

  • Using the vSphere web client:

a. Log in to the vSphere web client.

b. Go to Home > Hosts and Clusters.

c. Expand the Datacenter that contains the HX Cluster.

d. Expand the HX Cluster and select the node.

e. Right-click the node and select Cisco HX Maintenance Mode > Exit HX Maintenance Mode.

  • Using the command line:

a. Log in to the storage controller cluster command line as a user with root privileges.

b. Exit the node out of HX Maintenance Mode:

1. Identify the node ID and IP address:

# stcli node list --summary

2. Exit the node out of HX Maintenance Mode:

# stcli node maintenanceMode (--id ID | --ip IP Address) --mode exit

(See also stcli node maintenanceMode --help.)

c. Log in to the ESXi command line of this node as a user with root privileges.

d. Verify that the node has exited HX Maintenance Mode:

# esxcli system maintenanceMode get
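
After a successful exit, the same check reports Disabled. As an alternative hedged check (assuming SSH access to the ESXi host), the host summary exposes the same flag and should report it as false:

# esxcli system maintenanceMode get
Disabled
# vim-cmd hostsvc/hostsummary | grep -i inMaintenanceMode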


 

Removing and Replacing the Node Top Cover


Step 1  Remove the top cover (see Figure 3-4):

a. If the cover latch is locked, use a screwdriver to turn the lock 90 degrees counterclockwise to unlock it. See Figure 3-4.

b. Lift on the end of the latch that has the green finger grip. The cover is pushed back to the open position as you lift the latch.

c. Lift the top cover straight up from the node and set it aside.

Step 2  Replace the top cover:

Note: The latch must be in the fully open position when you set the cover back in place, which allows the opening in the latch to sit over a peg that is on the fan tray.

a. With the latch in the fully open position, place the cover on top of the node about one-half inch (1.27 cm) behind the lip of the front cover panel. The opening in the latch should fit over the peg that sticks up from the fan tray.

b. Press the cover latch down to the closed position. The cover is pushed forward to the closed position as you push down the latch.

c. If desired, lock the latch by using a screwdriver to turn the lock 90 degrees clockwise.

Figure 3-4 Removing the Top Cover

 

1 Front cover panel
2 Top cover
3 Locking cover latch

 

 


 

Serial Number Location

The serial number (SN) for the node is printed on a label on the top of the node, near the front.
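
If the node is already installed and running ESXi, the serial number can also be read from the hypervisor without pulling the node out of the rack. A minimal sketch, assuming SSH access to the host:

# esxcli hardware platform get

The Serial Number field in the output should match the label on the top of the node.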

Installing or Replacing Node Components

Warning: Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the node unless all cards, faceplates, front covers, and rear covers are in place. (Statement 1029)


Caution: When handling node components, wear an ESD strap to avoid damage.

Tip: You can press the Unit Identification button on the front panel or rear panel to turn on a flashing Unit Identification LED on the front and rear panels of the node. This button allows you to locate the specific node that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs remotely by using the Cisco IMC interface. See the “Status LEDs and Buttons” section for locations of these LEDs.
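
As a hedged illustration of remote activation: when IPMI over LAN is enabled in the Cisco IMC, the standard IPMI chassis identify command flashes the Unit Identification LEDs for a chosen number of seconds. The management address and credentials below are placeholders:

# ipmitool -I lanplus -H <cimc-mgmt-ip> -U <username> -P <password> chassis identify 60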


This section describes how to install and replace node components, and it includes the following topics:

Replaceable Component Locations

Figure 3-5 shows the locations of the field-replaceable components. The view shown is from the top down with the top covers and air baffle removed.

Figure 3-5 Replaceable Component Locations

 

1 Persistent drive bays. HX240c: bays 2–24 support HDD persistent data drives. HX240c All-Flash: bays 2–11 support SSD persistent data drives (with Cisco HX Release 2.0, 10 SSDs are supported). See Replacing Drives for information about supported drives.
2 Drive bay 1: SSD caching drive. The supported SSD differs between the HX240c and HX240c All-Flash nodes. See Replacing Drives.
3 Fan modules (six, hot-swappable)
4 DIMM sockets on motherboard (24)
5 CPUs and heatsinks (two)
6 Cisco SD card slots on motherboard (two)
7 USB socket on motherboard
8 Power supplies (hot-swappable, accessed through rear panel)
9 Trusted platform module (TPM) socket on motherboard, under PCIe riser 2
10 PCIe riser 2 (PCIe slots 4, 5, 6)
11 PCIe riser 1 (PCIe slots 1, 2, 3; slot 3 is taken by two internal SATA SSD sockets)
12 120 GB internal housekeeping SSDs for SDS logs (two SATA SSDs in PCIe riser 1 sockets)
13 mLOM card socket on motherboard under PCIe riser 1, for Cisco VIC 1227
14 Cisco UCS 12G SAS Modular HBA (dedicated slot and bracket)
15 RTC battery on motherboard

 

 

Replacing Drives

This section includes the following information:

Drive Population Guidelines

The drive-bay numbering is shown in Figure 3-6.

Figure 3-6 Drive Bay Numbering

 

1 Bay 1: solid state drive (SSD) caching drive
2 Bays 2–24: persistent data drives

Observe these drive population guidelines:

  • Populate the SSD caching drive only in bay 1. See Table 3-5 for the supported caching SSDs, which differ between supported drive configurations.
  • Populate persistent data drives as follows:

    - HX240c: HDD persistent data drives—populate in bays 2–24.

    - HX240c All-Flash: SSD persistent data drives—populate in bays 2–11. With Cisco HyperFlex Release 2.0, only 10 SSD persistent data drives are supported.

See Table 3-5 for the supported persistent drives, which differ between supported drive configurations.

  • When populating persistent data drives, add drives in the lowest numbered bays first.
  • Keep an empty drive blanking tray in any unused bays to ensure optimal airflow and cooling.
  • See HX240c Drive Configuration Comparison for a comparison of supported drive configurations (hybrid, All-Flash, SED).

HX240c Drive Configuration Comparison

 

Table 3-5 Supported Drive Configurations

Persistent data drives (HX240c: front bays 2–24; HX240c All-Flash: front bays 2–11)

  • HX240c Hybrid: HDD UCS-HD12TB10K12G
  • HX240c All-Flash: SSD UCS-SD960GBKS4-EV
  • HX240c SED Hybrid: HDD (SED) UCS-HD12G10K9
  • HX240c SED All-Flash: SSD (SED) UCS-SD800GBEK9

Caching SSD (front bay 1)

  • HX240c Hybrid: SSD UCS-SD16TB12S3-EP
  • HX240c All-Flash: SSD UCS-SD16TB12S3-EP
  • HX240c SED Hybrid: SSD (SED) UCS-SD16TBEK9
  • HX240c SED All-Flash: SSD (SED) UCS-SD800GBEK9

Housekeeping SSD for SDS logs (internal SATA boot SSD on PCIe riser 1)

  • All configurations: SSD UCS-SD120GBKS4-EV

Note the following considerations and restrictions for All-Flash HyperFlex nodes:

  • The minimum Cisco HyperFlex software required is Release 2.0 or later.
  • With Cisco HX Release 2.0, only 10 SSD persistent data drives are supported.
  • HX240c All-Flash HyperFlex nodes are ordered as specific All-Flash PIDs; All-Flash configurations are supported only on those PIDs.
  • Conversion from hybrid HX240c configuration to HX240c All-Flash configuration is not supported.
  • Mixing hybrid HX240c HyperFlex nodes with HX240c All-Flash HyperFlex nodes within the same HyperFlex cluster is not supported.

Note the following considerations and restrictions for self-encrypting drive (SED) HyperFlex nodes:

  • The minimum Cisco HyperFlex software required for SED configurations is Release 2.1(1a) or later.
  • Mixing HX240c hybrid SED HyperFlex nodes with HX240c All-Flash SED HyperFlex nodes within the same HyperFlex cluster is not supported.

Drive Replacement Overview

The three types of drives in the node require different replacement procedures.

 

Table 3-6 Drive Replacement Overview

Persistent data drives (HX240c: front bays 2–24; HX240c All-Flash: front bays 2–11)

Hot-swap replacement. See Replacing Persistent Data Drives.

NOTE: Hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating.

Caching SSD (front bay 1)

Hot-swap replacement. See Replacing the SSD Caching Drive (Bay 1).

NOTE: Hot-swap replacement for SAS/SATA drives includes hot-removal, so you can remove the drive while it is still operating.

NOTE: If an NVMe SSD is used as the caching drive, additional steps are required, as described in the procedure.

Housekeeping SSD for SDS logs (internal SATA SSD on PCIe riser 1)

The node must be put into Cisco HX Maintenance Mode before shutting down and powering off the node.

Replacement requires additional technical assistance and cannot be completed by the customer. See Replacing the Internal Housekeeping SSDs for SDS Logs.

Replacing Persistent Data Drives

The persistent data drives must be installed only as follows:

  • HX240c: HDDs in bays 2–24
  • HX240c All-Flash: SSDs in bays 2–11. With Cisco HyperFlex Release 2.0, only 10 SSD persistent data drives are supported.
Note: Hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating.



Step 1  Remove the drive that you are replacing or remove a blank drive tray from an empty bay:

a. Press the release button on the face of the drive tray. See Figure 3-7.

b. Grasp and open the ejector lever and then pull the drive tray out of the slot.

c. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 2  Install a new drive:

a. Place a new drive in the empty drive tray and replace the four drive-tray screws.

b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 3-7 Replacing Drives

 

1 Release button
2 Ejector lever
3 Drive tray securing screws (four)

 


 

Replacing the SSD Caching Drive (Bay 1)

The SSD caching drive must be installed in drive bay 1 (see Figure 3-6).

Note the following considerations and restrictions for NVMe SSDs when used as the caching SSD:

  • NVMe SSDs are supported in HX240c and All-Flash nodes.
  • NVMe SSDs are not supported in Hybrid nodes.
  • NVMe SSDs are supported only in the Caching SSD position, in drive bay 1.
  • NVMe SSDs are not supported for persistent storage or as the Housekeeping drive.
  • The locator (beacon) LED cannot be turned on or off on NVMe SSDs.
Note: Always replace the drive with the same type and size as the original drive.

Note: Upgrading or downgrading the caching drive in an existing HyperFlex cluster is not supported. If the caching drive must be upgraded or downgraded, then a full redeployment of the HyperFlex cluster is required.

Note: When using a SAS drive, hot-swap replacement includes hot-removal, so you can remove a SAS drive while it is still operating. NVMe drives cannot be hot-swapped.



Step 1  Only if the caching drive is an NVMe SSD, enter the ESXi host into HX Maintenance Mode. Otherwise, skip to Step 2.

Step 2  Remove the SSD caching drive:

a. Press the release button on the face of the drive tray (see Figure 3-7).

b. Grasp and open the ejector lever and then pull the drive tray out of the slot.

c. Remove the four drive-tray screws that secure the SSD to the tray and then lift the SSD out of the tray.

Step 3  Install a new SSD caching drive:

a. Place a new SSD in the empty drive tray and replace the four drive-tray screws.

b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 4  Only if the caching drive is an NVMe SSD:

a. Reboot the ESXi host. This enables ESXi to discover the NVMe SSD.

b. Exit the ESXi host from HX Maintenance Mode.
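
One hedged way to confirm that ESXi rediscovered the NVMe caching SSD after the reboot, assuming SSH access to the host (device naming varies by drive model):

# esxcli storage core device list | grep -i nvme

The new device should appear in the list before you exit HX Maintenance Mode.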


 

Replacing the Internal Housekeeping SSDs for SDS Logs

PCIe riser 1 includes sockets for two 120 GB SATA SSDs.

Note: This procedure requires assistance from technical support for additional software update steps after the hardware is replaced. It cannot be completed without technical assistance.


Caution: Put the HX node in Cisco HX Maintenance mode before replacing the housekeeping SSD, as described in the procedure. Hot-swapping the internal housekeeping/logs SSD while the HX node is running causes the HX node to fail.


Step 1  Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2  Shut down the node as described in Shutting Down the Node.

Step 3  Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

Caution: After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4  Disconnect all power cables from the power supplies.

Step 5  Do one of the following:

  • If the drive is a SED drive in front drive bay 1, remove it from front drive bay 1. Continue with Step 6.
  • If the SSD is in PCIe riser 1, remove it from PCIe riser 1. Continue with Step 8.

Step 6  Remove the drive that you are replacing or remove a blank drive tray from an empty bay:

a. Press the release button on the face of the drive tray. See Figure 3-7.

b. Grasp and open the ejector lever and then pull the drive tray out of the slot.

c. Remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 7  Install a new drive:

a. Place a new drive in the empty drive tray and replace the four drive-tray screws.

b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

d. Go to Step 16.

Step 8  Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Step 9  Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 10  Remove PCIe riser 1 from the node:

a. Lift straight up on both ends of the riser to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic mat.

b. On the bottom of the riser, loosen the single thumbscrew that holds the securing plate. See Figure 3-18.

c. Swing open the securing plate and remove it from the riser to provide access.

Step 11  Remove an existing SSD from PCIe riser 1:

Grasp the carrier-tabs on each side of the SSD and pinch them together as you pull the SSD from its cage and the socket on the PCIe riser.

Step 12  Install a new SSD in PCIe riser 1:

a. Grasp the two carrier-tabs on either side of the SSD and pinch them together as you insert the SSD into the cage on the riser.

b. Push the SSD straight into the cage to engage it with the socket on the riser. Stop when the carrier-tabs click and lock into place on the cage.

Step 13  Return PCIe riser 1 to the node:

a. Return the securing plate to the riser. Insert the two hinge-tabs into the two slots on the riser, and then swing the securing plate closed.

b. Tighten the single thumbscrew that holds the securing plate.

c. Position the PCIe riser over its socket on the motherboard and over its alignment features in the chassis (see Figure 3-16).

d. Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.

Step 14  Replace the top cover.

Step 15  Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 16  Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 17  Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 18  After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Note: After you replace the housekeeping SSD hardware, you must contact technical support for additional software update steps.



 

Replacing Fan Modules

The six hot-swappable fan modules in the node are numbered as follows when you are facing the front of the node.

Figure 3-8 Fan Module Numbering

 

FAN 6 | FAN 5 | FAN 4 | FAN 3 | FAN 2 | FAN 1

 

Tip: Each fan module has a fault LED on its top that lights amber if the fan module fails. To operate these LEDs from the supercap power source, remove AC power cords and then press the Unit Identification button. See also Internal Diagnostic LEDs.


Caution: You do not have to shut down or power off the node to replace fan modules because they are hot-swappable. However, to maintain proper cooling, do not operate the node for more than one minute with any fan module removed.


Step 1  Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution: If you cannot safely view and access the component, remove the node from the rack.

Step 2  Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 3  Identify a faulty fan module by looking for a fan fault LED that is lit amber (see Figure 3-9).

Step 4  Remove the fan module that you are replacing (see Figure 3-9):

a. Grasp the top of the fan and pinch the green plastic latch toward the center.

b. Lift straight up to remove the fan module from the node.

Step 5  Install a new fan module:

a. Set the new fan module in place, aligning the connector on the bottom of the fan module with the connector on the motherboard.

Note: The arrow label on the top of the fan module, which indicates the direction of airflow, should point toward the rear of the node.

b. Press down gently on the fan module until the latch clicks and locks in place.

Step 6  Replace the top cover.

Step 7  Replace the node in the rack.

Figure 3-9 Fan Modules Latch and Fault LED

 

1 Finger latch (on each fan module)
2 Fan module fault LED (on each fan module)
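
After the node is back in the rack, you can optionally confirm that the replacement fan reports healthy speeds through the node's sensors. A hedged sketch using standard IPMI against the Cisco IMC, assuming IPMI over LAN has been enabled; the address and credentials are placeholders:

# ipmitool -I lanplus -H <cimc-mgmt-ip> -U <username> -P <password> sdr type Fan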


 

Replacing DIMMs

This section includes the following topics:

Caution: DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.

Caution: Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the node might result in node problems or damage to the motherboard.

Note: To ensure the best node performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace the memory.


Memory Performance Guidelines and Population Rules

This section describes the type of memory that the node requires and its effect on performance. The section includes the following topics:

DIMM Socket Numbering

Figure 3-10 shows the numbering of the DIMM sockets and CPUs.

Figure 3-10 CPUs and DIMM Socket Numbering on Motherboard

 


DIMM Population Rules

Observe the following guidelines when installing or replacing DIMMs:

  • Each CPU supports four memory channels.

    - CPU1 supports channels A, B, C, and D.

    - CPU2 supports channels E, F, G, and H.

  • Each channel has three DIMM sockets (for example, channel A = slots A1, A2, and A3).

    - A channel can operate with one, two, or three DIMMs installed.

    - If a channel has only one DIMM, populate slot 1 first (the blue slot).

  • When both CPUs are installed, populate the DIMM sockets of each CPU identically.

    - Fill the blue #1 slots in the channels first: A1, E1, B1, F1, C1, G1, D1, H1

    - Fill the black #2 slots in the channels second: A2, E2, B2, F2, C2, G2, D2, H2

    - Fill the white #3 slots in the channels third: A3, E3, B3, F3, C3, G3, D3, H3

  • Any DIMM installed in a DIMM socket for which the CPU is absent is not recognized. In a single-CPU configuration, populate the channels for CPU1 only (A, B, C, D).
  • Memory mirroring reduces the amount of memory available by 50 percent because only one of the two populated channels provides data. When memory mirroring is enabled, you must install DIMMs in sets of 4, 6, 8, or 12 as described in Memory Mirroring and RAS.
  • Observe the DIMM mixing rules shown in Table 3-7.

 

Table 3-7 DIMM Mixing Rules for HX240c Nodes

DIMM capacity (RDIMM = 8 or 16 GB; LRDIMM = 64 GB):

  • Same channel: You can mix DIMMs of different capacities in the same channel (for example, A1, A2, A3).
  • Same bank: You can mix DIMMs of different capacities in the same bank. However, for optimal performance, DIMMs in the same bank (for example, A1, B1, C1, D1) should have the same capacity.

DIMM speed (2133 or 2400 MHz):

  • Same channel: You can mix speeds, but DIMMs run at the speed of the slowest DIMMs/CPUs installed in the channel.
  • Same bank: You can mix speeds, but DIMMs run at the speed of the slowest DIMMs/CPUs installed in the bank.

DIMM type (RDIMMs or LRDIMMs):

  • Same channel: You cannot mix DIMM types in a channel.
  • Same bank: You cannot mix DIMM types in a bank.

Memory Mirroring and RAS

The Intel E5-2600 CPUs within the node support memory mirroring only when an even number of channels are populated with DIMMs. If one or three channels are populated with DIMMs, memory mirroring is automatically disabled. Furthermore, if memory mirroring is used, DRAM size is reduced by 50 percent for reasons of reliability.

Lockstep Channel Mode

When you enable lockstep channel mode, each memory access is a 128-bit data access that spans four channels.

Lockstep channel mode requires that all four memory channels on a CPU must be populated identically with regard to size and organization. DIMM socket populations within a channel (for example, A1, A2, A3) do not have to be identical but the same DIMM slot location across all four channels must be populated the same.

For example, DIMMs in sockets A1, B1, C1, and D1 must be identical. DIMMs in sockets A2, B2, C2, and D2 must be identical. However, the A1-B1-C1-D1 DIMMs do not have to be identical with the A2-B2-C2-D2 DIMMs.

DIMM Replacement Procedure

This section includes the following topics:

Identifying a Faulty DIMM

Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Figure 3-3 for the locations of these LEDs. The LEDs light amber to indicate a faulty DIMM. To operate these LEDs from the SuperCap power source, remove AC power cords and then press the Unit Identification button.
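
In addition to the fault LEDs, memory errors are normally recorded in the node's system event log, which can be read remotely. A hedged sketch using standard IPMI against the Cisco IMC, assuming IPMI over LAN has been enabled and using placeholder credentials; look for correctable or uncorrectable ECC entries that reference a DIMM location:

# ipmitool -I lanplus -H <cimc-mgmt-ip> -U <username> -P <password> sel elist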

Replacing DIMMs


Step 1  Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2  Shut down the node as described in Shutting Down the Node.

Step 3  Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

Caution: After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4  Disconnect all power cables from the power supplies.

Step 5  Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution: If you cannot safely view and access the component, remove the node from the rack.

Step 6  Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7  Remove the air baffle that sits over the DIMM sockets and set it aside.

Step 8  Identify the faulty DIMM by observing the DIMM socket fault LEDs on the motherboard (see Figure 3-3).

Step 9  Remove the DIMMs that you are replacing. Open the ejector levers at both ends of the DIMM socket, and then lift the DIMM out of the socket.

Step 10  Install a new DIMM:

Note: Before installing DIMMs, see the population guidelines in Memory Performance Guidelines and Population Rules.

a. Align the new DIMM with the empty socket on the motherboard. Use the alignment key in the DIMM socket to correctly orient the DIMM.

b. Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock into place.

Step 11  Replace the air baffle.

Step 12  Replace the top cover.

Step 13  Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 14  Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 15  Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 16  After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
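
Once the node is back in service, you can sanity-check the installed memory from the hypervisor. A minimal sketch, assuming SSH access to the ESXi host; the reported physical memory should match the new DIMM population:

# esxcli hardware memory get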


 

Replacing CPUs and Heatsinks

This section contains the following topics:

Note: You can use Xeon v3-based and v4-based nodes in the same cluster. Do not mix Xeon v3 and v4 CPUs within the same node.


Special Information For Upgrades to Intel Xeon v4 CPUs

caut.gif

Caution blank.gif You must upgrade your server firmware to the required minimum level before you upgrade to Intel v4 CPUs. Older firmware versions cannot recognize the new CPUs and this results in a non-bootable server.

The minimum software and firmware versions required for the node to support Intel v4 CPUs are as follows:

 

Table 3-8 Minimum Requirements For Intel Xeon v4 CPUs

  • Node Cisco IMC: 2.0(10)
  • Node BIOS: 2.0(10)
  • Cisco UCS Manager (UCSM-managed system only): 3.1(1)

Do one of the following actions:

  • If your node’s firmware and/or Cisco UCS Manager software are already at the required levels shown in Table 3-8, you can replace the CPU hardware by using the procedure in this section.
  • If your node’s firmware and/or Cisco UCS Manager software is earlier than the required levels, use the instructions in Special Instructions for Upgrades to Intel Xeon v4 Series to upgrade your software. After you upgrade the software, return to the procedure in this section as directed to replace the CPU hardware.

CPU Configuration Rules

This node has two CPU sockets. Each CPU supports four DIMM channels (12 DIMM sockets). See Figure 3-10.

  • The minimum configuration is that the node must have two CPUs installed.
  • Do not mix Xeon v3 and v4 CPUs within the same node.

Replacing a CPU and Heatsink

Caution: CPUs and their motherboard sockets are fragile and must be handled with care to avoid damaging pins during installation. The CPUs must be installed with heatsinks and their thermal grease to ensure proper cooling. Failure to install a CPU correctly might result in damage to the node.

Note: This node uses the new independent loading mechanism (ILM) CPU sockets, so no Pick-and-Place tools are required for CPU handling or installation. Always grasp the plastic frame on the CPU when handling.



Step 1  Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2  Shut down the node as described in Shutting Down the Node.

Step 3  Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

Caution: After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4  Disconnect all power cables from the power supplies.

Step 5  Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution: If you cannot safely view and access the component, remove the node from the rack.

Step 6  Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7  Remove the plastic air baffle that sits over the CPUs.

Step 8  Remove the heatsink that you are replacing:

a. Use a Number 2 Phillips-head screwdriver to loosen the four captive screws that secure the heatsink.

Note: Alternate loosening each screw evenly to avoid damaging the heatsink or CPU.

b. Lift the heatsink off of the CPU.

Step 9  Open the CPU retaining mechanism:

a. Unclip the first CPU retaining latch, and then unclip the second retaining latch. See Figure 3-11.

b. Open the hinged CPU cover plate.

Figure 3-11 CPU Socket

 

1 CPU retaining latch (first latch)
2 CPU retaining latch (second latch)
3 Hinged CPU cover plate
4 Hinged CPU seat
5 Finger-grips on plastic CPU frame

 

 

Step 10  Remove any existing CPU:

a. With the latches and hinged CPU cover plate open, swing the CPU in its hinged seat up to the open position, as shown in Figure 3-11.

b. Grasp the CPU by the finger-grips on its plastic frame and lift it up and out of the hinged CPU seat.

c. Set the CPU aside on an antistatic surface.

Step 11  Install a new CPU:

a. Grasp the new CPU by the finger-grips on its plastic frame and align the tab on the frame that is labeled “ALIGN” with the hinged seat, as shown in Figure 3-12.

b. Insert the tab on the CPU frame into the seat until it stops and is held firmly.

The line below the word “ALIGN” should be level with the edge of the seat, as shown in Figure 3-12.

c. Swing the hinged seat with the CPU down until the CPU frame clicks in place and holds flat in the socket.

d. Close the hinged CPU cover plate.

e. Clip down the second CPU retaining latch, and then clip down the first retaining latch (the reverse of the order in which you opened them). See Figure 3-11.

Figure 3-12 CPU and Socket Alignment Features

 

1 SLS mechanism on socket
2 Tab on CPU frame (labeled ALIGN)

Step 12  Install a heatsink:

Caution: The heatsink must have new thermal grease on the heatsink-to-CPU surface to ensure proper cooling. If you are reusing a heatsink, you must remove the old thermal grease from the heatsink and the CPU surface. If you are installing a new heatsink, skip to Step c.

a. Apply the cleaning solution, which is included with the heatsink cleaning kit (UCSX-HSCK=, shipped with spare CPUs), to the old thermal grease on the heatsink and CPU and let it soak for at least 15 seconds.

b. Wipe all of the old thermal grease off the old heatsink and CPU using the soft cloth that is included with the heatsink cleaning kit. Be careful not to scratch the heatsink surface.

Note: New heatsinks come with a pre-applied pad of thermal grease. If you are reusing a heatsink, you must apply thermal grease from a syringe (UCS-CPU-GREASE3=).

c. Align the four heatsink captive screws with the motherboard standoffs, and then use a Number 2 Phillips-head screwdriver to tighten the captive screws evenly.

Note: Alternate tightening each screw evenly to avoid damaging the heatsink or CPU.

Step 13  Replace the air baffle.

Step 14  Replace the top cover.

Step 15  Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 16  Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 17  Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 18  After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
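
After the node rejoins the cluster, you can confirm that both CPU packages and the expected core count are visible to the hypervisor. A minimal sketch, assuming SSH access to the ESXi host:

# esxcli hardware cpu global get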


 

Special Instructions for Upgrades to Intel Xeon v4 Series

Caution: You must upgrade your node firmware to the required minimum level before you upgrade to Intel v4 CPUs. Older firmware versions cannot recognize the new CPUs and this results in a non-bootable node.

Use the following procedure to upgrade the node and CPUs.


Step 1  Upgrade the Cisco UCS Manager software to the minimum version for your node (or later). See Table 3-8.

Use the procedures in the appropriate Cisco UCS Manager upgrade guide (depending on your current software version): Cisco UCS Manager Upgrade Guides.

Step 2  Use Cisco UCS Manager to upgrade and activate the node Cisco IMC to the minimum version for your node (or later). See Table 3-8.

Use the procedures in the GUI or CLI Cisco UCS Manager Firmware Management Guide for your release.

Step 3  Use Cisco UCS Manager to upgrade and activate the node BIOS to the minimum version for your node (or later). See Table 3-8.

Use the procedures in the Cisco UCS Manager GUI or CLI Cisco UCS Manager Firmware Management Guide for your release.

Step 4  Replace the CPUs with the Intel Xeon v4 Series CPUs.

Use the CPU replacement procedures in Replacing a CPU and Heatsink.


 

Additional CPU-Related Parts to Order with RMA Replacement Motherboards

When a return material authorization (RMA) of the motherboard or CPU is done, additional parts might not be included with the CPU or motherboard spare bill of materials (BOM). The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.

Note: This node uses the new independent loading mechanism (ILM) CPU sockets, so no Pick-and-Place tools are required for CPU handling or installation. Always grasp the plastic frame on the CPU when handling.


  • Scenario 1—You are reusing the existing heatsinks:

    - Heatsink cleaning kit (UCSX-HSCK=)

    - Thermal grease kit for HX240c (UCS-CPU-GREASE3=)

  • Scenario 2—You are replacing the existing heatsinks:

    - Heatsink (UCSC-HS-C240M4=)

    - Heatsink cleaning kit (UCSX-HSCK=)

A CPU heatsink cleaning kit is good for up to four CPU and heatsink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heatsink of old thermal interface material and the other to prepare the surface of the heatsink.

New heatsink spares come with a pre-applied pad of thermal grease. It is important to clean the old thermal grease off of the CPU prior to installing the heatsinks. Therefore, when you are ordering new heatsinks, you must order the heatsink cleaning kit.

Replacing a Cisco Modular HBA Card

The node has an internal, dedicated PCIe slot on the motherboard for an HBA card (see Figure 3-13).


Step 1  Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2  Shut down the node as described in Shutting Down the Node.

Step 3  Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

Caution: After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4  Disconnect all power cables from the power supplies.

Step 5  Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution: If you cannot safely view and access the component, remove the node from the rack.

Step 6  Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7  Remove an existing HBA controller card:

a. Disconnect the data cable from the card. Depress the tab on the cable connector and pull.

b. Disconnect the supercap power module cable from the transportable memory module (TMM), if present.

c. Lift straight up on the metal bracket that holds the card. The bracket lifts off of two pegs on the chassis wall.

d. Loosen the two thumbscrews that hold the card to the metal bracket and then lift the card from the bracket.

Step 8  Install a new HBA controller card:

Caution: When installing the card to the bracket, be careful so that you do not scrape and damage electronic components on the underside of the card against features of the bracket. Also avoid scraping the card when you install the bracket to the pegs on the chassis wall.

a. Set the new card on the metal bracket, aligned so that the thumbscrews on the card enter the threaded standoffs on the bracket. Tighten the thumbscrews to secure the card to the bracket.

b. Align the two slots on the back of the bracket with the two pegs on the chassis wall.

The two slots on the bracket must slide down over the pegs at the same time that you push the card into the motherboard socket.

c. Gently press down on both top corners of the metal bracket to seat the card into the socket on the motherboard.

d. Connect the supercap power module cable to its connector on the TMM, if present.

e. Connect the single data cable to the card.

Step 9  Replace the top cover.

Step 10  Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 11  Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 12  Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 13  After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 3-13 HBA Card Location

 

1 Thumbscrews on card
2 Cisco HBA bracket


 

Replacing the Motherboard RTC Battery

Warning: There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions. (Statement 1015)


The real-time clock (RTC) battery retains node settings when the node is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from most electronic stores.


Step 1  Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2  Shut down the node as described in Shutting Down the Node.

Step 3  Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

Caution: After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4  Disconnect all power cables from the power supplies.

Step 5  Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution: If you cannot safely view and access the component, remove the node from the rack.

Step 6  Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7  Remove the battery from its holder on the motherboard (see Figure 3-14):

a. Use a small screwdriver or pointed object to press inward on the battery at the prying point (see Figure 3-14).

b. Lift up on the battery and remove it from the holder.

Step 8  Install an RTC battery. Insert the battery into its holder and press down until it clicks in place.

Note: The positive side of the battery, marked “3V+”, should face upward.

Step 9  Replace the top cover.

Step 10  Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 11  Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 12  Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 13  After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 3-14 RTC Battery Location and Prying Point

 

352964.eps
1

RTC battery holder on motherboard

2

Prying point


 

Replacing an Internal SD Card

The node has two internal SD card bays on the motherboard. Dual SD cards are supported.


Step 1blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2blank.gif Shut down the node as described in Shutting Down the Node.

Step 3blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

caut.gif

Caution blank.gif After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4blank.gif Disconnect all power cables from the power supplies.

Step 5blank.gif Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

caut.gif

Caution blank.gif If you cannot safely view and access the component, remove the node from the rack.

Step 6blank.gif Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7blank.gif Remove an SD card (see Figure 3-15).

a.blank.gif Push on the top of the SD card, and then release it to allow it to spring out from the slot.

b.blank.gif Remove the SD card from the slot.

Step 8blank.gif Install an SD card:

a.blank.gif Insert the SD card into the slot with the label side facing up.

b.blank.gif Press on the top of the card until it clicks in the slot and stays in place.

Step 9blank.gif Replace the top cover.

Step 10blank.gif Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 11blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 12blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 13blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 3-15 SD Card Bay Location and Numbering on the Motherboard

 

352969.eps
1

SD card bays SD1 and SD2

 

 


 

Enabling or Disabling the Internal USB Port

caut.gif

Caution blank.gif We do not recommend that you hot-swap the internal USB drive while the node is powered on.

The factory default is for all USB ports on the node to be enabled. However, the internal USB port can be enabled or disabled in the node BIOS. See Figure 3-5 for the location of the internal USB 3.0 slot on the motherboard.


Step 1blank.gif Enter the BIOS Setup Utility by pressing the F2 key when prompted during bootup.

Step 2blank.gif Navigate to the Advanced tab.

Step 3blank.gif On the Advanced tab, select USB Configuration.

Step 4blank.gif On the USB Configuration page, choose USB Ports Configuration.

Step 5blank.gif Scroll to USB Port: Internal, press Enter, and then choose either Enabled or Disabled from the dialog box.

Step 6blank.gif Press F10 to save and exit the utility.


 

Replacing a PCIe Riser

The node contains two toolless PCIe risers for horizontal installation of PCIe cards. See Replacing a PCIe Card for the specifications of the PCIe slots on the risers.


Step 1blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2blank.gif Shut down the node as described in Shutting Down the Node.

Step 3blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

caut.gif

Caution blank.gif After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4blank.gif Disconnect all power cables from the power supplies.

Step 5blank.gif Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

caut.gif

Caution blank.gif If you cannot safely view and access the component, remove the node from the rack.

Step 6blank.gif Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7blank.gif Remove the PCIe riser that you are replacing (see Figure 3-16):

a.blank.gif Grasp the top of the riser and lift straight up on both ends to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic mat.

b.blank.gif If the riser has a card installed, remove the card from the riser. See Replacing a PCIe Card.

Step 8blank.gif Install a new PCIe riser:

a.blank.gif If you removed a card from the old PCIe riser, install the card to the new riser (see Replacing a PCIe Card).

b.blank.gif Position the PCIe riser over its socket on the motherboard and over its alignment slots in the chassis (see Figure 3-16). There are also two alignment pegs on the motherboard for each riser.

note.gif

Noteblank.gif The PCIe risers are not interchangeable. If you plug a PCIe riser into the wrong socket, the node will not boot. Riser 1 must plug into the motherboard socket labeled “RISER1.” Riser 2 must plug into the motherboard socket labeled “RISER2.”


c.blank.gif Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.

Step 9blank.gif Replace the top cover.

Step 10blank.gif Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 11blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 12blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 13blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 3-16 PCIe Riser Alignment Features

 

352970.eps
1

Alignment peg locations on motherboard
(two for each riser)

2

Alignment channel locations on chassis
(two for each riser)


 

Replacing a PCIe Card

caut.gif

Caution blank.gif Cisco supports all PCIe cards qualified and sold by Cisco. PCIe cards not qualified or sold by Cisco are the responsibility of the customer. Although Cisco will always stand behind and support the nodes, customers using standard, off-the-shelf, third-party cards must go to the third-party card vendor for support if any issue with that particular third-party card occurs.

This section includes the following topics:

PCIe Slots

The node contains two toolless PCIe risers for horizontal installation of PCIe cards (see Figure 3-17).

  • Riser 1 contains PCIe slots 1 and 2; the slot 3 position is occupied by two internal SATA SSD boot-drive sockets. See Table 3-9.
  • Riser 2 contains PCIe slots 4, 5, and 6. See Table 3-10.

Figure 3-17 Rear Panel, Showing PCIe Slots

 

352971.eps

 

Table 3-9 Riser 1C (UCSC-PCI-1C-240M4) PCIe Expansion Slots

Slot Number | Electrical Lane Width | Connector Length | Card Length | Card Height | NCSI Support
1 | Gen-3 x8 | x16 connector | 3/4 length | Full height | Yes
2 | Gen-3 x16 | x24 connector | Full length | Full height | Yes
SATA SSD sockets (two) | NA | NA | NA | NA | NA

 

Table 3-10 Riser 2 (UCSC-PCI-2-240M4) PCIe Expansion Slots

Slot Number | Electrical Lane Width | Connector Length | Card Length | Card Height | NCSI Support
4 | Gen-3 x8 | x24 connector | 3/4 length | Full height | Yes
5 | Gen-3 x16 | x24 connector | Full length | Full height | Yes (see footnote 1)
6 | Gen-3 x8 | x16 connector | Full length | Full height | No

1. NCSI is supported in only one slot at a time in this riser version.

Replacing a PCIe Card

note.gif

Noteblank.gif If you are installing a Cisco UCS Virtual Interface Card, there are prerequisite considerations. See Special Considerations for Cisco UCS Virtual Interface Cards.



Step 1blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2blank.gif Shut down the node as described in Shutting Down the Node.

Step 3blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

caut.gif

Caution blank.gif After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4blank.gif Disconnect all power cables from the power supplies.

Step 5blank.gif Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

caut.gif

Caution blank.gif If you cannot safely view and access the component, remove the node from the rack.

Step 6blank.gif Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7blank.gif Remove a PCIe card (or a blanking panel) from the PCIe riser:

a.blank.gif Lift straight up on both ends of the riser to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic mat.

b.blank.gif On the bottom of the riser, loosen the single thumbscrew that holds the securing plate (see Figure 3-18).

c.blank.gif Swing open the securing plate and remove it from the riser to provide access.

d.blank.gif Swing open the card-tab retainer that secures the back-panel tab of the card (see Figure 3-18).

e.blank.gif Pull evenly on both ends of the PCIe card to disengage it from the socket on the PCIe riser (or remove a blanking panel) and then set the card aside.

Step 8blank.gif Install a PCIe card:

a.blank.gif Align the new PCIe card with the empty socket on the PCIe riser.

b.blank.gif Push down evenly on both ends of the card until it is fully seated in the socket.

Ensure that the card rear panel tab sits flat against the PCIe riser rear panel opening.

c.blank.gif Close the card-tab retainer (see Figure 3-18).

d.blank.gif Return the securing plate to the riser. Insert the two hinge-tabs into the two slots on the riser, and then swing the securing plate closed.

e.blank.gif Tighten the single thumbscrew on the bottom of the securing plate.

f.blank.gif Position the PCIe riser over its socket on the motherboard and over its alignment features in the chassis (see Figure 3-16).

g.blank.gif Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.

Step 9blank.gif Replace the top cover.

Step 10blank.gif Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 11blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 12blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 13blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 3-18 PCIe Riser Securing Features (Three-Slot Riser Shown)

 

353239.eps
1

Securing plate hinge-tabs

3

GPU card power connector

2

Securing plate thumbscrew (knob not visible on underside of plate)

4

Card-tab retainer in open position


 

Installing Multiple PCIe Cards and Resolving Limited Resources

When a large number of PCIe add-on cards are installed in the node, the node might run out of the following resources required for PCIe devices:

  • Option ROM memory space
  • 16-bit I/O space

The topics in this section provide guidelines for resolving the issues related to these limited resources:

Resolving Insufficient Memory Space to Execute Option ROMs

The node has very limited memory to execute PCIe legacy option ROMs, so when a large number of PCIe add-on cards are installed in the node, the node BIOS might not be able to execute all of the option ROMs. The node BIOS loads and executes the option ROMs in the order in which the PCIe cards are enumerated (slot 1, slot 2, slot 3, and so on).

If the node BIOS does not have sufficient memory space to load a PCIe option ROM, it skips that option ROM, logs a system event log (SEL) event to Cisco IMC, and reports the following error on the Error Manager page of the BIOS Setup utility:

ERROR CODE | SEVERITY | INSTANCE | DESCRIPTION
146 | Major | N/A | PCI out of resources error. Major severity requires user intervention but does not prevent system boot.

 

To resolve this issue, disable the option ROMs that are not needed for booting the node. The BIOS Setup Utility lets you enable or disable option ROMs at the slot level for the PCIe expansion slots and at the port level for the onboard NICs. These options are on the Advanced > PCI Configuration page of the BIOS Setup Utility.

  • Guidelines for RAID controller booting

If the node is configured to boot primarily from RAID storage, make sure that the option ROMs for the slots where your RAID controllers are installed are enabled in the BIOS, as appropriate for your RAID controller configuration.

If the RAID controller does not appear in the node boot order even when the option ROMs for those slots are enabled, the RAID controller option ROM might not have sufficient memory space to execute. In that case, disable other option ROMs that are not needed for the node configuration to free memory space for the RAID controller option ROM.

  • Guidelines for onboard NIC PXE booting

If the node is configured to perform PXE boot primarily from the onboard NICs, make sure that the option ROMs for the onboard NICs that you boot from are enabled in the BIOS Setup Utility. Disable other option ROMs that are not needed, to free sufficient memory space for the onboard NICs.

Resolving Insufficient 16-Bit I/O Space

The node has only 64 KB of legacy 16-bit I/O resources available. This 64 KB of I/O space is divided between the CPUs in the node because the PCIe controller is integrated into the CPUs. The node BIOS dynamically detects the 16-bit I/O resource requirement for each CPU and then balances the 16-bit I/O resource allocation between the CPUs during the PCI bus enumeration phase of BIOS POST.

When a large number of PCIe cards are installed in the node, the node BIOS might not have sufficient I/O space for some PCIe devices. If the node BIOS cannot allocate the required I/O resources for a PCIe device, the following symptoms have been observed:

  • The node might get stuck in an infinite reset loop.
  • The BIOS might appear to hang while initializing PCIe devices.
  • The PCIe option ROMs might take excessive time to complete, which appears to lock up the node.
  • PCIe boot devices might not be accessible from the BIOS.
  • PCIe option ROMs might report initialization errors. These errors are seen before the BIOS passes control to the operating system.
  • The keyboard might not work.

To work around this problem, rebalance the 16-bit I/O load using the following methods:

1.blank.gif Physically remove any unused PCIe cards.

2.blank.gif If the node has one or more Cisco virtual interface cards (VICs) installed, disable PXE boot on the VICs that are not required for the node boot configuration by using the Network Adapters page in the Cisco IMC Web UI to free up some 16-bit I/O resources. Each VIC uses a minimum of 16 KB of 16-bit I/O resources, so disabling PXE boot on Cisco VICs frees up 16-bit I/O resources that can be used for other PCIe cards installed in the node (see the illustrative budget sketch after this list).
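The arithmetic behind this rebalancing is straightforward. The following Python sketch is illustrative only: the 64 KB total and the 16 KB-per-VIC minimum come from this section, while the even split across two CPUs and the demand figure for other cards are assumptions that depend on your actual configuration.

# Illustrative 16-bit I/O budget estimate (not a Cisco tool).
TOTAL_IO_SPACE_KB = 64      # legacy 16-bit I/O space available in the node
CPU_COUNT = 2               # assumption: dual-CPU node, space divided between the CPUs
VIC_MIN_IO_KB = 16          # minimum 16-bit I/O used by each Cisco VIC (from this section)

def io_budget(vic_count, other_cards_io_kb):
    """Return (per-CPU budget, estimated demand, remaining) in KB.
    other_cards_io_kb is the assumed combined demand of non-VIC PCIe cards."""
    per_cpu_budget = TOTAL_IO_SPACE_KB / CPU_COUNT
    demand = vic_count * VIC_MIN_IO_KB + other_cards_io_kb
    return per_cpu_budget, demand, TOTAL_IO_SPACE_KB - demand

per_cpu, demand, remaining = io_budget(vic_count=2, other_cards_io_kb=24)
print(f"Per-CPU budget: {per_cpu:.0f} KB")
print(f"Estimated demand: {demand} KB of {TOTAL_IO_SPACE_KB} KB (remaining {remaining} KB)")
if remaining < 0:
    print("Estimated demand exceeds 64 KB; remove unused cards or disable PXE on unused VICs.")

If the estimate comes out negative, the two rebalancing methods above (removing unused cards and disabling PXE boot on unused VICs) are the ways to bring the demand back under the limit.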

Installing a Trusted Platform Module

The trusted platform module (TPM) is a small circuit board that connects to a motherboard socket and is secured by a one-way screw. The socket location is on the motherboard under PCIe riser 2.

This section contains the following procedures, which must be followed in this order when installing and enabling a TPM:

1.blank.gif Installing the TPM Hardware

2.blank.gif Enabling TPM Support in the BIOS

3.blank.gif Enabling the Intel TXT Feature in the BIOS

Installing the TPM Hardware


Step 1blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2blank.gif Shut down the node as described in Shutting Down the Node.

Step 3blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

caut.gif

Caution blank.gif After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4blank.gif Disconnect all power cables from the power supplies.

Step 5blank.gif Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

caut.gif

Caution blank.gif If you cannot safely view and access the component, remove the node from the rack.

Step 6blank.gif Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7blank.gif Remove PCIe riser 2 to provide clearance. See Replacing a PCIe Riser for instructions.

Step 8blank.gif Install a TPM:

a.blank.gif Locate the TPM socket on the motherboard, as shown in Figure 3-19.

b.blank.gif Align the connector that is on the bottom of the TPM circuit board with the motherboard TPM socket. Align the screw hole and standoff on the TPM board with the screw hole that is adjacent to the TPM socket.

c.blank.gif Push down evenly on the TPM to seat it in the motherboard socket.

d.blank.gif Install the single one-way screw that secures the TPM to the motherboard.

Step 9blank.gif Replace PCIe riser 2 to the node. See Replacing a PCIe Riser for instructions.

Step 10blank.gif Replace the top cover.

Step 11blank.gif Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 12blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 13blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 14blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Step 15blank.gif Continue with Enabling TPM Support in the BIOS.

Figure 3-19 TPM Socket Location on Motherboard

 

352965.eps
1

TPM socket location on the motherboard (under PCIe riser 2)

 


 

Enabling TPM Support in the BIOS

note.gif

Noteblank.gif After hardware installation, you must enable TPM support in the BIOS.


note.gif

Noteblank.gif You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security > Set Administrator Password and enter the new password twice as prompted.



Step 1blank.gif Enable TPM support:

a.blank.gif Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

b.blank.gif Log in to the BIOS Setup Utility with your BIOS Administrator password.

c.blank.gif On the BIOS Setup Utility window, choose the Advanced tab.

d.blank.gif Choose Trusted Computing to open the TPM Security Device Configuration window.

e.blank.gif Change TPM SUPPORT to Enabled.

f.blank.gif Press F10 to save your settings and reboot the node.

Step 2blank.gif Verify that TPM support is now enabled:

a.blank.gif Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

b.blank.gif Log into the BIOS Setup utility with your BIOS Administrator password.

c.blank.gif Choose the Advanced tab.

d.blank.gif Choose Trusted Computing to open the TPM Security Device Configuration window.

e.blank.gif Verify that TPM SUPPORT and TPM State are Enabled.

Step 3blank.gif Continue with Enabling the Intel TXT Feature in the BIOS.


 

Enabling the Intel TXT Feature in the BIOS

Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the node. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the node. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.


Step 1blank.gif Reboot the node and watch for the prompt to press F2.

Step 2blank.gif When prompted, press F2 to enter the BIOS Setup utility.

Step 3blank.gif Verify that the prerequisite BIOS values are enabled:

a.blank.gif Choose the Advanced tab.

b.blank.gif Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.

c.blank.gif Verify that the following items are listed as Enabled:

blank.gif VT-d Support (default is Enabled)

blank.gif VT Support (default is Enabled)

blank.gif TPM Support

blank.gif TPM State

  • If VT-d Support and VT Support are already enabled, skip to Step 4.
  • If VT-d Support and VT Support are not enabled, continue with the next steps to enable them.

d.blank.gif Press Escape to return to the BIOS Setup utility Advanced tab.

e.blank.gif On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.

f.blank.gif Set Intel (R) VT and Intel (R) VT-d to Enabled.

Step 4blank.gif Enable the Intel Trusted Execution Technology (TXT) feature:

a.blank.gif Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.

b.blank.gif Set TXT Support to Enabled.

Step 5blank.gif Press F10 to save your changes and exit the BIOS Setup utility.


 

Replacing Power Supplies

The node can have one or two power supplies. When two power supplies are installed, they are redundant as 1+1 and hot-swappable.

note.gif

Noteblank.gif If you have ordered a node with power supply redundancy (two power supplies), you do not have to power off the node to replace power supplies because they are redundant as 1+1 and hot-swappable.


note.gif

Noteblank.gif Do not mix power supply types in the node. Both power supplies must be the same wattage and Cisco product ID (PID).



Step 1blank.gif Perform one of the following actions:

  • If your node has two power supplies, you do not have to shut down the node. Continue with Step 2.
  • If your node has only one power supply:

a.blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

b.blank.gif Shut down the node as described in Shutting Down the Node.

c.blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

Step 2blank.gif Remove the power cord from the power supply that you are replacing.

Step 3blank.gif Grasp the power supply handle while pinching the green release lever towards the handle (see Figure 3-20).

Step 4blank.gif Pull the power supply out of the bay.

Step 5blank.gif Install a new power supply:

a.blank.gif Grasp the power supply handle and insert the new power supply into the empty bay.

b.blank.gif Push the power supply into the bay until the release lever locks.

c.blank.gif Connect the power cord to the new power supply.

Step 6blank.gif Only if you shut down the node, perform these steps:

a.blank.gif Press the Power button to return the node to main power mode.

b.blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

c.blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

d.blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 3-20 Power Supplies

 

352966.eps
1

Power supply handle

2

Power supply release lever


 

Installing a DC Power Supply

warn.gif

Warningblank.gif A readily accessible two-poled disconnect device must be incorporated in the fixed wiring. Statement 1022


warn.gif

Warningblank.gif This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations. Statement 1045


warn.gif

Warningblank.gif When installing or replacing the unit, the ground connection must always be made first and disconnected last. Statement 1046


warn.gif

Warningblank.gif Installation of the equipment must comply with local and national electrical codes. Statement 1074


warn.gif

Warningblank.gif Hazardous voltage or energy may be present on DC power terminals. Always replace cover when terminals are not in service. Be sure uninsulated conductors are not accessible when cover is in place. Statement 1075


Installing a 930W DC Power Supply, UCSC-PSU-930WDC

If you are using a Version 1, 930 W DC power supply, you connect power by securing stripped wires in the removable connector block.

caut.gif

Caution blank.gif Before beginning this wiring procedure, turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.


Step 1blank.gif Turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.

Step 2blank.gif Remove the DC power connector block from the power supply. (The spare PID for this connector is UCSC-CONN-930WDC=.)

To release the connector block from the power supply, push the orange plastic button on the top of the connector inward toward the power supply and pull the connector block out.

Step 3blank.gif Strip 15 mm (0.59 inches) of insulation off the DC wires that you will use.

note.gif

Noteblank.gif The recommended wire gauge is 8 AWG. The minimum wire gauge is 10 AWG.


Step 4blank.gif Orient the connector as shown in Figure 3-21, with the orange plastic button toward the top.

Step 5blank.gif Use a small screwdriver to depress the spring-loaded wire retainer lever on the lower spring-cage wire connector. Insert your green (ground) wire into the aperture and then release the lever.

Step 6blank.gif Use a small screwdriver to depress the wire retainer lever on the middle spring-cage wire connector. Insert your black (DC negative) wire into the aperture and then release the lever.

Step 7blank.gif Use a small screwdriver to depress the wire retainer lever on the upper spring-cage wire connector. Insert your red (DC positive) wire into the aperture and then release the lever.

Step 8blank.gif Insert the connector block back into the power supply. Make sure that your red (DC positive) wire aligns with the power supply label, “+ DC”.

Figure 3-21 Version 1 930 W, –48 VDC Power Supply Connector Block

 

352024.eps
1

Wire retainer lever

2

Orange plastic button on top of the connector


 

Replacing an mLOM Card (Cisco VIC 1227)

The node uses a Cisco VIC 1227 mLOM adapter. The mLOM card socket remains powered when the node is in 12 V standby power mode, and it supports the Network Controller Sideband Interface (NCSI) protocol.

Replacing an mLOM Card


Step 1blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2blank.gif Shut down the node as described in Shutting Down the Node.

Step 3blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

caut.gif

Caution blank.gif After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4blank.gif Disconnect all power cables from the power supplies.

Step 5blank.gif Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

caut.gif

Caution blank.gif If you cannot safely view and access the component, remove the node from the rack.

Step 6blank.gif Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7blank.gif Remove PCIe riser 1 to provide clearance. See Replacing a PCIe Riser for instructions.

Step 8blank.gif Remove any existing mLOM card or a blanking panel (see Figure 3-22):

a.blank.gif Loosen the single thumbscrew that secures the mLOM card to the chassis floor.

b.blank.gif Slide the mLOM card horizontally to disengage its connector from the motherboard socket.

Step 9blank.gif Install a new mLOM card:

a.blank.gif Set the mLOM card on the chassis floor so that its connector is aligned with the motherboard socket and its thumbscrew is aligned with the standoff on the chassis floor.

b.blank.gif Push the card’s connector into the motherboard socket horizontally.

c.blank.gif Tighten the thumbscrew to secure the card to the chassis floor.

Step 10blank.gif Return PCIe riser 1 to the node. See Replacing a PCIe Riser for instructions.

Step 11blank.gif Replace the top cover.

Step 12blank.gif Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 13blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 14blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 15blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 3-22 mLOM Card Location

 

352967.eps
1

mLOM card (VIC 1227) socket location on motherboard (under PCIe riser 1)

 

 


 

Special Considerations for Cisco UCS Virtual Interface Cards

Table 3-11 describes the requirements for the supported Cisco UCS virtual interface cards (VICs).

 

Table 3-11 Cisco HX240c Requirements for Virtual Interface Cards

Virtual Interface Card (VIC) | Number of This VIC Supported in Node | Slots That Support VICs | Primary Slot for Cisco UCS Manager Integration | Primary Slot for Cisco Card NIC Mode | Minimum Cisco IMC Firmware | Minimum VIC Firmware
Cisco UCS VIC 1227 (UCSC-MLOM-CSC-02) | 1 mLOM | mLOM | mLOM | mLOM | 2.0(3) | 4.0(0)

note.gif

Noteblank.gif The Cisco UCS VIC 1227 (UCSC-MLOM-CSC-02) is not compatible with a certain Cisco SFP+ module when the VIC is used in Cisco Card NIC mode. Do not use a Cisco SFP+ module with part number 37-0961-01 that has a serial number in the range MOC1238xxxx to MOC1309xxxx. If you use the Cisco UCS VIC 1227 in Cisco Card NIC mode, use a Cisco SFP+ module with a different part number, or use part number 37-0961-01 only if its serial number falls outside that range. For other supported SFP+ modules, see the Cisco UCS VIC 1227 Data Sheet.


Service DIP Switches

This section includes the following topics:

DIP Switch Location on the Motherboard

See Figure 3-23. The position of the block of DIP switches (SW8) is shown in red. In the magnified view, all switches are shown in the default position.

  • BIOS recovery—switch 1.
  • Clear password—switch 2.
  • Not used—switch 3.
  • Clear CMOS—switch 4.

Figure 3-23 Service DIP Switches

 

352968.eps
1

DIP switch block SW8

3

Clear password switch 2

2

Clear CMOS switch 4

4

BIOS recovery switch 1

Using the BIOS Recovery DIP Switch

note.gif

Noteblank.gif The following procedures use a recovery.cap recovery file. In Cisco IMC releases 3.0(1) and later, this recovery file has been renamed bios.cap.


Depending on the stage at which the BIOS becomes corrupted, you might see different behavior.

  • If the BIOS BootBlock is corrupted, you might see the node get stuck on the following message:
Initializing and configuring memory/hardware
 
  • If it is a non-BootBlock corruption, the following message is displayed:
****BIOS FLASH IMAGE CORRUPTED****
Flash a valid BIOS capsule file using Cisco IMC WebGUI or CLI interface.
IF Cisco IMC INTERFACE IS NOT AVAILABLE, FOLLOW THE STEPS MENTIONED BELOW.
1. Connect the USB stick with recovery.cap (or bios.cap) file in root folder.
2. Reset the host.
IF THESE STEPS DO NOT RECOVER THE BIOS
1. Power off the system.
2. Mount recovery jumper.
3. Connect the USB stick with recovery.cap (or bios.cap) file in root folder.
4. Power on the system.
Wait for a few seconds if already plugged in the USB stick.
REFER TO SYSTEM MANUAL FOR ANY ISSUES.
note.gif

Noteblank.gif As indicated by the message shown above, there are two procedures for recovering the BIOS. Try procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.


Procedure 1: Reboot with recovery.cap (or bios.cap) File


Step 1blank.gif Download the BIOS update package and extract it to a temporary location.

Step 2blank.gif Copy the contents of the extracted recovery folder to the root directory of a USB thumb drive. The recovery folder contains the recovery.cap (or bios.cap) file that is required in this procedure.

note.gif

Noteblank.gif The recovery.cap (or bios.cap) file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either FAT16 or FAT32 file systems.
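If you want to sanity-check the prepared USB drive before using it, a minimal Python sketch such as the following can confirm the file placement and naming described in the note above. The mount-point argument and the Linux-only /proc/mounts filesystem check are assumptions of this sketch, not part of any Cisco utility; the FAT16/FAT32 requirement applies regardless of how you verify it.

#!/usr/bin/env python3
# Hypothetical pre-flight check for the BIOS recovery USB drive.
import os
import sys

ACCEPTED_NAMES = ("recovery.cap", "bios.cap")   # the file must keep its original name

def check_drive(mount_point):
    # The recovery file must sit in the root directory of the drive.
    found = any(os.path.isfile(os.path.join(mount_point, name)) for name in ACCEPTED_NAMES)
    if not found:
        print("No recovery.cap or bios.cap found in the root of", mount_point)
        return False
    # Best-effort filesystem check (Linux only); the drive must be FAT16 or FAT32.
    try:
        with open("/proc/mounts") as mounts:
            for line in mounts:
                device, mnt, fstype = line.split()[:3]
                if mnt == mount_point and fstype not in ("vfat", "msdos"):
                    print("Warning:", mount_point, "is", fstype + "; reformat as FAT16/FAT32.")
    except OSError:
        print("Could not read /proc/mounts; verify the FAT16/FAT32 format manually.")
    return True

if __name__ == "__main__":
    mount = sys.argv[1] if len(sys.argv) > 1 else "/media/usb"   # placeholder mount point
    sys.exit(0 if check_drive(mount) else 1)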


Step 3blank.gif Insert the USB drive into a USB port on the node.

Step 4blank.gif Reboot the node.

Step 5blank.gif Return the node to main power mode by pressing the Power button on the front panel.

The node boots with the updated BIOS boot block. When the BIOS detects a valid recovery file on the USB thumb drive, it displays this message:

Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
 

Step 6blank.gif Wait for the node to complete the BIOS update, and then remove the USB thumb drive from the node.

note.gif

Noteblank.gif During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.



 

Procedure 2: Use BIOS Recovery DIP switch and recovery.cap (or bios.cap) File

See Figure 3-23 for the location of the SW8 block of DIP switches.


Step 1blank.gif Download the BIOS update package and extract it to a temporary location.

Step 2blank.gif Copy the contents of the extracted recovery folder to the root directory of a USB thumb drive. The recovery folder contains the recovery.cap (or bios.cap) file that is required in this procedure.

note.gif

Noteblank.gif The recovery.cap (or bios.cap) file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either FAT16 or FAT32 file systems.


Step 3blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 4blank.gif Shut down the node as described in Shutting Down the Node.

Step 5blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

caut.gif

Caution blank.gif After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 6blank.gif Disconnect all power cables from the power supplies.

Step 7blank.gif Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

caut.gif

Caution If you cannot safely view and access the component, remove the node from the rack.

Step 8blank.gif Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 9blank.gif Slide the BIOS recovery DIP switch from position 1 to the closed position (see Figure 3-23).

Step 10blank.gif Reconnect AC power cords to the node. The node powers up to standby power mode.

Step 11blank.gif Insert the USB thumb drive that you prepared in Step 2 into a USB port on the node.

Step 12blank.gif Return the node to main power mode by pressing the Power button on the front panel.

The node boots with the updated BIOS boot block. When the BIOS detects a valid recovery file on the USB thumb drive, it displays this message:

Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
 

Step 13blank.gif Wait for the node to complete the BIOS update, and then remove the USB thumb drive from the node.

note.gif

Noteblank.gif During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.


Step 14blank.gif After the node has fully booted, power off the node again and disconnect all power cords.

Step 15blank.gif Slide the BIOS recovery DIP switch from the closed position back to the default position 1.

note.gif

Noteblank.gif If you do not return the switch to its default position, you see the prompt “Please remove the recovery jumper.” after the recovery completes.


Step 16blank.gif Replace the top cover to the node.

Step 17blank.gif Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 18blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 19blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 20blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


 

Using the Clear Password DIP Switch

See Figure 3-23 for the location of this DIP switch. You can use this switch to clear the administrator password.


Step 1blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2blank.gif Shut down the node as described in Shutting Down the Node.

Step 3blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

caut.gif

Caution blank.gif After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4blank.gif Disconnect all power cables from the power supplies.

Step 5blank.gif Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

caut.gif

Caution If you cannot safely view and access the component, remove the node from the rack.

Step 6blank.gif Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7blank.gif Slide the clear password DIP switch from position 2 to the closed position (see Figure 3-23).

Step 8blank.gif Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 9blank.gif Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.

note.gif

Noteblank.gif You must allow the entire node, not just the service processor, to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.


Step 10blank.gif Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.

Step 11blank.gif Remove the top cover from the node.

Step 12blank.gif Slide the clear password DIP switch from the closed position back to default position 2 (see Figure 3-23).

note.gif

Noteblank.gif If you do not move the switch back, the password is cleared every time that you power-cycle the node.


Step 13blank.gif Replace the top cover to the node.

Step 14blank.gif Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 15blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 16blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 17blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


 

Using the Clear CMOS DIP Switch

See Figure 3-23 for the location of this DIP switch. You can use this switch to clear the node’s CMOS settings in the case of a system hang. For example, if the node hangs because of incorrect settings and does not boot, use this switch to invalidate the settings and reboot with the defaults.

caut.gif

Caution blank.gif Clearing the CMOS removes any customized settings and might result in data loss. Make a note of any necessary customized settings in the BIOS before you use this clear CMOS procedure.


Step 1blank.gif Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.

Step 2blank.gif Shut down the node as described in Shutting Down the Node.

Step 3blank.gif Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.

caut.gif

Caution blank.gif After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4blank.gif Disconnect all power cables from the power supplies.

Step 5blank.gif Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

caut.gif

Caution If you cannot safely view and access the component, remove the node from the rack.

Step 6blank.gif Remove the top cover as described in Removing and Replacing the Node Top Cover.

Step 7blank.gif Slide the clear CMOS DIP switch from position 4 to the closed position (see Figure 3-23).

Step 8blank.gif Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 9blank.gif Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.

note.gif

Noteblank.gif You must allow the entire node, not just the service processor, to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.


Step 10blank.gif Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.

Step 11blank.gif Remove the top cover from the node.

Step 12blank.gif Move the clear CMOS DIP switch from the closed position back to default position 4 (see Figure 3-23).

note.gif

Noteblank.gif If you do not move the switch back, the CMOS settings are reset to the defaults every time that you power-cycle the node.


Step 13blank.gif Replace the top cover to the node.

Step 14blank.gif Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.

Step 15blank.gif Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.

Step 16blank.gif Associate the node to its service profile as described in Associating a Service Profile With an HX Node.

Step 17blank.gif After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


 

Setting Up the Node in Standalone Mode

note.gif

Noteblank.gif The HX Series node is always managed in UCS Manager-controlled mode. This section is included only for cases in which a node might need to be put into standalone mode for troubleshooting purposes. Do not use this setup for normal operation of the HX Series node.


Connecting and Powering On the Node (Standalone Mode)

The node is shipped with these default settings:

  • The NIC mode is Shared LOM EXT.

Shared LOM EXT mode enables the 1-Gb Ethernet ports and the ports on any installed Cisco virtual interface card (VIC) to access the Cisco Integrated Management Controller (Cisco IMC). If you want to use the 10/100/1000 dedicated management ports to access Cisco IMC, you can connect to the node and change the NIC mode as described in Step 1 of the following procedure.

  • The NIC redundancy is active-active. All Ethernet ports are utilized simultaneously.
  • DHCP is enabled.
  • IPv4 is enabled. You can change this to IPv6.

There are two methods for connecting to the node for initial setup:

  • Local setup—Use this procedure if you want to connect a keyboard and monitor to the node for setup. This procedure requires a KVM cable (Cisco PID N20-BKVM). See Local Connection Procedure.
  • Remote setup—Use this procedure if you want to perform setup through your dedicated management LAN. See Remote Connection Procedure.
note.gif

Noteblank.gif To configure the node remotely, you must have a DHCP server on the same network as the node. Your DHCP server must be preconfigured with the range of MAC addresses for this node. The MAC address is printed on a label that is on the pull-out asset tag on the front panel (see Figure 1-1). This node has a range of six MAC addresses assigned to the Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.
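To help the DHCP administrator reserve the correct addresses, a short Python sketch such as the following expands a labeled base MAC into the six contiguous addresses. This is a hypothetical helper, not a Cisco tool; it assumes the six addresses increment sequentially from the value printed on the label, and the example MAC is a placeholder.

def expand_mac_range(base_mac, count=6):
    # base_mac: the address printed on the pull-out asset tag label (placeholder used below).
    value = int(base_mac.replace(":", "").replace("-", ""), 16)
    macs = []
    for offset in range(count):
        hex_digits = format(value + offset, "012x")
        macs.append(":".join(hex_digits[i:i + 2] for i in range(0, 12, 2)).upper())
    return macs

# Example: list the six Cisco IMC addresses to hand to the DHCP administrator.
for mac in expand_mac_range("00:25:B5:00:00:10"):    # placeholder label value
    print(mac)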


Local Connection Procedure


Step 1blank.gif Attach a power cord to each power supply in your node, and then attach each power cord to a grounded AC power outlet. See Power Specifications for power specifications.

Wait for approximately two minutes to let the node boot in standby power during the first bootup.

You can verify node power status by looking at the node Power Status LED on the front panel (see External Features Overview). The node is in standby power mode when the LED is amber.

Step 2blank.gif Connect a USB keyboard and VGA monitor to the node using one of the following methods:

  • Connect a USB keyboard and VGA monitor to the corresponding connectors on the rear panel (see External Features Overview).
  • Connect an optional KVM cable (Cisco PID N20-BKVM) to the KVM connector on the front panel (see External Features Overview for the connector location). Connect your USB keyboard and VGA monitor to the KVM cable.

Step 3blank.gif Open the Cisco IMC Configuration Utility:

a.blank.gif Press and hold the front panel power button for four seconds to boot the node.

b.blank.gif During bootup, press F8 when prompted to open the Cisco IMC Configuration Utility.

This utility has two windows that you can switch between by pressing F1 or F2.

Step 4blank.gif Continue with Cisco IMC Configuration Utility Setup.


 

Remote Connection Procedure


Step 1blank.gif Attach a power cord to each power supply in your node, and then attach each power cord to a grounded AC power outlet.

Wait for approximately two minutes to let the node boot in standby power during the first bootup.

You can verify node power status by looking at the node Power Status LED on the front panel (see External Features Overview). The node is in standby power mode when the LED is amber.

Step 2blank.gif Plug your management Ethernet cable into the dedicated management port on the rear panel (see External Features Overview).

Step 3blank.gif Allow your preconfigured DHCP server to assign an IP address to the node.

Step 4blank.gif Use the assigned IP address to access and log in to the Cisco IMC for the node. Consult with your DHCP server administrator to determine the IP address.

note.gif

Noteblank.gif The default user name for the node is admin. The default password is password.


Step 5blank.gif From the Cisco IMC Summary page, click Launch KVM Console. A separate KVM console window opens.

Step 6blank.gif From the Cisco IMC Summary page, click Power Cycle System. The node reboots.

Step 7blank.gif Select the KVM console window.

note.gif

Noteblank.gif The KVM console window must be the active window for the following keyboard actions to work.


Step 8blank.gif When prompted, press F8 to enter the Cisco IMC Configuration Utility. This utility opens in the KVM console window.

This utility has two windows that you can switch between by pressing F1 or F2.

Step 9blank.gif Continue with Cisco IMC Configuration Utility Setup.


 

Cisco IMC Configuration Utility Setup

The following procedure is performed after you connect to the node and open the Cisco IMC Configuration Utility.


Step 1blank.gif Set NIC mode and NIC redundancy:

a.blank.gif Set the NIC mode to choose which ports to use to access Cisco IMC for node management (see Figure 1-2 for identification of the ports):

    • Shared LOM EXT (default)—This is the shared LOM extended mode, the factory-default setting. With this mode, the shared LOM and Cisco Card interfaces are both enabled.

In this mode, DHCP replies are returned to both the shared LOM ports and the Cisco card ports. If the node determines that the Cisco card connection is not getting its IP address from a Cisco UCS Manager node because the node is in standalone mode, further DHCP requests from the Cisco card are disabled. Use the Cisco Card NIC mode if you want to connect to Cisco IMC through a Cisco card in standalone mode.

    • Dedicated—The dedicated management port is used to access Cisco IMC. You must select a NIC redundancy and IP setting.
    • Shared LOM—The 1-Gb Ethernet ports are used to access Cisco IMC. You must select a NIC redundancy and IP setting.
    • Cisco Card—The ports on an installed Cisco UCS virtual interface card (VIC) are used to access Cisco IMC. You must select a NIC redundancy and IP setting.

See also the required VIC Slot setting below.

    • VIC Slot—If you use the Cisco Card NIC mode, you must select this setting to match where your VIC is installed. The choices are Riser1, Riser2, or Flex-LOM (the mLOM slot).

blank.gif If you select Riser1, slot 2 is the primary slot, but you can use slot 1.

blank.gif If you select Riser2, slot 5 is the primary slot, but you can use slot 4.

blank.gif If you select Flex-LOM, you must use an mLOM-style VIC in the mLOM slot.

b.blank.gif Use this utility to change the NIC redundancy to your preference. This node has three possible NIC redundancy settings:

blank.gif None—The Ethernet ports operate independently and do not fail over if there is a problem. This setting can be used only with the Dedicated NIC mode.

blank.gif Active-standby—If an active Ethernet port fails, traffic fails over to a standby port.

blank.gif Active-active—All Ethernet ports are utilized simultaneously. Shared LOM EXT mode can have only this NIC redundancy setting. Shared LOM and Cisco Card modes can have both Active-standby and Active-active settings.

Step 2blank.gif Choose whether to enable DHCP for dynamic network settings, or to enter static network settings.

note.gif

Noteblank.gif Before you enable DHCP, you must preconfigure your DHCP server with the range of MAC addresses for this node. The MAC address is printed on a label on the rear of the node. This node has a range of six MAC addresses assigned to Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.


The static IPv4 and IPv6 settings include the following (a small validation sketch follows this list):

  • The Cisco IMC IP address.
  • The prefix/subnet.

For IPv6, valid values are 1–127.

  • The gateway.

For IPv6, if you do not know the gateway, you can set it as none by entering :: (two colons).

  • The preferred DNS node address.

For IPv6, you can set this as none by entering :: (two colons).
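If you record these values before typing them into the utility, a small Python validation sketch such as the one below can catch mistakes early. The function names and the example record are hypothetical; only the constraints quoted above (IPv6 prefix lengths 1–127, and “::” meaning none) come from this guide.

import ipaddress

def valid_ipv6_prefix(prefix):
    # The Cisco IMC Configuration Utility accepts IPv6 prefix lengths 1-127.
    return 1 <= prefix <= 127

def optional_ipv6(value):
    # "::" (two colons) means "none" in the utility; otherwise the value must parse as IPv6.
    if value.strip() == "::":
        return None
    return ipaddress.IPv6Address(value)

# Hypothetical site record checked before it is entered into the utility.
record = {"imc_ip": "2001:db8::10", "prefix": 64, "gateway": "::", "dns": "2001:db8::1"}
assert valid_ipv6_prefix(record["prefix"]), "IPv6 prefix must be 1-127"
print("gateway:", optional_ipv6(record["gateway"]))
print("dns:", optional_ipv6(record["dns"]))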

Step 3blank.gif (Optional) Use this utility to make VLAN settings.

Step 4blank.gif Press F1 to go to the second settings window, then continue with the next step.

From the second window, you can press F2 to switch back to the first window.

Step 5blank.gif (Optional) Set a hostname for the node.

Step 6blank.gif (Optional) Enable dynamic DNS and set a dynamic DNS (DDNS) domain.

Step 7blank.gif (Optional) If you check the Factory Default check box, the node reverts to the factory defaults.

Step 8blank.gif (Optional) Set a default user password.

Step 9blank.gif (Optional) Enable auto-negotiation of port settings or set the port speed and duplex mode manually.

note.gif

Noteblank.gif Auto-negotiation is applicable only when you use the Dedicated NIC mode. Auto-negotiation sets the port speed and duplex mode automatically based on the switch port to which the node is connected. If you disable auto-negotiation, you must set the port speed and duplex mode manually.


Step 10blank.gif (Optional) Reset port profiles and the port name.

Step 11blank.gif Press F5 to refresh the settings that you made. You might have to wait about 45 seconds until the new settings appear and the message, “Network settings configured” is displayed before you reboot the node in the next step.

Step 12blank.gif Press F10 to save your settings and reboot the node.

note.gif

Noteblank.gif If you chose to enable DHCP, the dynamically assigned IP and MAC addresses are displayed on the console screen during bootup.



 

Use a browser and the IP address of the Cisco IMC to connect to the Cisco IMC management interface. The IP address is based upon the settings that you made (either a static address or the address assigned by your DHCP server).

note.gif

Noteblank.gif The default username for the node is admin. The default password is password.
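Before opening the browser, you can optionally confirm that something is answering at the Cisco IMC address. The following Python sketch is a best-effort reachability check only; the use of TLS on port 443 and the tolerance for a self-signed certificate are assumptions about a typical Cisco IMC setup, and the address shown is a placeholder.

import socket
import ssl

def imc_reachable(host, port=443, timeout=5.0):
    # Returns True if a TLS handshake succeeds at host:port; performs no login or configuration.
    context = ssl.create_default_context()
    context.check_hostname = False      # management controllers often use self-signed certificates
    context.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False

print(imc_reachable("192.0.2.10"))      # replace with the static or DHCP-assigned Cisco IMC address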