Maintaining the Node

Status LEDs and Buttons

This section contains information for interpreting LED states.

Front-Panel LEDs

Figure 1. Front Panel LEDs
Table 1. Front Panel LEDs, Definition of States

1. SAS/SATA drive fault LED (SAS/SATA drive trays)

Note 
NVMe solid state drive (SSD) tray LEDs behave differently than SAS/SATA drive tray LEDs.

  • Off—The hard drive is operating properly.

  • Amber—Drive fault detected.

  • Amber, blinking—The drive is rebuilding.

  • Amber, blinking at one-second intervals—The drive locate function is activated in the software.

2. SAS/SATA drive activity LED (SAS/SATA drive trays)

  • Off—There is no hard drive in the hard drive tray (no access, no fault).

  • Green—The hard drive is ready.

  • Green, blinking—The hard drive is reading or writing data.

1. NVMe SSD drive fault LED (NVMe drive trays)

  • Off—The drive is not in use and can be safely removed.

  • Green—The drive is in use and functioning properly.

  • Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.

  • Amber—The drive has failed.

  • Amber, blinking—A drive Locate command has been issued in the software.

2. NVMe SSD activity LED (NVMe drive trays)

  • Off—No drive activity.

  • Green, blinking—There is drive activity.

3. Power button/LED

  • Off—There is no AC power to the node.

  • Amber—The node is in standby power mode. Power is supplied only to the Cisco IMC and some motherboard functions.

  • Green—The node is in main power mode. Power is supplied to all node components.

4. Unit identification

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

5. System health

  • Green—The node is running in normal operating condition.

  • Green, blinking—The node is performing system initialization and memory check.

  • Amber, steady—The node is in a degraded operational state (minor fault). For example:

    • Power supply redundancy is lost.

    • CPUs are mismatched.

    • At least one CPU is faulty.

    • At least one DIMM is faulty.

    • At least one drive in a RAID configuration failed.

  • Amber, 2 blinks—There is a major fault with the system board.

  • Amber, 3 blinks—There is a major fault with the memory DIMMs.

  • Amber, 4 blinks—There is a major fault with the CPUs.

6. Fan status

  • Green—All fan modules are operating properly.

  • Amber, blinking—One or more fan modules breached the non-recoverable threshold.

7. Temperature status

  • Green—The node is operating at normal temperature.

  • Amber, steady—One or more temperature sensors breached the critical threshold.

  • Amber, blinking—One or more temperature sensors breached the non-recoverable threshold.

8. Power supply status

  • Green—All power supplies are operating normally.

  • Amber, steady—One or more power supplies are in a degraded operational state.

  • Amber, blinking—One or more power supplies are in a critical fault state.

9. Network link activity

  • Off—The Ethernet LOM port link is idle.

  • Green—One or more Ethernet LOM ports are link-active, but there is no activity.

  • Green, blinking—One or more Ethernet LOM ports are link-active, with activity.

10. DVD drive activity

  • Off—The drive is idle.

  • Green, steady—The drive is spinning up a disk.

  • Green, blinking—The drive is accessing data.

Rear-Panel LEDs

Figure 2. Rear Panel LEDs
Table 2. Rear Panel LEDs, Definition of States

1. Rear unit identification

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

2. 1-Gb/10-Gb Ethernet link speed (on both LAN1 and LAN2)

  • Off—Link speed is 100 Mbps.

  • Amber—Link speed is 1 Gbps.

  • Green—Link speed is 10 Gbps.

3. 1-Gb/10-Gb Ethernet link status (on both LAN1 and LAN2)

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

4. 1-Gb Ethernet dedicated management link speed

  • Off—Link speed is 10 Mbps.

  • Amber—Link speed is 100 Mbps.

  • Green—Link speed is 1 Gbps.

5. 1-Gb Ethernet dedicated management link status

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

6. Power supply status (one LED per power supply unit)

AC power supplies:

  • Off—No AC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

DC power supply (HX-PSUV2-1050DC):

  • Off—No DC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

7. SAS/SATA drive fault LED (SAS/SATA drive trays)

Note 
NVMe solid state drive (SSD) tray LEDs behave differently than SAS/SATA drive tray LEDs.

  • Off—The hard drive is operating properly.

  • Amber—Drive fault detected.

  • Amber, blinking—The drive is rebuilding.

  • Amber, blinking at one-second intervals—The drive locate function is activated in the software.

8. SAS/SATA drive activity LED (SAS/SATA drive trays)

  • Off—There is no hard drive in the hard drive tray (no access, no fault).

  • Green—The hard drive is ready.

  • Green, blinking—The hard drive is reading or writing data.

7. NVMe SSD drive fault LED (NVMe drive trays)

  • Off—The drive is not in use and can be safely removed.

  • Green—The drive is in use and functioning properly.

  • Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.

  • Amber—The drive has failed.

  • Amber, blinking—A drive Locate command has been issued in the software.

8. NVMe SSD activity LED (NVMe drive trays)

  • Off—No drive activity.

  • Green, blinking—There is drive activity.

Internal Diagnostic LEDs

The node has internal fault LEDs for CPUs, DIMMs, and fan modules.

Figure 3. Internal Diagnostic LED Locations

1. Fan module fault LEDs (one on the top of each fan module)

  • Amber—Fan has a fault or is not fully seated.

  • Green—Fan is OK.

2. CPU fault LEDs (one behind each CPU socket on the motherboard)

These LEDs operate only when the node is in standby power mode.

  • Amber—CPU has a fault.

  • Off—CPU is OK.

3. DIMM fault LEDs (one behind each DIMM socket on the motherboard)

These LEDs operate only when the node is in standby power mode.

  • Amber—DIMM has a fault.

  • Off—DIMM is OK.

Preparing For Component Installation

This section includes information and tasks that help prepare the node for component installation.

Required Equipment For Service Procedures

The following tools and equipment are used to perform the procedures in this chapter:

  • T-30 Torx driver (supplied with replacement CPUs for heatsink removal)

  • #1 flat-head screwdriver (used during CPU or heatsink replacement)

  • #1 Phillips-head screwdriver (for M.2 SSD and intrusion switch replacement)

  • Electrostatic discharge (ESD) strap or other grounding equipment such as a grounded mat

Shutting Down and Removing Power From the Node

The node can run in either of two power modes:

  • Main power mode—Power is supplied to all node components and any operating system on your drives can run.

  • Standby power mode—Power is supplied only to the service processor and certain components. In this mode, you can remove power cords from the node without harming the operating system or data.


Caution

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node, as directed in the service procedures.

You can shut down the node by using the front-panel power button or the software management interfaces.


Shutting Down Using the Power Button

Procedure

Step 1

Check the color of the Power button/LED:

  • Amber—The node is already in standby mode, and you can safely remove power.

  • Green—The node is in main power mode and must be shut down before you can safely remove power.

Step 2

Invoke either a graceful shutdown or an emergency shutdown:

Caution 
To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system.
  • Graceful shutdown—Press and release the Power button. The operating system performs a graceful shutdown, and the node goes to standby mode, which is indicated by an amber Power button/LED.

  • Emergency shutdown—Press and hold the Power button for 4 seconds to force the main power off and immediately enter standby mode.

Step 3

If a service procedure instructs you to completely remove power from the node, disconnect all power cords from the power supplies in the node.


Shutting Down Using The Cisco IMC GUI

You must log in with user or admin privileges to perform this task.

Procedure

Step 1

In the Navigation pane, click the Server tab.

Step 2

On the Server tab, click Summary.

Step 3

In the Actions area, click Power Off Server.

Step 4

Click OK.

The operating system performs a graceful shutdown, and the node goes to standby mode, which is indicated by an amber Power button/LED.

Step 5

If a service procedure instructs you to completely remove power from the node, disconnect all power cords from the power supplies in the node.


Shutting Down Using The Cisco IMC CLI

You must log in with user or admin privileges to perform this task.

Procedure

Step 1

At the server prompt, enter:

Example:
server# scope chassis
Step 2

At the chassis prompt, enter:

Example:
server/chassis# power shutdown

The operating system performs a graceful shutdown, and the node goes to standby mode, which is indicated by an amber Power button/LED.

Step 3

If a service procedure instructs you to completely remove power from the node, disconnect all power cords from the power supplies in the node.
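Before you disconnect the cords, you can optionally confirm from the same chassis scope that the node has reached standby. The following is a minimal sketch; the exact fields that show detail reports can vary by Cisco IMC release.

Example:
server/chassis# show detail

In the output, the Power field should read off once the node is in standby power mode (main power off; 12 V standby power remains on).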


Removing the Node Top Cover

Procedure


Step 1

Remove the top cover:

  1. If the cover latch is locked, slide the lock sideways to unlock it.

    When the latch is unlocked, the handle pops up so that you can grasp it.

  2. Lift the end of the latch so that it pivots up to 90 degrees.

  3. Slide the cover back while lifting the top cover straight up from the node, and set the cover aside.

Step 2

Replace the top cover:

  1. With the latch in the fully open position, place the cover on top of the node about one-half inch (1.27 cm) behind the lip of the front cover panel.

  2. Slide the cover forward until the latch makes contact.

  3. Press the latch down to the closed position. The cover is pushed forward to the closed position as you push down the latch.

  4. Lock the latch by sliding the lock button sideways to the left.

    Locking the latch ensures that the node latch handle does not protrude when you install the node in the rack.

Figure 4. Removing the Top Cover

1. Cover lock

2. Cover latch handle


Serial Number Location

The serial number for the node is printed on a label on the top of the node, near the front. See Removing the Node Top Cover.

Hot Swap vs Hot Plug

Some components can be removed and replaced without shutting down and removing power from the node. This type of replacement has two varieties: hot-swap and hot-plug.

  • Hot-swap replacement—You do not have to shut down the component in the software or operating system. This applies to the following components:

    • SAS/SATA hard drives

    • SAS/SATA solid state drives

    • Cooling fan modules

    • Power supplies (when redundant as 1+1)

  • Hot-plug replacement—You must take the component offline before removing it (see the example after this list). This applies to the following component:

    • NVMe PCIe solid state drives
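On a Linux host, one common way to take an NVMe PCIe SSD offline before removal is the PCI sysfs interface. The following is a minimal sketch under stated assumptions: the device name nvme0n1 and the PCIe address 0000:5e:00.0 are placeholders for your drive's actual values, and your operating system's recommended procedure may differ.

Example:
# Resolve the drive to its NVMe controller; the resolved path contains the
# controller's PCIe address (the device name here is a placeholder).
readlink -f /sys/block/nvme0n1/device
# Request an OS-informed removal; the drive-tray LED blinks green while the driver unloads.
echo 1 > /sys/bus/pci/devices/0000:5e:00.0/remove
# After inserting the replacement drive, rescan the PCI bus so that the OS rediscovers it.
echo 1 > /sys/bus/pci/rescan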

Replacing the Air Duct

The node has an air duct under the top sheet metal cover. The air duct ensures proper cooling and air flow across the node, from intake (the cold aisle of the data center) to exhaust (the hot aisle). The air duct is in the middle of the node and covers the CPU and DIMMs.

To replace the node's air duct, use the following procedures:

Removing the Air Duct

Use this procedure to remove the air duct when needed.

Procedure


Step 1

Remove the server top cover.

Step 2

Locate the detents for the air duct.

The following illustration highlights the middle detent for illustrative purposes only. When removing the air duct, always grasp the detents closest to the chassis sidewalls (left and right).

Step 3

Grasp the left and right detents and lift the air duct out of the chassis.

Note 

You might need to slide the air duct towards the back of the server while lifting the air duct up.


What to do next

When you are done servicing the node, install the air duct. See Installing the Air Duct.

Installing the Air Duct

The air duct sits behind the front-loading drive cage and covers the CPU and DIMMs in the middle of the node.

Procedure


Step 1

Orient the air duct as shown.

Step 2

Lower the air duct into place and gently press down to ensure that all of its edges sit flush.

If the air duct is not seated correctly, it can obstruct installing the node's top cover.

Step 3

When the air duct is correctly seated, attach the node's top cover.

The node top cover should sit flush so that the metal tabs on the top cover match the indents in the top edges of the air duct.


Removing and Replacing Components


Warning

Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the system unless all cards, faceplates, front covers, and rear covers are in place.

Statement 1029



Caution

When handling node components, handle them only by carrier edges and use an electrostatic discharge (ESD) wrist-strap or other grounding device to avoid damage.

Tip

You can press the unit identification button on the front panel or rear panel to turn on a flashing, blue unit identification LED on both the front and rear panels of the node. This button allows you to locate the specific node that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs remotely by using the Cisco IMC interface.
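For example, the unit identification (locator) LED can be toggled from the Cisco IMC CLI. The following is a minimal sketch; the exact command names can vary slightly by Cisco IMC release.

Example:
server# scope chassis
server/chassis# set locator-led on
server/chassis*# commit

To turn the LED off again, use set locator-led off followed by commit.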

This section describes how to install and replace node components.

Serviceable Component Locations

This topic shows the locations of the field-replaceable components and service-related items. The view in the following figure shows the server with the top cover removed.

Figure 5. Cisco HX C240 M6 Server, Serviceable Component Locations

1. Front-loading drive bays

2. Cooling fan modules (six, hot-swappable)

3. DIMM sockets on motherboard (16 per CPU)

See DIMM Population Rules and Memory Performance Guidelines for DIMM slot numbering.

Note 

An air baffle rests on top of the DIMMs and CPUs when the server is operating. The air baffle is not displayed in this illustration.

4. CPU socket 1

5. CPU socket 2

6. M.2 RAID Controller

7. PCIe riser 3 (PCIe slots 7 and 8, numbered from bottom to top), with the following options:

  • 3A (Default Option)—Slots 7 (x16 mechanical, x8 electrical) and 8 (x16 mechanical, x8 electrical). Both slots can accept a full-height, full-length GPU card.

  • 3B (Storage Option)—Slots 7 (x24 mechanical, x4 electrical) and 8 (x24 mechanical, x4 electrical). Both slots can accept 2.5-inch SFF universal HDDs.

  • 3C (GPU Option)—Slot 7 (x16 mechanical, x16 electrical) and slot 8 empty (NCSI support limited to one slot at a time). Slot 7 can support a full-height, full-length GPU card.

8. PCIe riser 2 (PCIe slots 4, 5, and 6, numbered from bottom to top), with the following option:

  • 2A (Default Option)—Slot 4 (x24 mechanical, x8 electrical) supports a full-height, ¾-length card; slot 5 (x24 mechanical, x16 electrical) supports a full-height, full-length GPU card; slot 6 (x16 mechanical, x8 electrical) supports a full-height, full-length card.

9. PCIe riser 1 (PCIe slots 1, 2, and 3, numbered from bottom to top), with the following options:

  • 1A (Default Option)—Slot 1 (x24 mechanical, x8 electrical) supports a full-height, ¾-length card; slot 2 (x24 mechanical, x16 electrical) supports a full-height, full-length GPU card; slot 3 (x16 mechanical, x8 electrical) supports a full-height, full-length card.

  • 1B (Storage Option)—Slot 1 (x24 mechanical, x8 electrical) supports a full-height, ¾-length card; slot 2 (x4 electrical) supports a 2.5-inch SFF universal HDD; slot 3 (x4 electrical) supports a 2.5-inch SFF universal HDD.

Replacing Front-Loading SAS/SATA Drives


Note

You do not have to shut down the node or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.

To replace rear-loading SAS/SATA drives, see Replacing Rear-Loading SAS/SATA Drives.

Front-Loading SAS/SATA Drive Population Guidelines

The node is orderable in five different versions, each with a different front panel/drive-backplane configuration.

  • Cisco HyperFlex C240 M6 24 SAS/SATA—Small form-factor (SFF) drives, with 24-drive backplane.

    • Front-loading drive bays 1 through 24 support 2.5-inch SAS/SATA drives.

    • Optionally, front-loading drive bays 1 through 4 can support 2.5-inch NVMe SSDs.

  • Cisco HyperFlex C240 M6 24 NVMe—SFF drives, with 24-drive backplane.

    • Front-loading drive bays 1 through 24 support 2.5-inch NVMe PCIe SSDs only.

  • Cisco HyperFlex C240 M6 12 SAS/SATA plus optical drive—SFF drives, with 12-drive backplane and DVD drive option.

    • Front-loading drive bays 1 through 12 support 2.5-inch SAS/SATA drives.

    • Optionally, front-loading drive bays 1 through 4 can support 2.5-inch NVMe SSDs.

  • Cisco HyperFlex C240 M6 12 NVMe—SFF drives, with 12-drive backplane.

    • Front-loading drive bays 1 through 12 support 2.5-inch NVMe PCIe SSDs only.

  • Cisco HyperFlex C240 M6 12 LFF SAS/SATA—Large form-factor (LFF) drives, with 12-drive backplane.

    • Front-loading drive bays 1 through 12 support 3.5-inch SAS-only drives.

    • Optionally, up to 4 mid-plane mounted SAS-only HDDs can be supported.

    • Optionally, rear drive bays can support up to 4 SFF SAS/SATA or NVMe HDDs.

Drive bay numbering is shown in the following figures.

Figure 6. Small Form-Factor Drive (24-Drive) Versions, Drive Bay Numbering
Figure 7. Large Form-Factor Drive (12-Drive) Version, Drive Bay Numbering

Observe these drive population guidelines for optimum performance:

  • When populating drives, add drives to the lowest-numbered bays first.

  • Front-loading drives are hot pluggable, but each drive requires a 10-second delay between hot removal and hot insertion.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

  • You can mix SAS/SATA hard drives and SAS/SATA SSDs in the same node. However, you cannot configure a logical volume (virtual drive) that contains a mix of hard drives and SSDs. That is, when you create a logical volume, it must contain all SAS/SATA hard drives or all SAS/SATA SSDs.

4K Sector Format SAS/SATA Drives Considerations

  • You must boot 4K sector format drives in UEFI mode, not legacy mode. See the procedures in this section.

  • Do not configure 4K sector format and 512-byte sector format drives as part of the same RAID volume.

  • For operating system support on 4K sector drives, see the interoperability matrix tool for your node: Hardware and Software Interoperability Matrix Tools
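If you are unsure whether a drive uses 4K-native or 512-byte sectors, you can check from a running Linux OS. The following is a minimal sketch assuming a Linux host with lsblk available; native 4K drives report 4096 for both sizes, while 512-byte emulation drives report a 512-byte logical size with a 4096-byte physical size.

Example:
# Show the logical and physical sector size for each block device.
lsblk -o NAME,LOG-SEC,PHY-SEC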


Setting Up UEFI Mode Booting in the BIOS Setup Utility
Procedure

Step 1

Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 2

Go to the Boot Options tab.

Step 3

Set UEFI Boot Options to Enabled.

Step 4

Under Boot Option Priorities, set your OS installation media (such as a virtual DVD) as your Boot Option #1.

Step 5

Go to the Advanced tab.

Step 6

Select LOM and PCIe Slot Configuration.

Step 7

Set the PCIe Slot ID: HBA Option ROM to UEFI Only.

Step 8

Press F10 to save changes and exit the BIOS setup utility. Allow the node to reboot.

Step 9

After the OS installs, verify the installation:

  1. Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

  2. Go to the Boot Options tab.

  3. Under Boot Option Priorities, verify that the OS you installed is listed as your Boot Option #1.


Setting Up UEFI Mode Booting in the Cisco IMC GUI
Procedure

Step 1

Use a web browser and the IP address of the node to log into the Cisco IMC GUI management interface.

Step 2

Navigate to Server > BIOS.

Step 3

Under Actions, click Configure BIOS.

Step 4

In the Configure BIOS Parameters dialog, select the Advanced tab.

Step 5

Go to the LOM and PCIe Slot Configuration section.

Step 6

Set the PCIe Slot: HBA Option ROM to UEFI Only.

Step 7

Click Save Changes. The dialog closes.

Step 8

Under BIOS Properties, set Configured Boot Order to UEFI.

Step 9

Under Actions, click Configure Boot Order.

Step 10

In the Configure Boot Order dialog, click Add Local HDD.

Step 11

In the Add Local HDD dialog, enter the information for the 4K sector format drive and make it first in the boot order.

Step 12

Save changes and reboot the node. The changes you made will be visible after the system reboots.


Replacing a Front-Loading SAS/SATA Drive


Note

You do not have to shut down the node or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.

Procedure


Step 1

Remove the drive that you are replacing or remove a blank drive tray from the bay:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 2

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 8. Replacing a Drive in a Drive Tray

1. Ejector lever

2. Release button

3. Drive tray screws (two on each side)

4. Drive removed from drive tray


Replacing Rear-Loading SAS/SATA Drives


Note

You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.

Rear-Loading SAS/SATA Drive Population Guidelines

The rear drive bay support differs by node PID and which type of RAID controller is used in the node:

  • HX C240 M6 24 SAS/SATA—Small form-factor (SFF) drives, with 24-drive backplane.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives

    • Intel™ Virtual RAID on CPU—Rear drive bays support NVMe drives only.

  • HX C240 M6 24 NVMe—SFF drives, with 24-drive backplane.

    • Rear drive bays support only NVMe SSDs.

  • HX C240 M6 8 SAS/SATA plus optical—SFF drives, with 8-drive backplane and DVD drive option.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives

    • Intel™ Virtual RAID on CPU—Rear drive bays support NVMe drives only.

  • HX C240 M6 12 LFF—Large form-factor (LFF) drives, with 12-drive backplane.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives

    • Intel™ Virtual RAID on CPU—Rear drive bays support NVMe drives only.

  • The rear drive bay numbering follows the front-drive bay numbering in each node version:

    • 8-drive node—rear bays are numbered bays 9 and 10.

    • 12-drive node—rear bays are numbered bays 13 and 14.

    • 24-drive node—rear bays are numbered bays 25 and 26.

  • When populating drives, add drives to the lowest-numbered bays first.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

  • You can mix SAS/SATA hard drives and SAS/SATA SSDs in the same node. However, you cannot configure a logical volume (virtual drive) that contains a mix of hard drives and SSDs. That is, when you create a logical volume, it must contain all SAS/SATA hard drives or all SAS/SATA SSDs.

Replacing a Rear-Loading SAS/SATA Drive


Note

You do not have to shut down the node or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.
Procedure

Step 1

Remove the drive that you are replacing or remove a blank drive tray from the bay:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 2

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 9. Replacing a Drive in a Drive Tray

1. Ejector lever

2. Release button

3. Drive tray screws (two on each side)

4. Drive removed from drive tray


Replacing Mid-Mounted SAS/SATA Drives (LFF Server)

Mid-mounted drives are supported on the LFF node only. These drives connect directly to the midplane, so there are no cables to disconnect as part of the replacement procedure.

Mid-mounted drives can be hot swapped and hot inserted, so you do not need to disconnect facility power.

Procedure


Step 1

Open the node top cover.

Step 2

Grasp the handle for the mid-mount drive cage, and swing the cage cover open.

When the cage cover is open, it will be pointing up at a 90-degree angle.

Step 3

Grasping the cage cover handle, pull up on the drive cage until the bottom row of drives clears the top of the node.

When pulling on the mid-mount drive cage, it will arc upward.

Step 4

Grasp the drive handle and pull the drive out of the mid-mount drive cage.

Step 5

Orient the drive so that the handle is at the bottom and align it with its drive bay.

Step 6

Holding the drive level, slide it into the drive bay until it connects with the midplane.

Step 7

Push down on the drive cage so that it seats into the node.

Step 8

Grasp the handle and close the drive cage cover.

Note 

Make sure that the drive cage cover is completely closed, and the drive cage is completely seated in the node. When the drive cage is completely seated, its top is flush with the fans and rear PCIe riser cages.

Step 9

Install the node's top cover.

If the node's top cover does not close easily, check that the mid-mount drive cage is completely seated into the node.


Basic Troubleshooting: Reseating a SAS/SATA Drive

Due to a drive manufacturing issue, it is possible that a false positive UBAD (Unconfigured Bad) error can occur on SAS/SATA HDDs installed in the node.

  • Only drives that are managed by the UCS MegaRAID controller are affected.

  • Drives can be affected regardless where they are installed in the node (front-loaded, rear-loaded, and so on).

  • Both SFF and LFF form factors can be affected.

  • Drives installed in all Cisco UCS C-Series nodes from the M3 generation and later can be affected.

  • Drives can be affected regardless of whether they are configured for hotplug or not.

  • The UBAD error is not always valid, so the drive is not always defective or in need of repair or replacement. However, it is also possible that the error is correct, or the drive will need replacement.

Before submitting the drive to the RMA process, it is a best practice to reseat the drive. If the false UBAD error exists, reseating the drive can clear it. If successful, reseating the drive reduces inconvenience, cost, and service interruption, and optimizes your node uptime.


Note

Reseat the drive only if a UBAD error occurs. Other errors are transient, and you should not attempt diagnostics and troubleshooting without the assistance of Cisco personnel. Contact Cisco TAC for assistance with other drive errors.


To reseat the drive, see Reseating a SAS/SATA Drive.

Reseating a SAS/SATA Drive

Sometimes, SAS/SATA drives can throw a false UBAD error and reseating the drive can clear the error.

Use the following procedure to reseat the drive.


Caution

This procedure requires powering down the node, which will cause a service interruption.


Before you begin

Before attempting this procedure, be aware of the following:

  • Before reseating the drive, it is a best practice to back up any data on it.

  • When reseating the drive, make sure to reuse the same drive bay.

    • Do not move the drive to a different slot.

    • Do not move the drive to a different node.

    • If you do not reuse the same slot, the Cisco management software (for example, Cisco IMM) might require a rescan/rediscovery of the node.

  • Regardless of the status of hotplug/hot insertion on the drive, it is a best practice to gracefully power down the drive before physically reseating it. To gracefully power down the drive, use the appropriate Cisco management platform:

    • Cisco Intersight Managed Mode (Cisco IMM)

    • Cisco UCS Manager (UCSM)

    • Cisco Integrated Management Controller (CIMC)

Procedure

Step 1

If you have not powered down the node, do so now.

  1. Use your node management software to gracefully power down the node.

    See the appropriate Cisco management software documentation.

  2. If node power down through software is not available, you can power down the node by pressing the power button.

    See Status LEDs and Buttons.

Step 2

Choose the appropriate option:

  1. For a front-loading drive, see Replacing a Front-Loading SAS/SATA Drive

  2. For a rear-loading drive, see Replacing a Rear-Loading SAS/SATA Drive

  3. For a mid-mount drive, see Replacing Mid-Mounted SAS/SATA Drives (LFF Server)

Note 

While the drive is removed, it is a best practice to perform a visual inspection. Check the drive bay to ensure that no dust or debris is present. Also, check the connector on the back of the drive and the connector on the inside of the node for any obstructions or damage.

Step 3

When the drive is correctly reseated, restart the node.

Step 4

During node boot up, watch the drive's LEDs to verify correct operation.

See Status LEDs and Buttons.

Step 5

If reseating the drive does not clear the UBAD error, choose the appropriate option:

  1. Contact Cisco Systems for assistance with troubleshooting.

  2. Begin an RMA of the errored drive.


Replacing Front-Loading NVMe SSDs

This section is for replacing 2.5-inch or 3.5-inch form-factor NVMe solid-state drives (SSDs) in front-panel drive bays.

Front-Loading NVMe SSD Population Guidelines

The front drive bay support for 2.5-inch NVMe SSDs differs by node PID:

  • HX C240 M6 SFF 24 SAS/SATA—Small form-factor (SFF) drives, with 24-drive backplane. Drive bays 1 through 4 support 2.5-inch NVMe SSDs.

  • HX C240 M6 24 NVMe—SFF drives, with 24-drive backplane. Drive bays 1 through 24 support only 2.5-inch NVMe SSDs.

  • HX C240 M6 12 SAS/SATA plus optical—SFF drives, with 12-drive backplane and DVD drive option. Drive bays 1 through 4 support 2.5-inch NVMe SSDs.

  • HX C240 M6 12 NVMe—SFF drives, with 12-drive backplane. Drive bays 1 through 12 support only 2.5-inch NVMe SSDs.

  • HX C240 M6 LFF—Large form-factor (LFF) drives, with 12-drive backplane. Drive bays 1 through 4 support 2.5-inch NVMe SSDs. If you use 2.5-inch NVMe SSDs, a size-converter drive tray (HX-LFF-SFF-SLED2) is required for this version of the node.

Front-Loading NVMe SSD Requirements and Restrictions

Observe these requirements:

  • The node must have two CPUs. PCIe riser 2 is not available in a single-CPU system.

  • PCIe cable. This is the cable that carries the PCIe signal from the front-panel drive backplane to PCIe riser 1B or 3B. The cable differs by node version:

    • For small form factor (SFF) drive versions of the node: CBL-NVME-C240SFF

    • For the large form factor (LFF) drive version of the node: CBL-NVME-C240LFF

  • Hot-plug support must be enabled in the system BIOS. If you ordered the system with NVMe drives, hot-plug support is enabled at the factory.

Observe these restrictions:

  • 2.5-inch NVMe SSDs support booting only in UEFI mode. Legacy boot is not supported. For instructions on setting up UEFI boot, see Setting Up UEFI Mode Booting in the BIOS Setup Utility or Setting Up UEFI Mode Booting in the Cisco IMC GUI.

  • You cannot control NVMe PCIe SSDs with a SAS RAID controller because NVMe SSDs interface with the node via the PCIe bus.

  • You can combine NVMe SSDs in the same system, but they must all be from the same partner brand. For example, mixing two Intel NVMe SFF 2.5-inch SSDs with two HGST SSDs is an invalid configuration.

  • UEFI boot is supported in all supported operating systems. Hot-insertion and hot-removal are supported in all supported operating systems except VMware ESXi.

Enabling Hot-Plug Support in the System BIOS

Hot-plug (OS-informed hot-insertion and hot-removal) is disabled in the system BIOS by default.

  • If the system was ordered with NVMe PCIe SSDs, the setting was enabled at the factory. No action is required.

  • If you are adding NVMe PCIe SSDs after-factory, you must enable hot-plug support in the BIOS. See the following procedures.

Enabling Hot-Plug Support Using the BIOS Setup Utility
Procedure

Step 1

Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 2

Navigate to Advanced > PCI Subsystem Settings > NVMe SSD Hot-Plug Support.

Step 3

Set the value to Enabled.

Step 4

Save your changes and exit the utility.


Enabling Hot-Plug Support Using the Cisco IMC GUI
Procedure

Step 1

Use a browser to log in to the Cisco IMC GUI for the node.

Step 2

Navigate to Compute > BIOS > Advanced > PCI Configuration.

Step 3

Set NVME SSD Hot-Plug Support to Enabled.

Step 4

Save your changes.


Replacing a Front-Loading NVMe SSD

This topic describes how to replace 2.5-inch form-factor NVMe SSDs in the front-panel drive bays.


Note

OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all supported operating systems except VMware ESXi.



Note

OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug Support in the System BIOS.


Procedure

Step 1

Remove an existing front-loading NVMe SSD:

  1. Shut down the NVMe SSD to initiate an OS-informed removal. Use your operating system interface to shut down the drive, and then observe the drive-tray LED:

    • Green—The drive is in use and functioning properly. Do not remove.

    • Green, blinking—The driver is unloading following a shutdown command. Do not remove.

    • Off—The drive is not in use and can be safely removed.

  2. Press the release button on the face of the drive tray.

  3. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  4. Remove the four drive tray screws that secure the SSD to the tray and then lift the SSD out of the tray.

Note 
If this is the first time that front-loading NVMe SSDs are being installed in the node, you must install a PCIe cable with PCIe riser 2C.
Step 2

Install a new front-loading NVMe SSD:

  1. Place a new SSD in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 3

Observe the drive-tray LED and wait until it returns to solid green before accessing the drive:

  • Off—The drive is not in use.

  • Green, blinking—The driver is initializing following hot-plug insertion.

  • Green—The drive is in use and functioning properly.

Figure 10. Replacing a Drive in a Drive Tray

1. Ejector lever

2. Release button

3. Drive tray screws (two on each side)

4. Drive removed from drive tray


Cabling NVMe Drives 1 Through 4 (UCS C240 M6 24 SFF Drives Only)

When adding or replacing NVMe drives, two specific cables are required and are available through CBL-FNVME-240M6=.

  • One NVMe C cable (74-126742-01), which connects front-loading drives 1 and 2 to the motherboard.

  • One NVMe D cable (74-124687-01), which connects front-loading drives 3 and 4 to the motherboard.

Connectors are keyed, and they are different at each end of the cable to prevent improper installation. The backplane connector IDs are silk screened onto the interior of the node.

For this task, you need the appropriate cables.

Before you begin

Specific cables are required to add or replace front-loading NVMe drives 1 through 4 in Cisco HX C240 M6 24-SFF drive nodes. This procedure is for Cisco HX C240 M6 24-drive nodes only.

Procedure

Step 1

Remove the node top cover.

See Removing the Node Top Cover.

Step 2

Remove the fan tray.

See Removing the Fan Tray.

Step 3

Locate the NVMe backplane connectors.

1. Backplane connector, NVMe D

2. Backplane connector, NVMe C

3. Motherboard connector, NVMe D

4. Motherboard connector, NVMe C

Step 4

Orient the NVMe C cable correctly and lower it into place.

Step 5

Attach both ends of the NVMe C cable.

Note 

This cable must be installed first to allow the NVMe D cable to lie on top of it.

Step 6

Orient the NVMe D cable correctly, lower it into place, and attach both ends.

Note 

The NVMe D cable lies on top of the NVMe C cable.

Step 7

If drives are installed in any of slots 1 through 4, look at the drive LEDs to verify correct operation.

See Front-Panel LEDs.
Step 8

When drives are successfully booted up to runtime, reinstall the fan tray.

See Installing the Fan Tray.

Step 9

Replace the top cover.


Replacing Rear-Loading NVMe SSDs

This section is for replacing 2.5-inch form-factor NVMe solid-state drives (SSDs) in rear-panel drive bays.

Rear-Loading NVMe SSD Population Guidelines

The rear drive bay support differs by node PID and which type of RAID controller is used in the node for non-NVMe drives:

  • HX C240 M6 24 SAS/SATA—Small form-factor (SFF) drives, with 24-drive backplane.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives

  • HX C240 M6 12 SAS/SATA—SFF drives, with 12-drive backplane.

    • Rear drive bays support only NVMe SSDs.

  • HX C240 M6 24 NVMe—SFF drives, with 24-drive backplane.

    • Hardware RAID—Rear drive bays support NVMe drives only.

  • HX C240 M6 12 NVMe—SFF drives, with 12-drive backplane.

    • Hardware RAID—Rear drive bays support NVMe drives only.

  • HX C240 M6 12 LFF—Large form-factor (LFF) drives, with 12-drive backplane.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives

  • The rear drive bay numbering follows the front-drive bay numbering in each node version:

    • 12-drive node—rear bays are numbered bays 103 and 104.

    • 24-drive node—rear bays are numbered bays 101 through 104.

  • When populating drives, add drives to the lowest-numbered bays first.

  • Drives are hot pluggable, but each drive requires a 10-second delay between hot removal and hot insertion.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

Rear-Loading NVMe SSD Requirements and Restrictions

Observe these requirements:

  • The node must have two CPUs. PCIe riser 2 is not available in a single-CPU system.

  • PCIe riser 1A and 3A support NVMe rear drives.

  • Rear PCIe cable and rear drive backplane.

  • Hot-plug support must be enabled in the system BIOS. If you ordered the system with NVMe drives, hot-plug support is enabled at the factory.

Observe these restrictions:

  • NVMe SSDs support booting only in UEFI mode. Legacy boot is not supported. For instructions on setting up UEFI boot, see Setting Up UEFI Mode Booting in the BIOS Setup Utility or Setting Up UEFI Mode Booting in the Cisco IMC GUI.

  • You cannot control NVMe PCIe SSDs with a SAS RAID controller because NVMe SSDs interface with the node via the PCIe bus.

  • You can combine 2.5-inch NVMe SSDs in the same system, but they must all be from the same partner brand. For example, mixing two Intel NVMe SFF 2.5-inch SSDs with two HGST SSDs is an invalid configuration.

  • UEFI boot is supported in all supported operating systems. Hot-insertion and hot-removal are supported in all supported operating systems except VMware ESXi.

Replacing a Rear-Loading NVMe SSD

This topic describes how to replace 2.5-inch form-factor NVMe SSDs in the rear-panel drive bays.


Note

OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all supported operating systems except VMware ESXi.



Note

OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug Support in the System BIOS.


Procedure

Step 1

Remove an existing rear-loading NVMe SSD:

  1. Shut down the NVMe SSD to initiate an OS-informed removal. Use your operating system interface to shut down the drive, and then observe the drive-tray LED:

    • Green—The drive is in use and functioning properly. Do not remove.

    • Green, blinking—The driver is unloading following a shutdown command. Do not remove.

    • Off—The drive is not in use and can be safely removed.

  2. Press the release button on the face of the drive tray.

  3. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  4. Remove the four drive tray screws that secure the SSD to the tray and then lift the SSD out of the tray.

Note 

If this is the first time that rear-loading NVMe SSDs are being installed in the node, you must install PCIe riser 2B or 2C and a rear NVMe cable kit.

Step 2

Install a new rear-loading NVMe SSD:

  1. Place a new SSD in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 3

Observe the drive-tray LED and wait until it returns to solid green before accessing the drive:

  • Off—The drive is not in use.

  • Green, blinking—The driver is initializing following hot-plug insertion.

  • Green—The drive is in use and functioning properly.

Figure 11. Replacing a Drive in a Drive Tray

1. Ejector lever

2. Release button

3. Drive tray screws (two on each side)

4. Drive removed from drive tray


Basic Troubleshooting: Reseating an NVMe Drive

Due to a drive manufacturing issue, it is possible that a false positive UBAD (Unconfigured Bad) error can occur on NVMe SSDs installed in the node.

  • Only drives that are managed by the UCS MegaRAID controller are affected.

  • Drives can be affected regardless where they are installed in the node (front-loaded, rear-loaded, and so on).

  • Drives installed in all Cisco UCS C-Series nodes from the M3 generation and later can be affected.

  • Drives can be affected regardless of whether they are configured for hotplug or not.

  • The UBAD error is not always valid, so the drive is not always defective or in need of repair or replacement. However, it is also possible that the error is correct, or the drive will need replacement.

Before submitting the drive to the RMA process, it is a best practice to reseat the drive. If the false UBAD error exists, reseating the drive can clear it. If successful, reseating the drive reduces inconvenience, cost, and service interruption, and optimizes your node uptime.


Note

Reseat the drive only if a UBAD error occurs. Other errors are transient. You should not attempt diagnostics and troubleshooting without the assistance of Cisco personnel. Contact Cisco TAC for assistance with other drive errors.


To reseat the drive, see Reseating an NVMe Drive.

Reseating an NVMe Drive

Sometimes, NVMe drives can throw a false UBAD error and reseating the drive can clear the error.

Use the following procedure to reseat the drive.


Caution

This procedure requires powering down the node, which will cause a service interruption.


Before you begin

Before attempting this procedure, be aware of the following:

  • Before reseating the drive, it is a best practice to back up any data on it.

  • When reseating the drive, make sure to reuse the same drive bay.

    • Do not move the drive to a different slot.

    • Do not move the drive to a different node.

    • If you do not reuse the same slot, the Cisco management software (for example, Cisco IMM) might require a rescan/rediscovery of the node.

  • Regardless of the status of hotplug/hot insertion on the drive, it is a best practice to gracefully power down the drive before physically reseating it. To gracefully power down the drive, use the appropriate Cisco management platform:

    • Cisco Intersight Managed Mode (Cisco IMM)

    • Cisco UCS Manager (UCSM)

    • Cisco Integrated Management Controller (CIMC)

Procedure

Step 1

If you have not powered down the node, do so now.

  1. Use your node management software to gracefully power down the node.

    See the appropriate Cisco management software documentation.

  2. If node power down through software is not available, you can power down the node by pressing the power button.

    See Status LEDs and Buttons.

Step 2

Choose the appropriate option:

  1. For a front-loading drive, see Replacing a Front-Loading NVMe SSD

  2. For a rear-loading drive, see Replacing a Rear-Loading NVMe SSD

Note 

While the drive is removed, it is a best practice to perform a visual inspection. Check the drive bay to ensure that no dust or debris is present. Also, check the connector on the back of the drive and the connector on the inside of the node for any obstructions or damage.

Step 3

When the drive is correctly reseated, restart the node.

Step 4

During node boot up, watch the drive's LEDs to verify correct operation.

See Status LEDs and Buttons.

Step 5

If reseating the drive does not clear the UBAD error, choose the appropriate option:

  1. Contact Cisco Systems for assistance with troubleshooting.

  2. Begin an RMA of the errored drive.


Replacing Fan Modules


Tip

There is a fault LED on the top of each fan module. This LED lights green when the fan is correctly seated and is operating OK. The LED lights amber when the fan has a fault or is not correctly seated.

Caution

You do not have to shut down or remove power from the node to replace fan modules because they are hot-swappable. However, to maintain proper cooling, do not operate the node for more than one minute with any fan module removed.

Procedure


Step 1

Remove an existing fan module:

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the node from the rack.
  2. Remove the top cover from the node as described in Removing the Node Top Cover.

  3. Grasp and squeeze the fan module release latches on its top. Lift straight up to disengage its connector from the motherboard.

Step 2

Install a new fan module:

  1. Set the new fan module in place. The arrow printed on the top of the fan module should point toward the rear of the node.

  2. Press down gently on the fan module to fully engage it with the connector on the motherboard.

  3. Replace the top cover to the node.

  4. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button (or remotely, as shown in the example after the figure).

Figure 12. Top View of Fan Module

1. Fan module release latches

2. Fan module fault LED
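If you are working remotely, you can also restore main power from the Cisco IMC CLI instead of pressing the Power button. The following is a minimal sketch; confirm any prompt that the CLI presents, and note that command availability can vary by Cisco IMC release.

Example:
server# scope chassis
server/chassis# power on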


Removing the Fan Tray

The fan tray can be removed with all fan modules in place, or with some or all fan modules removed.

Procedure


Step 1

Remove the screws that secure the fan tray to the chassis.

  1. Locate the screws that secure the fan tray to the node.

  2. Using a #2 Phillips screwdriver, loosen the screws.

Step 2

Disconnect the fan tray cable from the fan tray, leaving the motherboard connection in place.

Step 3

Remove the fan tray from the node.

  1. Grasp the handles at the top of the fan tray.

  2. Holding the fan tray level and making sure that the fan tray cable does not obstruct removal, lift the fan tray up until it is removed from the chassis.


What to do next

Reinsert the fan tray into the chassis. See Installing the Fan Tray.

Installing the Fan Tray

You can install the fan tray with or without fans installed. Use the following procedure to install the fan tray.

Procedure


Step 1

Install the fan tray.

  1. Align the fan tray with the guides on the inside of the chassis.

  2. Make sure that the fan tray cable is out of the way and will not obstruct installation.

  3. Holding the fan tray by the handles, slide it into place in the chassis.

Step 2

Reconnect the fan tray cable.

Step 3

Close the top cover, or perform additional procedures, if needed.


Replacing CPUs and Heatsinks

This section contains the following topics:

CPU Configuration Rules

This node has two CPU sockets on the motherboard. Each CPU supports 8 DIMM channels (16 DIMM slots). See DIMM Population Rules and Memory Performance Guidelines.

  • The node can operate with one CPU, or two identical CPUs installed.

  • At minimum, the node must have CPU 1 installed. Install CPU 1 first, and then CPU 2.

  • The following restrictions apply when using a single-CPU configuration:

    • Any unused CPU socket must have the socket dust cover from the factory in place.

    • The maximum number of DIMMs is 16 (only CPU 1 channels A through H).

  • Two different heatsink form factors exist: low profile and high profile. The node can be ordered with either, but you cannot mix high-profile and low-profile heatsinks in the same node; a single node must use one type throughout.

    The CPU and heatsink installation procedure differs depending on the type of heatsink used in your node.

    • Low profile (HX-HSLP-M6), which has 4 T30 Torx screws on the main heatsink and 2 Phillips-head screws on the extended heatsink.

      This heatsink is required for nodes that contain one or more GPUs.

      This heatsink is not supported on C240 M6 LFF nodes.

    • High profile (HX-HSHP-240M6), which has 4 T30 Torx screws.

Tools Required For CPU Replacement

You need the following tools and equipment for this procedure:

  • T-30 Torx driver—Supplied with replacement CPU.

  • #1 flat-head screwdriver—Supplied with replacement CPU.

  • CPU assembly tool—Supplied with replacement CPU. Orderable separately as Cisco PID HX-CPUAT=.

  • Heatsink cleaning kit—Supplied with replacement CPU. Orderable separately as Cisco PID HX-HSCK=.

    One cleaning kit can clean up to four CPUs.

  • Thermal interface material (TIM)—Syringe supplied with replacement CPU. Use only if you are reusing your existing heatsink (new heatsinks have a pre-applied pad of TIM). Orderable separately as Cisco PID HX-CPU-TIM=.

    One TIM kit covers one CPU.

See also Additional CPU-Related Parts to Order with RMA Replacement CPUs.

Removing CPUs and Heat Sinks

Use the following procedure to remove an installed CPU and heatsink from the node. With this procedure, you remove the CPU from the motherboard, disassemble the individual components, and then place the CPU and heatsink into the fixture that came with the CPU.

Procedure

Step 1

Choose the appropriate method to loosen the securing screws, based on whether the CPU has a high-profile or low-profile heatsink.

  • For a CPU with a high-profile heatsink, continue with substep 1 of this step.

  • For a CPU with a low-profile heatsink, skip to step 2.

  1. Using a T30 Torx driver, loosen all the securing nuts.

  2. Push the rotating wires towards each other to move them to the unlocked position.

    Caution 

    Make sure that the rotating wires are as far inward as possible. When fully unlocked, the bottom of the rotating wire disengages and allows the removal of the CPU assembly. If the rotating wires are not fully in the unlocked position, you can feel resistance when attempting to remove the CPU assembly.

  3. Grasp the CPU and heatsink along the edge of the carrier and lift the CPU and heatsink off of the motherboard.

    Caution 
    While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.
  4. Go to step 3.

Step 2

Remove the CPU.

  1. Using a #2 Phillips screwdriver, loosen the two Phillips head screws for the extended heatsink.

  2. Using a T30 Torx driver, loosen the four Torx securing nuts.

  3. Push the rotating wires towards each other to move them to the unlocked position.

    Caution 

    Make sure that the rotating wires are as far inward as possible. When fully unlocked, the bottom of the rotating wire disengages and allows the removal of the CPU assembly. If the rotating wires are not fully in the unlocked position, you can feel resistance when attempting to remove the CPU assembly.

  4. Grasp the CPU and heatsink along the edge of the carrier and lift the CPU and heatsink off of the motherboard.

    Caution 
    While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.
  5. Go to step 3.

Step 3

Put the CPU assembly on a rubberized mat or other ESD-safe work surface.

When placing the CPU on the work surface, the heatsink label should be facing up. Do not rotate the CPU assembly upside down.

Step 4

Attach a CPU dust cover (HX-CPU-M6-CVR=) to the CPU socket.

  1. Align the posts on the CPU bolstering plate with the cutouts at the corners of the dust cover.

  2. Lower the dust cover and simultaneously press down on the edges until it snaps into place over the CPU socket.

    Caution 

    Do not press down in the center of the dust cover!

Step 5

Detach the CPU from the CPU carrier by disengaging CPU clips and using the TIM breaker.

  1. Turn the CPU assembly upside down, so that the heatsink is pointing down.

    This step enables access to the CPU securing clips.

  2. Gently lift the TIM breaker in a 90-degree upward arc to partially disengage the CPU clips on this end of the CPU carrier.

  3. Lower the TIM breaker into the u-shaped securing clip to allow easier access to the CPU carrier.

    Note 

    Make sure that the TIM breaker is completely seated in the securing clip.

  4. Gently pull up on the outer edge of the CPU carrier (2) so that you can disengage the second pair of CPU clips near both ends of the TIM breaker.

    Caution 

    Be careful when flexing the CPU carrier! If you apply too much force you can damage the CPU carrier. Flex the carrier only enough to release the CPU clips. Make sure to watch the clips while performing this step so that you can see when they disengage from the CPU carrier.

  5. Gently pull up on the outer edge of the CPU carrier so that you can disengage the pair of CPU clips (3 in the following illustration) which are opposite the TIM breaker.

  6. Grasp the CPU carrier along the short edges and lift it straight up to remove it from the heatsink.

Step 6

Transfer the CPU and carrier to the fixture.

  1. When all the CPU clips are disengaged, grasp the carrier, and lift it and the CPU to detach them from the heatsink.

    Note 

    If the carrier and CPU do not lift off of the heatsink, attempt to disengage the CPU clips again.

  2. Flip the CPU and carrier right-side up so that the words PRESS HERE are visible.

  3. Align the posts on the fixture and the pin 1 locations on the CPU carrier and the fixture (1 in the following illustration).

  4. Lower the CPU and CPU carrier onto the fixture.

Step 7

Use the provided cleaning kit (HX-HSCK) to remove all of the thermal interface barrier (thermal grease) from the CPU, CPU carrier, and heatsink.

Important 

Make sure to use only the Cisco-provided cleaning kit, and make sure that no thermal grease is left on any surfaces, corners, or crevices. The CPU, CPU carrier, and heatsink must be completely clean.


What to do next

Choose the appropriate option:

  • If you will be installing a CPU, go to Installing the CPUs and Heatsinks.

  • If you will not be installing a CPU, verify that a CPU socket cover is installed. This option is valid only for CPU socket 2 because CPU socket 1 must always be populated in a runtime deployment.

Installing the CPUs and Heatsinks

Use this procedure to install a CPU if you have removed one, or if you are installing a CPU in an empty CPU socket. To install the CPU, you will move the CPU to the fixture, and then attach the CPU assembly to the CPU socket on the node motherboard.

Procedure

Step 1

Remove the CPU socket dust cover (HX-CPU-M6-CVR=) on the node motherboard.

  1. Push the two vertical tabs inward to disengage the dust cover.

  2. While holding the tabs in, lift the dust cover up to remove it.

  3. Store the dust cover for future use.

    Caution 

    Do not leave an empty CPU socket uncovered. If a CPU socket does not contain a CPU, you must install a CPU dust cover.

Step 2

Grasp the CPU fixture on the edges labeled PRESS, lift it out of the tray, and place the CPU assembly on an ESD-safe work surface.

Step 3

Apply new TIM.

Note 
The heatsink must have new TIM on the heatsink-to-CPU surface to ensure proper cooling and performance.
  • If you are installing a new heatsink, it is shipped with a pre-applied pad of TIM. Go to step 4.

  • If you are reusing a heatsink, you must remove the old TIM from the heatsink and then apply new TIM to the CPU surface from the supplied syringe. Continue with substep 1 below.

  1. Apply Bottle #1 cleaning solution, which is included with the heatsink cleaning kit (UCSX-HSCK=) that ships with the spare CPU package, to the old TIM on the heatsink and let it soak for at least 15 seconds.

  2. Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to avoid scratching the heatsink surface.

  3. Completely clean the bottom surface of the heatsink using Bottle #2 to prepare the heatsink for installation.

  4. Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of thermal interface material to the top of the CPU. Use the pattern shown in the following figure to ensure even coverage.

    Figure 13. Thermal Interface Material Application Pattern
    Caution 

    Use only the correct heatsink for your CPU. CPU 1 uses heatsink UCSB-HS-M6-R and CPU 2 uses heatsink HX-HS-M6-F.

Step 4

Attach the heatsink to the CPU fixture.

  1. Make sure the rotating wires are in the unlocked position so that the feet of the wires do not impede installing the heatsink.

  2. Grasp the heatsink by the fins and align the pin 1 location of the heatsink with the pin 1 location on the CPU fixture, then lower the heatsink onto the CPU fixture.

Step 5

Install the CPU assembly onto the CPU motherboard socket.

  1. Push the rotating wires (1 in the following image) to the unlocked position so that they do not obstruct installation.

  2. Grasp the heatsink by the fins, align the pin 1 location on the heatsink with the pin 1 location on the CPU socket (2 in the following image), then seat the heatsink onto the CPU socket.

  3. Holding the CPU assembly level, lower it onto the CPU socket.

  4. Push the rotating wires away from each other to lock the CPU assembly into the CPU socket.

    Caution 

    Make sure that you close the rotating wires completely before using the Torx driver to tighten the securing nuts.

  5. Choose the appropriate option to secure the CPU to the socket.

    • For a CPU with a low-profile heatsink, set the T30 Torx driver to 12 in-lb of torque and tighten the four securing nuts to secure the CPU to the motherboard (3).

    • For a CPU with a high-profile (extended) heatsink, set the T30 Torx driver to 12 in-lb of torque and tighten the four securing nuts to secure the CPU to the motherboard (3) first. Then, set the torque driver to 6 in-lb of torque and tighten the two Phillips head screws for the extended heatsink (4).


Additional CPU-Related Parts to Order with RMA Replacement CPUs

When a return material authorization (RMA) of the CPU is done on a Cisco HX C-Series node, additional parts might not be included with the CPU spare. The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.


Note

The following items apply to CPU replacement scenarios. If you are replacing a system chassis and moving existing CPUs to the new chassis, you do not have to separate the heatsink from the CPU. See Additional CPU-Related Parts to Order with RMA Replacement System Chassis.


  • Scenario 1—You are reusing the existing heatsinks:

    • Heat sink cleaning kit (HX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit for M6 nodes (HX-CPU-TIM=)

      One TIM kit covers one CPU.

  • Scenario 2—You are replacing the existing heatsinks:


    Caution

    Use only the correct heatsink for your CPUs to ensure proper cooling. There are two different heatsinks: a low-profile heatsink (HX-HSLP-M6), which is used with GPUs, and a high-profile heatsink.
    • New heatsinks have a pre-applied pad of TIM.

    • Heat sink cleaning kit (HX-HSCK=)

      One cleaning kit can clean up to four CPUs.

  • Scenario 3—You have a damaged CPU carrier (the plastic frame around the CPU):

    • CPU Carrier: HX-M6-CPU-CAR=

    • #1 flat-head screwdriver (for separating the CPU from the heatsink)

    • Heatsink cleaning kit (HX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit for M6 nodes (HX-CPU-TIM=)

      One TIM kit covers one CPU.

A CPU heat sink cleaning kit is good for up to four CPU and heat sink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heat sink of old TIM and the other to prepare the surface of the heat sink.

New heat sink spares come with a pre-applied pad of TIM. It is important to clean any old TIM off of the CPU surface prior to installing the heat sinks. Therefore, even when you are ordering new heat sinks, you must order the heat sink cleaning kit.

Additional CPU-Related Parts to Order with RMA Replacement System Chassis

When a return material authorization (RMA) of the system chassis is done on a Cisco HyperFlex C-Series server node, you move existing CPUs to the new chassis.


Note

Unlike previous generation CPUs, the M6 node CPUs do not require you to separate the heatsink from the CPU when you move the CPU-heatsink assembly. Therefore, no additional heatsink cleaning kit or thermal-interface material items are required.


  • The only tool required for moving a CPU/heatsink assembly is a T-30 Torx driver.

Replacing Memory DIMMs


Caution

DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Caution

Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the node might result in system problems or damage to the motherboard.



Note

To ensure the best node performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DIMMs.


DIMM Population Rules and Memory Performance Guidelines

The following sections summarize memory usage, mixing, and population guidelines.

DIMM Slot Numbering

The following figure shows the numbering of the DIMM slots on the motherboard.

Figure 14. DIMM Slot Numbering
DIMM Population Rules

Observe the following guidelines when installing or replacing DIMMs for maximum performance:

  • Each CPU supports eight memory channels, A through H.

    • CPU 1 supports channels P1 A1, P1 A2, P1 B1, P1 B2, P1 C1, P1 C2, P1 D1, P1 D2, P1 E1, P1 E2, P1 F1, P1 F2, P1 G1, P1 G2, P1 H1, and P1 H2.

    • CPU 2 supports channels P2 A1, P2 A2, P2 B1, P2 B2, P2 C1, P2 C2, P2 D1, P2 D2, P2 E1, P2 E2, P2 F1, P2 F2, P2 G1, P2 G2, P2 H1, and P2 H2.

  • Each channel has two DIMM sockets (for example, channel A = slots A1, A2).

  • In a single-CPU configuration, populate the channels for CPU1 only (P1 A1 through P1 H2).

  • For optimal performance, populate DIMMs in the order shown in the following table, depending on the number of CPUs and the number of DIMMs per CPU. If your node has two CPUs, balance DIMMs evenly across the two CPUs as shown in the table.


    Note

    The tables below list recommended configurations. Using 5, 7, 9, 10, or 11 DIMMs per CPU is not recommended.


Memory Population Order

The Cisco HyperFlex C240 M6 node has two memory options: DIMMs only, or DIMMs plus Intel Optane Persistent Memory 200 series memory.

Memory slots are color coded, blue and black. The color-coded channel population order is blue slots first, then black.

The following tables show the memory population order for each memory option.

Table 3. DIMM Population Order (Recommended Configurations by Number of DDR4 DIMMs per CPU)

DIMMs per CPU   P1 Blue #1 Slots                   P1 Black #2 Slots        P2 Blue #1 Slots                   P2 Black #2 Slots
1               A1                                 -                        A1                                 -
2               (A1, E1)                           -                        (A1, E1)                           -
4               (A1, C1); (E1, G1)                 -                        (A1, C1); (E1, G1)                 -
6               (A1, C1, D1); (E1, G1, H1)         -                        (A1, C1, D1); (E1, G1, H1)         -
8               (A1, B1, C1, D1, E1, F1, G1, H1)   -                        (A1, B1, C1, D1, E1, F1, G1, H1)   -
12              A1, C1, D1, E1, G1, H1             A2, C2, D2, E2, G2, H2   A1, C1, D1, E1, G1, H1             A2, C2, D2, E2, G2, H2
16              All (A1 through H1)                All (A2 through H2)      All (A1 through H1)                All (A2 through H2)

Table 4. DIMM Plus Intel Optane Persistent Memory 200 Series Memory Population Order

Total DIMMs per CPU   DDR4 DIMM Slots                  Intel Optane Persistent Memory 200 Series DIMM Slots
4 + 4                 A1, C1, E1, G1                   B1, D1, F1, H1
8 + 1                 A1, B1, C1, D1, E1, F1, G1, H1   A1
8 + 4                 A1, B1, C1, D1, E1, F1, G1, H1   A1, C1, E1, G1
8 + 8                 A0, B0, C0, D0, E0, F0, G0, H0   A1, B1, C1, D1, E1, F1, G1, H1

Memory Mirroring

The CPUs in the node support memory mirroring only when an even number of channels are populated with DIMMs. If one or three channels are populated with DIMMs, memory mirroring is automatically disabled.

Memory mirroring reduces the amount of memory available by 50 percent because only one of the two populated channels provides data. The second, duplicate channel provides redundancy.

Replacing DIMMs

Identifying a Faulty DIMM

Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Internal Diagnostic LEDs for the locations of these LEDs. When the node is in standby power mode, these LEDs light amber to indicate a faulty DIMM.
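Before opening the node, you can also cross-check a suspected DIMM fault out of band. The following is a minimal sketch using the Cisco IMC CLI; it assumes that your Cisco IMC release exposes the chassis-scope DIMM show commands, so verify the exact command names against the CLI reference for your firmware:

  node# scope chassis
  node /chassis # show dimm-summary    # memory inventory and overall status
  node /chassis # show dimm            # per-slot status; a faulty DIMM reports a status other than Good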

Procedure

Step 1

Remove an existing DIMM:

  1. Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

  2. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the node from the rack.
  3. Remove the top cover from the node as described in Removing the Node Top Cover.

  4. Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.

  5. Locate the DIMM that you are removing, and then open the ejector levers at each end of its DIMM slot.

Step 2

Install a new DIMM:

Note 

Before installing DIMMs, see the memory population rules for this node: DIMM Population Rules and Memory Performance Guidelines.

  1. Align the new DIMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly orient the DIMM.

  2. Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock into place.

  3. Replace the top cover to the node.

  4. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Replacing Intel Optane DC Persistent Memory Modules

This topic contains information for replacing Intel Optane Data Center Persistent memory modules (DCPMMs), including population rules and methods for verifying functionality. DCPMMs have the same form-factor as DDR4 DIMMs and they install to DIMM slots.


Caution

DCPMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Note

To ensure the best node performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DCPMMs.


DCPMMs can be configured to operate in one of the following modes:

  • Memory Mode: The module operates as a 100% memory module. Data is volatile, and DRAM acts as a cache for the DCPMMs.

  • App Direct Mode: The module operates as a solid-state disk storage device. Data is saved and is non-volatile.
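The operating mode is selected by provisioning a goal. As one hedged illustration, Intel's ipmctl utility can provision either mode from a Linux host; a new goal takes effect only after the next reboot:

  $ sudo ipmctl show -dimm                                    # list the installed modules
  $ sudo ipmctl create -goal MemoryMode=100                   # provision 100% Memory Mode, or:
  $ sudo ipmctl create -goal PersistentMemoryType=AppDirect   # provision App Direct mode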

Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines

This topic describes the rules and guidelines for maximum memory performance when using Intel Optane DC persistent memory modules (DCPMMs) with DDR4 DRAM DIMMs.

DIMM Slot Numbering

The following figure shows the numbering of the DIMM slots on the node motherboard.

Figure 15. DIMM Slot Numbering
Configuration Rules

Observe the following rules and guidelines:

  • To use DCPMMs in this node, two CPUs must be installed.

  • The DCPMMs run at 2666 MHz. If you have 2933 MHz RDIMMs or LRDIMMs in the node and you add DCPMMs, the main memory speed clocks down to 2666 MHz to match the speed of the DCPMMs.

  • Each DCPMM draws 18 W sustained, with a 20 W peak.

  • When using DCPMMs in a node:

    • The DDR4 DIMMs installed in the node must all be the same size.

    • The DCPMMs installed in the node must all be the same size and must have the same SKU.

Installing Intel Optane DC Persistent Memory Modules


Note

DCPMM configuration is always applied to all DCPMMs in a region, including a replacement DCPMM. You cannot provision a specific replacement DCPMM on a preconfigured server.


Procedure

Step 1

Remove an existing DCPMM:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Node.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Node Top Cover.

  4. Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.

    Caution 

    If you are moving DCPMMs with active data (persistent memory) from one server to another as in an RMA situation, each DCPMM must be installed to the identical position in the new server. Note the positions of each DCPMM or temporarily label them when removing them from the old server.

  5. Locate the DCPMM that you are removing, and then open the ejector levers at each end of its DIMM slot.

Step 2

Install a new DCPMM:

Note 

Before installing DCPMMs, see the population rules for this server: Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines.

  1. Align the new DCPMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly orient the DCPMM.

  2. Push down evenly on the top corners of the DCPMM until it is fully seated and the ejector levers on both ends lock into place.

  3. Reinstall the air baffle.

  4. Replace the top cover to the server.

  5. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Step 3

Perform post-installation actions:

  • If the existing configuration is in 100% Memory mode, and the new DCPMM is also in 100% Memory mode (the factory default), the only action is to ensure that all DCPMMs are at the latest, matching firmware level.

  • If the existing configuration is fully or partly in App Direct mode and the new DCPMM is also in App Direct mode, ensure that all DCPMMs are at the latest matching firmware level, and then re-provision the DCPMMs by creating a new goal.

  • If the existing configuration and the new DCPMM are in different modes, ensure that all DCPMMs are at the latest matching firmware level, and then re-provision the DCPMMs by creating a new goal.

There are a number of tools for configuring goals, regions, and namespaces.
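For example, on a Linux host the ipmctl and ndctl utilities can inspect App Direct regions and carve namespaces out of them. A minimal sketch, assuming both utilities are installed in the OS:

  $ sudo ipmctl show -region                    # list App Direct regions and their free capacity
  $ sudo ndctl create-namespace --mode=fsdax    # create a filesystem-DAX namespace in an available region
  $ sudo ndctl list -N                          # verify the new namespace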


Node BIOS Setup Utility Menu for DCPMM


Caution

Potential data loss: If you change the mode of a currently installed DCPMM from App Direct to Memory Mode, any data in persistent memory is deleted.


DCPMMs can be configured by using the node's BIOS Setup Utility, Cisco IMC, Cisco UCS Manager, or OS-related utilities.

The node BIOS Setup Utility includes menus for DCPMMs. They can be used to view or configure DCPMM regions, goals, and namespaces, and to update DCPMM firmware.

To open the BIOS Setup Utility, press F2 when prompted onscreen during a system boot.

The DCPMM menu is on the Advanced tab of the utility:

Advanced > Intel Optane DC Persistent Memory Configuration

From this tab, you can access other menus:

  • DIMMs: Displays the installed DCPMMs. From this page, you can update DCPMM firmware and configure other DCPMM parameters.

    • Monitor health

    • Update firmware

    • Configure security

      You can enable security mode and set a password so that the DCPMM configuration is locked. When you set a password, it applies to all installed DCPMMs. Security mode is disabled by default.

    • Configure data policy

  • Regions: Displays regions and their persistent memory types. When using App Direct mode with interleaving, the number of regions is equal to the number of CPU sockets in the node. When using App Direct mode without interleaving, the number of regions is equal to the number of DCPMMs in the node.

    From the Regions page, you can configure memory goals that tell the DCPMM how to allocate resources.

    • Create goal config

  • Namespaces: Displays namespaces and allows you to create or delete them when persistent memory is used. Namespaces can also be created when creating goals. Namespace provisioning of persistent memory applies only to the selected region.

    Existing namespace attributes such as the size cannot be modified. You can only add or delete namespaces.

  • Total capacity: Displays the total DCPMM resource allocation across the node.

Updating the DCPMM Firmware Using the BIOS Setup Utility

You can update the DCPMM firmware from the BIOS Setup Utility if you know the path to the .bin files. The firmware update is applied to all installed DCPMMs.

  1. Navigate to Advanced > Intel Optane DC Persistent Memory Configuration > DIMMs > Update firmware.

  2. Under File:, provide the file path to the .bin file.

  3. Select Update.
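As an alternative, the same .bin image can be staged from the operating system with Intel's ipmctl utility. This is a hedged sketch; the image path is a placeholder, and the staged firmware becomes active only after the next power cycle:

  $ sudo ipmctl show -firmware                             # current and staged firmware per module
  $ sudo ipmctl update -source /tmp/dcpmm-fw.bin -dimm     # stage the image on all installed DCPMMs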

Replacing a Mini-Storage Module

The mini-storage module plugs into a vertical riser card that is attached to the motherboard by two captive screws, and provides additional internal storage. The module is an M.2 SSD carrier that provides two M.2 form-factor SSD sockets. See also Replacing a Boot-Optimized M.2 RAID Controller Module.


Note

The Cisco IMC firmware does not include an out-of-band management interface for the M.2 drives installed in the M.2 version of this mini-storage module (HX-MSTOR-M2). The M.2 drives are not listed in Cisco IMC inventory, nor can they be managed by Cisco IMC. This is expected behavior.


Replacing a Mini-Storage Module Carrier

This topic describes how to remove and replace a mini-storage module carrier. The carrier sits in an M.2 vertical riser card that is attached to the motherboard by two captive screws.

The carrier has one media socket on its top and one socket on its underside. Use the following procedure for an M.2 SSD mini-storage module carrier.

Procedure

Step 1

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

Step 2

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 3

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 4

Locate the mini-storage module carrier in its socket between PCIe risers 2 and 3.

Step 5

Using a Phillips screwdriver, loosen each of the captive screws and lift the M.2 riser out of the node.

Step 6

Remove a carrier from its socket:

  1. Using a Phillips screwdriver, loosen the screw that holds the module to the carrier.

  2. Push outward on the securing clips that hold each end of the carrier.

  3. Lift both ends of the carrier to disengage it from the socket on the motherboard.

  4. Set the carrier on an anti-static surface.

Step 7

Install a carrier to its socket:

  1. Position the carrier over the socket, with the carrier's connector facing down. The two alignment pegs must match the two holes on the carrier.

  2. Gently push down the socket end of the carrier so that the two pegs go through the two holes on the carrier.

  3. Push down on the carrier so that the securing clips click over it at both ends.

Step 8

Replace the top cover to the node.

Step 9

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Replacing an M.2 SSD in a Mini-Storage Carrier For M.2

This topic describes how to remove and replace an M.2 SATA or NVMe SSD in a mini-storage carrier for M.2 (PID HX-MSTOR-M2). The carrier has one M.2 SSD socket on its top and one socket on its underside.

Population Rules For Mini-Storage M.2 SSDs

  • Both M.2 SSDs must be either SATA or NVMe; do not mix types in the carrier.

  • You can use one or two M.2 SSDs in the carrier.

  • M.2 socket 1 is on the top side of the carrier; M.2 socket 2 is on the underside of the carrier (the same side as the carrier's motherboard connector).

Procedure

Step 1

Power off the node and then remove the mini-storage module carrier from the node as described in Replacing a Mini-Storage Module Carrier.

Step 2

Remove an M.2 SSD:

  1. Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 SSD to the carrier.

  2. Remove the M.2 SSD from its socket on the carrier.

Step 3

Install a new M.2 SSD:

  1. Insert the new M.2 SSD connector-end into the socket on the carrier with its label side facing up.

  2. Press the M.2 SSD flat against the carrier.

  3. Install the single screw that secures the end of the M.2 SSD to the carrier.

Step 4

Install the mini-storage module carrier back into the node and then power it on as described in Replacing a Mini-Storage Module Carrier.


Replacing the RTC Battery


Warning

There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions.

[Statement 1015]



Warning

Recyclers: Do not shred the battery! Make sure you dispose of the battery according to appropriate regulations for your country or locale.


The real-time clock (RTC) battery retains system settings when the node is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from most electronic stores.

Procedure


Step 1

Remove the RTC battery:

  1. Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

  2. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the node from the rack.
  3. Remove the top cover from the node as described in Removing the Node Top Cover.

  4. Remove PCIe riser 1 from the node to provide clearance to the RTC battery socket that is on the motherboard. See Replacing a PCIe Riser.

  5. Locate the horizontal RTC battery socket.

  6. Remove the battery from the socket on the motherboard. Gently pry the securing clip to the side to provide clearance, then lift up on the battery.

Step 2

Install a new RTC battery:

  1. Insert the battery into its socket and press down until it clicks in place under the clip.

    Note 

    The positive side of the battery marked “3V+” should face up.

  2. Replace PCIe riser 1 to the node. See Replacing a PCIe Riser.

  3. Replace the top cover to the node.

  4. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Replacing Power Supplies

When two power supplies are installed, they are redundant as 1+1 by default, but they also support cold redundancy mode. Cold redundancy (CR) suspends power delivery on one or more power supplies and forces the remainder of the load onto the active PSUs. This improves total power efficiency by keeping the active PSUs operating closer to their most efficient load range.

This section includes procedures for replacing AC and DC power supply units.

Supported Power Supplies

The Cisco HyperFlex C240 M6 supports the following power supplies.


Caution

Do not mix PSU types in the same node. Both PSUs must be the same type and wattage.


For detailed information, see Power Specifications.

The following power supplies are supported in all Cisco HyperFlex C240 M6 models. In each case, one power supply is mandatory; one more can be added for 1+1 redundancy as long as both power supplies are the same type and wattage.

  • 1050 W AC

  • 1050 W DC

  • 1600 W AC

  • 2300 W AC

Replacing AC Power Supplies


Note

If you have ordered a node with power supply redundancy (two power supplies), you do not have to power off the node to replace a power supply because they are redundant as 1+1.

Note

Do not mix power supply types or wattages in the node. Both power supplies must be identical.

Caution

DO NOT interchange power supplies of Cisco HyperFlex C240 M5 server nodes and Cisco HyperFlex C240 SD M5 server nodes with the power supplies of the Cisco HyperFlex C240 M6 node.


Procedure


Step 1

Remove the power supply that you are replacing or a blank panel from an empty bay:

  1. Perform one of the following actions:

    • If you are replacing a power supply in a node that has only one power supply, shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

    • If you are replacing a power supply in a node that has two power supplies, you do not have to shut down the node.

  2. Remove the power cord from the power supply that you are replacing.

  3. Grasp the power supply handle while pinching the release lever toward the handle.

  4. Pull the power supply out of the bay.

Step 2

Install a new power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply.

  4. Only if you shut down the node, press the Power button to boot the node to main power mode.


Replacing DC Power Supplies


Note

This procedure is for replacing DC power supplies in a node that already has DC power supplies installed. If you are installing DC power supplies to the node for the first time, see Installing DC Power Supplies (First Time Installation).



Warning

A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning

This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning

Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note

If you are replacing DC power supplies in a node with power supply redundancy (two power supplies), you do not have to power off the node to replace a power supply because they are redundant as 1+1.

Note

Do not mix power supply types or wattages in the node. Both power supplies must be identical.

Procedure


Step 1

Remove the DC power supply that you are replacing or a blank panel from an empty bay:

  1. Perform one of the following actions:

    • If you are replacing a power supply in a node that has only one DC power supply, shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

    • If you are replacing a power supply in a node that has two DC power supplies, you do not have to shut down the node.

  2. Remove the power cord from the power supply that you are replacing. Lift the connector securing clip slightly and then pull the connector from the socket on the power supply.

  3. Grasp the power supply handle while pinching the release lever toward the handle.

  4. Pull the power supply out of the bay.

Step 2

Install a new DC power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply. Press the connector into the socket until the securing clip clicks into place.

  4. Only if you shut down the node, press the Power button to boot the node to main power mode.

Figure 16. Replacing DC Power Supplies

  1  Keyed cable connector (CAB-48DC-40A-8AWG)
  2  Keyed DC input socket
  3  PSU status LED


Installing DC Power Supplies (First Time Installation)


Note

This procedure is for installing DC power supplies to the node for the first time. If you are replacing DC power supplies in a node that already has DC power supplies installed, see Replacing DC Power Supplies.



Warning

A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning

This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning

Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note

Do not mix power supply types or wattages in the node. Both power supplies must be identical.

Caution

As instructed in the first step of this wiring procedure, turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.

Procedure


Step 1

Turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.

Note 

The required DC input cable is Cisco part CAB-48DC-40A-8AWG. This 3-meter cable has a 3-pin connector on one end that is keyed to the DC input socket on the power supply. The other end of the cable has no connector so that you can wire it to your facility’s DC power.

Step 2

Wire the non-terminated end of the cable to your facility’s DC power input source.

Step 3

Connect the terminated end of the cable to the socket on the power supply. The connector is keyed so that the wires align for correct polarity and ground.

Step 4

Restore DC power at your facility’s circuit breaker.

Step 5

Press the Power button to boot the node to main power mode.

Figure 17. Replacing DC Power Supplies

  1  Keyed cable connector (CAB-48DC-40A-8AWG)
  2  Keyed DC input socket
  3  PSU status LED

Step 6

See Grounding for DC Power Supplies for information about additional chassis grounding.


Grounding for DC Power Supplies

AC power supplies have internal grounding and so no additional grounding is required when the supported AC power cords are used.

When using a DC power supply, additional grounding of the node chassis to the earth ground of the rack is available. Two screw holes for use with your dual-hole grounding lug and grounding wire are supplied on the chassis rear panel.


Note

The grounding points on the chassis are sized for 10-32 screws. You must provide your own screws, grounding lug, and grounding wire. The grounding lug must be a dual-hole lug that fits 10-32 screws. The grounding cable that you provide must be 14 AWG (2 mm), minimum 60° C wire, or as permitted by the local code.

Replacing a PCIe Riser

This node has two toolless PCIe risers for horizontal installation of PCIe cards. Each riser is available in multiple versions. See PCIe Slot Specifications for detailed descriptions of the slots and features in each riser version.

Procedure


Step 1

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

Step 2

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 3

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 4

Remove the PCIe riser that you are replacing:

  1. Grasp the flip-up handle on the riser and the blue forward edge, and then lift up evenly to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic surface.

  2. If the riser has a card installed, remove the card from the riser. See Replacing a PCIe Card.

Step 5

Install a new PCIe riser:

Note 

The PCIe risers are not interchangeable. If you plug a PCIe riser into the wrong socket, the node will not boot. Riser 1 must plug into the motherboard socket labeled “RISER1.” Riser 2 must plug into the motherboard socket labeled “RISER2.”

  1. If you removed a card from the old PCIe riser, install the card to the new riser. See Replacing a PCIe Card.

  2. Position the PCIe riser over its socket on the motherboard and over its alignment slots in the chassis.

  3. Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.

Step 6

Replace the top cover to the node.

Step 7

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Replacing a PCIe Card


Note

Cisco supports all PCIe cards qualified and sold by Cisco. PCIe cards not qualified or sold by Cisco are the responsibility of the customer. Although Cisco will always stand behind and support the C-Series rack-mount server nodes, customers using standard, off-the-shelf, third-party cards must go to the third-party card vendor for support if any issue with that particular card occurs.

PCIe Slot Specifications

The node contains three toolless PCIe risers for horizontal installation of PCIe cards. Each riser is orderable in multiple versions.

  • Riser 1 contains PCIe slots 1, 2, and 3 and is available in the following different options:

    • SFF node, I/O-Centric—Slots 1 (x8), 2 (x16), and 3 (x8). All slots are controlled by CPU 1 or CPU 2 depending on the node model.

    • SFF node, Storage Centric—

      • Slots 1 (Reserved), 2 (x4), and 3 (x4) for drive bays in the 24-drive SAS/SATA model, the 24-drive NVMe model, and the 12-drive NVMe model. All slots are controlled by CPU 2.

      • Slots 1, 2, and 3 are not supported in the 12-drive SAS/SATA model.

    • LFF node—Slots 1 (Reserved), 2 (x4), and 3 (x4). All slots are controlled by CPU 1.

  • Riser 2 contains PCIe slots 4, 5 and 6 and is available in the following different options:

    • SFF node, I/O-Centric—Slots 4 (x8), 5 (x16), and 6 (x8). All slots are controlled by CPU 2.

    • SFF node, Storage Centric—Slots 4, 5, and 6 do not support storage devices in the SFF model of the node.

    • LFF node—Slots 4 (x8), 5 (x16), and 6 (x8). All slots are controlled by CPU 2.

  • Riser 3 contains PCIe slots 7 and 8 and is available in the following different options:

    • SFF node, I/O-Centric—Slots 7 (x8) and 8 (x8) for SATA/SAS models. Slots 7 and 8 are controlled by CPU 2 for SATA/SAS nodes.

      Slots 7 and 8 are not supported for NVMe-only models.

    • SFF node, Storage Centric—Slots 7 (x4) and 8 (x4) for drive bays in 24-drive and 12-drive SAS/SATA versions of the node. All slots are controlled by CPU 2.

      Slots 7 and 8 are not supported for NVMe-only models.

    • LFF node—Slots 7 (x4) and 8 (x4) for drive bays. All slots are controlled by CPU 2.

The following illustration shows the PCIe slot numbering.

Figure 18. Rear Panel, Showing PCIe Slot Numbering

Replacing a PCIe Card


Note

If you are installing a Cisco UCS Virtual Interface Card, there are prerequisite considerations. See Cisco Virtual Interface Card (VIC) Considerations.



Note

RAID controller cards install into a dedicated motherboard socket. See Replacing a SAS Storage Controller Card (RAID or HBA).



Note

For instructions on installing or replacing double-wide GPU cards, see GPU Card Installation.


Procedure


Step 1

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

Step 2

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 3

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 4

Remove the PCIe card that you are replacing:

  1. Remove any cables from the ports of the PCIe card that you are replacing.

  2. Use two hands to flip up and grasp the blue riser handle and the blue finger grip area on the front edge of the riser, and then lift straight up.

  3. On the bottom of the riser, push the release latch that holds the securing plate, and then swing the hinged securing plate open.

  4. Open the hinged card-tab retainer that secures the rear-panel tab of the card.

  5. Pull evenly on both ends of the PCIe card to remove it from the socket on the PCIe riser.

    If the riser has no card, remove the blanking panel from the rear opening of the riser.

Step 5

Install a new PCIe card:

  1. With the hinged card-tab retainer open, align the new PCIe card with the empty socket on the PCIe riser.

  2. Push down evenly on both ends of the card until it is fully seated in the socket.

  3. Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged card-tab retainer over the card’s rear-panel tab.

  4. Swing the hinged securing plate closed on the bottom of the riser. Ensure that the clip on the plate clicks into the locked position.

  5. Position the PCIe riser over its socket on the motherboard and over the chassis alignment channels.

  6. Carefully push down on both ends of the PCIe riser to fully engage its connector with the sockets on the motherboard.

Step 6

Replace the top cover to the node.

Step 7

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Figure 19. PCIe Riser Card Securing Mechanisms

  1  Release latch on hinged securing plate
  2  Hinged securing plate
  3  Hinged card-tab retainer


Cisco Virtual Interface Card (VIC) Considerations

This section describes VIC card support and special considerations for this node.


Note

If you use the Cisco Card NIC mode, you must also make a VIC Slot setting that matches where your VIC is installed. The options are Riser1, Riser2, and Flex-LOM. See NIC Mode and NIC Redundancy Settings for more information about NIC modes.
Table 5. VIC Support and Considerations in This Node

VIC                                   How Many Supported   Slots That       Primary Slot for      Minimum Cisco
                                      in Node              Support VICs     Cisco Card NIC Mode   IMC Firmware
Cisco HX VIC 1455 (HX-PCIE-C25Q-04)   2 PCIe               PCIe 2, PCIe 5   PCIe 2                4.0(1)
Cisco HX VIC 1495 (HX-PCIE-C100-04)   2 PCIe               PCIe 2, PCIe 5   PCIe 2                4.0(2)
Cisco HX VIC 1457 (HX-MLOM-C25Q-04)   1 mLOM               mLOM             mLOM                  4.0(1)
Cisco HX VIC 1497 (HX-MLOM-C100-04)   1 mLOM               mLOM             mLOM                  4.0(2)

  • A total of three VICs are supported in the node: two PCIe style and one mLOM style.


    Note

    Single-wire management is supported on only one VIC at a time. If multiple VICs are installed on a node, only one slot has NCSI enabled at a time. For single-wire management, priority goes to the mLOM slot, then slot 2, then slot 5 for NCSI management traffic. When multiple cards are installed, connect the single-wire management cables in the priority order mentioned above.


  • The primary slot for a VIC card in PCIe riser 1 is slot 2. The secondary slot for a VIC card in PCIe riser 1 is slot 1.


    Note

    The NCSI protocol is supported in only one slot at a time in each riser. If a GPU card is present in slot 2, NCSI automatically shifts from slot 2 to slot 1.


  • The primary slot for a VIC card in PCIe riser 2 is slot 5. The secondary slot for a VIC card in PCIe riser 2 is slot 4.


    Note

    The NCSI protocol is supported in only one slot at a time in each riser. If a GPU card is present in slot 5, NCSI automatically shifts from slot 5 to slot 4.



    Note

    PCIe riser 2 is not available in a single-CPU system.


Replacing an mLOM Card

The node supports a modular LOM (mLOM) card to provide additional rear-panel connectivity. The mLOM socket is on the motherboard, under the storage controller card.

The mLOM socket provides a Gen-3 x16 PCIe lane. The socket remains powered when the node is in 12 V standby power mode, and it supports the network communications services interface (NCSI) protocol.


Note

If your mLOM card is a Cisco UCS Virtual Interface Card (VIC), see Cisco Virtual Interface Card (VIC) Considerations for more information and support details.

Procedure


Step 1

Remove any existing mLOM card (or a blanking panel):

  1. Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

  2. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the node from the rack.
  3. Remove the top cover from the node as described in Removing the Node Top Cover.

  4. Remove any storage controller (RAID or HBA card) to provide clearance to the mLOM socket on the motherboard. See Replacing a SAS Storage Controller Card (RAID or HBA).

  5. Loosen the single captive thumbscrew that secures the mLOM card to the threaded standoff on the chassis floor.

  6. Slide the mLOM card horizontally to free it from the socket, then lift it out of the node.

Step 2

Install a new mLOM card:

  1. Set the mLOM card on the chassis floor so that its connector is aligned with the motherboard socket.

  2. Push the card horizontally to fully engage the card's edge connector with the socket.

  3. Tighten the captive thumbscrew to secure the card to the chassis floor.

  4. Return the storage controller card to the node. See Replacing a SAS Storage Controller Card (RAID or HBA).

  5. Replace the top cover to the node.

  6. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Replacing a SAS Storage Controller Card (RAID or HBA)

For hardware-based storage control, the node can use a Cisco modular SAS RAID controller or SAS HBA that plugs into a dedicated, vertical socket on the motherboard.

Storage Controller Card Firmware Compatibility

Firmware on the storage controller (RAID or HBA) must be verified for compatibility with the current Cisco IMC and BIOS versions that are installed on the node. If not compatible, upgrade or downgrade the storage controller firmware using the Host Upgrade Utility (HUU) for your firmware release to bring it to a compatible level.


Note

For nodes running in standalone mode only: After you replace controller hardware (HX-RAID-M6T, HX-RAID-M6HD, HX-RAID-M6SD, HX-SAS-M6T, or HX-SAS-M6HD), you must run the Cisco UCS Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current Version is the same as the Update Version. Running HUU is necessary to program controller-specific values into the storage controller for the specific node. If you do not run HUU, the storage controller may not be discovered.


See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring node components to compatible levels: HUU Guides.

Removing the Dual Storage Controller Cards

The front RAID assembly can contain either a single Storage controller card in a single tray, or two Storage controller cards each in its own tray. Use this procedure to remove each of the Storage controller cards. This procedure assumes that you have removed power from the node and removed the top cover.

Procedure


Step 1

Locate the dual Storage controller cards.

Each Storage controller card has its own tray, as shown.

Step 2

Remove the fan tray.

For more information, see Removing the Fan Tray.

Step 3

Disconnect the cables.

  1. For each Storage controller card, grasp the ribbon cable connector and disconnect it from the RAID card.

    You can leave the other end of the ribbon cable attached to the motherboard.

  2. For each Storage controller card, grasp the connector for the rear-drive cable, and disconnect it from the card.

    You can leave the other end of the rear-drive cable attached.

    1  SAS cable connection on Storage controller card
    2  SAS cable that connects to rear drives in Riser 3B
    3  SAS cable connection on rear Riser 3B
    4  SAS cable connection on Storage controller card
    5  Ribbon cables connecting Storage controller cards to motherboard
    6  SAS cable that connects to rear drives in Riser 1B
    7  SAS cable connection on rear Riser 1B

Step 4

Remove the Storage controller cards.

  1. Grasp the cable that leads to the rear drives and disconnect it from each card.

  2. Grasp the handle at the top of each card tray, and gently push it towards the rear of the node.

    The handle should slide to the open position. This step disconnects the Storage controller card from a socket on an interior wall.

  3. Using a #2 Phillips screwdriver, loosen the captive screws at the edges of the trays.

  4. Grasp each card tray by the handle and lift the Storage controller cards out of the chassis.


What to do next

Reinsert the dual Storage controller cards. Go to Installing the Dual Storage Controller Cards.

Installing the Dual Storage Controller Cards

Use this procedure to install the dual Storage controller cards into the node. Each Storage controller card is contained in its own tray, which is replaceable.

Procedure


Step 1

Grasp each card tray by the handle.

Step 2

Install the Storage controller cards.

  1. Make sure that the handle of the tray is in the open position.

  2. Make sure that the cables do not obstruct installing the Storage controller cards.

  3. Orient the Storage controller card so that the thumbscrews align with their threaded standoffs on the motherboard.

  4. Holding the card tray by the handle, keep the tray level and lower it into the node.

  5. Using a #2 Phillips head screwdriver, tighten the screws at the edges of each tray.

  6. Gently push the handle of the tray towards the front of the node.

    This step seats each Storage controller card into its socket on the interior wall. You might feel some resistance as the card meets the socket. This resistance is normal.

Step 3

Reconnect the cables.

Step 4

Reinsert the fan tray.

For more information, see Installing the Fan Tray.


What to do next

Perform other maintenance tasks, if needed, or replace the top cover and restore facility power.

Removing the Storage Controller Card

The node can contain either a single Storage controller card in a single tray, or two Storage controller cards, each in its own tray. Use this procedure to remove the single Storage controller card. This procedure assumes that you have removed power from the node and removed the top cover.

Procedure


Step 1

Locate the Storage controller card.

Step 2

Remove the fan tray.

For more information, see Removing the Fan Tray.

Step 3

Disconnect the cables.

  1. Grasp the ribbon cable connectors and disconnect them from the Storage controller card.

    You can leave the other end of the ribbon cable attached to the motherboard.

  2. Grasp the connector for the rear-drive cables (1 and 4) and disconnect them from the Storage controller card.

    You can leave the other end of the rear-drive cable attached.

    1  Storage controller card connector for rear drives (Riser 3B)
    2  SAS/SATA cable for rear drives
    3  Connector for PCI Riser 3
    4  Storage controller card connector for rear drives (Riser 1B)
    5  SAS/SATA cable for rear drives
    6  Connector for PCI Riser 1

Step 4

Remove the Storage controller card.

  1. Using both hands, grasp the handle at the top of the card tray, and gently push it towards the rear of the node.

    The handle should slide to the open position. This step disconnects the Storage controller card from a socket on an interior wall.

  2. Using a #2 Phillips screwdriver, loosen the captive screws at the edges of the tray.

  3. Using both hands, grasp the tray's handle, and keeping the Storage controller card tray level, lift it out of the chassis.


What to do next

Reinsert the Storage controller card. Go to Installing the Storage Controller Card.

Installing the Storage Controller Card

Use this procedure to install the single Storage controller card into the node. The Storage controller card is contained in a tray, which is replaceable.

Procedure


Step 1

Grasp the card tray by the handle.

Step 2

Install the Storage controller card.

  1. Make sure that the handle of the tray is in the open position.

  2. Make sure that the cables do not obstruct installing the Storage controller card.

  3. Orient the Storage controller card so that the thumbscrews align with their threaded standoffs on the motherboard.

  4. Using both hands, hold the card tray by the handle, keep the tray level, and lower it into the server.

  5. Using a #2 Phillips head screwdriver, tighten the screws at the edges of the tray.

  6. Using both hands, make sure to apply equal pressure to both sides of the handle, and gently push the handle of the tray towards the front of the server.

    This step seats the Storage controller card into its sockets on the interior wall. You might feel some resistance as the card meets the socket. This resistance is normal.

Step 3

Reconnect the cables.

Step 4

Reinsert the fan tray.

For more information, see Installing the Fan Tray.


What to do next

Perform other maintenance tasks, if needed, or replace the top cover and restore facility power.

Verify Cabling

After installing a Storage controller card, the cabling between the card(s) and rear drives should be as follows.

  • For a 24-drive node, verify the following:

    • the SAS/SATA cable is connected to the controller card and Riser 3B

    • the SAS/SATA cable is connected to the controller card and the Riser 1B

    • both ribbon cables are connected to the controller card and the motherboard

    1  SAS cable connection on Storage controller card
    2  SAS cable that connects to rear drives in Riser 3B
    3  SAS cable connection on rear Riser 3B
    4  SAS cable connection on Storage controller card
    5  Ribbon cables connecting Storage controller cards to motherboard
    6  SAS cable that connects to rear drives in Riser 1B
    7  SAS cable connection on rear Riser 1B

  • For a 12-drive node, verify the following:

    • the SAS/SATA cable is connected to the Storage controller card and the Riser 3B

    • the ribbon cable is connected to the Storage controller card and the motherboard.

    1  SAS cable connects from rear drive to SAS card
    2  Ribbon cable connects from motherboard to SAS card

Replacing a SATA Interposer Card (12-Drive SFF Server Only)


Note

The only version of this node that supports the SATA interposer card is the SFF, 12-drive version (HX-C240-M6S).


For software-based storage control of the front-loading drives, the node requires a SATA interposer card that plugs into a dedicated socket on the motherboard (the same socket used for SAS storage controllers). The interposer card sits between the front-loading drive bays and the fan tray and supports up to eight SATA drives (in bays 1 through 8).


Note

You cannot use a hardware RAID controller card and the embedded software RAID controller to control front drives at the same time.


Procedure


Step 1

Prepare the node for component installation:

  1. Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

  2. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the node from the rack.
  3. Remove the top cover from the node as described in Removing the Node Top Cover.

Step 2

Remove any existing SATA interposer card from the node:

Note 

A SATA interposer card for this node is preinstalled inside a carrier-frame that helps to secure the card to the inner chassis wall. You do not have to remove this carrier frame from the existing card.

  1. Disconnect the PCIe SAS cable from the existing card.

    This is the only cable on the card; it connects the card to the motherboard by an x8 slimSAS connector.

  2. Using a #2 Phillips screwdriver, completely loosen the captive screws.

  3. Gently push the handle toward the middle of the node.

    This step moves the handle to the open position and unseats the interposer card from its socket on the interior wall.

  4. Making sure that the card is disconnected from its socket, grasp the handle and lift straight up to remove the interposer from the node.

Step 3

Install a new SATA interposer card:

  1. Orient the interposer card so that the front edge connector is facing its socket on the interior wall.

  2. Align the captive screws on the card with their threaded standoffs.

  3. Making sure that the handle is completely in the open position and the PCIe SAS cable will not obstruct installation, lower the interposer card into the node.

  4. Keeping the card level, push the handle forward to seat the interposer card into its socket on the interior wall.

  5. Using a #2 Phillips screwdriver, tighten the captive screws.

  6. Connect PCIe cables to the new card.

    If this is a first-time installation, see Storage Controller Cable Connectors and Backplanes for cabling instructions.

Step 4

Replace the top cover to the node.

Step 5

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Replacing the Supercap (RAID Backup)

This node supports installation of one Supercap unit for 12-drive nodes and two Supercap units for 24-drive nodes. The unit mounts to a bracket on the removable air baffle.

The Supercap provides approximately three years of backup for the disk write-back cache DRAM in the case of a sudden power loss by offloading the cache to the NAND flash.

Procedure


Step 1

Prepare the node for component installation:

  1. Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

  2. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the node from the rack.
  3. Remove the top cover from the node as described in Removing the Node Top Cover.

  4. Locate the SuperCap unit(s) as shown below.

Step 2

Remove an existing Supercap:

  1. Disconnect the Supercap cable from the RAID cable.

  2. Push aside the securing tab that holds the Supercap to its bracket.

  3. Lift the Supercap free of the bracket and set it aside.

Step 3

Install a new Supercap:

  1. Orient the Supercap unit so that the Supercap cable connector and the RAID cable connector meet.

  2. Make sure that the RAID cable does not obstruct the Supercap when you install it, then insert the new Supercap into the mounting bracket.

    Make sure that the Supercap unit is securely inserted into its bracket.

  3. Connect the cable from the RAID controller card to the connector on the Supercap cable.

Step 4

Replace the top cover to the node.

Step 5

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Replacing a Boot-Optimized M.2 RAID Controller Module

The Cisco Boot-Optimized M.2 RAID Controller module connects to the mini-storage module socket on the motherboard. It includes slots for two SATA M.2 drives, plus an integrated 6-Gbps SATA RAID controller that can control the SATA M.2 drives in a RAID 1 array.

Cisco Boot-Optimized M.2 RAID Controller Considerations

Review the following considerations:


Note

The Cisco Boot-Optimized M.2 RAID Controller is not supported when the node is used as a compute-only node in Cisco HyperFlex configurations.


  • The minimum versions of Cisco IMC and Cisco UCS Manager that support this controller are 4.0(4) and later.

  • This controller supports RAID 1 (single volume) and JBOD mode.


    Note

    Do not use the node's embedded SW MegaRAID controller to configure RAID settings when using this controller module. Instead, you can use the following interfaces:

    • Cisco IMC 4.2(1) and later

    • BIOS HII utility, BIOS 4.2(1) and later

    • Cisco UCS Manager 4.2(1) and later (UCS Manager-integrated nodes)


  • A SATA M.2 drive in slot 1 (the top) is the first SATA device; a SATA M.2 drive in slot 2 (the underside) is the second SATA device.

    • The name of the controller in the software is MSTOR-RAID.

    • A drive in slot 1 is mapped as drive 253; a drive in slot 2 is mapped as drive 254.

  • When using RAID, we recommend that both SATA M.2 drives be the same capacity. If the capacities differ, the smaller of the two is used to create the volume and the remaining space on the larger drive is unusable. For example, pairing a 240-GB drive with a 960-GB drive yields a 240-GB RAID 1 volume, leaving 720 GB unusable.

    JBOD mode supports mixed capacity SATA M.2 drives.

  • Hot-plug replacement is not supported. The node must be powered off.

  • You can monitor the controller and the installed SATA M.2 drives with Cisco IMC and Cisco UCS Manager, or with other utilities such as the UEFI HII, PMCLI, XMLAPI, and Redfish (see the monitoring sketch after this list).

  • Updating firmware of the controller and the individual drives: for standalone nodes, use the Cisco Host Upgrade Utility (HUU); for Cisco UCS Manager-integrated nodes, use the Cisco UCS Manager firmware management capability.

  • The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.

  • If you replace a single SATA M.2 drive that was part of a RAID volume, the volume rebuild starts automatically after you accept the prompt to import the configuration. If you replace both drives of a volume, you must create a new RAID volume and manually reinstall any OS.

  • We recommend that you erase drive contents before creating volumes on used drives from another node. The configuration utility in the node BIOS includes a SATA secure-erase function.

  • The node BIOS includes a configuration utility specific to this controller that you can use to create and delete RAID volumes, view controller properties, and erase the physical drive contents. Access the utility by pressing F2 when prompted during node boot. Then navigate to Advanced > Cisco Boot Optimized M.2 RAID Controller.
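For out-of-band monitoring, the following minimal sketch walks the standard DMTF Redfish storage model to report controller and drive health. It assumes that the node's Cisco IMC exposes this model; the BMC address and credentials are placeholders, and the MSTOR-RAID match comes from the controller name noted above. Treat it as a starting point, not a definitive implementation.

# Minimal Redfish monitoring sketch (assumes the standard DMTF Redfish
# storage model; the BMC address and credentials are placeholders).
import requests
import urllib3

urllib3.disable_warnings()  # BMCs commonly use self-signed certificates

BMC_HOST = "https://bmc.example.com"  # placeholder Cisco IMC address
AUTH = ("admin", "password")          # placeholder credentials

def get(path):
    r = requests.get(BMC_HOST + path, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

for sys_ref in get("/redfish/v1/Systems")["Members"]:
    for stor_ref in get(sys_ref["@odata.id"] + "/Storage")["Members"]:
        stor = get(stor_ref["@odata.id"])
        if "MSTOR" not in stor.get("Id", ""):
            continue  # only the Boot-Optimized M.2 RAID controller
        health = stor.get("Status", {}).get("Health")
        print(f"Controller {stor['Id']}: health={health}")
        for drive_ref in stor.get("Drives", []):
            drive = get(drive_ref["@odata.id"])
            print(f"  Drive {drive.get('Id')}: "
                  f"{drive.get('CapacityBytes')} bytes, "
                  f"health={drive.get('Status', {}).get('Health')}")

The same loop can be extended to read the storage subsystem's Volumes collection to confirm the state of the RAID 1 volume.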

Replacing a Cisco Boot-Optimized M.2 RAID Controller

This topic describes how to remove and replace a Cisco Boot-Optimized M.2 RAID Controller. The controller board has one M.2 socket on its top (Slot 1) and one M.2 socket on its underside (Slot 2).

Procedure


Step 1

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

Step 2

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 3

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 4

Remove a controller from its motherboard socket:

  1. Locate the controller in its socket between PCIe risers 2 and 3.

    Figure 20. Cisco Boot-Optimized M.2 RAID Controller on Motherboard
  2. Using a #2 Phillips screwdriver, loosen the captive screws that secure the M.2 module.

  3. At each end of the controller board, push outward on the clip that secures the controller.

  4. Lift both ends of the controller to disengage it from its socket.

  5. Set the controller on an anti-static surface.

Step 5

If you are transferring SATA M.2 drives from the old controller to the replacement controller, do that before installing the replacement controller:

Note 

Any previously configured volume and data on the drives are preserved when the M.2 drives are transferred to the new controller. The system will boot the existing OS that is installed on the drives.

  1. Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 drive to the carrier.

  2. Lift the M.2 drive from its socket on the carrier.

  3. Position the M.2 drive over the socket on the carrier of the replacement controller.

  4. Angle the M.2 drive downward and insert the connector end into the socket on the carrier. The M.2 drive's label must face up.

  5. Press the M.2 drive flat against the carrier.

  6. Install the single screw that secures the end of the M.2 drive to the carrier.

  7. Turn the controller over and install the second M.2 drive.

Figure 21. Cisco Boot-Optimized M.2 RAID Controller, Showing M.2 Drive Installation
Step 6

Install the controller to its socket on the motherboard:

  1. Position the controller over the socket, with the controller's connector facing down and at the same end as the motherboard socket. The two alignment pegs must align with the two holes on the controller.

  2. Gently push down the socket end of the controller so that the two pegs go through the two holes on the controller.

  3. Push down on the controller so that the securing clips click over it at both ends.

Step 7

Replace the top cover to the node.

Step 8

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Replacing a Chassis Intrusion Switch

The chassis intrusion switch is an optional security feature that logs an event in the system event log (SEL) whenever the cover is removed from the chassis.
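Because cover-removal events are recorded in the SEL, you can also check for them out of band. The following is a minimal sketch, assuming the BMC exposes the SEL through the standard Redfish LogServices model; the BMC address, credentials, and the "intrusion" match string are placeholders, and the exact event wording can differ between firmware releases.

# Hedged sketch: scan the SEL for chassis-intrusion entries over Redfish.
import requests
import urllib3

urllib3.disable_warnings()  # BMCs commonly use self-signed certificates

BMC_HOST = "https://bmc.example.com"  # placeholder Cisco IMC address
AUTH = ("admin", "password")          # placeholder credentials

def get(path):
    r = requests.get(BMC_HOST + path, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

for sys_ref in get("/redfish/v1/Systems")["Members"]:
    entries = get(sys_ref["@odata.id"] + "/LogServices/SEL/Entries")
    for member in entries.get("Members", []):
        # Some services embed full entries; others return references.
        entry = member if "Message" in member else get(member["@odata.id"])
        if "intrusion" in entry.get("Message", "").lower():
            print(entry.get("Created"), entry.get("Message"))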

Procedure


Step 1

Prepare the node for component installation:

  1. Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

  2. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the node from the rack.
  3. Remove the top cover from the node as described in Removing the Node Top Cover.

Step 2

Remove an existing intrusion switch:

  1. Disconnect the intrusion switch cable from the socket on the motherboard.

  2. Use a #1 Phillips-head screwdriver to loosen and remove the single screw that holds the switch mechanism to the chassis wall.

  3. Slide the switch mechanism straight up to disengage it from the clips on the chassis.

Step 3

Install a new intrusion switch:

  1. Slide the switch mechanism down into the clips on the chassis wall so that the screw holes line up.

  2. Use a #1 Phillips-head screwdriver to install the single screw that secures the switch mechanism to the chassis wall.

  3. Connect the switch cable to the socket on the motherboard.

Step 4

Replace the top cover to the node.

Step 5

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.


Installing a Trusted Platform Module (TPM)

The trusted platform module (TPM) is a small circuit board that plugs into a motherboard socket and is then permanently secured with a one-way screw. The socket location is on the motherboard below PCIe riser 2.

TPM Considerations

  • This node supports either TPM version 1.2 or TPM version 2.0.

  • Field replacement of a TPM is not supported; you can install a TPM after-factory only if the node does not already have a TPM installed.

  • If there is an existing TPM 1.2 installed in the node, you cannot upgrade to TPM 2.0. If there is no existing TPM in the node, you can install TPM 2.0.

  • If the TPM 2.0 becomes unresponsive, reboot the node.

Installing and Enabling a TPM


Note

Field replacement of a TPM is not supported; you can install a TPM after-factory only if the node does not already have a TPM installed.

This topic contains the following procedures, which must be followed in this order when installing and enabling a TPM:

  1. Installing the TPM Hardware

  2. Enabling the TPM in the BIOS

  3. Enabling the Intel TXT Feature in the BIOS

Installing the TPM Hardware


Note

For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard screwdriver.
Procedure

Step 1

Prepare the node for component installation:

  1. Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

  2. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the node from the rack.
  3. Remove the top cover from the node as described in Removing the Node Top Cover.

Step 2

Remove PCIe riser 2 from the node to provide clearance to the TPM socket on the motherboard.

Step 3

Install a TPM:

  1. Locate the TPM socket on the motherboard.

  2. Align the connector that is on the bottom of the TPM circuit board with the motherboard TPM socket. Align the screw hole on the TPM board with the screw hole that is adjacent to the TPM socket.

  3. Push down evenly on the TPM to seat it in the motherboard socket.

  4. Install the single one-way screw that secures the TPM to the motherboard.

Step 4

Replace PCIe riser 2 to the node. See Replacing a PCIe Riser.

Step 5

Replace the top cover to the node.

Step 6

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Step 7

Continue with Enabling the TPM in the BIOS.


Enabling the TPM in the BIOS

After hardware installation, you must enable TPM support in the BIOS.


Note

You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security > Set Administrator Password and enter the new password twice as prompted.


Procedure

Step 1

Enable TPM Support:

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log in to the BIOS Setup Utility with your BIOS Administrator password.

  3. On the BIOS Setup Utility window, choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Change TPM SUPPORT to Enabled.

  6. Press F10 to save your settings and reboot the node.

Step 2

Verify that TPM support is now enabled:

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log into the BIOS Setup utility with your BIOS Administrator password.

  3. Choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Verify that TPM SUPPORT and TPM State are Enabled. An out-of-band alternative check is shown in the sketch below.
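As an out-of-band alternative to reading the BIOS screens, the following minimal sketch queries the standard Redfish TrustedModules property of the ComputerSystem resource, assuming Cisco IMC populates it; the BMC address and credentials are placeholders.

# Hedged sketch: confirm TPM presence and state via Redfish, assuming the
# standard ComputerSystem.TrustedModules property is populated.
import requests
import urllib3

urllib3.disable_warnings()  # BMCs commonly use self-signed certificates

BMC_HOST = "https://bmc.example.com"  # placeholder Cisco IMC address
AUTH = ("admin", "password")          # placeholder credentials

def get(path):
    r = requests.get(BMC_HOST + path, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

for sys_ref in get("/redfish/v1/Systems")["Members"]:
    system = get(sys_ref["@odata.id"])
    for tpm in system.get("TrustedModules", []):
        print("TPM:", tpm.get("InterfaceType"),   # e.g. TPM2_0
              "firmware:", tpm.get("FirmwareVersion"),
              "state:", tpm.get("Status", {}).get("State"))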

Step 3

Continue with Enabling the Intel TXT Feature in the BIOS.


Enabling the Intel TXT Feature in the BIOS

Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the node. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisible to the rest of the system. Intel TXT also provides a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.

Procedure

Step 1

Reboot the node and watch for the prompt to press F2.

Step 2

When prompted, press F2 to enter the BIOS Setup utility.

Step 3

Verify that the prerequisite BIOS values are enabled:

  1. Choose the Advanced tab.

  2. Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.

  3. Verify that the following items are listed as Enabled:

    • VT-d Support (default is Enabled)

    • VT Support (default is Enabled)

    • TPM Support

    • TPM State

  4. Do one of the following:

    • If VT-d Support and VT Support are already enabled, skip to Step 4.

    • If VT-d Support and VT Support are not enabled, continue with the next steps to enable them.

  5. Press Escape to return to the BIOS Setup utility Advanced tab.

  6. On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.

  7. Set Intel(R) VT and Intel(R) VT-d to Enabled.

Step 4

Enable the Intel Trusted Execution Technology (TXT) feature:

  1. Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.

  2. Set TXT Support to Enabled.

Step 5

Press F10 to save your changes and exit the BIOS Setup utility.


Removing the PCB Assembly (PCBA)

The PCBA is secured to the node's sheet metal tray by several different types of fasteners. You must disconnect the PCBA from the tray before recycling it.

Before you begin


Note

For Recyclers Only! This procedure is not a standard field-service option. This procedure is for recyclers who will be reclaiming the electronics for proper disposal to comply with local eco design and e-waste regulations.


To remove the printed circuit board assembly (PCBA), the following requirements must be met:

  • The node must be disconnected from facility power.

  • The node must be removed from the equipment rack.

  • The node's top cover must be removed. See Removing the Node Top Cover.

Gather the following tools before beginning this procedure:

  • Pliers

  • T10 Torx screwdriver

  • #2 Phillips screwdriver

Procedure


Step 1

Locate the PCBA's mounting screws.

The following figure and table show the screw locations, the fastener types, and the tools required.

Figure 22. Screw Locations for Removing the HX C240 M6 PCBA

Indicator          Fastener                                       Required Tool
Red circles        M3.5x0.6 mm screws (18)                        T10 Torx screwdriver
Green circles      H15 M4x0.7 mm lock screws (2)                  Pliers
Blue circles       M3.5x0.6 mm thumb screws (2)                   T10 Torx screwdriver
Yellow circles     H12 M4x0.7 mm locking screws (2)               Pliers
Purple circle      M3.5 thumb screw (1), on the M.2 riser cage    #2 Phillips screwdriver
Lavender circle    M3.5 thumb screw (1), on the air duct          #2 Phillips screwdriver

Step 2

Using the appropriate tools, remove the screws.

Step 3

Remove the PCBA from the sheet metal and dispose of each component in compliance with your local e-waste and recycling regulations.


Service Headers and Jumpers

This node includes blocks of headers and switches (SW12, CN3) that you can use for certain service and debug functions.

This section contains the following topics:

  • Using the Clear CMOS Switch (SW12, Switch 9)

  • Using the BIOS Recovery Switch (SW12, Switch 5)

  • Using the Clear BIOS Password Switch (SW12, Switch 6)

  • Using the Boot Alternate Cisco IMC Image Header (CN3, Pins 1-2)

  • Using the System Firmware Secure Erase Header (CN3, Pins 3-4)

The following legend identifies each header and switch by its callout number:

1    Location of header block CN3

2    Boot Cisco IMC from alternate image: CN3 pins 1-2. Default: open. Place a jumper shunt over the pins to close the circuit.

3    System Firmware Secure Erase: CN3 pins 3-4. Default: open. Place a jumper shunt over the pins to close the circuit.

4    Location of switch block SW12

5    Clear BIOS password: SW12 switch 6. Default setting: Off. Clear password: On.

6    Recover BIOS: SW12 switch 5. Default setting: Off. Recovery mode: On.

7    Clear CMOS: SW12 switch 9. Default setting: Off. Clear CMOS: On (gently push the switch to the right, which is the On position).

Using the Clear CMOS Switch (SW12, Switch 9)

You can use this switch to clear the node’s CMOS settings in the case of a system hang. For example, if the node hangs because of incorrect settings and does not boot, use this switch to invalidate the settings and reboot with defaults.

You will find it helpful to refer to the location of switch block SW12. See Service Headers and Jumpers.


Caution

Clearing the CMOS removes any customized settings and might result in data loss. Make a note of any necessary customized settings in the BIOS before you use this clear CMOS procedure.
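One way to record those settings is to dump the BIOS attributes over Redfish before you clear CMOS. The following is a minimal sketch, assuming Cisco IMC populates the standard Redfish Bios resource; the BMC address and credentials are placeholders.

# Hedged sketch: save the current BIOS attributes to a JSON file before
# clearing CMOS. Assumes the standard Redfish Bios resource.
import json
import requests
import urllib3

urllib3.disable_warnings()  # BMCs commonly use self-signed certificates

BMC_HOST = "https://bmc.example.com"  # placeholder Cisco IMC address
AUTH = ("admin", "password")          # placeholder credentials

def get(path):
    r = requests.get(BMC_HOST + path, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

for sys_ref in get("/redfish/v1/Systems")["Members"]:
    attrs = get(sys_ref["@odata.id"] + "/Bios").get("Attributes", {})
    with open("bios-settings-backup.json", "w") as f:
        json.dump(attrs, f, indent=2, sort_keys=True)
    print(f"Saved {len(attrs)} BIOS attributes to bios-settings-backup.json")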

Procedure


Step 1

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

Step 2

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 3

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 4

Using your finger, gently push the SW12 switch 9 to the side marked ON.

Step 5

Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.

Note 
You must allow the entire node to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.
Step 7

Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.

Step 8

Remove the top cover from the node.

Step 9

Using your finger, gently push switch 9 to its original position (OFF).

Note 
If you do not reset the switch to its original position (OFF), the CMOS settings are reset to the defaults every time you power-cycle the node.
Step 10

Replace the top cover, replace the node in the rack, replace power cords and any other cables, and then power on the node by pressing the Power button.


Using the BIOS Recovery Switch (SW12, Switch 5)

Depending on the stage at which the BIOS became corrupted, you might see different behavior.

  • If the BIOS BootBlock is corrupted, you might see the system get stuck on the following message:

    Initializing and configuring memory/hardware
  • If it is a non-BootBlock corruption, a message similar to the following is displayed:

    ****BIOS FLASH IMAGE CORRUPTED****
    Flash a valid BIOS capsule file using Cisco IMC WebGUI or CLI interface.
    IF Cisco IMC INTERFACE IS NOT AVAILABLE, FOLLOW THE STEPS MENTIONED BELOW.
    1. Connect the USB stick with bios.cap file in root folder.
    2. Reset the host.
    IF THESE STEPS DO NOT RECOVER THE BIOS
    1. Power off the system.
    2. Mount recovery jumper.
    3. Connect the USB stick with bios.cap file in root folder.
    4. Power on the system.
    Wait for a few seconds if already plugged in the USB stick.
    REFER TO SYSTEM MANUAL FOR ANY ISSUES.

Note

As indicated by the message shown above, there are two procedures for recovering the BIOS. Try procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.

Procedure 1: Reboot With bios.cap Recovery File

Procedure

Step 1

Download the BIOS update package and extract it to a temporary location.

Step 2

Copy the contents of the extracted recovery folder to the root directory of a USB drive. The recovery folder contains the bios.cap file that is required in this procedure.

Note 
The bios.cap file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either the FAT16 or FAT32 file system.
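If you prepare the drive from a workstation and want a scripted sanity check, the following minimal sketch verifies the file layout only (it does not check the filesystem type); MOUNT_POINT is a placeholder for wherever the USB drive is mounted.

# Hedged sketch: verify that bios.cap sits in the root of the prepared
# USB drive. MOUNT_POINT is a placeholder; adjust for your workstation.
import os
import sys

MOUNT_POINT = "/media/usb"  # placeholder mount point of the USB drive

cap_path = os.path.join(MOUNT_POINT, "bios.cap")
if not os.path.isfile(cap_path):
    sys.exit("bios.cap was not found in the root directory of the drive")

size = os.path.getsize(cap_path)
print(f"Found bios.cap ({size} bytes) in the drive root.")
print("Reminder: the drive must be FAT16- or FAT32-formatted, and the "
      "file must keep the name bios.cap.")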
Step 3

Insert the USB drive into a USB port on the node.

Step 4

Reboot the node.

Step 5

Return the node to main power mode by pressing the Power button on the front panel.

The node boots with the updated BIOS boot block. When the BIOS detects a valid bios.cap file on the USB drive, it displays this message:

Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
Step 6

Wait for the node to complete the BIOS update, and then remove the USB drive from the node.

Note 
During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.

Procedure 2: Use the BIOS Recovery Switch (SW12, Switch 5) and bios.cap File

You can use this switch to force the node to boot from a recovery BIOS image.

You will find it helpful to refer to the location of switch block SW12. See Service Headers and Jumpers.

Procedure

Step 1

Download the BIOS update package and extract it to a temporary location.

Step 2

Copy the contents of the extracted recovery folder to the root directory of a USB drive. The recovery folder contains the bios.cap file that is required in this procedure.

Note 
The bios.cap file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either the FAT16 or FAT32 file system.
Step 3

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node. Disconnect power cords from all power supplies.

Step 4

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 5

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 6

Using your finger, gently slide the SW12 switch 5 to the ON position.

Step 7

Reconnect AC power cords to the node. The node powers up to standby power mode.

Step 8

Insert the USB thumb drive that you prepared in Step 2 into a USB port on the node.

Step 9

Return the node to main power mode by pressing the Power button on the front panel.

The node boots with the updated BIOS boot block. When the BIOS detects a valid bios.cap file on the USB drive, it displays this message:

Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
Step 10

Wait for the node to complete the BIOS update, and then remove the USB drive from the node.

Note 
During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.
Step 11

After the node has fully booted, power off the node again and disconnect all power cords.

Step 12

Using your finger, gently slide the switch back to its original position (OFF).

Note 
If you do not reset the switch to its original position (OFF), you see the prompt "Please remove the recovery jumper" after recovery completes.
Step 13

Replace the top cover, replace the node in the rack, replace power cords and any other cables, and then power on the node by pressing the Power button.


Using the Clear BIOS Password Switch (SW12, Switch 6)

You can use this switch to clear the BIOS password.

You will find it helpful to refer to the location of switch block SW12. See Service Headers and Jumpers.

Procedure


Step 1

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node. Disconnect power cords from all power supplies.

Step 2

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 3

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 4

Using your finger, gently slide the SW12 switch 6 to the ON position.

Step 5

Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.

Note 
You must allow the entire node to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.
Step 7

Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.

Step 8

Remove the top cover from the node.

Step 9

Reset the switch to its original position (OFF).

Note 
If you do not reset the switch to its original position (OFF), the BIOS password is cleared every time you power-cycle the node.
Step 10

Replace the top cover, replace the node in the rack, replace power cords and any other cables, and then power on the node by pressing the Power button.


Using the Boot Alternate Cisco IMC Image Header (CN3, Pins 1-2)

You can use this Cisco IMC debug header to force the system to boot from an alternate Cisco IMC image.

You will find it helpful to refer to the location of the CN3 header. See Service Headers and Jumpers.

Procedure


Step 1

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node. Disconnect power cords from all power supplies.

Step 2

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 3

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 4

Install a two-pin jumper across CN3 pins 1 and 2.

Step 5

Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.

Note 

When you next log in to Cisco IMC, you see a message similar to the following:

'Boot from alternate image' debug functionality is enabled.  
CIMC will boot from alternate image on next reboot or input power cycle.
Note 
If you do not remove the jumper, the node will boot from an alternate Cisco IMC image every time that you power cycle the node or reboot Cisco IMC.
Step 7

To remove the jumper, press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.

Step 8

Remove the top cover from the node.

Step 9

Remove the jumper that you installed.

Step 10

Replace the top cover, replace the node in the rack, replace power cords and any other cables, and then power on the node by pressing the Power button.


Using the System Firmware Secure Erase Header (CN3, Pins 3-4)

You can use this header to securely erase system firmware from the node.

You will find it helpful to refer to the location of the CN3 header. See Service Headers and Jumpers.

Procedure


Step 1

Shut down and remove power from the node as described in Shutting Down and Removing Power From the Node. Disconnect power cords from all power supplies.

Step 2

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the node from the rack.
Step 3

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 4

Install a two-pin jumper across CN3 pins 3 and 4.

Step 5

Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.

Note 
You must allow the entire node to reboot to main power mode to complete the reset. The state of the jumper cannot be determined without the host CPU running.
Step 7

Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.

Step 8

Remove the top cover from the node.

Step 9

Remove the jumper that you installed.

Note 
If you do not remove the jumper, the secure erase of the system firmware is performed every time that you power-cycle the node.
Step 10

Replace the top cover, replace the node in the rack, replace power cords and any other cables, and then power on the node by pressing the Power button.