The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product.
This chapter describes how to diagnose node problems using LEDs. It also provides information about how to install or replace hardware components, and it includes the following sections:
This section describes the location and meaning of LEDs and buttons and includes the following topics:
Figure 3-1 shows the front panel LEDs. Table 3-1 defines the LED states.
Figure 3-2 shows the rear panel LEDs and buttons. Table 3-2 defines the LED states.
Figure 3-2 Rear Panel LEDs and Buttons
– mLOM card LED (Cisco VIC 1227)
The node has internal fault LEDs for CPUs, DIMMs, fan modules, SD cards, the RTC battery, and the mLOM card. These LEDs are available only when the node is in standby power mode. An LED lights amber to indicate a faulty component.
See Figure 3-3 for the locations of these internal LEDs.
Figure 3-3 Internal Diagnostic LED Locations
– Fan module fault LEDs (one next to each fan connector on the motherboard)
– DIMM fault LEDs (one in front of each DIMM socket on the motherboard)
This section describes how to prepare for component installation, and it includes the following topics:
The following equipment is used to perform the procedures in this chapter:
The node can run in two power modes:
This section contains the following procedures, which are referenced from component replacement procedures. Alternate shutdown procedures are included.
When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.
Note If the Shutdown Server link is dimmed in the Actions area, the node is not running.
Step 1 In the Navigation pane, click Equipment.
Step 2 Expand Equipment > Rack Mounts > Servers.
Step 3 Choose the node that you want to shut down.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Shutdown Server.
Step 6 If a confirmation dialog displays, click Yes.
After the node has been successfully shut down, the Overall Status field on the General tab displays a power-off status.
When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.
Note If the Shutdown Server link is dimmed in the Actions area, the node is not running.
Step 1 In the Navigation pane, click Servers.
Step 2 Expand Servers > Service Profiles.
Step 3 Expand the node for the organization that contains the service profile of the server node you are shutting down.
Step 4 Choose the service profile of the server node that you are shutting down.
Step 5 In the Work pane, click the General tab.
Step 6 In the Actions area, click Shutdown Server.
Step 7 If a confirmation dialog displays, click Yes.
After the node has been successfully shut down, the Overall Status field on the General tab displays a power-off status.
Some procedures directly place the node into Cisco HX Maintenance mode. This procedure migrates all VMs to other nodes before the node is shut down and decommissioned from Cisco UCS Manager.
Step 1 Put the node in Cisco HX Maintenance mode by using the vSphere interface:
a. Log in to the vSphere web client.
b. Go to Home > Hosts and Clusters.
c. Expand the Datacenter that contains the HX Cluster.
d. Expand the HX Cluster and select the node.
e. Right-click the node and select Cisco HX Maintenance Mode > Enter HX Maintenance Mode.
Alternatively, use the command line:
a. Log in to the storage controller cluster command line as a user with root privileges.
b. Move the node into HX Maintenance Mode.
1. Identify the node ID and IP address:
2. Enter the node into HX Maintenance Mode:
# stcli node maintenanceMode (--id ID | --ip IP-address) --mode enter
(see also stcli node maintenanceMode --help)
c. Log in to the ESXi command line of this node as a user with root privileges.
d. Verify that the node has entered HX Maintenance Mode:
# esxcli system maintenanceMode get
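If you script this verification (for example, over SSH from a management host), the command output is a single word. The following is a minimal sketch; `in_maintenance_mode` is a hypothetical helper name, and the transport used to run the command is out of scope:

```python
# Minimal sketch: parse the output of `esxcli system maintenanceMode get`.
# The command prints "Enabled" when the host is in maintenance mode and
# "Disabled" otherwise.

def in_maintenance_mode(esxcli_output):
    state = esxcli_output.strip()
    if state not in ("Enabled", "Disabled"):
        raise ValueError(f"unexpected esxcli output: {state!r}")
    return state == "Enabled"
```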
Step 2 Shut down the node using UCS Manager as described in Shutting Down the Node.
Note This method is not recommended for a HyperFlex node, but the operation of the physical power button is explained here in case an emergency shutdown is required.
Step 1 Check the color of the Power Status LED (see the “Front Panel LEDs” section).
Step 2 Invoke either a graceful shutdown or a hard shutdown:
Before replacing an internal component of a node, you must decommission the node to remove it from the Cisco UCS configuration. When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.
Step 1 In the Navigation pane, click Equipment.
Step 2 Expand Equipment > Rack Mounts > Servers.
Step 3 Choose the node that you want to decommission.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Server Maintenance.
Step 6 In the Maintenance dialog box, click Decommission, then click OK.
The node is removed from the Cisco UCS configuration.
This section contains the following procedures, which are referenced from component replacement procedures:
After replacing an internal component of a node, you must recommission the node to add it back into the Cisco UCS configuration.
Step 1 In the Navigation pane, click Equipment.
Step 2 Under Equipment, click the Rack Mounts node.
Step 3 In the Work pane, click the Decommissioned tab.
Step 4 On the row for each rack-mount server that you want to recommission, do the following:
a. In the Recommission column, check the check box.
Step 5 If a confirmation dialog box displays, click Yes.
Step 6 (Optional) Monitor the progress of the server recommission and discovery on the FSM tab for the server.
Use this procedure to associate an HX node to its service profile after recommissioning.
Step 1 In the Navigation pane, click Servers.
Step 2 Expand Servers > Service Profiles.
Step 3 Expand the node for the organization that contains the service profile that you want to associate with the HX node.
Step 4 Right-click the service profile that you want to associate with the HX node and then select Associate Service Profile.
Step 5 In the Associate Service Profile dialog box, select the Server option.
Step 6 Navigate through the navigation tree and select the HX node to which you are assigning the service profile.
Use this procedure to exit HX Maintenance Mode after performing a service procedure.
Step 1 Exit the node from Cisco HX Maintenance mode by using the vSphere interface:
a. Log in to the vSphere web client.
b. Go to Home > Hosts and Clusters.
c. Expand the Datacenter that contains the HX Cluster.
d. Expand the HX Cluster and select the node.
e. Right-click the node and select Cisco HX Maintenance Mode > Exit HX Maintenance Mode.
Alternatively, use the command line:
a. Log in to the storage controller cluster command line as a user with root privileges.
b. Exit the node out of HX Maintenance Mode.
1. Identify the node ID and IP address:
2. Exit the node out of HX Maintenance Mode:
# stcli node maintenanceMode (--id ID | --ip IP-address) --mode exit
(see also stcli node maintenanceMode --help)
c. Log in to the ESXi command line of this node as a user with root privileges.
d. Verify that the node has exited HX Maintenance Mode:
# esxcli system maintenanceMode get
Step 1 Remove the top cover (see Figure 3-4).
a. If the cover latch is locked, use a screwdriver to turn the lock 90-degrees counterclockwise to unlock it. See Figure 3-4.
b. Lift on the end of the latch that has the green finger grip. The cover is pushed back to the open position as you lift the latch.
c. Lift the top cover straight up from the node and set it aside.
Note The latch must be in the fully open position when you set the cover back in place, which allows the opening in the latch to sit over a peg that is on the fan tray.
Step 2 Replace the top cover:
a. With the latch in the fully open position, place the cover on top of the node about one-half inch (1.27 cm) behind the lip of the front cover panel. The opening in the latch should fit over the peg that sticks up from the fan tray.
b. Press the cover latch down to the closed position. The cover is pushed forward to the closed position as you push down the latch.
c. If desired, lock the latch by using a screwdriver to turn the lock 90-degrees clockwise.
Figure 3-4 Removing the Top Cover
Warning Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the node unless all cards, faceplates, front covers, and rear covers are in place.
Statement 1029
Tip You can press the unit identification button on the front panel or rear panel to turn on a flashing unit identification LED on the front and rear panels of the node. This button allows you to locate the specific node that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs remotely by using the Cisco IMC interface. See the “Status LEDs and Buttons” section for locations of these LEDs.
This section describes how to install and replace node components, and it includes the following topics:
This section shows the locations of the field-replaceable components. The view in Figure 3-5 is from the top down with the top cover and air baffle removed.
Figure 3-5 Replaceable Component Locations
– See Replacing Drives for information about supported drives.
– Power supplies (up to two, hot-swappable when redundant as 1+1)
– Drive bay 2: SSD caching drive. The supported caching SSD differs between the HX220c and HX220c All-Flash nodes. See Replacing Drives.
– Trusted platform module (TPM) socket on motherboard (not visible in this view)
– Modular LOM (mLOM) connector on chassis floor for Cisco VIC 1227
– Cisco modular HBA PCIe riser (dedicated riser with horizontal socket)
The drive bay numbering is shown in Figure 3-6.
Figure 3-6 Drive Bay Numbering
Observe these drive population guidelines:
– HX220c: HDD persistent data drives
– HX220c All-Flash: SSD persistent data drives
See Table 3-4 for the supported persistent drives, which differ between supported drive configurations.
The three types of drives in the node require different replacement procedures.
– Persistent data drives: Hot-swap replacement is supported. See Replacing Persistent Data Drives (Bays 3 – 8). NOTE: Hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating.
– Housekeeping SSD: The node must be put into Cisco HX Maintenance Mode before replacing the housekeeping SSD. Replacement requires additional technical assistance and cannot be completed by the customer. See Replacing the Housekeeping SSD for SDS Logs (Bay 1).
– SSD caching drive: Hot-swap replacement is supported. See Replacing the SSD Caching Drive (Bay 2). NOTE: Hot-swap replacement for SAS/SATA drives includes hot-removal, so you can remove the drive while it is still operating. NOTE: If an NVMe SSD is used as the caching drive, additional steps are required, as described in the procedure.
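The bay assignments described in this section can be summarized in a short sketch. This is illustrative only; the names and structure are not from Cisco software, just the rules stated here (bay 1 holds the housekeeping SSD, bay 2 the caching SSD, and bays 3–8 the persistent data drives):

```python
# Illustrative summary of the HX220c drive bay roles described in this
# section. Bay numbers outside 1-8 do not exist on this node.

DRIVE_BAY_ROLES = {1: "housekeeping SSD (SDS logs)", 2: "caching SSD"}

def bay_role(bay):
    """Return the role of a drive bay (1-8)."""
    if bay in DRIVE_BAY_ROLES:
        return DRIVE_BAY_ROLES[bay]
    if 3 <= bay <= 8:
        return "persistent data drive"
    raise ValueError("the node has drive bays 1-8 only")
```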
The persistent data drives must be installed only in drive bays 3 - 8.
See HX220c Drive Configuration Comparison for supported drives.
Note Hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating.
Step 1 Remove the drive that you are replacing or remove a blank drive tray from the bay:
a. Press the release button on the face of the drive tray. See Figure 3-7.
b. Grasp and open the ejector lever and then pull the drive tray out of the slot.
c. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.
Step 2 Install a new drive:
a. Place a new drive in the empty drive tray and install the four drive-tray screws.
b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.
c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
Note This procedure requires assistance from technical support for additional software update steps after the hardware is replaced. It cannot be completed without technical support assistance.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Remove the drive that you are replacing:
a. Press the release button on the face of the drive tray. See Figure 3-7.
b. Grasp and open the ejector lever and then pull the drive tray out of the slot.
c. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.
Step 6 Install a new drive:
a. Place a new drive in the empty drive tray and install the four drive-tray screws.
b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.
c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
Step 7 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 8 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 9 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 10 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Note After you replace the SSD hardware, you must contact technical support for additional software update steps.
The SSD caching drive must be installed in drive bay 2.
Note the following considerations and restrictions for NVMe SSDs when used as the caching SSD:
Note Always replace the drive with the same type and size as the original drive.
Note Upgrading or downgrading the Caching drive in an existing HyperFlex cluster is not supported. If the Caching drive must be upgraded or downgraded, then a full redeployment of the HyperFlex cluster is required.
Note When using a SAS drive, hot-swap replacement includes hot-removal, so you can remove a SAS drive while it is still operating. NVMe drives cannot be hot-swapped.
Step 1 Only if the caching drive is an NVMe SSD, enter the ESXi host into HX Maintenance Mode. Otherwise, skip to step 2.
Step 2 Remove the SSD caching drive:
a. Press the release button on the face of the drive tray (see Figure 3-7).
b. Grasp and open the ejector lever and then pull the drive tray out of the slot.
c. Remove the four drive-tray screws that secure the SSD to the tray and then lift the SSD out of the tray.
Step 3 Install a new SSD caching drive:
a. Place a new SSD in the empty drive tray and replace the four drive-tray screws.
b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.
c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
Step 4 Only if the caching drive is an NVMe SSD:
a. Reboot the ESXi host. This enables ESXi to discover the NVMe SSD.
b. Exit the ESXi host from HX Maintenance Mode.
The six fan modules in the node are numbered as follows when you are facing the front of the node (see Figure 3-8).
Figure 3-8 Fan Module Numbering
Tip Each fan module has a fault LED next to the fan connector on the motherboard that lights amber if the fan module fails. Standby power is required to operate these LEDs.
Step 1 Remove a fan module that you are replacing (see Figure 3-9):
a. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
b. Remove the top cover as described in Removing and Replacing the Node Top Cover.
c. Grasp the fan module at its front and on the green connector. Lift straight up to disengage its connector from the motherboard and free it from the two alignment pegs.
Step 2 Install a new fan module:
a. Set the new fan module in place, aligning its two openings with the two alignment pegs on the motherboard. See Figure 3-9.
b. Press down gently on the fan module connector to fully engage it with the connector on the motherboard.
d. Replace the node in the rack.
Figure 3-9 Top View of Fan Module
Warning There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions. [Statement 1015]
The real-time clock (RTC) battery retains node settings when the node is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from most electronic stores.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Locate the RTC battery. See Figure 3-10.
Step 8 Gently remove the battery from the holder on the motherboard.
Step 9 Insert the battery into its holder and press down until it clicks in place.
Note The positive side of the battery marked “3V+” should face upward.
Step 10 Replace the top cover.
Step 11 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 12 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 13 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 14 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-10 Motherboard RTC Battery Location
This section includes the following topics:
Note To ensure the best node performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DIMMs.
This section describes the type of memory that the node requires and its effect on performance. The section includes the following topics:
Figure 3-11 shows the numbering of the DIMM slots.
Figure 3-11 DIMM Slots and CPUs
Observe the following guidelines when installing or replacing DIMMs:
– CPU1 supports channels A, B, C, and D.
– CPU2 supports channels E, F, G, and H.
– A channel can operate with one, two, or three DIMMs installed.
– If a channel has only one DIMM, populate slot 1 first (the blue slot).
– Fill blue #1 slots in the channels first: A1, E1, B1, F1, C1, G1, D1, H1
– Fill black #2 slots in the channels second: A2, E2, B2, F2, C2, G2, D2, H2
– Fill white #3 slots in the channels third: A3, E3, B3, F3, C3, G3, D3, H3
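The population order above lends itself to a small helper. This is an illustrative sketch of the fill sequence, not a Cisco tool; it simply emits slot names in the blue, black, white order listed:

```python
# Hypothetical helper illustrating the DIMM population order described
# above. Channels A-D belong to CPU1 and E-H to CPU2; blue #1 slots fill
# first, then black #2 slots, then white #3 slots, alternating CPUs.

def dimm_fill_order(dimm_count):
    """Return the slot names to populate, in order, for dimm_count DIMMs."""
    order = [f"{ch}{bank}"
             for bank in (1, 2, 3)                      # blue, black, white
             for ch in ("A", "E", "B", "F", "C", "G", "D", "H")]
    if not 0 <= dimm_count <= len(order):
        raise ValueError("the node has 24 DIMM slots")
    return order[:dimm_count]
```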
The Intel E5-2600 CPUs within the node support memory mirroring only when an even number of channels are populated with DIMMs. If one or three channels are populated with DIMMs, memory mirroring is automatically disabled. Furthermore, if memory mirroring is used, DRAM size is reduced by 50 percent for reasons of reliability.
When you enable lockstep channel mode, each memory access is a 128-bit data access that spans four channels.
Lockstep channel mode requires that all four memory channels on a CPU must be populated identically with regard to size and organization. DIMM socket populations within a channel (for example, A1, A2, A3) do not have to be identical but the same DIMM slot location across all four channels must be populated the same.
For example, DIMMs in sockets A1, B1, C1, and D1 must be identical. DIMMs in sockets A2, B2, C2, and D2 must be identical. However, the A1-B1-C1-D1 DIMMs do not have to be identical with the A2-B2-C2-D2 DIMMs.
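The lockstep rule can be expressed as a quick check. This is a hypothetical validator of the rule above; `population` maps slot names such as "A1" to a DIMM description (size and organization):

```python
# Illustrative check of the lockstep rule: for each slot position
# (1, 2, 3), the DIMMs in channels A, B, C, and D must be identical,
# though different positions may differ from each other.

def lockstep_population_ok(population, channels=("A", "B", "C", "D")):
    for pos in (1, 2, 3):
        # Empty positions yield {None}, which is uniform and therefore OK.
        dimms = {population.get(f"{ch}{pos}") for ch in channels}
        if len(dimms) > 1:      # mixed sizes/organizations, or partial fill
            return False
    return True
```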
Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Figure 3-3 for the locations of these LEDs. The LEDs light amber to indicate a faulty DIMM. To operate these LEDs from the supercap power source, remove AC power cords and then press the unit identification button. See also Internal Diagnostic LEDs.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Identify the faulty DIMM by observing the DIMM slot fault LEDs on the motherboard.
Step 8 Open the ejector levers at both ends of the DIMM slot, and then lift the DIMM out of the slot.
Note Before installing DIMMs, see the population guidelines: Memory Performance Guidelines and Population Rules.
Step 9 Install a new DIMM:
a. Align the new DIMM with the empty slot on the motherboard. Use the alignment key in the DIMM slot to correctly orient the DIMM.
b. Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock into place.
Step 10 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 11 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 12 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 13 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
This section contains the following topics:
Note You can use Xeon v3- and v4-based nodes in the same cluster. Do not mix Xeon v3 and v4 CPUs within the same node.
The minimum software and firmware versions required for the node to support Intel v4 CPUs are listed in Table 3-7.
Do one of the following actions:
This node has two CPU sockets. Each CPU supports four DIMM channels (12 DIMM slots). See Figure 3-11.
Note This node uses the new independent loading mechanism (ILM) CPU sockets, so no Pick-and-Place tools are required for CPU handling or installation. Always grasp the plastic frame on the CPU when handling.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove the plastic air baffle that sits over the CPUs.
Step 8 Remove the heatsink that you are replacing. Use a Number 2 Phillips-head screwdriver to loosen the four captive screws that secure the heatsink and then lift it off of the CPU.
Note Alternate loosening each screw evenly to avoid damaging the heatsink or CPU.
Step 9 Open the CPU retaining mechanism:
a. Unclip the first retaining latch labeled with the icon, and then unclip the second retaining latch labeled with the icon. See Figure 3-12.
b. Open the hinged CPU cover plate.
Step 10 Remove any existing CPU:
a. With the latches and hinged CPU cover plate open, swing up the CPU in its hinged seat to the open position, as shown in Figure 3-12.
b. Grasp the CPU by the finger grips on its plastic frame and lift it up and out of the hinged CPU seat.
c. Set the CPU aside on an anti-static surface.
Step 11 Install a new CPU:
a. Grasp the new CPU by the finger grips on its plastic frame and align the tab on the frame that is labeled “ALIGN” with the SLS mechanism, as shown in Figure 3-13.
b. Insert the tab on the CPU frame into the seat until it stops and is held firmly.
The line below the word “ALIGN” should be level with the edge of the seat, as shown in Figure 3-13.
c. Swing the hinged seat with the CPU down until the CPU frame clicks in place and holds flat in the socket.
d. Close the hinged CPU cover plate.
e. Clip down the CPU retaining latch with the icon, and then clip down the CPU retaining latch with the icon. See Figure 3-12.
Figure 3-13 CPU and Socket Alignment Features
Step 12 Clean the old thermal grease and install the heatsink:
a. Apply the cleaning solution, which is included with the heatsink cleaning kit (UCSX-HSCK=, shipped with spare CPUs), to the old thermal grease on the heatsink and CPU and let it soak for at least 15 seconds.
b. Wipe all of the old thermal grease off the old heat sink and CPU using the soft cloth that is included with the heatsink cleaning kit. Be careful to not scratch the heat sink surface.
Note New heatsinks come with a pre-applied pad of thermal grease. If you are reusing a heatsink, you must apply thermal grease from a syringe (UCS-CPU-GREASE3=).
c. Using the syringe of thermal grease provided with the CPU (UCS-CPU-GREASE3=), apply 2 cubic centimeters of thermal grease to the top of the CPU. Use the pattern shown in Figure 3-14 to ensure even coverage.
Figure 3-14 Thermal Grease Application Pattern
d. Align the four heatsink captive screws with the motherboard standoffs, and then use a Number 2 Phillips-head screwdriver to tighten the captive screws evenly.
Note Alternate tightening each screw evenly to avoid damaging the heatsink or CPU.
Step 13 Replace the air baffle.
Step 14 Replace the top cover.
Step 15 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 16 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 17 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 18 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Use the following procedure to upgrade the node and CPUs.
Step 1 Upgrade the Cisco UCS Manager software to the minimum version for your node (or later). See Table 3-7.
Use the procedures in the appropriate Cisco UCS Manager upgrade guide (depending on your current software version): Cisco UCS Manager Upgrade Guides.
Step 2 Use Cisco UCS Manager to upgrade and activate the node Cisco IMC to the minimum version for your node (or later). See Table 3-7.
Use the procedures in the GUI or CLI Cisco UCS Manager Firmware Management Guide for your release.
Step 3 Use Cisco UCS Manager to upgrade and activate the node BIOS to the minimum version for your node (or later). See Table 3-7.
Use the procedures in the Cisco UCS Manager GUI or CLI Cisco UCS Manager Firmware Management Guide for your release.
Step 4 Replace the CPUs with the Intel Xeon v4 Series CPUs.
Use the CPU replacement procedures in CPU Replacement Procedure.
When a return material authorization (RMA) of the motherboard or CPU is done on a node, additional parts might not be included with the CPU or motherboard spare bill of materials (BOM). The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.
– Heat sink cleaning kit (UCSX-HSCK=)
– Thermal grease kit for C240 M4 (UCS-CPU-GREASE3=)
– Heat sink cleaning kit (UCSX-HSCK=)
A CPU heatsink cleaning kit is good for up to four CPU and heatsink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heatsink of old thermal interface material and the other to prepare the surface of the heatsink.
New heatsink spares come with a pre-applied pad of thermal grease. It is important to clean the old thermal grease off of the CPU prior to installing the heatsinks. Therefore, when you are ordering new heatsinks, you must order the heatsink cleaning kit.
The node has two internal SD card bays on the motherboard.
Dual SD cards are supported. RAID 1 support can be configured through the Cisco IMC interface.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Locate the SD card that you are replacing on the motherboard (see Figure 3-15).
Step 8 Push on the top of the SD card, and then release it to allow it to spring up in the slot.
Step 9 Remove the SD card from the slot.
Step 10 Insert the SD card into the slot with the label side facing up.
Step 11 Press on the top of the card until it clicks in the slot and stays in place.
Step 12 Replace the top cover.
Step 13 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 14 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 15 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 16 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-15 SD Card Bays and USB Port Locations on the Motherboard
The factory default is for all USB ports on the node to be enabled. However, the internal USB port can be enabled or disabled in the node BIOS. See Figure 3-15 for the location of the USB port on the motherboard.
Step 1 Enter the BIOS Setup Utility by pressing the F2 key when prompted during bootup.
Step 2 Navigate to the Advanced tab.
Step 3 On the Advanced tab, select USB Configuration.
Step 4 On the USB Configuration page, select USB Ports Configuration.
Step 5 Scroll to USB Port: Internal, press Enter, and then choose either Enabled or Disabled from the dialog box.
Step 6 Press F10 to save and exit the utility.
The node has a dedicated internal riser 3 that is used only for the Cisco modular HBA card. This riser plugs into a dedicated motherboard socket and provides a horizontal socket for the HBA.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove the existing riser (see Figure 3-16):
a. If the existing riser has a card in it, disconnect the SAS cable from the card.
b. Lift the riser straight up to disengage the riser from the motherboard socket. The riser bracket must also lift off of two pegs that hold it to the inner chassis wall.
c. Remove the card from the riser. Loosen the single thumbscrew that secures the card to the riser bracket and then pull the card straight out from its socket on the riser (see Figure 3-17).
Step 8 Install the new riser:
a. Install your HBA card into the new riser. See Replacing a Cisco Modular HBA Card.
b. Align the connector on the riser with the socket on the motherboard. At the same time, align the two slots on the back side of the bracket with the two pegs on the inner chassis wall.
c. Push down gently to engage the riser connector with the motherboard socket. The metal riser bracket must also engage the two pegs that secure it to the chassis wall.
d. Reconnect the SAS cable to its connector on the HBA card.
Step 9 Replace the top cover.
Step 10 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 11 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 12 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 13 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-16 Cisco Modular HBA Riser (Internal Riser 3) Location
The node can use a Cisco modular HBA card that plugs into a horizontal socket on a dedicated internal riser 3.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove the riser from the node (see Figure 3-16):
a. Disconnect the SAS cable from the existing HBA card.
b. Lift the riser straight up to disengage the riser from the motherboard socket. The riser bracket must also lift off of two pegs that hold it to the inner chassis wall.
Step 8 Remove the card from the riser:
a. Loosen the single thumbscrew that secures the card to the metal riser bracket (see Figure 3-17).
b. Pull the card straight out from its socket on the riser and the guide channel on the riser bracket.
Step 9 Install the HBA card into the riser:
a. With the riser upside down, set the card on the riser. Align the right end of the card with the alignment channel on the riser; align the connector on the card edge with the socket on the riser (see Figure 3-17).
b. Being careful to avoid scraping the underside of the card on the threaded standoff on the riser, push on both corners of the card to seat its connector in the riser socket.
c. Tighten the single thumbscrew that secures the card to the riser bracket.
Step 10 Return the riser to the node:
a. Align the connector on the riser with the socket on the motherboard. At the same time, align the two slots on the back side of the bracket with the two pegs on the inner chassis wall.
b. Push down gently to engage the riser connector with the motherboard socket. The metal riser bracket must also engage the two pegs that secure it to the chassis wall.
Step 11 Reconnect the SAS cable to its connector on the HBA card.
Step 12 Replace the top cover.
Step 13 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 14 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 15 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 16 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-17 Cisco Modular HBA Card in Riser
Caution Do not scrape the underside of the card on the threaded standoff on the riser.
The node contains two PCIe risers that are attached to a single riser assembly. Riser 1 provides PCIe slot 1 and riser 2 provides PCIe slot 2, as shown in Figure 3-18. See Table 3-8 for a description of the PCIe slots on each riser.
Figure 3-18 Rear Panel, Showing PCIe Slots
To install or replace a PCIe riser, follow these steps:
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Use two hands to grasp the metal bracket of the riser assembly and lift straight up to disengage its connectors from the two sockets on the motherboard.
Step 8 If the riser has any cards installed, remove them from the riser.
Step 9 Install a new PCIe riser assembly:
a. If you removed any cards from the old riser assembly, install the cards to the new riser assembly (see Replacing a PCIe Card).
b. Position the riser assembly over its two sockets on the motherboard and over the chassis alignment channels (see Figure 3-19):
c. Carefully push down on both ends of the riser assembly to fully engage its connectors with the two sockets on the motherboard.
Step 10 Replace the top cover.
Step 11 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 12 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 13 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 14 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-19 PCIe Riser Assembly Location and Alignment Channels
The node contains two toolless PCIe risers for horizontal installation of PCIe cards. See Figure 3-20 and Table 3-8 for a description of the PCIe slots on these risers.
Both slots support the network communications services interface (NCSI) protocol and standby power.
Figure 3-20 Rear Panel, Showing PCIe Slots
[Table 3-8: PCIe slot specifications, including the electrical lane width and supported card length for each slot. The listed card length is the supported length because of internal clearance.]
To install or replace a PCIe card, follow these steps:
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove an existing PCIe card (or blanking panel) from the PCIe riser:
a. Remove any cables from the ports of the PCIe card that you are replacing.
b. Use two hands to grasp the metal bracket of the riser assembly and lift straight up to disengage its connectors from the two sockets on the motherboard.
c. Open the hinged plastic retainer that secures the rear-panel tab of the card (see Figure 3-21).
d. Pull evenly on both ends of the PCIe card to remove it from the socket on the PCIe riser.
If the riser has no card, remove the blanking panel from the rear opening of the riser.
Step 8 Install a new PCIe card:
a. Open the hinged plastic retainer
b. With the hinged tab retainer open, align the new PCIe card with the empty socket on the PCIe riser.
c. Push down evenly on both ends of the card until it is fully seated in the socket.
d. Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged tab retainer over the card’s rear-panel tab (see Figure 3-21).
e. Position the PCIe riser over its two sockets on the motherboard and over the chassis alignment channels (see Figure 3-19).
f. Carefully push down on both ends of the PCIe riser to fully engage its connector with the sockets on the motherboard.
Step 9 Replace the top cover.
Step 10 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 11 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 12 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 13 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-21 PCIe Riser Assembly
When a large number of PCIe add-on cards are installed in the node, the node might run out of the following resources required for PCIe devices:
– Memory for executing PCIe legacy option ROMs
– Legacy 16-bit I/O space
The topics in this section provide guidelines for resolving the issues related to these limited resources:
The node has very limited memory for executing PCIe legacy option ROMs, so when a large number of PCIe add-on cards are installed, the node BIOS might not be able to execute all of the option ROMs. The node BIOS loads and executes the option ROMs in the order that the PCIe cards are enumerated (slot 1, slot 2, slot 3, and so on).
If the node BIOS does not have sufficient memory space to load a PCIe option ROM, it skips loading that option ROM, logs a system event log (SEL) event with the Cisco IMC controller, and reports the following error in the Error Manager page of the BIOS Setup utility:
To resolve this issue, disable the Option ROMs that are not needed for system booting. The BIOS Setup Utility provides the setup options to enable or disable the Option ROMs at the PCIe slot level for the PCIe expansion slots and at the port level for the onboard NICs. These options can be found in the BIOS Setup Utility Advanced > PCI Configuration page.
If the node is configured to boot primarily from RAID storage, make sure that the option ROMs for the slots where your RAID controllers are installed are enabled in the BIOS, depending on your RAID controller configuration.
If the RAID controller does not appear in the node boot order even with the option ROMs for those slots enabled, the RAID controller option ROM might not have sufficient memory space to execute. In that case, disable other option ROMs that are not needed for the node configuration to free up some memory space for the RAID controller option ROM.
If the node is configured to primarily perform PXE boot from onboard NICs, make sure that the option ROMs for the onboard NICs to be booted from are enabled in the BIOS Setup Utility. Disable other option ROMs that are not needed to create sufficient memory space for the onboard NICs.
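The skip-on-overflow behavior described in this section can be modeled with a short sketch. This is an illustrative model only, not Cisco BIOS code: the slot names, option ROM sizes, and shadow-memory budget below are invented for the example.

```python
def load_option_roms(roms, budget_kb):
    """Load option ROMs in slot-enumeration order, skipping any that
    do not fit in the remaining legacy shadow memory.

    roms: list of (slot_name, size_kb, enabled) tuples, in enumeration order.
    Returns (loaded, skipped) lists of slot names.
    """
    loaded, skipped = [], []
    free = budget_kb
    for slot, size, enabled in roms:
        if not enabled:
            continue  # disabled in BIOS Setup: consumes no shadow memory
        if size <= free:
            loaded.append(slot)
            free -= size
        else:
            skipped.append(slot)  # the BIOS would log an SEL event here
    return loaded, skipped

# Hypothetical sizes: with every ROM enabled, the RAID controller in
# slot 3 enumerates last and is skipped for lack of space.
roms = [("slot1-nic", 96, True), ("slot2-nic", 96, True), ("slot3-raid", 64, True)]
print(load_option_roms(roms, 192))  # raid ROM ends up in the skipped list

# Disabling an option ROM that is not needed for booting frees enough
# shadow memory for the RAID controller ROM to load.
roms = [("slot1-nic", 96, False), ("slot2-nic", 96, True), ("slot3-raid", 64, True)]
print(load_option_roms(roms, 192))  # nothing skipped
```

The same first-come, first-served ordering explains why a controller in a high-numbered slot is the one most likely to lose out when memory runs short.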
The node has only 64 KB of legacy 16-bit I/O resources available. This 64 KB of I/O space is divided between the CPUs in the node because the PCIe controller is integrated into the CPUs. The node BIOS dynamically detects the 16-bit I/O resource requirement for each CPU and then balances the 16-bit I/O resource allocation between the CPUs during the PCI bus enumeration phase of BIOS POST.
When a large number of PCIe cards are installed in the node, the node BIOS might not have sufficient I/O space for some PCIe devices. If the node BIOS is not able to allocate the required I/O resources for any PCIe devices, the following symptoms have been observed:
To work around this problem, rebalance the 16-bit I/O load using the following methods:
1. Physically remove any unused PCIe cards.
2. If the node has one or more Cisco virtual interface cards (VICs) installed, disable PXE boot on the VICs that are not required for the node boot configuration by using the Network Adapters page in the Cisco IMC web UI. Each VIC uses a minimum of 16 KB of 16-bit I/O space, so disabling PXE boot on unused Cisco VICs frees up 16-bit I/O resources for the other PCIe cards installed in the node.
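As a rough illustration of why these two methods help, the sketch below totals per-device 16-bit I/O requests against the 64 KB budget. The 16 KB-per-VIC figure comes from this section; the other device names and sizes are invented for the example.

```python
IO_SPACE_KB = 64  # total legacy 16-bit I/O space shared by all CPUs

def io_shortfall(devices):
    """Sum the 16-bit I/O requests (in KB) of the installed PCIe devices
    and return how many KB over the 64 KB budget they are (0 if they fit)."""
    demand = sum(kb for _, kb in devices)
    return max(0, demand - IO_SPACE_KB)

devices = [
    ("vic-slot1-pxe", 16),   # each Cisco VIC uses at least 16 KB (per the text)
    ("vic-slot2-pxe", 16),
    ("hba", 8),              # invented figures for the remaining devices
    ("onboard-nics", 16),
    ("legacy-serial", 12),
]
print(io_shortfall(devices))  # → 4 (over budget: some device gets no I/O resources)

# Disabling PXE boot on one VIC releases its 16 KB request:
devices = [d for d in devices if d[0] != "vic-slot2-pxe"]
print(io_shortfall(devices))  # → 0 (everything fits)
```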
The trusted platform module (TPM) is a small circuit board that attaches to a motherboard socket. The socket location is on the motherboard between the power supplies and PCIe riser 2 (see Figure 3-22).
This section contains the following procedures, which must be followed in this order when installing and enabling a TPM:
1. Installing the TPM Hardware
2. Enabling TPM Support in the BIOS
3. Enabling the Intel TXT Feature in the BIOS
Note For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard screwdriver.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Check if there is a card installed in PCIe riser 2. See Figure 3-22.
Step 8 Install a TPM (see Figure 3-22):
a. Locate the TPM socket on the motherboard, as shown in Figure 3-22.
b. Align the connector that is on the bottom of the TPM circuit board with the motherboard TPM socket. Align the screw hole on the TPM board with the screw hole adjacent to the TPM socket.
c. Push down evenly on the TPM to seat it in the motherboard socket.
d. Install the single one-way screw that secures the TPM to the motherboard.
Step 9 If you removed the PCIe riser assembly, return it to the node now. See Replacing a PCIe Riser Assembly for details.
Step 10 Replace the top cover.
Step 11 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 12 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 13 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 14 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Step 15 Continue with Enabling TPM Support in the BIOS.
Figure 3-22 TPM Socket Location on Motherboard
Note After hardware installation, you must enable TPM support in the BIOS.
Note You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security > Set Administrator Password and enter the new password twice as prompted.
Step 1 Enable TPM support:
a. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.
b. Log in to the BIOS Setup Utility with your BIOS Administrator password.
c. On the BIOS Setup Utility window, choose the Advanced tab.
d. Choose Trusted Computing to open the TPM Security Device Configuration window.
e. Change TPM SUPPORT to Enabled.
f. Press F10 to save your settings and reboot the node.
Step 2 Verify that TPM support is now enabled:
a. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.
b. Log into the BIOS Setup utility with your BIOS Administrator password.
c. On the BIOS Setup Utility window, choose the Advanced tab.
d. Choose Trusted Computing to open the TPM Security Device Configuration window.
e. Verify that TPM SUPPORT and TPM State are Enabled.
Step 3 Continue with Enabling the Intel TXT Feature in the BIOS.
Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the node. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the node. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.
Step 1 Reboot the node and watch for the prompt to press F2.
Step 2 When prompted, press F2 to enter the BIOS Setup utility.
Step 3 Verify that the prerequisite BIOS values are enabled:
a. On the BIOS Setup Utility window, choose the Advanced tab.
b. Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.
c. Verify that the following items are listed as Enabled:
– VT-d Support (default is Enabled)
– VT Support (default is Enabled)
d. Press Escape to return to the BIOS Setup utility Advanced tab.
e. On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.
f. Set Intel (R) VT and Intel (R) VT-d to Enabled.
Step 4 Enable the Intel Trusted Execution Technology (TXT) feature:
a. Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.
b. Set TXT Support to Enabled.
Step 5 Press F10 to save your changes and exit the BIOS Setup utility.
The node can use a modular LOM (mLOM) card to provide additional rear-panel connectivity. The mLOM card socket remains powered when the node is in 12 V standby power mode and it supports the network communications services interface (NCSI) protocol.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 See the location of the mLOM socket in Figure 3-23. You might have to remove PCIe riser 1 and the Cisco modular HBA riser to provide clearance.
Step 8 Loosen the single thumbscrew that secures the mLOM card to the chassis floor and then slide the mLOM card horizontally to disengage its connector from the motherboard socket.
Step 9 Install a new mLOM card:
a. Set the mLOM card on the chassis floor so that its connector is aligned with the motherboard socket and its thumbscrew is aligned with the standoff on the chassis floor.
b. Push the card’s connector into the motherboard socket horizontally.
c. Tighten the thumbscrew to secure the card to the chassis floor.
Step 10 If you removed PCIe riser 1 or the HBA card riser, return them to the node. See Replacing a PCIe Riser Assembly or Replacing a Cisco Modular HBA Card for instructions.
Step 11 Replace the top cover.
Step 12 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 13 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 14 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 15 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-23 mLOM Card Socket Location
mLOM card socket location on motherboard
Table 3-9 describes the requirements for the supported Cisco UCS virtual interface cards (VICs).
Note The Cisco UCS VIC 1227 (UCSC-MLOM-CSC-02) is not compatible in Cisco Card NIC mode with certain Cisco SFP+ modules: do not use a Cisco SFP+ module with part number 37-0961-01 and a serial number in the range MOC1238xxxx through MOC1309xxxx. If you use the Cisco UCS VIC 1227 in Cisco Card NIC mode, use an SFP+ module with a different part number, or use part number 37-0961-01 only if its serial number falls outside that range. See the Cisco UCS VIC 1227 Data Sheet for other supported SFP+ modules.
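A quick way to screen modules against this note is to compare the part number and the serial number prefix. The helper below is a hypothetical sketch (the function name and parsing are ours, not a Cisco tool); it assumes the fourth through seventh characters of the serial carry the numeric lot code that defines the range.

```python
def sfp_affected(part_number, serial):
    """Return True if this SFP+ module falls in the incompatible range
    called out in the note: part 37-0961-01 with a serial number from
    MOC1238xxxx through MOC1309xxxx."""
    if part_number != "37-0961-01":
        return False  # only this part number is affected
    prefix, lot = serial[:3], serial[3:7]
    return prefix == "MOC" and lot.isdigit() and 1238 <= int(lot) <= 1309

print(sfp_affected("37-0961-01", "MOC12387K2L"))  # True: inside the range
print(sfp_affected("37-0961-01", "MOC13105QRS"))  # False: after the range
print(sfp_affected("10-2415-03", "MOC12409ABC"))  # False: different part number
```

The example serial suffixes are made up; read real values from the module label or the transceiver inventory in Cisco IMC.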
When two power supplies are installed, they are redundant as 1+1.
Note You do not have to power off the node to replace power supplies because they are redundant as 1+1.
Step 1 Perform one of the following actions:
– If the node has two power supplies, you do not have to shut down the node. Continue with Step 2.
– If the node has only one power supply, shut down the node first:
a. Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
b. Shut down the node as described in Shutting Down the Node.
c. Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 2 Remove the power cord from the power supply that you are replacing.
Step 3 Grasp the power supply handle while pinching the release lever toward the handle.
Step 4 Pull the power supply out of the bay.
Step 5 Install a new power supply:
a. Grasp the power supply handle and insert the new power supply into the empty bay.
b. Push the power supply into the bay until the release lever locks.
c. Connect the power cord to the new power supply.
Step 6 Only if you shut down the node, perform these steps:
a. Press the Power button to return the node to main power mode.
b. Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
c. Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
d. After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-24 Removing and Replacing Power Supplies
This section includes the following topics:
See Figure 3-25. The position of the block of DIP switches (SW6) is shown in red. In the magnified view, all switches are shown in the default position.
Figure 3-25 Service DIP Switches (SW6)
Note The following procedures use a recovery.cap recovery file. In Cisco IMC releases 3.0(1) and later, this recovery file has been renamed bios.cap.
Depending on which stage the BIOS becomes corrupted, you might see different behavior.
Note There are two procedures for recovering the BIOS. Try procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.
Step 1 Download the BIOS update package and extract it to a temporary location.
Step 2 Copy the contents of the extracted recovery folder to the root directory of a USB thumb drive. The recovery folder contains the recovery.cap (or bios.cap) file that is required in this procedure.
Note The recovery.cap (or bios.cap) file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either FAT16 or FAT32 file systems.
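Before taking the drive to the node, you can sanity-check it from the workstation where it is mounted. This small sketch (the function name is ours) only verifies that an unrenamed recovery.cap or bios.cap sits in the root directory; it does not verify the FAT16/FAT32 formatting.

```python
import os
import tempfile

def check_recovery_drive(mount_point):
    """Return True if recovery.cap or bios.cap sits in the root
    directory of the mounted USB drive, as the procedure requires."""
    names = set(os.listdir(mount_point))
    return bool(names & {"recovery.cap", "bios.cap"})

# Demonstration against temporary directories standing in for the drive:
with tempfile.TemporaryDirectory() as usb:
    open(os.path.join(usb, "bios.cap"), "wb").close()
    print(check_recovery_drive(usb))  # True

with tempfile.TemporaryDirectory() as usb:
    # A renamed file is not found by the BIOS:
    open(os.path.join(usb, "my-bios.cap"), "wb").close()
    print(check_recovery_drive(usb))  # False
```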
Step 3 Insert the USB thumb drive into a USB port on the node.
Step 5 Return the node to main power mode by pressing the Power button on the front panel.
The node boots with the updated BIOS boot block. When the BIOS detects a valid recovery.cap (or bios.cap) file on the USB thumb drive, it displays this message:
Step 6 Wait for node to complete the BIOS update, and then remove the USB thumb drive from the node.
Note During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.
See Figure 3-25 for the location of the SW6 block of DIP switches.
Step 1 Download the BIOS update package and extract it to a temporary location.
Step 2 Copy the contents of the extracted recovery folder to the root directory of a USB thumb drive. The recovery folder contains the recovery.cap (or bios.cap) file that is required in this procedure.
Note The recovery.cap (or bios.cap) file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either FAT16 or FAT32 file systems.
Step 3 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 4 Shut down the node as described in Shutting Down the Node.
Step 5 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 6 Disconnect all power cables from the power supplies.
Step 7 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 8 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 9 Slide the BIOS recovery DIP switch from position 1 to the closed position (see Figure 3-25).
Step 10 Reconnect AC power cords to the node. The node powers up to standby power mode.
Step 11 Insert the USB thumb drive that you prepared in Step 2 into a USB port on the node.
Step 12 Return the node to main power mode by pressing the Power button on the front panel.
The node boots with the updated BIOS boot block. When the BIOS detects a valid recovery file on the USB thumb drive, it displays this message:
Step 13 Wait for node to complete the BIOS update, and then remove the USB thumb drive from the system.
Note During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.
Step 14 After the node has fully booted, power off the node again and disconnect all power cords.
Step 15 Slide the BIOS recovery DIP switch from the closed position back to the default position 1 (see Figure 3-25).
Note If you do not move the switch, after recovery completion you see the prompt, “Please remove the recovery jumper.”
Step 16 Replace the top cover to the node.
Step 17 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 18 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 19 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 20 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
See Figure 3-25 for the location of this DIP switch. You can use this switch to clear the administrator password.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Slide the clear password DIP switch from position 2 to the closed position (see Figure 3-25).
Step 8 Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.
Step 9 Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.
Note You must allow the entire node, not just the service processor, to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.
Step 10 Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.
Step 11 Remove the top cover from the node.
Step 12 Slide the clear password DIP switch from the closed position back to default position 2 (see Figure 3-25).
Note If you do not move the switch, the password is cleared every time that you power-cycle the node.
Step 13 Replace the top cover to the node.
Step 14 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 15 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 16 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 17 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
See Figure 3-25 for the location of this DIP switch. You can use this switch to clear the node’s CMOS settings in the case of a node hang. For example, if the node hangs because of incorrect settings and does not boot, use this switch to invalidate the settings and reboot with defaults.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Slide the clear CMOS DIP switch from position 4 to the closed position (see Figure 3-25).
Step 8 Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.
Step 9 Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.
Note You must allow the entire node, not just the service processor, to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.
Step 10 Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.
Step 11 Remove the top cover from the node.
Step 12 Slide the clear CMOS DIP switch from the closed position back to default position 4 (see Figure 3-25).
Note If you do not move the switch, the CMOS settings are reset to the default every time that you power-cycle the node.
Step 13 Replace the top cover to the node.
Step 14 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 15 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 16 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 17 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Note The HX Series node is always managed in UCS Manager-controlled mode. This section is included only for cases in which a node might need to be put into standalone mode for troubleshooting purposes. Do not use this setup for normal operation of the HX Series node.
The node is shipped with these default settings:
Shared LOM EXT mode enables the 1-Gb Ethernet ports and the ports on any installed Cisco virtual interface card (VIC) to access the Cisco Integrated Management Controller (Cisco IMC). If you want to use the 10/100/1000 dedicated management ports to access Cisco IMC, you can connect to the node and change the NIC mode as described in Step 1 of the following procedure.
There are two methods for connecting to the node for initial setup:
Note To configure the node remotely, you must have a DHCP server on the same network as the node. Your DHCP server must be preconfigured with the range of MAC addresses for this node. The MAC address is printed on a label that is on the pull-out asset tag on the front panel (see Figure 1-1). This node has a range of six MAC addresses assigned to the Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.
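When preconfiguring the DHCP server, you can expand the label MAC into the full block of six contiguous addresses. The helper below is a sketch (the function name and example MAC are made up); read the real base MAC from the label on the pull-out asset tag.

```python
def cimc_mac_range(label_mac, count=6):
    """Expand the MAC address printed on the asset-tag label into the
    node's block of contiguous Cisco IMC MAC addresses (six by default).
    The label MAC is the first address of the block."""
    base = int(label_mac.replace(":", ""), 16)
    return [":".join(f"{(base + i):012x}"[j:j + 2] for j in range(0, 12, 2))
            for i in range(count)]

# Example with a made-up label MAC; the increment carries across byte
# boundaries (…:00:ff is followed by …:01:00):
for mac in cimc_mac_range("00:25:b5:00:00:fe"):
    print(mac)
```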
Step 1 Attach a power cord to each power supply in your node, and then attach each power cord to a grounded AC power outlet. See Power Specifications for power specifications.
Wait approximately two minutes to allow the node to boot into standby power mode during the first bootup.
You can verify node power status by looking at the node Power Status LED on the front panel (see External Features Overview). The node is in standby power mode when the LED is amber.
Step 2 Connect a USB keyboard and VGA monitor to the node using one of the following methods:
Step 3 Open the Cisco IMC Configuration Utility:
a. Press and hold the front panel power button for four seconds to boot the node.
b. During bootup, press F8 when prompted to open the Cisco IMC Configuration Utility.
This utility has two windows that you can switch between by pressing F1 or F2.
Step 4 Continue with Cisco IMC Configuration Utility Setup.
Step 1 Attach a power cord to each power supply in your node, and then attach each power cord to a grounded AC power outlet. See Power Specifications for power specifications.
Wait approximately two minutes to allow the node to boot into standby power mode during the first bootup.
You can verify node power status by looking at the node Power Status LED on the front panel (see External Features Overview). The node is in standby power mode when the LED is amber.
Step 2 Plug your management Ethernet cable into the dedicated management port on the rear panel (see External Features Overview).
Step 3 Allow your preconfigured DHCP server to assign an IP address to the node.
Step 4 Use the assigned IP address to access and log in to the Cisco IMC for the node. Consult your DHCP server administrator to determine which IP address was assigned.
Note The default user name for the node is admin. The default password is password.
Step 5 From the Cisco IMC Summary page, click Launch KVM Console. A separate KVM console window opens.
Step 6 From the Cisco IMC Summary page, click Power Cycle. The node reboots.
Step 7 Select the KVM console window.
Note The KVM console window must be the active window for the following keyboard actions to work.
Step 8 When prompted, press F8 to enter the Cisco IMC Configuration Utility. This utility opens in the KVM console window.
This utility has two windows that you can switch between by pressing F1 or F2.
Step 9 Continue with Cisco IMC Configuration Utility Setup.
The following procedure is performed after you connect to the node and open the Cisco IMC Configuration Utility.
Step 1 Set NIC mode and NIC redundancy:
a. Set the NIC mode to choose which ports to use to access Cisco IMC for node management (see Figure 1-2 for identification of the ports):
In this mode, DHCP replies are returned to both the shared LOM ports and the Cisco card ports. If the node determines that the Cisco card connection is not getting its IP address from a Cisco UCS Manager system (because the node is in standalone mode), further DHCP requests from the Cisco card are disabled. Use the Cisco Card NIC mode if you want to connect to Cisco IMC through a Cisco card in standalone mode.
See also the required VIC Slot setting below.
– If you select Riser1, slot 1 is used.
– If you select Riser2, slot 2 is used.
– If you select Flex-LOM, you must use an mLOM-style VIC in the mLOM slot.
b. Use this utility to change the NIC redundancy to your preference. This node has three possible NIC redundancy settings:
– None—The Ethernet ports operate independently and do not fail over if there is a problem. This setting can be used only with the Dedicated NIC mode.
– Active-standby—If an active Ethernet port fails, traffic fails over to a standby port.
– Active-active—All Ethernet ports are used simultaneously. The Shared LOM EXT mode can have only this NIC redundancy setting. The Shared LOM and Cisco Card modes can use either the Active-standby or the Active-active setting.
Step 2 Choose whether to enable DHCP for dynamic network settings, or to enter static network settings.
Note Before you enable DHCP, you must preconfigure your DHCP server with the range of MAC addresses for this node. The MAC address is printed on a label on the rear of the node. This node has a range of six MAC addresses assigned to Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.
The static IPv4 and IPv6 settings include the following:
– For IPv6, valid values are 1–127.
– For IPv6, if you do not know the gateway, you can set it as none by entering :: (two colons).
– For IPv6, you can set this as none by entering :: (two colons).
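Before typing static IPv6 values into the utility, it can help to sanity-check them offline. The following Python sketch uses the standard-library ipaddress module to mirror the rules above (prefix length 1–127, and :: meaning "none" for the gateway); the function name and example addresses are illustrative assumptions.

```python
import ipaddress

def check_ipv6_settings(address: str, prefix: int, gateway: str) -> bool:
    """Sanity-check static IPv6 settings before entering them in the utility.

    Mirrors the rules above: the prefix must be 1-127, and a gateway of
    "::" (two colons) explicitly means "none".
    """
    if not 1 <= prefix <= 127:
        return False
    ipaddress.IPv6Address(address)      # raises ValueError if malformed
    if gateway != "::":                 # "::" means no gateway is set
        ipaddress.IPv6Address(gateway)  # otherwise the gateway must parse
    return True

print(check_ipv6_settings("2001:db8::10", 64, "::"))            # → True
print(check_ipv6_settings("2001:db8::10", 128, "2001:db8::1"))  # → False
```

The 2001:db8::/32 addresses shown are the documentation range reserved by RFC 3849, not values from this guide.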
Step 3 (Optional) Use this utility to make VLAN settings.
Step 4 Press F1 to go to the second settings window, then continue with the next step.
From the second window, you can press F2 to switch back to the first window.
Step 5 (Optional) Set a hostname for the node.
Step 6 (Optional) Enable dynamic DNS and set a dynamic DNS (DDNS) domain.
Step 7 (Optional) If you check the Factory Default check box, the node reverts to the factory defaults.
Step 8 (Optional) Set a default user password.
Step 9 (Optional) Enable auto-negotiation of port settings or set the port speed and duplex mode manually.
Note Auto-negotiation is applicable only when you use the Dedicated NIC mode. Auto-negotiation sets the port speed and duplex mode automatically based on the switch port to which the node is connected. If you disable auto-negotiation, you must set the port speed and duplex mode manually.
Step 10 (Optional) Reset port profiles and the port name.
Step 11 Press F5 to refresh the settings that you made. You might have to wait about 45 seconds until the new settings appear and the message “Network settings configured” is displayed; then reboot the node in the next step.
Step 12 Press F10 to save your settings and reboot the node.
Note If you chose to enable DHCP, the dynamically assigned IP and MAC addresses are displayed on the console screen during bootup.
Use a browser and the IP address of the Cisco IMC to connect to the Cisco IMC management interface. The IP address is based upon the settings that you made (either a static address or the address assigned by your DHCP server).
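Before opening a browser, you can quickly confirm that the management IP is live. The following Python sketch simply attempts a TCP connection to the HTTPS port; the port number, timeout, and example address are assumptions, not values from this guide.

```python
import socket

def imc_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the management web port succeeds.

    A quick pre-check that the Cisco IMC address is up before opening a
    browser session; assumes the web interface listens on HTTPS (443).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with an illustrative management address (TEST-NET, RFC 5737):
# imc_reachable("192.0.2.10")
```

If this returns False, re-check the network settings entered in the utility (or the DHCP lease) before troubleshooting the browser connection.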
Note The default username for the node is admin. The default password is password.
To manage the node in standalone mode, see the Cisco UCS C-Series Rack-Mount Server Configuration Guide or the Cisco UCS C-Series Rack-Mount Server CLI Configuration Guide for instructions on using those interfaces. The links to these documents are in the C-Series documentation roadmap: