- Status LEDs and Buttons
- Preparing for Component Installation
- Installing or Replacing Node Components
- Replaceable Component Locations
- Replacing Drives
- Replacing the Internal Housekeeping SSDs for SDS Logs
- Replacing Fan Modules
- Replacing DIMMs
- Replacing CPUs and Heatsinks
- Replacing a Cisco Modular HBA Card
- Replacing the Motherboard RTC Battery
- Replacing an Internal SD Card
- Enabling or Disabling the Internal USB Port
- Replacing a PCIe Riser
- Replacing a PCIe Card
- Installing a Trusted Platform Module
- Replacing Power Supplies
- Replacing an mLOM Card (Cisco VIC 1227)
Maintaining the Node
This chapter describes how to diagnose node problems using LEDs. It also provides information about how to install or replace hardware components, and it includes the following sections:
Status LEDs and Buttons
This section describes the location and meaning of LEDs and buttons and includes the following topics:
Front Panel LEDs
Figure 3-1 shows the front panel LEDs. Table 3-1 defines the LED states.
Rear Panel LEDs and Buttons
Figure 3-2 shows the rear panel LEDs and buttons. Table 3-2 defines the LED states.
Figure 3-2 Rear Panel LEDs and Buttons
The rear-panel callouts include the mLOM card LEDs (Cisco VIC 1227).
This is a summary; for advanced power supply LED information, see Table 3-3.
In Table 3-3, read the status and fault LED states together in each row to determine the event that causes this combination.
Internal Diagnostic LEDs
The node is equipped with a supercap voltage source that can activate internal component fault LEDs up to 30 minutes after AC power is removed. The node has internal fault LEDs for CPUs, DIMMs, fan modules, SD cards, the RTC battery, and the mLOM card.
To use these LEDs to identify a failed component, press the front or rear Unit Identification button (see Figure 3-1 or Figure 3-2) with AC power removed. An LED lights amber to indicate a faulty component.
See Figure 3-3 for the locations of these internal LEDs.
Figure 3-3 Internal Diagnostic LED Locations
The internal diagnostic LEDs include the DIMM fault LEDs (one directly in front of each DIMM socket on the motherboard).
Preparing for Component Installation
This section describes how to prepare for component installation, and it includes the following topics:
- Required Equipment
- Shutting Down the Node
- Decommissioning the Node Using Cisco UCS Manager
- Post-Maintenance Procedures
- Removing and Replacing the Node Top Cover
- Serial Number Location
Required Equipment
The following equipment is used to perform the procedures in this chapter:
Shutting Down the Node
The node can run in two power modes:
- Main power mode—Power is supplied to all node components and any operating system on your drives can run.
- Standby power mode—Power is supplied only to the service processor and the cooling fans and it is safe to power off the node from this mode.
This section contains the following procedures, which are referenced from component replacement procedures. Alternate shutdown procedures are included.
Shutting Down the Node From the Equipment Tab in Cisco UCS Manager
When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.
Note If the Shutdown Server link is dimmed in the Actions area, the node is not running.
Step 1 In the Navigation pane, click Equipment.
Step 2 Expand Equipment > Rack Mounts > Servers.
Step 3 Choose the node that you want to shut down.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Shutdown Server.
Step 6 If a confirmation dialog displays, click Yes.
After the node has been successfully shut down, the Overall Status field on the General tab displays a power-off status.
Shutting Down the Node From the Service Profile in Cisco UCS Manager
When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.
Note If the Shutdown Server link is dimmed in the Actions area, the node is not running.
Step 1 In the Navigation pane, click Servers.
Step 2 Expand Servers > Service Profiles.
Step 3 Expand the node for the organization that contains the service profile of the server node you are shutting down.
Step 4 Choose the service profile of the server node that you are shutting down.
Step 5 In the Work pane, click the General tab.
Step 6 In the Actions area, click Shutdown Server.
Step 7 If a confirmation dialog displays, click Yes.
After the node has been successfully shut down, the Overall Status field on the General tab displays a power-off status.
Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode
Some procedures directly place the node into Cisco HX Maintenance mode. This procedure migrates all VMs to other nodes before the node is shut down and decommissioned from Cisco UCS Manager.
Step 1 Put the node in Cisco HX Maintenance mode by using the vSphere interface:
a. Log in to the vSphere web client.
b. Go to Home > Hosts and Clusters.
c. Expand the Datacenter that contains the HX Cluster.
d. Expand the HX Cluster and select the node.
e. Right-click the node and select Cisco HX Maintenance Mode > Enter HX Maintenance Mode.
Alternatively, put the node in Cisco HX Maintenance mode by using the storage controller command line:
a. Log in to the storage controller cluster command line as a user with root privileges.
b. Move the node into HX Maintenance mode:
1. Identify the node ID and IP address:
2. Enter the node into HX Maintenance mode:
# stcli node maintenanceMode (--id ID | --ip IP-address) --mode enter
(See also stcli node maintenanceMode --help.)
c. Log into the ESXi command line of this node as a user with root privileges.
d. Verify that the node has entered HX Maintenance Mode:
# esxcli system maintenanceMode get
Step 2 Shut down the node using UCS Manager as described in Shutting Down the Node.
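For reference, the storage controller command-line steps above can look like the following session. This is a minimal sketch: the node IP address is illustrative, and the stcli node list command used to identify node IDs and IP addresses is an assumption, because the listing command is not shown in this procedure.
# stcli node list
# stcli node maintenanceMode --ip 192.0.2.21 --mode enter
# esxcli system maintenanceMode get
Enabled
Run the first two commands on the storage controller cluster command line; run the esxcli check from the ESXi shell of the same node, which should report Enabled before you shut down the node in Step 2.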
Shutting Down the Node with the Node Power Button
Note This method is not recommended for a HyperFlex node, but the operation of the physical power button is explained here in case an emergency shutdown is required.
Step 1 Check the color of the Power Status LED (see the “Front Panel LEDs” section).
- Green—The node is in main power mode and must be shut down before it can be safely powered off. Go to Step 2.
- Amber—The node is already in standby mode and can be safely powered off.
Step 2 Invoke either a graceful shutdown or a hard shutdown:
- Graceful shutdown—Press and release the Power button. The operating system performs a graceful shutdown and the node goes to standby mode, which is indicated by an amber Power Status LED.
- Emergency shutdown—Press and hold the Power button for 4 seconds to force the main power off and immediately enter standby mode.
Decommissioning the Node Using Cisco UCS Manager
Before replacing an internal component of a node, you must decommission the node to remove it from the Cisco UCS configuration. When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.
Step 1 In the Navigation pane, click Equipment.
Step 2 Expand Equipment > Rack Mounts > Servers.
Step 3 Choose the node that you want to decommission.
Step 4 In the Work pane, click the General tab.
Step 5 In the Actions area, click Server Maintenance.
Step 6 In the Maintenance dialog box, click Decommission, then click OK.
The node is removed from the Cisco UCS configuration.
Post-Maintenance Procedures
This section contains the following procedures, which are referenced from component replacement procedures:
Recommissioning the Node Using Cisco UCS Manager
After replacing an internal component of a node, you must recommission the node to add it back into the Cisco UCS configuration.
Step 1 In the Navigation pane, click Equipment.
Step 2 Under Equipment, click the Rack Mounts node.
Step 3 In the Work pane, click the Decommissioned tab.
Step 4 On the row for each rack-mount server that you want to recommission, do the following:
a. In the Recommission column, check the check box.
Step 5 If a confirmation dialog box displays, click Yes.
Step 6 (Optional) Monitor the progress of the server recommission and discovery on the FSM tab for the server.
Associating a Service Profile With an HX Node
Use this procedure to associate an HX node to its service profile after recommissioning.
Step 1 In the Navigation pane, click Servers.
Step 2 Expand Servers > Service Profiles.
Step 3 Expand the node for the organization that contains the service profile that you want to associate with the HX node.
Step 4 Right-click the service profile that you want to associate with the HX node and then select Associate Service Profile.
Step 5 In the Associate Service Profile dialog box, select the Server option.
Step 6 Navigate through the navigation tree and select the HX node to which you are associating the service profile.
Exiting HX Maintenance Mode
Use this procedure to exit HX Maintenance Mode after performing a service procedure.
Step 1 Exit the node from Cisco HX Maintenance mode by using the vSphere interface:
a. Log in to the vSphere web client.
b. Go to Home > Hosts and Clusters.
c. Expand the Datacenter that contains the HX Cluster.
d. Expand the HX Cluster and select the node.
e. Right-click the node and select Cisco HX Maintenance Mode > Exit HX Maintenance Mode.
Alternatively, exit the node from Cisco HX Maintenance mode by using the storage controller command line:
a. Log in to the storage controller cluster command line as a user with root privileges.
b. Exit the node from HX Maintenance mode:
1. Identify the node ID and IP address:
2. Exit the node from HX Maintenance mode:
# stcli node maintenanceMode (--id ID | --ip IP-address) --mode exit
(See also stcli node maintenanceMode --help.)
c. Log in to the ESXi command line of this node as a user with root privileges.
d. Verify that the node has exited HX Maintenance Mode:
# esxcli system maintenanceMode get
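As with entering maintenance mode, the command-line exit can be confirmed in one short session. This is a minimal sketch with an illustrative node IP address; only the two commands shown in this procedure are used.
# stcli node maintenanceMode --ip 192.0.2.21 --mode exit
# esxcli system maintenanceMode get
Disabled
The esxcli check, run from the ESXi shell of the node, should report Disabled once the node has exited HX Maintenance mode.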
Removing and Replacing the Node Top Cover
Step 1 Remove the top cover (see Figure 3-4).
a. If the cover latch is locked, use a screwdriver to turn the lock 90 degrees counterclockwise to unlock it. See Figure 3-4.
b. Lift on the end of the latch that has the green finger grip. The cover is pushed back to the open position as you lift the latch.
c. Lift the top cover straight up from the node and set it aside.
Note The latch must be in the fully open position when you set the cover back in place, which allows the opening in the latch to sit over a peg that is on the fan tray.
Step 2 Replace the top cover:
a. With the latch in the fully open position, place the cover on top of the node about one-half inch (1.27 cm) behind the lip of the front cover panel. The opening in the latch should fit over the peg that sticks up from the fan tray.
b. Press the cover latch down to the closed position. The cover is pushed forward to the closed position as you push down the latch.
c. If desired, lock the latch by using a screwdriver to turn the lock 90 degrees clockwise.
Figure 3-4 Removing the Top Cover
Serial Number Location
The serial number (SN) for the node is printed on a label on the top of the node, near the front.
Installing or Replacing Node Components
Warning Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the node unless all cards, faceplates, front covers, and rear covers are in place.
Statement 1029
Tip You can press the Unit Identification button on the front panel or rear panel to turn on a flashing Unit Identification LED on the front and rear panels of the node. This button allows you to locate the specific node that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs remotely by using the Cisco IMC interface. See the “Status LEDs and Buttons” section for locations of these LEDs.
This section describes how to install and replace node components, and it includes the following topics:
- Replaceable Component Locations
- Replacing Drives
- Replacing the Internal Housekeeping SSDs for SDS Logs
- Replacing Fan Modules
- Replacing DIMMs
- Replacing CPUs and Heatsinks
- Replacing a Cisco Modular HBA Card
- Replacing the Motherboard RTC Battery
- Replacing an Internal SD Card
- Enabling or Disabling the Internal USB Port
- Replacing a PCIe Riser
- Replacing a PCIe Card
- Installing a Trusted Platform Module
- Replacing Power Supplies
- Replacing an mLOM Card (Cisco VIC 1227)
Replaceable Component Locations
Figure 3-5 shows the locations of the field-replaceable components. The view shown is from the top down with the top covers and air baffle removed.
Figure 3-5 Replaceable Component Locations
The callouts in Figure 3-5 include the following components:
- Drive bays. See Replacing Drives for information about supported drives.
- Drive bay 1: SSD caching drive. The supported SSD differs between the HX240c and HX240c All-Flash nodes. See Replacing Drives.
- 120 GB internal housekeeping SSDs for SDS logs (two SATA SSDs in PCIe riser 1 sockets)
- Trusted platform module (TPM) socket on motherboard, under PCIe riser 2
- mLOM card socket on motherboard
Replacing Drives
Drive Population Guidelines
The drive-bay numbering is shown in Figure 3-6.
Figure 3-6 Drive Bay Numbering
Observe these drive population guidelines:
- Populate the SSD caching drive only in bay 1. See Table 3-5 for the supported caching SSDs, which differ between supported drive configurations.
- Populate persistent data drives as follows:
– HX240c: HDD persistent data drives—populate in bays 2–24.
– HX240c All-Flash: SSD persistent data drives—populate in bays 2–11. With Cisco HyperFlex Release 2.0, only 10 SSD persistent data drives are supported.
See Table 3-5 for the supported persistent drives, which differ between supported drive configurations.
- When populating persistent data drives, add drives in the lowest numbered bays first.
- Keep an empty drive blanking tray in any unused bays to ensure optimal airflow and cooling.
- See HX240c Drive Configuration Comparison for a comparison of supported drive configurations (hybrid, All-Flash, SED).
HX240c Drive Configuration Comparison
Note the following considerations and restrictions for All-Flash HyperFlex nodes:
- The minimum Cisco HyperFlex software required is Release 2.0 or later.
- With Cisco HX Release 2.0, only 10 SSD persistent data drives are supported.
- HX240c All-Flash HyperFlex nodes are ordered as specific All-Flash PIDs; All-Flash configurations are supported only on those PIDs.
- Conversion from hybrid HX240c configuration to HX240c All-Flash configuration is not supported.
- Mixing hybrid HX240c HyperFlex nodes with HX240c All-Flash HyperFlex nodes within the same HyperFlex cluster is not supported.
Note the following considerations and restrictions for self-encrypting drive (SED) HyperFlex nodes:
Drive Replacement Overview
The three types of drives in the node require different replacement procedures.
- Persistent data drives: Hot-swap replacement. See Replacing Persistent Data Drives. NOTE: Hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating.
- SSD caching drive (bay 1): Hot-swap replacement. See Replacing the SSD Caching Drive (Bay 1). NOTE: Hot-swap replacement for SAS/SATA drives includes hot-removal, so you can remove the drive while it is still operating. NOTE: If an NVMe SSD is used as the caching drive, additional steps are required, as described in the procedure.
- Internal housekeeping SSDs for SDS logs: The node must be put into Cisco HX Maintenance Mode before shutting down and powering off the node. Replacement requires additional technical assistance and cannot be completed by the customer. See Replacing the Internal Housekeeping SSDs for SDS Logs.
Replacing Persistent Data Drives
The persistent data drives must be installed only as follows:
- HX240c: HDDs in bays 2–24
- HX240c All-Flash: SSDs in bays 2–11. With Cisco HyperFlex Release 2.0, only 10 SSD persistent data drives are supported.
Note Hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating.
Step 1 Remove the drive that you are replacing or remove a blank drive tray from an empty bay:
a. Press the release button on the face of the drive tray. See Figure 3-7.
b. Grasp and open the ejector lever and then pull the drive tray out of the slot.
c. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.
Step 2 Install a new drive:
a. Place a new drive in the empty drive tray and replace the four drive-tray screws.
b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.
c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
Replacing the SSD Caching Drive (Bay 1)
The SSD caching drive must be installed in drive bay 1 (see Figure 3-6).
Note the following considerations and restrictions for NVMe SSDs when used as the caching SSD:
- NVMe SSDs are supported in HX240c All-Flash nodes.
- NVMe SSDs are not supported in Hybrid nodes.
- NVMe SSDs are supported only in the Caching SSD position, in drive bay 1.
- NVMe SSDs are not supported for persistent storage or as the Housekeeping drive.
- The locator (beacon) LED cannot be turned on or off on NVMe SSDs.
Note Always replace the drive with the same type and size as the original drive.
Note Upgrading or downgrading the Caching drive in an existing HyperFlex cluster is not supported. If the Caching drive must be upgraded or downgraded, then a full redeployment of the HyperFlex cluster is required.
Note When using a SAS drive, hot-swap replacement includes hot-removal, so you can remove a SAS drive while it is still operating. NVMe drives cannot be hot-swapped.
Step 1 Only if the caching drive is an NVMe SSD, put the ESXi host into HX Maintenance Mode. Otherwise, skip to Step 2.
Step 2 Remove the SSD caching drive:
a. Press the release button on the face of the drive tray (see Figure 3-7).
b. Grasp and open the ejector lever and then pull the drive tray out of the slot.
c. Remove the four drive-tray screws that secure the SSD to the tray and then lift the SSD out of the tray.
Step 3 Install a new SSD caching drive:
a. Place a new SSD in the empty drive tray and replace the four drive-tray screws.
b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.
c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
Step 4 Only if the caching drive is an NVMe SSD:
a. Reboot the ESXi host. This enables ESXi to discover the NVMe SSD.
b. Exit the ESXi host from HX Maintenance Mode.
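After the reboot in Step 4, you can optionally confirm from the ESXi shell that the new NVMe SSD was discovered and that the host has left maintenance mode. This is a minimal sketch, not part of the official procedure; the grep filter assumes the drive's display name contains "NVMe", which depends on the installed SSD model.
# esxcli storage core device list | grep -i nvme
# esxcli system maintenanceMode get
Disabled
The first command should list the new caching SSD among the host's storage devices; the second should report Disabled once the host has exited HX Maintenance mode.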
Replacing the Internal Housekeeping SSDs for SDS Logs
PCIe riser 1 includes sockets for two 120 GB SATA SSDs.
Note This procedure requires assistance from technical support for additional software update steps after the hardware is replaced. It cannot be completed without technical assistance.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Do one of the following:
- If the drive is a SED drive in front drive bay 1, remove it from front drive bay 1. Continue with Step 6.
- If the SSD is in PCIe riser 1, remove it from PCIe riser 1. Continue with Step 8.
Step 6 Remove the drive that you are replacing or remove a blank drive tray from an empty bay:
a. Press the release button on the face of the drive tray. See Figure 3-7.
b. Grasp and open the ejector lever and then pull the drive tray out of the slot.
c. Remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.
Step 7 Install a new drive:
a. Place a new drive in the empty drive tray and replace the four drive-tray screws.
b. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.
c. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
d. Go to Step 16.
Step 8 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 9 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 10 Remove PCIe riser 1 from the node:
a. Lift straight up on both ends of the riser to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic mat.
b. On the bottom of the riser, loosen the single thumbscrew that holds the securing plate. See Figure 3-18.
c. Swing open the securing plate and remove it from the riser to provide access.
Step 11 Remove an existing SSD from PCIe riser 1.
Grasp the carrier-tabs on each side of the SSD and pinch them together as you pull the SSD from its cage and the socket on the PCIe riser.
Step 12 Install a new SSD to PCIe riser 1:
a. Grasp the two carrier-tabs on either side of the SSD and pinch them together as you insert the SSD into the cage on the riser.
b. Push the SSD straight into the cage to engage it with the socket on the riser. Stop when the carrier-tabs click and lock into place on the cage.
Step 13 Return PCIe riser 1 to the node:
a. Return the securing plate to the riser. Insert the two hinge-tabs into the two slots on the riser, and then swing the securing plate closed.
b. Tighten the single thumbscrew that holds the securing plate.
c. Position the PCIe riser over its socket on the motherboard and over its alignment features in the chassis (see Figure 3-16).
d. Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.
Step 14 Replace the top cover.
Step 15 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 16 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 17 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 18 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Note After you replace the Housekeeping SSD hardware, you must contact technical support for additional software update steps.
Replacing Fan Modules
The six hot-swappable fan modules in the node are numbered as follows when you are facing the front of the node.
Figure 3-8 Fan Module Numbering
Tip A fault LED is on the top of each fan module that lights amber if the fan module fails. To operate these LEDs from the supercap power source, remove AC power cords and then press the Unit Identification button. See also Internal Diagnostic LEDs.
Step 1 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 2 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 3 Identify a faulty fan module by looking for a fan fault LED that is lit amber (see Figure 3-9).
Step 4 Remove a fan module that you are replacing (see Figure 3-9):
a. Grasp the top of the fan and pinch the green plastic latch toward the center.
b. Lift straight up to remove the fan module from the node.
Step 5 Install a new fan module:
a. Set the new fan module in place, aligning the connector on the bottom of the fan module with the connector on the motherboard.
Note The arrow label on the top of the fan module, which indicates the direction of airflow, should point toward the rear of the node.
b. Press down gently on the fan module until the latch clicks and locks in place.
Step 6 Replace the top cover.
Step 7 Replace the node in the rack.
Figure 3-9 Fan Modules Latch and Fault LED
Replacing DIMMs
This section includes the following topics:
Note To ensure the best node performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace the memory.
Memory Performance Guidelines and Population Rules
This section describes the type of memory that the node requires and its effect on performance. The section includes the following topics:
DIMM Socket Numbering
Figure 3-10 shows the numbering of the DIMM sockets and CPUs.
Figure 3-10 CPUs and DIMM Socket Numbering on Motherboard
DIMM Population Rules
Observe the following guidelines when installing or replacing DIMMs:
– CPU1 supports channels A, B, C, and D.
– CPU2 supports channels E, F, G, and H.
– A channel can operate with one, two, or three DIMMs installed.
– If a channel has only one DIMM, populate slot 1 first (the blue slot).
– Fill blue #1 slots in the channels first: A1, E1, B1, F1, C1, G1, D1, H1
– Fill black #2 slots in the channels second: A2, E2, B2, F2, C2, G2, D2, H2
– Fill white #3 slots in the channels third: A3, E3, B3, F3, C3, G3, D3, H3
- Any DIMM installed in a DIMM socket for which the CPU is absent is not recognized. In a single-CPU configuration, populate the channels for CPU1 only (A, B, C, D).
- Memory mirroring reduces the amount of memory available by 50 percent because only one of the two populated channels provides data. When memory mirroring is enabled, you must install DIMMs in sets of 4, 6, 8, or 12 as described in Memory Mirroring and RAS.
- Observe the DIMM mixing rules shown in Table 3-7 .
Memory Mirroring and RAS
The Intel E5-2600 CPUs within the node support memory mirroring only when an even number of channels are populated with DIMMs. If one or three channels are populated with DIMMs, memory mirroring is automatically disabled. Furthermore, if memory mirroring is used, DRAM size is reduced by 50 percent for reasons of reliability.
Lockstep Channel Mode
When you enable lockstep channel mode, each memory access is a 128-bit data access that spans four channels.
Lockstep channel mode requires that all four memory channels on a CPU must be populated identically with regard to size and organization. DIMM socket populations within a channel (for example, A1, A2, A3) do not have to be identical but the same DIMM slot location across all four channels must be populated the same.
For example, DIMMs in sockets A1, B1, C1, and D1 must be identical. DIMMs in sockets A2, B2, C2, and D2 must be identical. However, the A1-B1-C1-D1 DIMMs do not have to be identical with the A2-B2-C2-D2 DIMMs.
DIMM Replacement Procedure
Identifying a Faulty DIMM
Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Figure 3-3 for the locations of these LEDs. The LEDs light amber to indicate a faulty DIMM. To operate these LEDs from the SuperCap power source, remove AC power cords and then press the Unit Identification button.
Replacing DIMMs
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove the air baffle that sits over the DIMM sockets and set it aside.
Step 8 Identify the faulty DIMM by observing the DIMM socket fault LEDs on the motherboard (see Figure 3-3).
Step 9 Remove the DIMMs that you are replacing. Open the ejector levers at both ends of the DIMM socket, and then lift the DIMM out of the socket.
Note Before installing DIMMs, see the population guidelines. See Memory Performance Guidelines and Population Rules.
Step 10 Install a new DIMM:
a. Align the new DIMM with the empty socket on the motherboard. Use the alignment key in the DIMM socket to correctly orient the DIMM.
b. Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock into place.
Step 11 Replace the air baffle.
Step 12 Replace the top cover.
Step 13 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 14 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 15 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 16 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Replacing CPUs and Heatsinks
This section contains the following topics:
- CPU Configuration Rules
- Replacing a CPU and Heatsink
- Additional CPU-Related Parts to Order with RMA Replacement Motherboards
Note You can use Xeon v3- and v4-based nodes in the same cluster. Do not mix Xeon v3 and v4 CPUs within the same node.
Special Information For Upgrades to Intel Xeon v4 CPUs
The minimum software and firmware versions required for the node to support Intel v4 CPUs are listed in Table 3-8.
Do one of the following actions:
- If your node’s firmware and/or Cisco UCS Manager software are already at the required levels shown in Table 3-8 , you can replace the CPU hardware by using the procedure in this section.
- If your node’s firmware and/or Cisco UCS Manager software is earlier than the required levels, use the instructions in Special Instructions for Upgrades to Intel Xeon v4 Series to upgrade your software. After you upgrade the software, return to the procedure in this section as directed to replace the CPU hardware.
CPU Configuration Rules
This node has two CPU sockets. Each CPU supports four DIMM channels (12 DIMM sockets). See Figure 3-10.
Replacing a CPU and Heatsink
Note This node uses the new independent loading mechanism (ILM) CPU sockets, so no Pick-and-Place tools are required for CPU handling or installation. Always grasp the plastic frame on the CPU when handling.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove the plastic air baffle that sits over the CPUs.
Step 8 Remove the heatsink that you are replacing:
a. Use a Number 2 Phillips-head screwdriver to loosen the four captive screws that secure the heatsink.
Note Alternate loosening each screw evenly to avoid damaging the heatsink or CPU.
b. Lift the heatsink off of the CPU.
Step 9 Open the CPU retaining mechanism:
a. Unclip the first retaining latch, and then unclip the second retaining latch. Each latch is labeled with an icon that identifies it. See Figure 3-11.
b. Open the hinged CPU cover plate.
Step 10 Remove any existing CPU:
a. With the latches and hinged CPU cover plate open, swing the CPU in its hinged seat up to the open position, as shown in Figure 3-11.
b. Grasp the CPU by the finger-grips on its plastic frame and lift it up and out of the hinged CPU seat.
c. Set the CPU aside on an antistatic surface.
Step 11 Install a new CPU:
a. Grasp the new CPU by the finger-grips on its plastic frame and align the tab on the frame that is labeled “ALIGN” with the hinged seat, as shown in Figure 3-12.
b. Insert the tab on the CPU frame into the seat until it stops and is held firmly.
The line below the word “ALIGN” should be level with the edge of the seat, as shown in Figure 3-12.
c. Swing the hinged seat with the CPU down until the CPU frame clicks in place and holds flat in the socket.
d. Close the hinged CPU cover plate.
e. Clip down the first CPU retaining latch, and then clip down the second CPU retaining latch. See Figure 3-11.
Figure 3-12 CPU and Socket Alignment Features
Step 12 Install a heatsink:
a. Apply the cleaning solution, which is included with the heatsink cleaning kit (UCSX-HSCK=, shipped with spare CPUs), to the old thermal grease on the heatsink and CPU and let it soak for at least 15 seconds.
b. Wipe all of the old thermal grease off the old heat sink and CPU using the soft cloth that is included with the heatsink cleaning kit. Be careful to not scratch the heat sink surface.
Note New heatsinks come with a pre-applied pad of thermal grease. If you are reusing a heatsink, you must apply thermal grease from a syringe (UCS-CPU-GREASE3=).
c. Align the four heatsink captive screws with the motherboard standoffs, and then use a Number 2 Phillips-head screwdriver to tighten the captive screws evenly.
Note Alternate tightening each screw evenly to avoid damaging the heatsink or CPU.
Step 13 Replace the air baffle.
Step 14 Replace the top cover.
Step 15 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 16 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 17 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 18 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Special Instructions for Upgrades to Intel Xeon v4 Series
Use the following procedure to upgrade the node and CPUs.
Step 1 Upgrade the Cisco UCS Manager software to the minimum version for your node (or later). See Table 3-8 .
Use the procedures in the appropriate Cisco UCS Manager upgrade guide (depending on your current software version): Cisco UCS Manager Upgrade Guides.
Step 2 Use Cisco UCS Manager to upgrade and activate the node Cisco IMC to the minimum version for your node (or later). See Table 3-8 .
Use the procedures in the GUI or CLI Cisco UCS Manager Firmware Management Guide for your release.
Step 3 Use Cisco UCS Manager to upgrade and activate the node BIOS to the minimum version for your node (or later). See Table 3-8 .
Use the procedures in the Cisco UCS Manager GUI or CLI Cisco UCS Manager Firmware Management Guide for your release.
Step 4 Replace the CPUs with the Intel Xeon v4 Series CPUs.
Use the CPU replacement procedures in Replacing a CPU and Heatsink.
Additional CPU-Related Parts to Order with RMA Replacement Motherboards
When a return material authorization (RMA) of the motherboard or CPU is done, additional parts might not be included with the CPU or motherboard spare bill of materials (BOM). The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.
Note This node uses the new independent loading mechanism (ILM) CPU sockets, so no Pick-and-Place tools are required for CPU handling or installation. Always grasp the plastic frame on the CPU when handling.
– Heat sink cleaning kit (UCSX-HSCK=)
– Thermal grease kit for HX240c (UCS-CPU-GREASE3=)
A CPU heatsink cleaning kit is good for up to four CPU and heatsink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heatsink of old thermal interface material and the other to prepare the surface of the heatsink.
New heatsink spares come with a pre-applied pad of thermal grease. It is important to clean the old thermal grease off of the CPU prior to installing the heatsinks. Therefore, when you are ordering new heatsinks, you must order the heatsink cleaning kit.
Replacing a Cisco Modular HBA Card
The node has an internal, dedicated PCIe slot on the motherboard for an HBA card (see Figure 3-13).
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove an existing HBA controller card:
a. Disconnect the data cable from the card. Depress the tab on the cable connector and pull.
b. Disconnect the supercap power module cable from the transportable memory module (TMM), if present.
c. Lift straight up on the metal bracket that holds the card. The bracket lifts off of two pegs on the chassis wall.
d. Loosen the two thumbscrews that hold the card to the metal bracket and then lift the card from the bracket.
Step 8 Install a new HBA controller card:
a. Set the new card on the metal bracket, aligned so that the thumbscrews on the card enter the threaded standoffs on the bracket. Tighten the thumbscrews to secure the card to the bracket.
b. Align the two slots on the back of the bracket with the two pegs on the chassis wall.
The two slots on the bracket must slide down over the pegs at the same time that you push the card into the motherboard socket.
c. Gently press down on both top corners of the metal bracket to seat the card into the socket on the motherboard.
d. Connect the supercap power module cable to its connector on the TMM, if present.
e. Connect the single data cable to the card.
Step 9 Replace the top cover.
Step 10 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 11 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 12 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 13 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Replacing the Motherboard RTC Battery
Warning There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions. [Statement 1015]
The real-time clock (RTC) battery retains node settings when the node is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from most electronic stores.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove the battery from its holder on the motherboard (see Figure 3-14):
a. Use a small screwdriver or pointed object to press inward on the battery at the prying point (see Figure 3-14).
b. Lift up on the battery and remove it from the holder.
Step 8 Install an RTC battery. Insert the battery into its holder and press down until it clicks in place.
Note The positive side of the battery marked “3V+” should face upward.
Step 9 Replace the top cover.
Step 10 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 11 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 12 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 13 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-14 RTC Battery Location and Prying Point
Replacing an Internal SD Card
The node has two internal SD card bays on the motherboard. Dual SD cards are supported.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove an SD card (see Figure 3-15).
a. Push on the top of the SD card, and then release it to allow it to spring out from the slot.
b. Remove the SD card from the slot.
Step 8 Install a new SD card:
a. Insert the SD card into the slot with the label side facing up.
b. Press on the top of the card until it clicks in the slot and stays in place.
Step 9 Replace the top cover.
Step 10 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 11 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 12 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 13 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-15 SD Card Bay Location and Numbering on the Motherboard
Enabling or Disabling the Internal USB Port
The factory default is for all USB ports on the node to be enabled. However, the internal USB port can be enabled or disabled in the node BIOS. See Figure 3-5 for the location of the internal USB 3.0 slot on the motherboard.
Step 1 Enter the BIOS Setup Utility by pressing the F2 key when prompted during bootup.
Step 2 Navigate to the Advanced tab.
Step 3 On the Advanced tab, select USB Configuration.
Step 4 On the USB Configuration page, choose USB Ports Configuration.
Step 5 Scroll to USB Port: Internal, press Enter, and then choose either Enabled or Disabled from the dialog box.
Step 6 Press F10 to save and exit the utility.
Replacing a PCIe Riser
The node contains two toolless PCIe risers for horizontal installation of PCIe cards. See Replacing a PCIe Card for the specifications of the PCIe slots on the risers.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove the PCIe riser that you are replacing (see Figure 3-16):
a. Grasp the top of the riser and lift straight up on both ends to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic mat.
b. If the riser has a card installed, remove the card from the riser. See Replacing a PCIe Card.
Step 8 Install a new PCIe riser:
a. If you removed a card from the old PCIe riser, install the card to the new riser (see Replacing a PCIe Card).
b. Position the PCIe riser over its socket on the motherboard and over its alignment slots in the chassis (see Figure 3-16). There are also two alignment pegs on the motherboard for each riser.
Note The PCIe risers are not interchangeable. If you plug a PCIe riser into the wrong socket, the node will not boot. Riser 1 must plug into the motherboard socket labeled “RISER1.” Riser 2 must plug into the motherboard socket labeled “RISER2.”
c. Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.
Step 9 Replace the top cover.
Step 10 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 11 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 12 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 13 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-16 PCIe Riser Alignment Features
Replacing a PCIe Card
PCIe Slots
The system contains two toolless PCIe risers for horizontal installation of PCIe cards (see Figure 3-17).
- Riser 1: PCIe slots 1 and 2; slot 3 is occupied by the two internal SATA SSD sockets (used for the housekeeping SSDs). See Table 3-9.
- Riser 2: PCIe slots 4, 5, and 6. See Table 3-10.
Figure 3-17 Rear Panel, Showing PCIe Slots
Table 3-9 and Table 3-10 list the specifications for the PCIe slots on riser 1 and riser 2, respectively, including lane width and NCSI support.
1. NCSI is supported in only one slot at a time in this riser version.
Replacing a PCIe Card
Note If you are installing a Cisco UCS Virtual Interface Card, there are prerequisite considerations. See Special Considerations for Cisco UCS Virtual Interface Cards.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove a PCIe card (or a blanking panel) from the PCIe riser:
a. Lift straight up on both ends of the riser to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic mat.
b. On the bottom of the riser, loosen the single thumbscrew that holds the securing plate (see Figure 3-18).
c. Swing open the securing plate and remove it from the riser to provide access.
d. Swing open the card-tab retainer that secures the back-panel tab of the card (see Figure 3-18).
e. Pull evenly on both ends of the PCIe card to disengage it from the socket on the PCIe riser (or remove a blanking panel) and then set the card aside.
Step 8 Install a new PCIe card:
a. Align the new PCIe card with the empty socket on the PCIe riser.
b. Push down evenly on both ends of the card until it is fully seated in the socket.
Ensure that the card rear panel tab sits flat against the PCIe riser rear panel opening.
c. Close the card-tab retainer (see Figure 3-18).
d. Return the securing plate to the riser. Insert the two hinge-tabs into the two slots on the riser, and then swing the securing plate closed.
e. Tighten the single thumbscrew on the bottom of the securing plate.
f. Position the PCIe riser over its socket on the motherboard and over its alignment features in the chassis (see Figure 3-16).
g. Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.
Step 9 Replace the top cover.
Step 10 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 11 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 12 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 13 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-18 PCIe Riser Securing Features (Three-Slot Riser Shown)
The callouts in Figure 3-18 include the securing plate thumbscrew (the knob is not visible on the underside of the plate).
Installing Multiple PCIe Cards and Resolving Limited Resources
When a large number of PCIe add-on cards are installed in the node, the node might run out of the following resources required for PCIe devices:
- Memory space to execute legacy option ROMs
- 16-bit legacy I/O space
The topics in this section provide guidelines for resolving the issues related to these limited resources.
Resolving Insufficient Memory Space to Execute Option ROMs
The node has very limited memory to execute PCIe legacy option ROMs, so when a large number of PCIe add-on cards are installed in the node, the node BIOS might not be able to execute all of the option ROMs. The node BIOS loads and executes the option ROMs in the order that the PCIe cards are enumerated (slot 1, slot 2, slot 3, and so on).
If the node BIOS does not have sufficient memory space to load any PCIe option ROM, it skips loading that option ROM, reports a system event log (SEL) event to the Cisco IMC controller, and reports an error in the Error Manager page of the BIOS Setup Utility.
To resolve this issue, disable the Option ROMs that are not needed for node booting. The BIOS Setup Utility provides the setup options to enable or disable the Option ROMs at the PCIe slot level for the PCIe expansion slots and at the port level for the onboard NICs. These options can be found in the BIOS Setup Utility Advanced > PCI Configuration page.
If the node is configured to boot primarily from RAID storage, make sure that the option ROMs for the slots where your RAID controllers are installed are enabled in the BIOS, depending on your RAID controller configuration.
If the RAID controller does not appear in the node boot order even when the option ROMs for those slots are enabled, the RAID controller option ROM might not have sufficient memory space to execute. In that case, disable other option ROMs that are not needed for the node configuration to free up memory space for the RAID controller option ROM.
If the node is configured to primarily perform PXE boot from onboard NICs, make sure that the option ROMs for the onboard NICs to be booted from are enabled in the BIOS Setup Utility. Disable other option ROMs that are not needed to create sufficient memory space for the onboard NICs.
Resolving Insufficient 16-Bit I/O Space
The node has only 64 KB of legacy 16-bit I/O resources available. This 64 KB of I/O space is divided between the CPUs in the node because the PCIe controller is integrated into the CPUs. This node BIOS has the capability to dynamically detect the 16-bit I/O resource requirement for each CPU and then balance the 16-bit I/O resource allocation between the CPUs during the PCI bus enumeration phase of the BIOS POST.
When a large number of PCIe cards are installed in the node, the node BIOS might not have sufficient I/O space for some PCIe devices. If the node BIOS is not able to allocate the required I/O resources for any PCIe devices, the following symptoms have been observed:
- The node might get stuck in an infinite reset loop.
- The BIOS might appear to hang while initializing PCIe devices.
- The PCIe option ROMs might take excessive time to complete, which appears to lock up the node.
- PCIe boot devices might not be accessible from the BIOS.
- PCIe option ROMs might report initialization errors. These errors are seen before the BIOS passes control to the operating system.
- The keyboard might not work.
To work around this problem, rebalance the 16-bit I/O load using the following methods:
1. Physically remove any unused PCIe cards.
2. If the node has one or more Cisco virtual interface cards (VICs) installed, disable the PXE boot on the VICs that are not required for the node boot configuration by using the Network Adapters page in the Cisco IMC Web UI to free up some 16-bit I/O resources. Each VIC uses a minimum of 16 KB of 16-bit I/O resources, so disabling PXE boot on Cisco VICs frees up 16-bit I/O resources that can be used for the other PCIe cards that are installed in the node.
Installing a Trusted Platform Module
The trusted platform module (TPM) is a small circuit board that connects to a motherboard socket and is secured by a one-way screw. The socket location is on the motherboard under PCIe riser 2.
This section contains the following procedures, which must be followed in this order when installing and enabling a TPM:
1. Installing the TPM Hardware
2. Enabling TPM Support in the BIOS
3. Enabling the Intel TXT Feature in the BIOS
Installing the TPM Hardware
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove PCIe riser 2 to provide clearance. See Replacing a PCIe Riser for instructions.
Step 8 Install a TPM:
a. Locate the TPM socket on the motherboard, as shown in Figure 3-19.
b. Align the connector that is on the bottom of the TPM circuit board with the motherboard TPM socket. Align the screw hole and standoff on the TPM board with the screw hole that is adjacent to the TPM socket.
c. Push down evenly on the TPM to seat it in the motherboard socket.
d. Install the single one-way screw that secures the TPM to the motherboard.
Step 9 Replace PCIe riser 2 to the node. See Replacing a PCIe Riser for instructions.
Step 10 Replace the top cover.
Step 11 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 12 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 13 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 14 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Step 15 Continue with Enabling TPM Support in the BIOS.
Figure 3-19 TPM Socket Location on Motherboard
Enabling TPM Support in the BIOS
Note After hardware installation, you must enable TPM support in the BIOS.
Note You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security > Set Administrator Password and enter the new password twice as prompted.
Step 1 Enable TPM support:
a. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.
b. Log in to the BIOS Setup Utility with your BIOS Administrator password.
c. On the BIOS Setup Utility window, choose the Advanced tab.
d. Choose Trusted Computing to open the TPM Security Device Configuration window.
e. Change TPM SUPPORT to Enabled.
f. Press F10 to save your settings and reboot the node.
Step 2 Verify that TPM support is now enabled:
a. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.
b. Log into the BIOS Setup utility with your BIOS Administrator password.
c. Choose the Advanced tab.
d. Choose Trusted Computing to open the TPM Security Device Configuration window.
e. Verify that TPM SUPPORT and TPM State are Enabled.
Step 3 Continue with Enabling the Intel TXT Feature in the BIOS.
Enabling the Intel TXT Feature in the BIOS
Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the node. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the node. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.
Step 1 Reboot the node and watch for the prompt to press F2.
Step 2 When prompted, press F2 to enter the BIOS Setup utility.
Step 3 Verify that the prerequisite BIOS values are enabled:
a. Choose the Advanced tab.
b. Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.
c. Verify that the following items are listed as Enabled:
– VT-d Support (default is Enabled)
– VT Support (default is Enabled)
- If VT-d Support and VT Support are already enabled, skip to Step 4.
- If VT-d Support and VT Support are not enabled, continue with the next steps to enable them.
d. Press Escape to return to the BIOS Setup utility Advanced tab.
e. On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.
f. Set Intel (R) VT and Intel (R) VT-d to Enabled.
Step 4 Enable the Intel Trusted Execution Technology (TXT) feature:
a. Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.
b. Set TXT Support to Enabled.
Step 5 Press F10 to save your changes and exit the BIOS Setup utility.
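If the node runs ESXi, you can confirm from the hypervisor that the TPM and TXT changes took effect. The following is a minimal sketch, not part of the Cisco procedure; it assumes the esxcli hardware trustedboot namespace is available in your ESXi release and that the script runs from an ESXi shell with Python.

```python
# Illustrative sketch only; assumes an ESXi shell with Python and the
# "esxcli hardware trustedboot" namespace available in your ESXi release.
import subprocess

def trusted_boot_status() -> str:
    # Returns the raw esxcli output, which reports whether the host
    # sees the TPM and whether DRTM (Intel TXT) is enabled.
    result = subprocess.run(
        ["esxcli", "hardware", "trustedboot", "get"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(trusted_boot_status())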
Replacing Power Supplies
The node can have one or two power supplies. When two power supplies are installed, they are redundant as 1+1 and hot-swappable.
- See Power Specifications for more information about the supported power supplies.
- See Rear Panel LEDs and Buttons for information about the power supply LEDs.
- See Installing a DC Power Supply for information about wiring a DC power supply.
Note If you have ordered a node with power supply redundancy (two power supplies), you do not have to power off the node to replace power supplies because they are redundant as 1+1 and hot-swappable.
Note Do not mix power supply types in the node. Both power supplies must be the same wattage and Cisco product ID (PID).
Step 1 Perform one of the following actions:
- If your node has two power supplies, you do not have to shut down the node. Continue with Step 2.
- If your node has only one power supply:
a. Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
b. Shut down the node as described in Shutting Down the Node.
c. Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 2 Remove the power cord from the power supply that you are replacing.
Step 3 Grasp the power supply handle while pinching the green release lever towards the handle (see Figure 3-20).
Step 4 Pull the power supply out of the bay.
Step 5 Install a new power supply:
a. Grasp the power supply handle and insert the new power supply into the empty bay.
b. Push the power supply into the bay until the release lever locks.
c. Connect the power cord to the new power supply.
Step 6 If you shut down the node in Step 1, perform these steps:
a. Press the Power button to return the node to main power mode.
b. Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
c. Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
d. After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Installing a DC Power Supply
Warning A readily accessible two-poled disconnect device must be incorporated in the fixed wiring. Statement 1022
Warning This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations. Statement 1045
Warning When installing or replacing the unit, the ground connection must always be made first and disconnected last. Statement 1046
Warning Installation of the equipment must comply with local and national electrical codes. Statement 1074
Warning Hazardous voltage or energy may be present on DC power terminals. Always replace cover when terminals are not in service. Be sure uninsulated conductors are not accessible when cover is in place. Statement 1075
Installing a 930W DC Power Supply, UCSC-PSU-930WDC
If you are using a Version 1, 930W DC power supply, power is connected to the removable connector block with stripped wires.
Step 1 Turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.
Step 2 Remove the DC power connector block from the power supply. (The spare PID for this connector is UCSC-CONN-930WDC=.)
To release the connector block from the power supply, push the orange plastic button on the top of the connector inward toward the power supply and pull the connector block out.
Step 3 Strip 15 mm (0.59 inches) of insulation off the DC wires that you will use.
Note The recommended wire gauge is 8 AWG. The minimum wire gauge is 10 AWG.
Step 4 Orient the connector as shown in Figure 3-21, with the orange plastic button toward the top.
Step 5 Use a small screwdriver to depress the spring-loaded wire retainer lever on the lower spring-cage wire connector. Insert your green (ground) wire into the aperture and then release the lever.
Step 6 Use a small screwdriver to depress the wire retainer lever on the middle spring-cage wire connector. Insert your black (DC negative) wire into the aperture and then release the lever.
Step 7 Use a small screwdriver to depress the wire retainer lever on the upper spring-cage wire connector. Insert your red (DC positive) wire into the aperture and then release the lever.
Step 8 Insert the connector block back into the power supply. Make sure that your red (DC positive) wire aligns with the power supply label, “+ DC”.
Figure 3-21 Version 1 930 W, –48 VDC Power Supply Connector Block
Replacing an mLOM Card (Cisco VIC 1227)
The node uses a Cisco VIC 1227 mLOM adapter. The mLOM card socket remains powered when the node is in 12 V standby power mode, and it supports the network communications services interface (NCSI) protocol.
Replacing an mLOM Card
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Remove PCIe riser 1 to provide clearance. See Replacing a PCIe Riser for instructions.
Step 8 Remove any existing mLOM card or a blanking panel (see Figure 3-22):
a. Loosen the single thumbscrew that secures the mLOM card to the chassis floor.
b. Slide the mLOM card horizontally to disengage its connector from the motherboard socket.
Step 9 Install a new mLOM card:
a. Set the mLOM card on the chassis floor so that its connector is aligned with the motherboard socket and its thumbscrew is aligned with the standoff on the chassis floor.
b. Push the card’s connector into the motherboard socket horizontally.
c. Tighten the thumbscrew to secure the card to the chassis floor.
Step 10 Return PCIe riser 1 to the node. See Replacing a PCIe Riser for instructions.
Step 11 Replace the top cover.
Step 12 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 13 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 14 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 15 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Figure 3-22 mLOM Card Location
mLOM card (VIC 1227) socket location on motherboard (under PCIe riser 1)
Special Considerations for Cisco UCS Virtual Interface Cards
Table 3-11 describes the requirements for the supported Cisco UCS virtual interface cards (VICs).
Note The Cisco UCS VIC 1227 (UCSC-MLOM-CSC-02) is not compatible with certain Cisco SFP+ modules when it is used in Cisco Card NIC mode. Do not use a Cisco SFP+ module with part number 37-0961-01 whose serial number falls in the range MOC1238xxxx through MOC1309xxxx. If you use the Cisco UCS VIC 1227 in Cisco Card NIC mode, either use a Cisco SFP+ module with a different part number, or use part number 37-0961-01 only if its serial number is outside that range (a check is sketched below). See the data sheet for this adapter for other supported SFP+ modules: Cisco UCS VIC 1227 Data Sheet
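When auditing several modules, the serial-number check in the note above can be scripted. The following is a minimal sketch, not a Cisco tool; it assumes the four digits that follow the MOC prefix are the date code that defines the affected range.

```python
# Illustrative sketch only; assumes the four digits after the "MOC" prefix
# are the date code that defines the affected serial-number range.
import re

AFFECTED_PART_NUMBER = "37-0961-01"

def sfp_is_affected(part_number: str, serial_number: str) -> bool:
    """Return True if this SFP+ module should not be used with the
    Cisco UCS VIC 1227 in Cisco Card NIC mode."""
    if part_number != AFFECTED_PART_NUMBER:
        return False
    match = re.match(r"MOC(\d{4})", serial_number.upper())
    if not match:
        return False  # unrecognized format; verify the serial number manually
    return 1238 <= int(match.group(1)) <= 1309

print(sfp_is_affected("37-0961-01", "MOC1250XXXX"))  # True (affected)
print(sfp_is_affected("37-0961-01", "MOC1312XXXX"))  # False (outside the range)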
Service DIP Switches
This section includes the following topics:
- DIP Switch Location on the Motherboard
- Using the BIOS Recovery DIP Switch
- Using the Clear Password DIP Switch
- Using the Clear CMOS DIP Switch
DIP Switch Location on the Motherboard
See Figure 3-23. The position of the block of DIP switches (SW8) is shown in red. In the magnified view, all switches are shown in the default position.
Figure 3-23 Service DIP Switches
Using the BIOS Recovery DIP Switch
Note The following procedures use a recovery.cap recovery file. In Cisco IMC releases 3.0(1) and later, this recovery file has been renamed to bios.cap.
Depending on the stage at which the BIOS became corrupted, you might see different behavior.
Note There are two procedures for recovering the BIOS. Try procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.
Procedure 1: Reboot with recovery.cap (or bios.cap) File
Step 1 Download the BIOS update package and extract it to a temporary location.
Step 2 Copy the contents of the extracted recovery folder to the root directory of a USB thumb drive. The recovery folder contains the recovery.cap (or bios.cap) file that is required in this procedure.
Note The recovery.cap (or bios.cap) file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either FAT16 or FAT32 file systems.
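Before inserting the drive, you can sanity-check its contents from a workstation. This is a minimal sketch, not part of the recovery procedure; the mount point /mnt/usb is a placeholder for wherever your operating system mounts the drive.

```python
# Illustrative sketch only; /mnt/usb is a placeholder for wherever your
# operating system mounts the USB thumb drive.
from pathlib import Path

def recovery_file_present(mount_point: str) -> bool:
    # The recovery file must sit in the root directory and keep its
    # original name (recovery.cap or bios.cap).
    root = Path(mount_point)
    return any((root / name).is_file() for name in ("recovery.cap", "bios.cap"))

print(recovery_file_present("/mnt/usb"))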
Step 3 Insert the USB drive into a USB port on the node.
Step 5 Return the node to main power mode by pressing the Power button on the front panel.
The node boots with the updated BIOS boot block. When the BIOS detects a valid recovery file on the USB thumb drive, it displays this message:
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
Step 6 Wait for the node to complete the BIOS update, and then remove the USB thumb drive from the node.
Note During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.
Procedure 2: Use BIOS Recovery DIP switch and recovery.cap (or bios.cap) File
See Figure 3-23 for the location of the SW8 block of DIP switches.
Step 1 Download the BIOS update package and extract it to a temporary location.
Step 2 Copy the contents of the extracted recovery folder to the root directory of a USB thumb drive. The recovery folder contains the recovery.cap (or bios.cap) file that is required in this procedure.
Note The recovery.cap (or bios.cap) file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either FAT16 or FAT32 file systems.
Step 3 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 4 Shut down the node as described in Shutting Down the Node.
Step 5 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 6 Disconnect all power cables from the power supplies.
Step 7 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 8 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 9 Slide the BIOS recovery DIP switch from position 1 to the closed position (see Figure 3-23).
Step 10 Reconnect AC power cords to the node. The node powers up to standby power mode.
Step 11 Insert the USB thumb drive that you prepared in Step 2 into a USB port on the node.
Step 12 Return the node to main power mode by pressing the Power button on the front panel.
The node boots with the updated BIOS boot block. When the BIOS detects a valid recovery file on the USB thumb drive, it displays this message:
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
Step 13 Wait for the node to complete the BIOS update, and then remove the USB thumb drive from the node.
Note During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.
Step 14 After the node has fully booted, power off the node again and disconnect all power cords.
Step 15 Slide the BIOS recovery DIP switch from the closed position back to the default position 1.
Note If you do not move the jumper, after recovery completion you see the prompt, “Please remove the recovery jumper.”
Step 16 Replace the top cover to the node.
Step 17 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 18 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 19 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 20 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Using the Clear Password DIP Switch
See Figure 3-23 for the location of this DIP switch. You can use this switch to clear the administrator password.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Slide the clear password DIP switch from position 2 to the closed position (see Figure 3-23).
Step 8 Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.
Step 9 Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.
Note You must allow the entire node, not just the service processor, to reboot to main power mode to complete the reset. The state of the jumper cannot be determined without the host CPU running.
Step 10 Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.
Step 11 Remove the top cover from the node.
Step 12 Slide the clear password DIP switch from the closed position back to default position 2 (see Figure 3-23).
Note If you do not move the jumper, the password is cleared every time that you power-cycle the node.
Step 13 Replace the top cover to the node.
Step 14 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 15 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 16 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 17 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Using the Clear CMOS DIP Switch
See Figure 3-23 for the location of this DIP switch. You can use this switch to clear the node’s CMOS settings in the case of a system hang. For example, if the node hangs because of incorrect settings and does not boot, use this jumper to invalidate the settings and reboot with defaults.
Step 1 Put the node in Cisco HX Maintenance mode as described in Shutting Down the Node Through vSphere With Cisco HX Maintenance Mode.
Step 2 Shut down the node as described in Shutting Down the Node.
Step 3 Decommission the node as described in Decommissioning the Node Using Cisco UCS Manager.
Step 4 Disconnect all power cables from the power supplies.
Step 5 Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.
Step 6 Remove the top cover as described in Removing and Replacing the Node Top Cover.
Step 7 Slide the clear CMOS DIP switch from position 4 to the closed position (see Figure 3-23).
Step 8 Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.
Step 9 Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.
Note You must allow the entire node, not just the service processor, to reboot to main power mode to complete the reset. The state of the jumper cannot be determined without the host CPU running.
Step 10 Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.
Step 11 Remove the top cover from the node.
Step 12 Move the clear CMOS DIP switch from the closed position back to default position 4 (see Figure 3-23).
Note If you do not move the jumper, the CMOS settings are reset to the default every time that you power-cycle the node.
Step 13 Replace the top cover to the node.
Step 14 Replace the node in the rack, replace power cables, and then power on the node by pressing the Power button.
Step 15 Recommission the node as described in Recommissioning the Node Using Cisco UCS Manager.
Step 16 Associate the node to its service profile as described in Associating a Service Profile With an HX Node.
Step 17 After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
Setting Up the Node in Standalone Mode
Note The HX Series node is always managed in UCS Manager-controlled mode. This section is included only for cases in which a node might need to be put into standalone mode for troubleshooting purposes. Do not use this setup for normal operation of the HX Series node.
Connecting and Powering On the Node (Standalone Mode)
The node is shipped with these default settings:
- The NIC mode is Shared LOM EXT. This mode enables the 1-Gb Ethernet ports and the ports on any installed Cisco virtual interface card (VIC) to access the Cisco Integrated Management Controller (Cisco IMC). If you want to use the 10/100/1000 dedicated management ports to access Cisco IMC, you can connect to the node and change the NIC mode as described in Step 1 of the following procedure.
- The NIC redundancy is active-active. All Ethernet ports are utilized simultaneously.
- DHCP is enabled.
- IPv4 is enabled. You can change this to IPv6.
There are two methods for connecting to the node for initial setup:
- Local setup—Use this procedure if you want to connect a keyboard and monitor to the node for setup. This procedure requires a KVM cable (Cisco PID N20-BKVM). See Local Connection Procedure.
- Remote setup—Use this procedure if you want to perform setup through your dedicated management LAN. See Remote Connection Procedure.
Note To configure the node remotely, you must have a DHCP server on the same network as the node. Your DHCP server must be preconfigured with the range of MAC addresses for this node. The MAC address is printed on a label that is on the pull-out asset tag on the front panel (see Figure 1-1). This node has a range of six MAC addresses assigned to the Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.
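When building the DHCP reservation, the six contiguous MAC addresses can be derived from the label address. The following is an illustrative sketch only, not a Cisco tool; the MAC address shown is a placeholder for the one printed on your asset tag.

```python
# Illustrative sketch only; the MAC address below is a placeholder for the
# address printed on the pull-out asset tag.
def cimc_mac_range(label_mac: str, count: int = 6) -> list:
    # The label MAC is the first of six contiguous addresses assigned to
    # the Cisco IMC; list all of them for the DHCP reservation.
    value = int(label_mac.replace(":", "").replace("-", ""), 16)
    macs = []
    for i in range(count):
        raw = format(value + i, "012x")
        macs.append(":".join(raw[j:j + 2] for j in range(0, 12, 2)))
    return macs

for mac in cimc_mac_range("00:25:b5:00:00:10"):
    print(mac)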
Local Connection Procedure
Step 1 Attach a power cord to each power supply in your node, and then attach each power cord to a grounded AC power outlet. See Power Specifications for power specifications.
Wait for approximately two minutes to let the node boot in standby power during the first bootup.
You can verify node power status by looking at the node Power Status LED on the front panel (see External Features Overview). The node is in standby power mode when the LED is amber.
Step 2 Connect a USB keyboard and VGA monitor to the node using one of the following methods:
- Connect a USB keyboard and VGA monitor to the corresponding connectors on the rear panel (see External Features Overview).
- Connect an optional KVM cable (Cisco PID N20-BKVM) to the KVM connector on the front panel (see External Features Overview for the connector location). Connect your USB keyboard and VGA monitor to the KVM cable.
Step 3 Open the Cisco IMC Configuration Utility:
a. Press and hold the front panel power button for four seconds to boot the node.
b. During bootup, press F8 when prompted to open the Cisco IMC Configuration Utility.
This utility has two windows that you can switch between by pressing F1 or F2.
Step 4 Continue with Cisco IMC Configuration Utility Setup.
Remote Connection Procedure
Step 1 Attach a power cord to each power supply in your node, and then attach each power cord to a grounded AC power outlet.
Wait for approximately two minutes to let the node boot in standby power during the first bootup.
You can verify node power status by looking at the node Power Status LED on the front panel (see External Features Overview). The node is in standby power mode when the LED is amber.
Step 2 Plug your management Ethernet cable into the dedicated management port on the rear panel (see External Features Overview).
Step 3 Allow your preconfigured DHCP server to assign an IP address to the node.
Step 4 Use the assigned IP address to access and log in to the Cisco IMC for the node. Consult with your DHCP server administrator to determine the IP address.
Note The default user name for the node is admin. The default password is password.
Step 5 From the Cisco IMC Summary page, click Launch KVM Console. A separate KVM console window opens.
Step 6 From the Cisco IMC Summary page, click Power Cycle System. The node reboots.
Step 7 Select the KVM console window.
Note The KVM console window must be the active window for the following keyboard actions to work.
Step 8 When prompted, press F8 to enter the Cisco IMC Configuration Utility. This utility opens in the KVM console window.
This utility has two windows that you can switch between by pressing F1 or F2.
Step 9 Continue with Cisco IMC Configuration Utility Setup.
Cisco IMC Configuration Utility Setup
The following procedure is performed after you connect to the node and open the Cisco IMC Configuration Utility.
Step 1 Set NIC mode and NIC redundancy:
a. Set the NIC mode to choose which ports to use to access Cisco IMC for node management (see Figure 1-2 for identification of the ports):
- Shared LOM EXT (default)—In this mode, DHCP replies are returned to both the shared LOM ports and the Cisco card ports. If the node determines that the Cisco card connection is not getting its IP address from a Cisco UCS Manager node because the node is in standalone mode, further DHCP requests from the Cisco card are disabled. Use the Cisco Card NIC mode if you want to connect to Cisco IMC through a Cisco card in standalone mode.
- Dedicated—The dedicated management port is used to access Cisco IMC. You must select a NIC redundancy and IP setting.
- Shared LOM—The 1-Gb Ethernet ports are used to access Cisco IMC. You must select a NIC redundancy and IP setting.
- Cisco Card—The ports on an installed Cisco UCS virtual interface card (VIC) are used to access Cisco IMC. You must select a NIC redundancy and IP setting.
See also the required VIC Slot setting below.
– If you select Riser1, slot 2 is the primary slot, but you can use slot 1.
– If you select Riser2, slot 5 is the primary slot, but you can use slot 4.
– If you select Flex-LOM, you must use an mLOM-style VIC in the mLOM slot.
b. Use this utility to change the NIC redundancy to your preference. This node has three possible NIC redundancy settings (summarized in the sketch after this list):
– None—The Ethernet ports operate independently and do not fail over if there is a problem. This setting can be used only with the Dedicated NIC mode.
– Active-standby—If an active Ethernet port fails, traffic fails over to a standby port.
– Active-active—All Ethernet ports are utilized simultaneously. Shared LOM EXT mode can have only this NIC redundancy setting. Shared LOM and Cisco Card modes can have both Active-standby and Active-active settings.
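The mode-to-redundancy pairings listed above can be captured in a small lookup table. The following is a minimal sketch based only on that list; the Dedicated entry is an assumption drawn from the statement that the None setting applies only to Dedicated mode.

```python
# Illustrative sketch only, based on the NIC redundancy list above; the
# Dedicated entry is an assumption (the text states only that None is
# limited to Dedicated mode).
ALLOWED_NIC_REDUNDANCY = {
    "Dedicated": {"None"},
    "Shared LOM": {"Active-standby", "Active-active"},
    "Shared LOM EXT": {"Active-active"},  # factory-default pairing
    "Cisco Card": {"Active-standby", "Active-active"},
}

def redundancy_is_valid(nic_mode: str, redundancy: str) -> bool:
    return redundancy in ALLOWED_NIC_REDUNDANCY.get(nic_mode, set())

print(redundancy_is_valid("Shared LOM EXT", "Active-active"))  # True
print(redundancy_is_valid("Shared LOM", "None"))               # False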
Step 2 Choose whether to enable DHCP for dynamic network settings, or to enter static network settings.
Note Before you enable DHCP, you must preconfigure your DHCP server with the range of MAC addresses for this node. The MAC address is printed on a label on the rear of the node. This node has a range of six MAC addresses assigned to Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.
The static IPv4 and IPv6 settings include the IP address, the prefix/subnet, the gateway, and the preferred DNS server address (a validation sketch follows this list):
- Prefix/subnet—For IPv6, valid values are 1–127.
- Gateway—For IPv6, if you do not know the gateway, you can set it as none by entering :: (two colons).
- Preferred DNS server—For IPv6, you can set this as none by entering :: (two colons).
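The sketch below is illustrative only and is not part of the configuration utility; it shows one way to pre-check the IPv6 conventions described above (a prefix length of 1–127, and :: meaning none) before you type them in.

```python
# Illustrative sketch only; it is not part of the configuration utility.
import ipaddress

def ipv6_prefix_is_valid(prefix_length: int) -> bool:
    # The utility accepts IPv6 prefix lengths of 1-127.
    return 1 <= prefix_length <= 127

def optional_ipv6(value: str):
    # Entering "::" in the utility means "none" for the gateway or the
    # preferred DNS server; anything else must be a valid IPv6 address.
    if value.strip() == "::":
        return None
    return str(ipaddress.IPv6Address(value))

print(ipv6_prefix_is_valid(64))        # True
print(optional_ipv6("::"))             # None
print(optional_ipv6("2001:db8::1"))    # 2001:db8::1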
Step 3 (Optional) Use this utility to make VLAN settings.
Step 4 Press F1 to go to the second settings window, then continue with the next step.
From the second window, you can press F2 to switch back to the first window.
Step 5 (Optional) Set a hostname for the node.
Step 6 (Optional) Enable dynamic DNS and set a dynamic DNS (DDNS) domain.
Step 7 (Optional) If you check the Factory Default check box, the node reverts to the factory defaults.
Step 8 (Optional) Set a default user password.
Step 9 (Optional) Enable auto-negotiation of port settings or set the port speed and duplex mode manually.
Note Auto-negotiation is applicable only when you use the Dedicated NIC mode. Auto-negotiation sets the port speed and duplex mode automatically based on the switch port to which the node is connected. If you disable auto-negotiation, you must set the port speed and duplex mode manually.
Step 10 (Optional) Reset port profiles and the port name.
Step 11 Press F5 to refresh the settings that you made. You might have to wait about 45 seconds until the new settings appear and the message, “Network settings configured” is displayed before you reboot the node in the next step.
Step 12 Press F10 to save your settings and reboot the node.
Note If you chose to enable DHCP, the dynamically assigned IP and MAC addresses are displayed on the console screen during bootup.
Use a browser and the IP address of the Cisco IMC to connect to the Cisco IMC management interface. The IP address is based upon the settings that you made (either a static address or the address assigned by your DHCP server).
Note The default username for the node is admin. The default password is password.
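As a final check, you can verify that the Cisco IMC address responds before opening it in a browser. This is a minimal sketch and assumes the management interface is listening on the standard HTTPS port 443; substitute the IP address that you configured or that your DHCP server assigned.

```python
# Illustrative sketch only; assumes the Cisco IMC web interface listens on
# the standard HTTPS port 443. Substitute your own management IP address.
import socket

def cimc_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(cimc_reachable("192.0.2.10"))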