Cisco UCS C3X60 M4 Server Node For Cisco UCS S3260 Storage Server Service Note
Cisco UCS C3X60 M4 Software/Firmware Requirements
Cisco UCS C3X60 M4 Server Node Overview
C3X60 M4 Server Node Internal Component Locations
I/O Expander Internal Component Locations
Removing or Installing a C3X60 M4 Server Node
Shutting Down a C3X60 M4 Server Node
Shutting Down a Server Node By Using the Cisco IMC GUI
Shutting Down a Server Node By Using the Power Button on the Server Node
C3X60 M4 Server Node Population Rules
Replacing a C3X60 M4 Server Node
Exporting Cisco IMC Configuration From a Server Node
Importing Cisco IMC Configuration to a Server Node
Replacing C3X60 M4 Server Node and I/O Expander Components
Removing a C3X60 M4 Server Node or I/O Expander Top Cover
Removing an I/O Expander From a C3X60 M4 Server Node
Disassembling the Server Node/I/O Expander Assembly
Reassembling the Server Node/I/O Expander Assembly
C3X60 M4 Internal Diagnostic LEDs
Replacing DIMMs Inside the C3X60 M4 Server Node
DIMM Performance Guidelines and Population Rules
Replacing CPUs and Heatsinks Inside the C3X60 M4 Server Node
Additional CPU-Related Parts To Order With RMA Replacement Server Nodes
Replacing an RTC Battery Inside the Server Node
Replacing an NVMe SSD Inside the C3X60 M4 Server Node
Installing a Trusted Platform Module (TPM) Inside the C3X60 M4 Server Node
Enabling TPM Support in the BIOS
Enabling the Intel TXT Feature in the BIOS
Replacing a Storage Controller Card Inside the C3X60 M4 Server Node
Replacing a Supercap (RAID Backup) on a RAID Controller
Adding an I/O Expander After-Factory
Replacing a PCIe Card Inside an I/O Expander
Replacing a Storage Controller Card Inside the I/O Expander
Replacing a Supercap (RAID Backup) in an I/O Expander
Replacing an NVMe SSD Inside the I/O Expander
Service Headers on the Server Node Board
Service Header Locations on the C3X60 M4 Server Node Board
Using the Clear Password Header J64
Using the Clear CMOS Header P19
Storage Controller Considerations
Supported Storage Controllers and Required Cables
Storage Controller Population Rules for C3X60 M4 and I/O Expander
Cisco UCS C3X60 12G SAS RAID Controller Information
Cisco UCS S3260 12G Dual Pass-Through Controller Information
Best Practices For Configuring RAID Controllers
RAID Card Firmware Compatibility
Choosing Between RAID 0 and JBOD
Restoring RAID Configuration After Replacing a RAID Controller
Launching the LSI Embedded MegaRAID Configuration Utility
Installing LSI MegaSR Drivers For Windows and Linux
Downloading the LSI MegaSR Drivers
Microsoft Windows Driver Installation
For More Information on Using Storage Controllers
This document covers server node installation and replacement of internal server node components.
■Cisco UCS C3X60 M4 Software/Firmware Requirements
■Cisco UCS C3X60 M4 Server Node Overview
■Removing or Installing a C3X60 M4 Server Node
■Replacing C3X60 M4 Server Node and I/O Expander Components
■Service Headers on the Server Node Board
■Storage Controller Considerations
The Cisco UCS S3260 system firmware and software requirements for using the Cisco UCS C3X60 M4 server nodes are listed in Table 1.
■C3X60 M4 Server Node Internal Component Locations
■KVM console connector—This connector allows you to connect a local keyboard, video, and mouse (KVM) cable if you want to perform setup and management tasks locally rather than remotely.
■Reset button—You can hold this button down for 5 seconds and then release it to restart the server node controller chipset if other methods of restarting do not work.
■Server node power button/LED—You can press this button to put the server node in a standby power state or return it to full power instead of shutting down the entire system. See also Externally Viewable LEDs.
■Unit identification button/LED—This LED can be activated by pressing the button or by activating it from the software interface. This helps to locate a specific server node. See also Externally Viewable LEDs.
Figure 2 Cisco UCS C3X60 M4 Server Node Internal Components
The C3X60 M4 server node might include an optional I/O expander that attaches to the top of the server node.
Figure 3 I/O Expander Internal Components
■Shutting Down a C3X60 M4 Server Node
■C3X60 M4 Server Node Population Rules
■Replacing a C3X60 M4 Server Node
You can invoke a graceful shutdown or a hard shutdown of a server node by using either the Cisco Integrated Management Controller (Cisco IMC) interface, or the power button that is on the face of the server node.
To use the Cisco IMC GUI to shut down the server node, follow these steps:
1. Use a browser and the management IP address of the system to log in to the Cisco IMC GUI.
2. In the Navigation pane, click the Chassis menu.
3. In the Chassis menu, click Summary.
4. In the toolbar above the work pane, click the Host Power link.
The Server Power Management dialog opens. This dialog lists all servers that are present in the system.
5. In the Server Power Management dialog, select one of the following buttons for the server that you want to shut down:
CAUTION: To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system. Do not power off a server if any firmware or BIOS updates are in progress.
■Shut Down—Performs a graceful shutdown of the operating system.
■Power Off—Powers off the chosen server, even if tasks are running on that server.
It is safe to remove the server node from the chassis when the Chassis Status pane shows the Power State as Off for the server node that you are removing.
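As an alternative to the GUI, the same graceful shutdown can be issued from the Cisco IMC CLI. The following is a sketch based on the standard Cisco IMC CLI power commands for standalone C-Series servers; the exact scope and prompts for an individual S3260 server node may differ by release, so verify the commands against the Cisco IMC CLI configuration guide for your firmware before using them.

```
Server# scope chassis
Server /chassis # power shutdown
This operation will change the server's power state. Continue?[y|N] y
```

As in the GUI procedure, confirm that the power state reports Off before removing the server node.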
The physical power button on the server node face also turns amber when it is safe to remove the server node from the chassis.
To use the physical server node power button to shut down the server node only, follow these steps:
1. Check the color of the server node power status LED (see Figure 1):
■Green—The server node is powered on. Go to step 2.
■Amber—The server node is powered off. It is safe to remove the server node from the chassis.
2. Invoke either a graceful shutdown or a hard shutdown:
CAUTION: To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system. Do not power off a server if any firmware or BIOS updates are in progress.
■Graceful shutdown—Press and release the Power button. The software performs a graceful shutdown of the server node.
■Emergency shutdown—Press and hold the Power button for 4 seconds to force power off of the server node.
When the server node power button turns amber, it is safe to remove the server node from the chassis.
■Do not mix a C3X60 M4 server node and a C3X60 M3 server node in the same Cisco UCS S3260 system. An M4 server node can be identified by the “M4 SVRN” label on the rear panel (see Figure 1).
–Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
–Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
NOTE: Whichever bay a server node is installed in, it must have a corresponding SIOC. That is, a server node in bay 1 must be paired with a SIOC in SIOC bay 1; a server node in bay 2 must be paired with a SIOC in SIOC bay 2.
The server node is accessed from the rear of the system, so you do not have to pull the system out from the rack.
CAUTION: Before you replace a server node, export and save the Cisco IMC configuration from the node if you want that same configuration on the new node. You can import the saved configuration to the new replacement node after you install it.
1. Optional—Export the Cisco IMC configuration from the server node that you are replacing so that you can import it to the replacement server node. If you choose to do this, use the procedure in Exporting Cisco IMC Configuration From a Server Node, then return to the next step.
NOTE: You do not have to power off the chassis in the next step. Replacement with the chassis powered on is supported if you shut down the server node before removal.
2. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
3. Remove the server node from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers (see Cisco UCS C3X60 M4 Server Node Rear-Panel Features).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
4. If the server node has an I/O expander attached, use the procedure in Removing an I/O Expander From a C3X60 M4 Server Node to remove it and then install it on the new server node before you continue with the next step.
If there is no I/O expander, continue with the next step.
5. Install the new server node to the chassis:
a. With the two ejector levers open, align the new server node with the empty bay. Note these configuration rules:
–Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
–Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.
7. Perform initial setup on the new server node to assign an IP address and your other preferred network settings. See Initial System Setup in the Cisco UCS S3260 Storage Server Installation and Service Guide.
8. Optional—Import the Cisco IMC configuration that you saved in step 1. If you choose to do this, use the procedure in Importing Cisco IMC Configuration to a Server Node.
This operation can be performed using either the GUI or CLI interface of the Cisco IMC. The example in this procedure uses the CLI commands. For more information see Exporting a Cisco IMC Configuration in the CLI and GUI guides here: Configuration Guides.
1. Log in to the IP address and CLI interface of the server node that you are replacing.
2. Enter the following commands as you are prompted:
3. Enter the user name, password, and pass phrase.
This sets the user name, password, and pass phrase for the file that you are exporting. The export operation begins after you enter the pass phrase, which can be anything that you choose.
To determine whether the export operation has completed successfully, use the show detail command. To abort the operation, press Ctrl+C.
The following is an example of an export operation. In this example, the TFTP protocol is used to export the configuration to IP address 192.0.2.34, in file /ucs/backups/cimc5.xml.
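A sketch of that export using the Cisco IMC CLI import-export scope follows. Prompt text and ordering vary by Cisco IMC release, and the user name shown (admin) is an example value; the TFTP server address and file path are the example values above.

```
Server# scope cimc
Server /cimc # scope import-export
Server /cimc/import-export # export-config tftp 192.0.2.34 /ucs/backups/cimc5.xml
Username: admin
Password:
Passphrase:
Export config is in progress. Please check the status using "show detail".
```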
This operation can be performed using either the GUI or CLI interface of the Cisco IMC. The example in this procedure uses the CLI commands. For more information see Importing a Cisco IMC Configuration in the CLI and GUI guides here: Configuration Guides.
1. SSH into the CLI interface of the new server node.
2. Enter the following commands as you are prompted:
3. Enter the user name, password, and pass phrase.
This should be the user name, password, and pass phrase that you used during the export operation. The import operation begins after you enter the pass phrase.
The following is an example of an import operation. In this example, the TFTP protocol is used to import the configuration to the server node from IP address 192.0.2.34, from file /ucs/backups/cimc5.xml.
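A corresponding sketch of the import from the same TFTP server and file follows. Enter the same user name, password, and pass phrase that you used for the export; prompt text varies by Cisco IMC release, and the user name shown (admin) is an example value.

```
Server# scope cimc
Server /cimc # scope import-export
Server /cimc/import-export # import-config tftp 192.0.2.34 /ucs/backups/cimc5.xml
Username: admin
Password:
Passphrase:
Import config is in progress. Please check the status using "show detail".
```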
■Removing a C3X60 M4 Server Node or I/O Expander Top Cover
■Removing an I/O Expander From a C3X60 M4 Server Node
■C3X60 M4 Internal Diagnostic LEDs
■Replacing DIMMs Inside the C3X60 M4 Server Node
■Replacing CPUs and Heatsinks Inside the C3X60 M4 Server Node
■Replacing an RTC Battery Inside the Server Node
■Replacing an NVMe SSD Inside the C3X60 M4 Server Node
■Installing a Trusted Platform Module (TPM) Inside the C3X60 M4 Server Node
■Replacing a Storage Controller Card Inside the C3X60 M4 Server Node
■Replacing a Supercap (RAID Backup) on a RAID Controller
■Adding an I/O Expander After-Factory
■Replacing a PCIe Card Inside an I/O Expander
■Replacing a Storage Controller Card Inside the I/O Expander
■Replacing a Supercap (RAID Backup) in an I/O Expander
■Replacing an NVMe SSD Inside the I/O Expander
The optional I/O expander and the server node use the same top cover. If an I/O expander is attached to the top of the server node, the top cover is on the I/O expander, as shown in the side view in Side View, Server Node With Attached I/O Expander. In this case, there is also an intermediate cover between the server node and the I/O expander.
Figure 4 Side View, Server Node With Attached I/O Expander
In this view, the top cover is on the attached I/O expander.
In this view, the intermediate cover is attached to the server node.
NOTE: You do not have to slide the system out of the rack to remove the server node from the rear of the system.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove the server node from the system (or server node with attached I/O expander, if present):
a. Grasp the two ejector levers and pinch their latches to release the levers (see Cisco UCS C3X60 M4 Server Node Rear-Panel Features).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
3. Remove the top cover from the server node or the I/O expander (if present).
a. Lift the latch handle to an upright position (see Cisco UCS C3X60 M4 Server Node or I/O Expander Top Cover).
b. Turn the latch handle 90 degrees to release the lock.
c. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the server node or I/O expander (if present).
4. Reinstall the top cover to the server node or I/O expander:
a. Set the cover in place on the server node or I/O expander (if present), offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the server node or I/O expander base.
b. Push the cover forward until it stops.
c. Turn the latch handle 90 degrees to close the lock.
d. Fold the latch handle flat.
5. Reinstall a server node (or server node with attached I/O expander, if present):
a. With the two ejector levers open, align the new server node with the empty bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.
Figure 5 Cisco UCS C3X60 M4 Server Node or I/O Expander Top Cover
This topic describes how to remove an I/O expander and intermediate cover from a C3X60 M4 server node so that you can access the components inside the server node.
■Disassembling the Server Node/ I/O Expander Assembly
■Reassembling the Server Node/ I/O Expander Assembly
NOTE: You do not have to power off the chassis in the next step. Replacement with the chassis powered on is supported if you shut down the server node before removal.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove the server node with attached I/O expander from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers (see Figure 1).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node with attached I/O expander straight out from the system.
3. Remove the top cover from the I/O expander:
a. Lift the latch handle to an upright position (see Figure 5).
b. Turn the latch handle 90 degrees to release the lock.
c. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the I/O expander.
4. Remove the I/O expander from the server node:
a. Remove the five screws that secure the I/O expander to the top of the server node (see Figure 6).
Figure 6 I/O Expander Securing Screws (Five)
b. Use two small flat-head screwdrivers (1/4-inch or equivalent) to help separate the connector on the underside of the I/O expander from the socket on the server node board.
Insert a screwdriver about 1/2-inch into the “REMOVAL SLOT” that is marked with an arrow on each side of the I/O expander (see Figure 7). Then lift up evenly on both screwdrivers at the same time to separate the connectors and lift the I/O expander about 1/2-inch.
c. Grasp the two handles on the I/O expander board and lift it straight up.
Figure 7 Separating the I/O Expander From the C3X60 M4 Server Node
5. Remove the intermediate cover from the server node:
a. Remove the four screws that secure the intermediate cover and set them aside. There are two screws on each side of the intermediate cover (see Side View, Server Node With Attached I/O Expander).
b. Slide the intermediate cover toward the rear (toward the rear-panel buttons) and then lift it from the server node.
1. Reinstall the intermediate cover to the server node:
a. Set the intermediate cover in place on the server node, offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the server node base.
b. Push the cover forward until it stops.
c. Reinstall the four screws that secure the intermediate cover.
2. Reinstall the I/O expander to the server node:
CAUTION: Use caution to align all features of the I/O expander with the intermediate cover and server node before mating the connector on the underside of the expander with the socket on the server board. The connector can be damaged if correct alignment is not used.
a. Carefully align the I/O expander with the alignment pegs on the top of the intermediate cover (see Figure 8).
b. Set the I/O expander down on the intermediate cover and lower it gently to mate the mezzanine connector and the socket on the server board.
Figure 8 Replacing the I/O Expander to the M4 Server Node
c. If a RAID controller is present or an NVMe SSD is present in the right-hand socket (IOENVMe2) of the I/O expander, you must remove them to access the PRESS HERE plate in the next step.
See Replacing a Storage Controller Card Inside the I/O Expander and Replacing an NVMe SSD Inside the I/O Expander to remove them, then return to the next step.
d. Press down firmly on the plastic plate marked “PRESS HERE” to fully seat the connectors (see Figure 9).
Figure 9 I/O Expander, Showing “PRESS HERE” Plate
e. If you removed a RAID controller or an NVMe SSD to access the PRESS HERE plate, reinstall them now.
See Replacing a Storage Controller Card Inside the I/O Expander and Replacing an NVMe SSD Inside the I/O Expander.
CAUTION: Before you reinstall the I/O expander securing screws, you must use the supplied alignment tool (UCSC-C3K-M4IOTOOL) in the next step to ensure alignment of the connectors that plug into the internal chassis backplane. Failure to ensure alignment might damage the sockets on the backplane.
3. Insert the four pegs of the alignment tool into the holes that are built into the connector side of the server node and I/O expander. Ensure that the alignment tool fits into all four holes and lies flat (see Figure 10).
The alignment tool is shipped with systems that are ordered with an I/O expander. It is also shipped with I/O expander replacement spares. You can order the tool using Cisco PID UCSC-C3K-M4IOTOOL.
Figure 10 Using the I/O Expander Alignment Tool
4. Reinstall and tighten the five screws that secure the I/O expander to the top of the server node (see Figure 6).
5. Remove the alignment tool.
6. Reinstall the I/O expander top cover:
a. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the I/O expander base.
b. Push the cover forward until it stops.
c. Turn the latch handle 90 degrees to close the lock.
d. Fold the latch handle flat.
7. Install a server node with attached I/O expander to the chassis:
a. With the two ejector levers open, align the server node and I/O expander with the two empty bays.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.
There are internal diagnostic LEDs on the edge of the server node board. These LEDs can be viewed while the server node is removed from the chassis, up to 30 seconds after AC power is removed.
1. Shut down and remove the server node from the system as described in Shutting Down a C3X60 M4 Server Node.
NOTE: You do not have to remove the server node cover to view the LEDs on the edge of the board.
2. Press and hold the server node unit identification button (see Figure 11) within 30 seconds of removing the server node from the system.
A fault LED that lights amber indicates a faulty component.
Figure 11 Cisco UCS C3X60 M4 Server Node Internal Diagnostic LEDs
There are 16 DIMM sockets on the server node board.
CAUTION: DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.
CAUTION: Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the system might result in system problems or damage to the motherboard.
NOTE: To ensure the best system performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace the memory.
For additional information about troubleshooting DIMM memory issues, see the document Troubleshoot DIMM Memory in UCS.
Cisco UCS C3X60 M4 DIMM and CPU Numbering shows the DIMM sockets and how they are numbered on a C3X60 M4 server node board.
■A server node has 16 DIMM sockets (8 for each CPU).
■Channels are labeled with letters as shown in Cisco UCS C3X60 M4 DIMM and CPU Numbering.
For example, channel A = DIMM sockets A1, A2.
■Each channel has two DIMM sockets. The blue socket in a channel is always socket 1.
Figure 12 Cisco UCS C3X60 M4 DIMM and CPU Numbering
Observe the following guidelines when installing or replacing DIMMs:
■For optimal performance, spread DIMMs evenly across both CPUs and all channels.
■Populate the DIMM sockets of each CPU identically. Populate the blue DIMM 1 sockets first, then the black DIMM 2 sockets. For example, populate the DIMM sockets in this order:
1. A1, E1, B1, F1, C1, G1, D1, H1
2. A2, E2, B2, F2, C2, G2, D2, H2
■Observe the DIMM mixing rules shown in DIMM Mixing Rules.
When you enable memory mirroring mode, the memory subsystem simultaneously writes identical data to two channels. If a memory read from one of the channels returns incorrect data due to an uncorrectable memory error, the system automatically retrieves the data from the other channel. A transient or soft error in one channel does not affect the mirrored data, and operation continues.
Memory mirroring reduces the amount of memory available to the operating system by 50 percent because only one of the two populated channels provides data.
When you enable lockstep channel mode, each memory access is a 128-bit data access that spans four channels.
Lockstep channel mode requires that all four memory channels on a CPU be populated identically with regard to size and organization. DIMM socket populations within a channel do not have to be identical, but the same DIMM slot location across all four channels must be populated the same.
For example, DIMMs in sockets A1, B1, C1, and D1 must be identical. DIMMs in sockets A2, B2, C2, and D2 must be identical. However, the A1-B1-C1-D1 DIMMs do not have to be identical with the A2-B2-C2-D2 DIMMs.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers (see Cisco UCS C3X60 M4 Server Node Rear-Panel Features).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
3. Do one of the following to access the component inside the server node:
■If the server node does not have an I/O expander attached—Remove the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node has an I/O expander attached—Remove the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
4. Locate the faulty DIMM and remove it from the socket on the riser by opening the ejector levers at both ends of the DIMM socket.
NOTE: Before installing DIMMs, refer to the population guidelines. See DIMM Performance Guidelines and Population Rules.
5. Install a new DIMM:
a. Align the new DIMM with the socket on the riser. Use the alignment key in the DIMM socket to correctly orient the DIMM.
b. Push the DIMM into the socket until it is fully seated and the ejector levers on either side of the socket lock into place.
6. Do one of the following:
■If the server node did not have an I/O expander attached—Reinstall the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node had an I/O expander attached—Reinstall the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
7. Install a server node to the chassis:
a. With the two ejector levers open, align the new server node with the empty bay.
–Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
–Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.
Two CPUs are inside each server node. Although CPUs are not spared separately for this server node, you might need to move your CPUs from a faulty server node to a new server node.
■A server node must have two CPUs to operate. See Cisco UCS C3X60 M4 DIMM and CPU Numbering for the C3X60 M4 server node CPU numbering.
CAUTION: CPUs and their motherboard sockets are fragile and must be handled with care to avoid damaging pins during installation. The CPUs must be installed with heatsinks and their thermal pads to ensure proper cooling. Failure to install a CPU correctly might result in damage to the system.
NOTE: This server uses the new independent loading mechanism (ILM) CPU sockets, so no Pick-and-Place tools are required for CPU handling or installation. Always grasp the plastic frame on the CPU when handling.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers (see Cisco UCS C3X60 M4 Server Node Rear-Panel Features).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
3. Do one of the following to access the component inside the server node:
■If the server node does not have an I/O expander attached—Remove the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node has an I/O expander attached—Remove the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
4. Use a Number 2 Phillips-head screwdriver to loosen the four captive screws that secure the heatsink, and then lift it off of the CPU.
NOTE: Alternate loosening each screw evenly to avoid damaging the heatsink or CPU.
5. Open the CPU retaining mechanism:
a. Unclip the first retaining latch labeled with the icon, and then unclip the second retaining latch labeled with the icon. See CPU Socket.
b. Open the hinged CPU cover plate.
6. Remove the CPU from the socket:
a. With the latches and hinged CPU cover plate open, swing the CPU in its hinged seat up to the open position, as shown in CPU Socket.
b. Grasp the CPU by the finger-grips on its plastic frame and lift it up and out of the hinged CPU seat.
c. Set the CPU aside on an antistatic surface.
7. Install the new CPU:
a. Grasp the new CPU by the finger-grips on its plastic frame and align the tab on the frame that is labeled “ALIGN” with the hinged seat, as shown in CPU and Socket Alignment Features.
b. Insert the tab on the CPU frame into the seat until it stops and is held firmly.
The line below the word “ALIGN” should be level with the edge of the seat, as shown in CPU and Socket Alignment Features.
c. Swing the hinged seat with the CPU down until the CPU frame clicks in place and holds flat in the socket.
d. Close the hinged CPU cover plate.
e. Clip down the CPU retaining latch with the icon, and then clip down the CPU retaining latch with the icon. See CPU Socket.
Figure 14 CPU and Socket Alignment Features
CAUTION: The heatsink must have new thermal grease on the heatsink-to-CPU surface to ensure proper cooling. New heatsinks have a pre-applied pad of grease. If you are reusing a heatsink, you must remove the old thermal grease and apply grease from a syringe.
8. Prepare the heatsink:
a. Do one of the following:
–If you are installing a new heatsink, remove the protective film from the pre-applied pad of thermal grease on the bottom of the new heatsink, and then skip to step 9.
–If you are reusing a heatsink, continue with step b.
b. Apply an alcohol-based cleaning solution to the old thermal grease and let it soak for at least 15 seconds.
c. Wipe all of the old thermal grease off the old heatsink using a soft cloth that will not scratch the heatsink surface.
d. Apply thermal grease from the syringe that is included with the new CPU to the top of the CPU. Apply about half the syringe contents to the top of the CPU in the pattern that is shown in CPU Thermal Grease Application Pattern.
NOTE: If you do not have a syringe of thermal grease, you can order a spare (Cisco PID UCS-CPU-GREASE3).
Figure 15 CPU Thermal Grease Application Pattern
9. Align the heatsink captive screws with the motherboard standoffs, and then use a Number 2 Phillips-head screwdriver to tighten the captive screws evenly.
CAUTION: Alternate tightening each screw evenly to avoid damaging the heatsink or CPU.
10. Do one of the following:
■If the server node did not have an I/O expander attached—Reinstall the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node had an I/O expander attached—Reinstall the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
11. Install the server node to the chassis:
a. With the two ejector levers open, align the new server node with the empty bay. Note these configuration rules:
–Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
–Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
When a return material authorization (RMA) of the server node or CPU is done on a system, there are additional parts that might not be included with the CPU or motherboard spare bill of materials (BOM). The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.
■Scenario 1—You are re-using the existing heatsinks:
–Heat sink cleaning kit (UCSX-HSCK=)
–Thermal grease kit for S3260 server nodes (UCS-CPU-GREASE3=)
–Intel CPU Pick-n-Place tool for EP CPUs (UCS-CPU-EP-PNP=)
■Scenario 2—You are replacing the existing heatsinks:
–Heat sink cleaning kit (UCSX-HSCK=)
–Intel CPU Pick-n-Place tool for EP CPUs (UCS-CPU-EP-PNP=)
A CPU heatsink cleaning kit is good for up to four CPU and heatsink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heatsink of old thermal interface material and the other to prepare the surface of the heatsink.
It is important to clean the old thermal interface material off the CPU before installing the heatsinks. Therefore, even when you order new heatsinks, you must also order the heatsink cleaning kit at a minimum.
The real-time clock (RTC) battery retains system settings when the server is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from most electronic stores.
NOTE: When the RTC battery is removed or it completely loses power, settings that were stored in the BMC of the server node are lost. You must reconfigure the BMC settings after installing a new battery.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers on the node and pinch their latches to release the levers.
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
NOTE: You do not have to remove the server node cover or an I/O expander to access the RTC battery.
3. Remove the server node RTC battery:
a. Locate the RTC battery. See RTC Battery Inside the C3X60 M4 Server Node.
b. Pull the battery retaining clip away from the battery and pull the battery from the socket.
4. Install the new RTC battery:
a. Pull the retaining clip away from the battery socket and insert the battery in the socket.
NOTE: The flat, positive side of the battery marked “+” should face the retaining clip.
b. Push the battery into the socket until it is fully seated and the retaining clip clicks over the top of the battery.
5. Install the server node to the chassis:
a. With the two ejector levers open, align the server node with the empty bay.
–Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
–Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
7. Reconfigure the BMC settings for this node.
Figure 16 RTC Battery Inside the C3X60 M4 Server Node
The socket inside each C3X60 M4 server node can support a single NVMe PCIe SSD; the software designation of this socket is SBNVMe1. The SSD might be under a storage controller card, if one is installed in the server node.
NOTE: 2.5-inch form-factor NVMe SSDs are bootable in UEFI mode; legacy booting is not supported.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers.
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
3. Do one of the following to access the component inside the server node:
■If the server node does not have an I/O expander attached—Remove the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node has an I/O expander attached—Remove the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
4. If a storage controller is present in the server node, remove it to provide clearance:
a. Loosen the captive thumbscrews that secure the card to the server node (see Figure 19 or HBA Controller Card (UCS-C3K-M4DHBA) Thumbscrews Inside the C3X60 M4 Server Node).
b. Grasp the card at both ends and lift it evenly to disengage the connector on the underside of the card from the mezzanine socket.
5. Remove the NVMe SSD:
a. Remove the single screw that secures the drive to its bracket (see Figure 17).
b. Slide the drive to disengage it from its horizontal socket, then lift it from the server node.
6. Install a new NVMe SSD:
a. Set the drive in its bracket on the server board, then slide it forward to engage its connector with the socket.
b. Install the single screw that secures the drive to the bracket.
7. If you removed a storage controller card, reinstall it:
a. Align the card and its bracket over the mezzanine socket and the four standoffs.
b. Press down on both ends of the card to engage the connector on the underside of the card with the mezzanine socket.
c. Tighten the four captive thumbscrews that secure the card to the server node.
8. Do one of the following:
■If the server node did not have an I/O expander attached—Reinstall the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node had an I/O expander attached—Reinstall the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
9. Install the server node to the chassis:
a. With the two ejector levers open, align the server node with the empty bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
Figure 17 SSD Inside the C3X60 M4 Server Node
The trusted platform module (TPM) is a small circuit board that attaches to a socket on the server node board.
This section contains the following procedures, which must be followed in this order when installing and enabling a TPM:
1. Installing the TPM Hardware
If there is no existing TPM in the server, you can install TPM 2.0. TPM 2.0 requires Intel v4 code or later.
CAUTION: If your M4 server node Intel v4 system is currently supported and protected by TPM version 2.0, a potential security exposure might occur if you downgrade the system software and BIOS to an earlier version.
NOTE: If the TPM 2.0 becomes unresponsive, reboot the server.
NOTE: For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard screwdriver.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers on the node and pinch their latches to release the levers.
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
NOTE: You do not have to remove the server node cover or an I/O expander to access the TPM socket.
3. Install a TPM:
a. Locate the TPM socket on the server node board, as shown in TPM Location Inside the C3X60 M4 Server Node.
b. Align the connector that is on the bottom of the TPM circuit board with the TPM socket. Align the screw hole on the TPM board with the screw hole adjacent to the TPM socket.
c. Push down evenly on the TPM to seat it in the motherboard socket.
d. Install the single one-way screw that secures the TPM to the motherboard.
4. Install the server node to the chassis:
a. With the two ejector levers open, align the server node with the empty bay.
–Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
–Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
6. Continue with Enabling TPM Support in the BIOS.
Figure 18 TPM Location Inside the C3X60 M4 Server Node
NOTE: After TPM hardware installation, you must enable TPM support in the BIOS.
NOTE: You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security > Set Administrator Password and enter the new password twice as prompted.
1. Enable TPM support:
d. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.
e. Log in to the BIOS Setup Utility with your BIOS Administrator password.
f. On the BIOS Setup Utility window, choose the Advanced tab.
g. Choose Trusted Computing to open the TPM Security Device Configuration window.
h. Change TPM SUPPORT to Enabled.
i. Press F10 to save your settings and reboot the server node.
2. Verify that TPM support is now enabled:
a. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.
b. Log in to the BIOS Setup utility with your BIOS Administrator password.
c. On the BIOS Setup Utility window, choose the Advanced tab.
d. Choose Trusted Computing to open the TPM Security Device Configuration window.
e. Verify that TPM SUPPORT and TPM State are Enabled.
3. Continue with Enabling the Intel TXT Feature in the BIOS.
Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the business server. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the system. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.
1. Reboot the server node and watch for the prompt to press F2.
2. When prompted, press F2 to enter the BIOS Setup utility.
3. Verify that the prerequisite BIOS values are enabled:
b. Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.
c. Verify that the following items are listed as Enabled:
– VT-d Support (default is Enabled)
– VT Support (default is Enabled)
■If VT-d Support and VT Support are already enabled, skip to step 4, Enable the Intel Trusted Execution Technology (TXT) feature.
■If VT-d Support and VT Support are not enabled, continue with the next steps to enable them.
a. Press Escape to return to the BIOS Setup utility Advanced tab.
b. On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.
c. Set Intel (R) VT and Intel (R) VT-d to Enabled.
4. Enable the Intel Trusted Execution Technology (TXT) feature:
a. Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.
b. Set TXT Support to Enabled.
5. Press F10 to save your changes and exit the BIOS Setup utility.
The Cisco storage controller card connects to a mezzanine-style socket inside the server node.
To replace a supercap power module (RAID backup), see Replacing a Supercap (RAID Backup) on a RAID Controller.
Note the following population rules for storage controllers:
■If the server node has UCS-S3260-DHBA, then no controller is allowed in the I/O expander.
■If the server node has UCS-C3K-M4RAID, then the I/O expander can also have UCS-C3K-M4RAID (but not UCS-S3260-DHBA).
■If the server node has no storage controller, then the I/O expander can have UCS-C3K-M4RAID or UCS-S3260-DHBA.
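The population rules above can be summarized as a small validity check. The following Python sketch is illustrative only (the function name and the use of None for an empty slot are conventions chosen here, not part of any Cisco tool):

```python
# Illustrative encoding of the storage controller population rules.
# A controller is identified by its PID string; None means no controller.
RAID = "UCS-C3K-M4RAID"
DHBA = "UCS-S3260-DHBA"

def iox_controller_allowed(node_controller, iox_controller):
    """Return True if the I/O expander controller is a valid
    combination with the server node controller."""
    if iox_controller is None:
        return True                    # an empty I/O expander is always valid
    if node_controller == DHBA:
        return False                   # DHBA in node: no controller in the IOX
    if node_controller == RAID:
        return iox_controller == RAID  # RAID in node: only RAID in the IOX
    if node_controller is None:
        return iox_controller in (RAID, DHBA)
    return False
```

For example, `iox_controller_allowed(RAID, DHBA)` returns False, matching the second rule above.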
NOTE: If you move a controller from the server node to the I/O expander, or from the I/O expander to the server node, you must run the Cisco Host Upgrade Utility (HUU) on the server node to update the sub-OEMID to the correct value. If you do not, the server node Cisco IMC returns an Invalid Hardware Configuration error. For instructions, see the HUU Guides.
NOTE: Do not mix different storage controllers in the same system. If the system has two server nodes, they must both contain the same controller.
NOTE: See Supported Storage Controllers and Required Cables for information about the controllers supported in the C3X60 M4 server node.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers on the node and pinch their latches to release the levers.
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
3. Do one of the following to access the component inside the server node:
■If the server node does not have an I/O expander attached—Remove the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node has an I/O expander attached—Remove the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
4. Remove a storage controller card:
a. Loosen the captive thumbscrews that secure the card to the board (see RAID Controller Card (UCS-C3K-M4RAID) Thumbscrews Inside the C3X60 M4 Server Node for RAID card UCS-C3K-M4RAID or HBA Controller Card (UCS-C3K-M4DHBA) Thumbscrews Inside the C3X60 M4 Server Node for HBA card UCS-C3K-M4DHBA).
b. Grasp the card at both ends and lift it evenly to disengage the connector on the underside of the card from the mezzanine socket.
5. Install a storage controller card:
NOTE: The Cisco UCS S3260 Dual Pass-Through Controller (UCS-C3K-M4DHBA) is supported in M4 server nodes only; it is not supported in M3 server nodes.
NOTE: The Cisco UCS S3260 Dual Pass-Through Controller (UCS-C3K-M4DHBA) requires chassis part number 68-5286-06 or later; the chassis motherboard in earlier chassis versions does not support this controller. You can determine the chassis part number by looking at the part-number label on the top-front of the chassis, or by using the inventory-all command in the Cisco IMC CLI.
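Because only the trailing revision number matters, the comparison against the 68-5286-06 minimum can be done numerically. This Python sketch is illustrative only (it is not a Cisco utility, and it assumes the part number has the plain "68-5286-NN" form without a trailing revision letter):

```python
# Illustrative check of a chassis part number against the 68-5286-06
# minimum required for UCS-C3K-M4DHBA. Assumes the "68-5286-NN" form.
MIN_BASE = "68-5286"
MIN_REVISION = 6

def dhba_chassis_supported(part_number):
    """Return True if a part number such as '68-5286-07' is revision -06 or later."""
    base, _, revision = part_number.rpartition("-")
    return base == MIN_BASE and int(revision) >= MIN_REVISION
```

For example, `dhba_chassis_supported("68-5286-05")` returns False, so that chassis would not support the controller.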
a. Align the card over the mezzanine socket and the three standoffs.
b. Press down on both ends of the card to engage the connector on the underside of the card with the mezzanine socket.
c. Tighten the captive screws that secure the card to the board (see RAID Controller Card (UCS-C3K-M4RAID) Thumbscrews Inside the C3X60 M4 Server Node or HBA Controller Card (UCS-C3K-M4DHBA) Thumbscrews Inside the C3X60 M4 Server Node).
6. Do one of the following:
■If the server node did not have an I/O expander attached—Reinstall the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node had an I/O expander attached—Reinstall the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
7. Install the server node to the chassis:
a. With the two ejector levers open, align the server node with the empty bay.
–Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
–Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
9. See Restoring RAID Configuration After Replacing a RAID Controller to restore your RAID configuration.
Figure 19 RAID Controller Card (UCS-C3K-M4RAID) Thumbscrews Inside the C3X60 M4 Server Node
Figure 20 HBA Controller Card (UCS-C3K-M4DHBA) Thumbscrews Inside the C3X60 M4 Server Node
The supercap power module (SCPM) mounts directly to the RAID controller.
The SCPM provides approximately 3 years of backup for the disk write-back cache DRAM in the case of sudden power loss by offloading the cache to the NAND flash.
The PID for the spare SCPM is UCSC-SCAP-M5=.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers on the node and pinch their latches to release the levers.
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
3. Do one of the following to access the SCPM on a RAID controller in a server node or I/O expander:
■If the RAID controller is in an I/O expander or in a server node that does not have an I/O expander attached—Remove the top cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the RAID controller is in a server node that has an I/O expander attached—Remove the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
NOTE: The connector for the supercap cable is on the underside of the RAID controller card, so you must remove the card to access the connector.
4. Remove a storage controller card:
a. Loosen the four captive thumbscrews that secure the card to the board (see screws circled in red in SCPM on RAID Controller).
b. Grasp the card at both ends and lift it evenly to disengage the connector on the underside of the card from the mezzanine socket.
c. Set the card on an anti-static surface.
5. Remove an SCPM from the controller card:
a. Gently lift the free end of the metal plate that secures the SCPM, but only enough so that you can slide the SCPM free.
b. Disconnect the SCPM cable from the connector that is on the underside of the controller card.
6. Install a new SCPM to the controller card:
a. Orient the SCPM so that its corner with the cable is positioned as in SCPM on RAID Controller.
b. Gently lift the securing metal plate enough so that you can slide the SCPM under it, then release the plate.
c. Run the SCPM cable through the opening in the bracket and then plug its connector into the connector on the underside of the card.
7. Install the storage controller card:
a. Align the card over the mezzanine socket and the standoffs.
b. Press down on both ends of the card to engage the connector on the underside of the card with the mezzanine socket.
c. Tighten the four captive screws that secure the card to the board.
8. Do one of the following:
■If the RAID controller is in an I/O expander or in a server node that does not have an I/O expander attached—Reinstall the top cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the RAID controller is in a server node that had an I/O expander attached—Reinstall the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
9. Install the server node to the chassis:
a. With the two ejector levers open, align the server node with the empty bay.
–Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
–Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
Figure 21 SCPM on RAID Controller
This procedure is for removing and replacing an I/O expander. If you are adding an I/O expander for the first time, a special kit is required. See Adding an I/O Expander After-Factory.
The server node with optional I/O expander is accessed from the rear of the system, so you do not have to pull the system out from the rack.
NOTE: You do not have to power off the chassis in the next step. Replacement with the chassis powered on is supported if you shut down the server node before removal.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove the server node with attached I/O expander from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers (see Cisco UCS C3X60 M4 Server Node Rear-Panel Features).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node with I/O expander straight out from the system.
3. Remove the top cover from the I/O expander:
a. Lift the latch handle to an upright position (see Figure 5).
b. Turn the latch handle 90 degrees to release the lock.
c. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the I/O expander.
4. Remove the I/O expander from the server node:
a. Remove the five screws that secure the I/O expander to the top of the server node (see Figure 22).
Figure 22 I/O Expander Securing Screws (Five)
b. Use two small flat-head screwdrivers (1/4-inch or equivalent) to help separate the connector on the underside of the I/O expander from the socket on the server node board.
Insert a screwdriver about 1/2-inch into the “REMOVAL SLOT” that is marked with an arrow on each side of the I/O expander (see Figure 23). Then lift up evenly on both screwdrivers at the same time to separate the connectors and lift the I/O expander about 1/2-inch.
c. Grasp the two handles on the I/O expander board and lift it straight up.
Figure 23 Separating the I/O Expander From the M4 Server Node
5. Reinstall the I/O expander to the server node:
CAUTION: Use caution to align all features of the I/O expander with the server node before mating the connector on the underside of the expander with the socket on the server board. The connector can be damaged if correct alignment is not used.
a. Carefully align the I/O expander with the alignment pegs on the top of the intermediate cover (see Figure 24).
b. Set the I/O expander down on the server node intermediate cover and push down gently to mate the connectors.
Figure 24 Replacing the I/O Expander to the M4 Server Node
c. If a RAID controller is present or an NVMe SSD is present in the right-hand socket (IOENVMe2) of the I/O expander, you must remove them to access the PRESS HERE plate in the next step.
See Replacing a Storage Controller Card Inside the I/O Expander and Replacing an NVMe SSD Inside the I/O Expander to remove them, then return to the next step.
d. Press down firmly on the plastic plate marked “PRESS HERE” to fully seat the connectors (see Figure 25).
Figure 25 I/O Expander, Showing “PRESS HERE” Plate
e. If you removed a RAID controller or an NVMe SSD to access the PRESS HERE plate, reinstall them now.
See Replacing a Storage Controller Card Inside the I/O Expander and Replacing an NVMe SSD Inside the I/O Expander.
CAUTION: Before you reinstall the securing screws, you must use the supplied alignment tool (UCSC-C3K-M4IOTOOL) in the next step to ensure alignment of the connectors that connect to the internal chassis backplane. Failure to ensure alignment might damage the sockets on the backplane.
6. Insert the four pegs of the alignment tool into the holes that are built into the forward-connector side of the server node and I/O expander. Ensure that the alignment tool fits into all four holes and lies flat (see Figure 26).
NOTE: The alignment tool is shipped with systems that are ordered with an I/O expander. It is also shipped with I/O expander replacement spares. You can order the tool using Cisco PID UCSC-C3K-M4IOTOOL.
Figure 26 Using the I/O Expander Alignment Tool
7. Reinstall and tighten the five screws that secure the I/O expander to the top of the server node (see Figure 22).
8. Remove the alignment tool.
9. Reinstall the I/O expander top cover:
a. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must set into the tracks on the I/O expander base.
b. Push the cover forward until it stops.
c. Turn the latch handle 90 degrees to close the lock.
d. Fold the latch handle flat.
10. Install the server node with I/O expander to the chassis:
a. With the two ejector levers open, align the server node and I/O expander with the two empty bays.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
NOTE: This procedure is for adding an I/O expander to a C3X60 M4 server node. If you are replacing an existing I/O expander, see Replacing an I/O Expander.
You will need the following tools and parts for this procedure:
■I/O expander alignment tool UCSC-C3K-M4IOTOOL. This tool is included with the I/O expander spare.
■I/O expander kit UCS-S3260-IOLID. This kit includes:
–One intermediate cover for the server node, including 4 cover screws
When an I/O expander is installed, the server node must occupy lower server bay 1. The I/O expander occupies upper server bay 2.
1. Shut down all server nodes in the chassis by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove any server node (or disk expansion tray) from upper server bay 2 and set it on an antistatic work surface.
3. Remove any server node (or disk expansion tray) from lower server bay 1 and set it on an antistatic work surface.
4. Remove the top cover from the server node to which you will install an I/O expander:
a. Lift the latch handle to an upright position, and then turn the latch handle 90 degrees to release the lock.
b. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the server node.
Figure 27 Cisco UCS C3X60 M4 Server Node Top Cover
5. If there is a storage controller card installed in the server node, remove it to provide clearance in the next step. (If there is no storage controller, continue with step 6.)
a. Loosen the captive thumbscrews that secure the controller card to the board.
b. Grasp the card at both ends and lift it evenly to disengage the connector on the underside of the card from the mezzanine socket.
6. Move the cover latch assembly from the controller-card bracket to a bracket in the I/O expander. You will install the top cover to the I/O expander at the end of this procedure, so the I/O expander requires the cover latch.
Remove the two Phillips-head screws that secure the cover latch assembly to the controller card. See the following figure.
–If your I/O expander has a storage controller card, use the two Phillips-head screws that you just removed to install the cover latch assembly to the bracket of that controller card.
–If your I/O expander does not have a storage controller card, use the two Phillips-head screws to install the cover latch assembly to the bracket on the I/O expander board. See the following figure.
Figure 28 Cover Latch Assembly Screws in Server Node and I/O Expander
7. Remove one Phillips-head screw from the server node board. See the following figure for the screw location.
8. Install the threaded metal support post from the kit, using one screw.
Set the post against the edge of the server board where you removed the screw in the prior step. The flange with the screw hole must sit flat on top of the board. See the following figure.
Figure 29 Support Post and Screw, C3X60 M4 Server Node
9. If you removed a storage controller card, install it back to the server node now.
a. Align the card over the mezzanine socket and the three standoffs.
b. Press down on both ends of the card to engage the connector on the underside of the card with the mezzanine socket.
c. Tighten the captive screws that secure the card to the board.
10. Install the intermediate cover from the kit to the server node. Set the intermediate cover in place and then install its four securing screws (two on each side).
11. Install the I/O expander to the server node:
CAUTION: Use caution to align all features of the I/O expander with the server node before mating the connector on the underside of the expander with the socket on the server board. The connector can be damaged if correct alignment is not used.
a. Carefully align the I/O expander with the alignment pegs on the top of the intermediate cover (see the following figure).
b. Set the I/O expander down on the server node intermediate cover and push down gently to mate the connectors.
Figure 30 Replacing the I/O Expander to the M4 Server Node
c. If a RAID controller is present or an NVMe SSD is present in the right-hand socket (IOENVMe2) of the I/O expander, you must remove them to access the PRESS HERE plate in the next step.
See Replacing a Storage Controller Card Inside the I/O Expander and Replacing an NVMe SSD Inside the I/O Expander to remove them, then return to the next step.
d. Press down firmly on the plastic plate marked “PRESS HERE” to fully seat the connectors.
Figure 31 I/O Expander, Showing “PRESS HERE” Plate
e. If you removed a RAID controller or an NVMe SSD to access the PRESS HERE plate, reinstall them now.
See Replacing a Storage Controller Card Inside the I/O Expander and Replacing an NVMe SSD Inside the I/O Expander.
CAUTION: Before you install the I/O expander securing screws, you must use the supplied alignment tool (UCSC-C3K-M4IOTOOL) in the next step to ensure alignment of the connectors that connect to the internal chassis backplane. Failure to ensure alignment might damage the sockets on the backplane.
12. Insert the four pegs of the alignment tool into the holes that are built into the forward-connector side of the server node and I/O expander. Ensure that the alignment tool fits into all four holes and lies flat (see the following figure).
NOTE: The alignment tool is shipped with I/O expander spares. You can order the tool using Cisco PID UCSC-C3K-M4IOTOOL.
Figure 32 Using the I/O Expander Alignment Tool
13. Install and tighten the five screws that secure the I/O expander to the top of the server node (see Figure 33). The screw at the center of the board edge screws into the support post that you installed earlier in this procedure.
Figure 33 I/O Expander Securing Screws (Five)
14. Remove the alignment tool.
15. Install the top cover that you removed from the server node to the I/O expander:
a. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must set into the tracks on the I/O expander base.
b. Push the cover forward until it stops.
c. Turn the latch handle 90 degrees to close the lock and then fold the latch handle flat.
16. Install the server node with attached I/O expander to the chassis:
a. With the two ejector levers open, align the server node and I/O expander with the two empty bays.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
The optional I/O expander has two horizontal PCIe sockets.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node with attached I/O expander from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers (see Figure 1).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node with attached I/O expander straight out from the system.
3. Remove the top cover from the I/O expander:
a. Lift the latch handle to an upright position (see Figure 5).
b. Turn the latch handle 90 degrees to release the lock.
c. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the I/O expander.
4. Remove an existing PCIe card (or a filler panel if no card is present):
a. Release the card-tab retainer. On the inside of the I/O expander, pull the spring-loaded plunger on the card-tab retainer inward and then rotate the card-tab retainer 90 degrees to the open position (see Figure 34).
b. Slide the PCIe card horizontally to free its edge connector from the socket, and then lift the card out from the I/O expander.
If no card is present, remove the filler panel from the slot.
5. Install a new PCIe card:
a. With the card-tab retainer in the open position, set the card in the I/O expander and align its edge connector with the socket.
b. Slide the card horizontally to fully engage the edge connector with the socket. The card’s tab should sit flat against the rear-panel opening.
c. Close the card-tab retainer. Rotate the retainer 90 degrees until it clicks and locks.
6. Replace the I/O expander cover:
a. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the I/O expander base.
b. Push the cover forward until it stops.
c. Turn the latch handle 90 degrees to close the lock.
d. Fold the latch handle flat.
7. Install a server node with I/O expander to the chassis:
a. With the two ejector levers open, align the server node and I/O expander with the two empty bays.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
Figure 34 PCIe Card Sockets Inside the I/O Expander
The software designation of the controller card socket inside the I/O expander is IOEMezz1.
Note the following population rules for storage controllers:
■If the server node has UCS-S3260-DHBA, then no controller is allowed in the I/O expander.
■If the server node has UCS-C3K-M4RAID, then the I/O expander can also have UCS-C3K-M4RAID (but not UCS-S3260-DHBA).
■If the server node has no storage controller, then the I/O expander can have UCS-C3K-M4RAID or UCS-S3260-DHBA.
NOTE: If you move a controller from the server node to the I/O expander, or move it from the I/O expander to the server node, you must run the Cisco Host Upgrade Utility (HUU) on the server node to update the sub-OEMID to the correct values. If you do not do this, the server node Cisco IMC returns an Invalid Hardware Configuration error. For instructions, see the HUU guide for your release: HUU Guides.
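The population rules above can be expressed as a simple validation check. The following is an illustrative sketch only (the function and its interface are hypothetical, not a Cisco tool); the controller PIDs are the real part numbers from this guide:

```shell
# Hypothetical helper encoding the controller population rules above.
# Args: $1 = server-node controller PID ("" if none),
#       $2 = I/O expander controller PID ("" if none).
# Returns 0 if the combination is valid, 1 if it is not.
ioe_controller_allowed() {
  node="$1"; ioe="$2"
  # An empty I/O expander controller socket is always valid.
  if [ -z "$ioe" ]; then return 0; fi
  # DHBA in the server node: no controller is allowed in the I/O expander.
  if [ "$node" = "UCS-S3260-DHBA" ]; then return 1; fi
  # RAID in the server node: only another RAID controller is allowed.
  if [ "$node" = "UCS-C3K-M4RAID" ]; then
    if [ "$ioe" = "UCS-C3K-M4RAID" ]; then return 0; else return 1; fi
  fi
  # No controller in the server node: either controller may be installed.
  case "$ioe" in
    UCS-C3K-M4RAID|UCS-S3260-DHBA) return 0 ;;
    *) return 1 ;;
  esac
}
```

For example, `ioe_controller_allowed "UCS-S3260-DHBA" "UCS-C3K-M4RAID"` returns nonzero, reflecting the first rule above.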
1. Shut down and power off the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node with attached I/O expander from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers (see Figure 1).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node with attached I/O expander straight out from the system.
3. Remove the cover from the I/O expander:
a. Lift the latch handle to an upright position (see Figure 5).
b. Turn the latch handle 90 degrees to release the lock.
c. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the I/O expander.
4. Remove a Cisco modular RAID controller card:
a. Loosen the four captive thumbscrews that secure the card to the I/O expander (see Figure 35).
b. Grasp the card at both ends and lift it evenly to disengage the connector on the underside of the card from the mezzanine socket.
5. Install a Cisco modular RAID controller card:
a. Align the card with its bracket over the mezzanine socket and the four standoffs.
b. Press down on both ends of the card to engage the connector on the underside of the card with the mezzanine socket.
c. Tighten the four captive thumbscrews that secure the card to the I/O expander.
6. Replace the I/O expander cover:
a. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must set into the tracks on the I/O expander base.
b. Push the cover forward until it stops.
c. Turn the latch handle 90 degrees to close the lock.
d. Fold the latch handle flat.
7. Install a server node with attached I/O expander to the chassis:
a. With the two ejector levers open, align the server node and I/O expander with the two empty bays.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
Figure 35 Storage Controller Card Thumbscrews Inside the I/O Expander
To replace a supercap power module (SCPM) that is on a RAID controller inside an I/O expander, use the procedure in Replacing a Supercap (RAID Backup) on a RAID Controller.
The I/O expander has two sockets for NVMe SSDs.
The software designations of the NVMe PCIe SSD sockets inside the I/O expander are IOENVMe1 and IOENVMe2 (see Figure 36).
1. Shut down and power off the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node with attached I/O expander from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers (see Figure 1).
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node with attached I/O expander straight out from the system.
3. Remove the cover from the I/O expander:
a. Lift the latch handle to an upright position (see Figure 5).
b. Turn the latch handle 90 degrees to release the lock.
c. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the I/O expander.
4. If a RAID controller is present, remove it from the I/O expander to provide clearance:
a. Loosen the four captive thumbscrews that secure the card (see Figure 35).
b. Grasp the card at both ends and lift it evenly to disengage the connector on the underside of the card from the mezzanine socket.
5. Remove an NVMe PCIe SSD:
a. Remove the single screw that secures the drive to its bracket (see Figure 36).
b. Slide the drive to disengage it from its horizontal socket, then lift it from the I/O expander.
6. Install a new NVMe PCIe SSD:
a. Set the drive in its bracket, then slide it forward to engage its connector with the socket.
b. Install the single screw that secures the drive to the bracket.
7. If you removed a RAID controller, reinstall it:
a. Align the card with its bracket over the mezzanine socket and the four standoffs.
b. Press down on both ends of the card to engage the connector on the underside of the card with the mezzanine socket.
c. Tighten the four captive thumbscrews that secure the card.
8. Replace the I/O expander cover:
a. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must set into the tracks on the I/O expander base.
b. Push the cover forward until it stops.
c. Turn the latch handle 90 degrees to close the lock.
d. Fold the latch handle flat.
9. Install a server node with attached I/O expander to the chassis:
a. With the two ejector levers open, align the server node and I/O expander with the two empty bays.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
Figure 36 NVMe PCIe SSDs Inside the I/O Expander
The server node board includes headers that you can jumper for certain service functions. This section includes the following topics:
■Service Header Locations on the C3X60 M4 Server Node Board
■Using the Clear Password Header J64
■Using the Clear CMOS Header P19
There are two 2-pin service headers on the server node board that are supported for use. See Figure 37 for the locations.
Figure 37 Service Headers on the C3X60 M4 Server Node Board
You can use a jumper on header J64 to clear the administrator password.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers.
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
3. Do one of the following to access the component inside the server node:
■If the server node does not have an I/O expander attached—Remove the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node has an I/O expander attached—Remove the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
4. Locate header J64 (see Figure 37).
5. Install a jumper to pins 1 and 2 of the header.
6. Install the server node to the system:
a. With the two ejector levers open, align the server node with the empty bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
7. Power on the server node.
8. After the server node has fully booted, shut it down again, as described in Shutting Down a C3X60 M4 Server Node.
9. Remove the server node from the system, and then remove the server node cover.
10. Remove the jumper from pins 1 and 2.
NOTE: If you do not remove the jumper, the Cisco IMC clears the password each time that you boot the server node.
11. Do one of the following:
■If the server node did not have an I/O expander attached—Reinstall the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover.
■If the server node had an I/O expander attached—Reinstall the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node.
12. Install the server node to the system and power it on.
You can install a jumper to header P19 to clear the CMOS settings.
1. Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down a C3X60 M4 Server Node.
2. Remove a server node from the system:
a. Grasp the two ejector levers and pinch their latches to release the levers.
b. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.
c. Pull the server node straight out from the system.
3. Do one of the following to access the component inside the server node:
■If the server node does not have an I/O expander attached—Remove the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover before continuing with the next step.
■If the server node has an I/O expander attached—Remove the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node before you continue with the next step.
4. Locate header P19 (see Figure 37).
5. Install a jumper to pins 1 and 2 of the header.
6. Install the server node to the system:
a. With the two ejector levers open, align the server node with the empty bay.
b. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.
c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.
7. Power on the server node.
8. After the server node has fully booted, shut it down again, as described in Shutting Down a C3X60 M4 Server Node.
9. Remove the server node from the system, and then remove the server node cover.
10. Remove the jumper from pins 1 and 2.
NOTE: If you do not remove the jumper, the Cisco IMC clears the CMOS settings each time that you boot the server node.
11. Do one of the following:
■If the server node did not have an I/O expander attached—Reinstall the server node cover as described in Removing a C3X60 M4 Server Node or I/O Expander Top Cover.
■If the server node had an I/O expander attached—Reinstall the I/O expander and intermediate cover as described in Removing an I/O Expander From a C3X60 M4 Server Node.
12. Install the server node to the system and power it on.
■Supported Storage Controllers and Required Cables
■Cisco UCS C3X60 12G SAS RAID Controller Information
■Cisco UCS S3260 12G Dual Pass-Through Controller Information
■Best Practices For Configuring RAID Controllers
■Restoring RAID Configuration After Replacing a RAID Controller
■For More Information on Using Storage Controllers
Cisco UCS C3X60 M4 Supported Storage Controller Options lists the supported storage controller cards for the C3X60 M4 server node.
■If the server node has UCS-S3260-DHBA, then no controller is allowed in the I/O expander.
■If the server node has UCS-C3K-M4RAID, then the I/O expander can also have UCS-C3K-M4RAID (but not UCS-S3260-DHBA).
■If the server node has no storage controller, then the I/O expander can have UCS-C3K-M4RAID or UCS-S3260-DHBA.
NOTE: If you move a controller from the server node to the I/O expander, or move it from the I/O expander to the server node, you must run the Cisco Host Upgrade Utility (HUU) on the server node to update the sub-OEMID to the correct values. If you do not do this, the server node Cisco IMC returns an Invalid Hardware Configuration error. For instructions, see the HUU guide for your release: HUU Guides.
The Cisco UCS C3X60 12G SAS RAID Controller for M4 server nodes is based on the Broadcom 3316 SAS/SATA, 16-port RAID-on-chip (RoC).
The controller can be used in JBOD mode (non-RAID) or in MegaRAID hardware RAID mode with a choice of RAID levels 0, 1, 5, 6, 10, 50, or 60.
■Maximum drives controllable—64 (the server has a maximum of 60 internal drives)
The Cisco UCS S3260 12G Dual Pass-Through Controller is based on the Broadcom 3316 SAS/SATA, 16-port RAID-on-chip (RoC). This controller is divided into two boards: a base board and a power board. The power board supplies power to the base board.
This pass-through controller has the following features:
■Dual Broadcom SAS3316-based subsystems.
■Dual 8x SAS-3 lanes (12G, 6G, 3G, and SATA), one for each subsystem.
■Dual 8x PCIe Gen-3 lanes via the server-board mezzanine connector, one for each subsystem.
■RAID Card Firmware Compatibility
■Choosing Between RAID 0 and JBOD
Do not configure 4K sector format and 512-byte sector format drives as part of the same RAID volume.
Firmware on the RAID controller must be verified for compatibility with the current Cisco IMC and BIOS versions that are installed on the server. If it is not compatible, use the Host Upgrade Utility (HUU) for your firmware release to upgrade or downgrade the RAID controller firmware to a compatible level.
See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring server components to compatible levels: HUU Guides
The RAID controller supports JBOD mode (non-RAID) on physical drives that are in pass-through mode and directly exposed to the OS. We recommend that you use JBOD mode instead of individual RAID 0 volumes when possible.
The RAID controller allows you to create a large RAID 5 or RAID 6 volume by including all the drives in the system in a spanned array configuration (RAID 50/RAID 60). Where possible, we recommend that you create multiple, smaller RAID 5/6 volumes with fewer drives per RAID array. This provides redundancy and reduces the time required for initialization, RAID rebuilds, and other operations.
The I/O policy applies to reads on a specific virtual drive; it does not affect the read-ahead cache. RAID volumes can be configured with one of two I/O policies:
■Cached I/O—In this mode, all reads are buffered in cache memory. Cached I/O provides faster processing.
■Direct I/O—In this mode, reads are not buffered in cache memory. Data is transferred to the cache and the host concurrently. If the same data block is read again, it comes from cache memory. Direct I/O ensures that the cache and the host contain the same data.
Although Cached I/O provides faster processing, it is useful only when the RAID volume has a small number of slower drives. With the C3X60 4-TB SAS drives, Cached I/O has not shown any significant advantage over Direct I/O. Instead, Direct I/O has shown better results than Cached I/O in a majority of I/O patterns. We recommend that you use Direct I/O (the default) in all cases.
The RAID controller conducts various background operations (BGOPS) such as Consistency Check (CC), Background Initialization (BGI), Rebuild (RBLD), Volume Expansion & Reconstruction (RLM), and Patrol Read (PR).
While these BGOPS are expected to limit their impact on I/O operations, there have been cases of higher impact during some I/O operations such as Format. In these cases, both the I/O operation and the BGOPS may take more time to complete. Where possible, we recommend that you limit concurrent BGOPS and other intensive I/O operations.
When you replace a RAID controller, the RAID configuration that is stored in the controller is lost.
To restore your RAID configuration to your new RAID controller, follow these steps.
1. Replace your RAID controller. See Replacing a Storage Controller Card Inside the C3X60 M4 Server Node.
2. If this was a full chassis swap, replace all drives into the drive bays, in the same order that they were installed in the old chassis.
4. Press any key (other than C) to continue when you see the following onscreen prompt:
5. Watch the subsequent screens for confirmation that your RAID configuration was imported correctly:
■If you see the following message, your configuration was successfully imported. The LSI virtual drive is also listed among the storage devices.
■If you see the following message, your configuration was not imported. In this case, reboot the server node and try the import operation again.
■Launching the LSI Embedded MegaRAID Configuration Utility
■Installing LSI MegaSR Drivers For Windows and Linux
Each server node includes an embedded MegaRAID controller that can control two rear-panel solid state drives (SSDs) in a RAID 0 or 1 configuration. This embedded software RAID is available only when the server node has the UCS S3260 12G Dual Pass-Through Controller (UCS-S3260-DHBA) installed. When this HBA is installed, the two rear-panel SSDs can be controlled through software RAID mode or AHCI mode, when selected in the server BIOS.
NOTE: Embedded software RAID is not available when the HW RAID Cisco UCS C3X60 12G SAS RAID Controller (UCS-C3K-M4RAID) is installed. In that case, the rear-panel SSDs are controlled by hardware RAID.
NOTE: VMware ESX/ESXi or any other virtualized environments are not supported for use with the embedded MegaRAID controller.
NOTE: The Microsoft Windows Server 2016 Hyper-V hypervisor is supported for use with the embedded MegaRAID controller. Other hypervisors such as Xen and KVM are not supported.
NOTE: The embedded RAID controller in server node 1 can control the upper two rear-panel SSDs; the embedded RAID controller in server node 2 can control the lower two rear-panel SSDs.
1. When the server reboots, watch for the prompt to press Ctrl+M.
2. When you see the prompt, press Ctrl+M to launch the utility.
NOTE: The required drivers for this controller are already installed and ready to use with the LSI software RAID Configuration Utility. However, if you will use this controller with Windows or Linux, you must download and install additional drivers for those operating systems.
This section explains how to install the LSI MegaSR drivers for the following supported operating systems:
■Red Hat Enterprise Linux (RHEL)
■SUSE Linux Enterprise Server (SLES)
For the specific supported OS versions, see the Hardware and Software Interoperability Matrix for your server release.
This section contains the following topics:
■Downloading the LSI MegaSR Drivers
The MegaSR drivers are included in the C-Series driver ISO for your server and OS. Download the drivers from Cisco.com.
1. Find the drivers ISO file download for your server online and download it to a temporary location on your workstation:
a. See the following URL: http://www.cisco.com/cisco/software/navigator.html
b. Type the name of your server in the Select a Product search field and then press Enter.
c. Click Unified Computing System (UCS) Drivers.
d. Click the release number that you are downloading.
e. Click the Download icon to download the drivers ISO file.
2. Continue through the subsequent screens to accept the license agreement and then browse to a location where you want to save the drivers’ ISO file.
This section describes how to install the LSI MegaSR driver in a Windows installation.
This section contains the following topics:
The Windows operating system automatically adds the driver to the registry and copies the driver to the appropriate directory.
1. Create a RAID drive group using the LSI Software RAID Configuration Utility before you install this driver for Windows. Launch this utility by pressing Ctrl+M when LSI SWRAID is shown during the BIOS POST.
2. Download the Cisco UCS C-Series drivers’ ISO, as described in Downloading the LSI MegaSR Drivers.
3. Prepare the drivers on a USB thumb drive:
a. Burn the ISO image to a disk.
b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:
c. Expand the Zip file, which contains the folder with the MegaSR driver files.
d. Copy the expanded folder to a USB thumb drive.
4. Start the Windows driver installation using one of the following methods:
■To install from local media, connect an external USB DVD drive to the server and then insert the first Windows installation disk into the drive. Skip to Step 6.
■To install from remote ISO, log in to the server’s Cisco IMC interface and continue with the next step.
5. Launch a Virtual KVM console window and click the Virtual Media tab.
a. Click Add Image and browse to select your remote Windows installation ISO file.
b. Check the check box in the Mapped column for the media that you just added, and then wait for mapping to complete.
6. Power cycle the target server.
7. Press F6 when you see the F6 prompt during bootup. The Boot Menu window opens.
8. On the Boot Manager window, choose the physical disk or virtual DVD and press Enter. The Windows installation begins when the image is booted.
9. Press Enter when you see the prompt, “Press any key to boot from CD.”
10. Observe the Windows installation process and respond to prompts in the wizard as required for your preferences and company standards.
11. When Windows prompts you with “Where do you want to install Windows?”, install the drivers for embedded MegaRAID:
a. Click Load Driver. You are prompted by a Load Driver dialog box to select the driver to be installed.
b. Connect the USB thumb drive that you prepared in Step 3 to the target server.
c. On the Windows Load Driver dialog that you opened in Step a, click Browse.
d. Use the dialog box to browse to the location of the drivers folder on the USB thumb drive, and then click OK.
Windows loads the drivers from the folder and when finished, the driver is listed under the prompt, “Select the driver to be installed.”
1. Click Start, point to Settings, and then click Control Panel.
2. Double-click System, click the Hardware tab, and then click Device Manager. Device Manager starts.
3. In Device Manager, double-click SCSI and RAID Controllers, right-click the device for which you are installing the driver, and then click Properties.
4. On the Driver tab, click Update Driver to open the Update Device Driver wizard, and then follow the wizard instructions to update the driver.
This section explains the steps to install the embedded MegaRAID device driver in a Red Hat Enterprise Linux installation or a SUSE Linux Enterprise Server installation.
This section contains the following topics:
■Obtaining the Driver Image File
■Preparing Physical Installation Disks For Linux
See Downloading the LSI MegaSR Drivers for instructions on obtaining the drivers. The Linux driver is offered in the form of dud-[driver version].img, which is the boot image for the embedded MegaRAID stack.
NOTE: The LSI MegaSR drivers that Cisco provides for Red Hat Linux and SUSE Linux are for the original GA versions of those distributions. The drivers do not support updates to those OS kernels.
This section describes how to prepare physical Linux installation disks from the driver image files, using either the Windows operating system or the Linux operating system.
NOTE: The driver image is too large for a floppy disk, so use a USB thumb drive instead.
NOTE: Alternatively, you can mount the dud.img file as a virtual floppy disk, as described in the installation procedures.
Preparing Physical Installation Disks For Linux With the Windows Operating System
Under Windows, you can use the RaWrite floppy image-writer utility to create disk images from image files.
1. Download the Cisco UCS C-Series drivers ISO, as described in Downloading the LSI MegaSR Drivers and save it to your Windows system that has a diskette drive.
a. Burn the ISO image to a disc.
b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:
c. Expand the Zip file, which contains the folder with the driver files.
3. Copy the driver update disk image dud-[driver version].img and the RaWrite utility (rawwrite.exe) to a directory.
NOTE: RaWrite is not included in the driver package.
4. If necessary, use this command to change the filename of the driver update disk to a name with fewer than eight characters: copy dud-[ driver version ].img dud.img
5. Open the DOS Command Prompt and navigate to the directory where rawwrite.exe is located.
6. Enter the following command to create the installation diskette: rawwrite
You are prompted to enter the name of the boot image file.
You are prompted for the target disk.
10. Insert a floppy disk into your Windows system and enter: A:
12. Press Enter again to start copying the file to the diskette.
13. After the command prompt returns and the floppy disk drive LED goes out, remove the disk.
14. Label the diskette with the image name.
Preparing Installation Disks with a Linux Operating System
Under Red Hat Linux and SUSE Linux, you can use a driver disk utility to create disk images from image files.
NOTE: The driver image is too large for a floppy disk, so use a USB thumb drive instead.
1. Download the Cisco UCS C-Series drivers ISO, as described in Downloading the LSI MegaSR Drivers and save it to your Linux system that has a disk drive.
a. Burn the ISO image to a disc.
b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:
c. Expand the Zip file, which contains the folder with the driver files.
3. Copy the driver update disk image dud-[driver version].img to your Linux system.
4. Insert a blank USB thumb drive into a port on your Linux system.
5. Create a directory and mount the DUD image to that directory:
mount -oloop <driver_image> <destination_folder>
6. Copy the contents of the mounted directory to your USB thumb drive.
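Steps 5 and 6 can be sketched as the following command sequence. This is illustrative only: the image filename is a placeholder, the mount point and USB path are example locations, and the mount command requires root privileges.

```shell
# Illustrative example; substitute your actual image name and USB mount point.
mkdir -p /tmp/dud                                 # create a mount point for the DUD image
mount -o loop dud-<driver_version>.img /tmp/dud   # loop-mount the image (requires root)
cp -r /tmp/dud/. /media/usb/                      # copy all contents, including hidden files
umount /tmp/dud                                   # unmount once the copy completes
```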
For the specific supported OS versions, see the Hardware and Software Interoperability Matrix for your server release.
This section describes the fresh installation of the Red Hat Enterprise Linux device driver on systems with the embedded MegaRAID stack.
1. Create a RAID drive group using the LSI Software RAID Configuration utility before you install this driver for the OS. Launch this utility by pressing Ctrl+M when LSI SWRAID is shown during the BIOS POST.
2. Prepare the dud.img file using one of the following methods:
■To install from a physical disk: Use one of the procedures in Preparing Physical Installation Disks For Linux.
Then return to Step 4 of this procedure.
■To install from a virtual floppy disk: Download and save the Cisco UCS C-Series drivers’ ISO, as described in Downloading the LSI MegaSR Drivers. Then continue with the next step.
3. Extract the dud.img or dd.iso file:
a. Burn the Cisco UCS C-Series Drivers ISO image to a disc.
b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:
c. Expand the Zip file, which contains the folder with the driver files.
d. Copy the dud-<driver version>.img or dd.iso file to a temporary location on your workstation.
e. If you are using RHEL 7.x, rename the saved dd.iso to dd.img.
NOTE: If you are using RHEL 7.x, renaming the dd.iso file to dd.img simplifies this procedure and saves time. The Cisco UCS virtual drive mapper can map only one .iso at a time, and only as a virtual CD/DVD. Renaming the file to dd.img allows you to mount the RHEL installation ISO as a virtual CD/DVD and the renamed dd.img as a virtual floppy disk or removable disk at the same time. This avoids the steps of unmounting and remounting the RHEL ISO when the dd.iso driver file is prompted for.
4. Start the Linux driver installation using one of the following methods:
■To install from local media, connect an external USB DVD drive to the server and then insert the first RHEL installation disk into the drive. Then continue with Step 5.
■To install from virtual disk, log in to the server’s Cisco IMC interface.
Then continue with the next step.
5. Launch a Virtual KVM console window and click the Virtual Media tab.
a. Click Add Image and browse to select your remote RHEL installation ISO image.
NOTE: An .iso file can be mapped only as a virtual CD/DVD.
b. Click Add Image again and browse to select your RHEL 6.x dud.img or the RHEL 7.x dd.img file that you renamed in Step 3.
NOTE: Map the .img file as a virtual floppy disk or removable disk.
c. Check the check boxes in the Mapped column for the media that you just added, then wait for mapping to complete.
6. Power cycle the target server.
7. Press F6 when you see the F6 prompt during bootup. The Boot Menu window opens.
NOTE: Do not press Enter in the next step to start the installation. Instead, press e to edit installation parameters.
8. On the Boot Menu, use the arrow keys to select Install Red Hat Enterprise Linux and then press e to edit installation parameters.
9. Append one of the following blacklist commands to the end of the line that begins with linuxefi:
■For RHEL 6.x (32- and 64-bit), enter:
linux dd blacklist=isci blacklist=ahci nodmraid noprobe=<ata drive number>
NOTE: The noprobe values depend on the number of drives. For example, to install RHEL 6.5 on a RAID 5 configuration with three drives, enter: linux dd blacklist=isci blacklist=ahci nodmraid noprobe=ata1 noprobe=ata2
■For RHEL 7.x (32- and 64-bit), enter:
linux dd modprobe.blacklist=ahci nodmraid
10. Optional: To see full, verbose installation status steps during installation, delete the quiet parameter from the line.
11. On the Boot Manager window, press Ctrl+x to start the interactive installation.
12. Below Driver disk device selection, select the option to install your driver .img file. (Type r to refresh the list if it is not populated.)
NOTE: The installer recognizes the driver file as an .iso file, even though you renamed it to dd.img for mapping.
Type the number of the driver device ISO in the list. Do not select the RHEL ISO image. For example, if the driver device is listed as sdb at number 6, type 6.
The installer reads the driver ISO and lists the drivers.
13. Under Select drivers to install, type the number of the line that lists the megasr driver. For example, type 1 and press Enter:
Your selection is displayed with an x in brackets.
14. Follow the Red Hat Linux installation wizard to complete the installation.
15. When the wizard’s Installation Destination screen is displayed, ensure that LSI MegaSR is listed as the selection. If it is not listed, the driver did not load successfully; in that case, select Rescan Disc.
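The Installation Destination check above can also be confirmed from a shell after the installed system boots. This is a sketch, not part of the official procedure; it assumes the standard megasr module name and lsmod-style output:

```shell
#!/bin/sh
# Sketch: driver_loaded reads lsmod-style output on stdin and succeeds
# only if the megasr module appears at the start of a line.
driver_loaded() {
    grep -qi '^megasr'
}
# On the installed system you would run:
#   lsmod | driver_loaded && echo "megasr driver is loaded"
```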
For the specific supported OS versions, see the Hardware and Software Interoperability Matrix for your server release.
This section describes the installation of the SUSE Linux Enterprise Server (SLES) driver on a system with the embedded MegaRAID stack.
1. Create a RAID drive group using the LSI SWRAID Configuration utility before you install this driver for the OS. Launch this utility by pressing Ctrl+M when LSI SWRAID is shown during the BIOS POST.
2. Prepare the dud.img file using one of the following methods:
■To install from a physical disk, use one of the procedures in Preparing Physical Installation Disks For Linux.
Then return to step 4 of this procedure.
■To install from a virtual floppy disk, download and save the Cisco UCS C-Series drivers’ ISO, as described in Downloading the LSI MegaSR Drivers. Then continue with the next step.
3. Extract the dud.img file that contains the driver:
a. Burn the Cisco UCS C-Series drivers ISO image to a disc.
b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers.
c. Copy the dud-<driver version>.img file to a temporary location on your workstation.
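If your workstation runs Linux, you can loop-mount the drivers ISO instead of burning a disc. This is a hedged sketch: the ISO filename and internal folder layout are illustrative, so browse the mounted tree for the actual embedded MegaRAID folder:

```shell
#!/bin/sh
# Sketch: copy any dud-*.img found under a mounted ISO tree to a destination.
# usage: copy_dud <mounted-iso-root> <dest-dir>
copy_dud() {
    find "$1" -name 'dud-*.img' -exec cp {} "$2" \;
}
# Typical use (mount requires root; filenames are examples):
#   sudo mount -o loop,ro ucs-cxxx-drivers.iso /mnt
#   copy_dud /mnt /tmp
#   sudo umount /mnt
```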
4. Start the Linux driver installation using one of the following methods:
■To install from local media, connect an external USB DVD drive to the server and then insert the first SLES installation disc into the drive. Then skip ahead to the step in which you power cycle the server.
■To install from remote ISO, log in to the server’s Cisco IMC interface and continue with the next step.
5. Launch a Virtual KVM console window and click the Virtual Media tab.
a. Click Add Image and browse to select your remote SLES installation ISO file.
NOTE: An .iso file can be mapped only as a virtual CD/DVD.
b. Click Add Image again and browse to select your dud.img file.
NOTE: Map the .img file as a virtual floppy disk or removable disk.
c. Check the check box in the Mapped column for the media that you just added, and then wait for mapping to complete.
6. Power cycle the target server.
7. Press F6 when you see the F6 prompt during bootup. The Boot Menu window opens.
8. On the Boot Manager window, select the physical or virtual SLES installation ISO and press Enter.
The SLES installation begins when the image is booted.
9. When the first SLES screen appears, choose Installation.
10. Enter one of the following boot parameters:
■For SLES 11 and SLES 11 SP1 (32- and 64-bit), in the Boot Options field enter: brokenmodules=ahci
■For SLES 11 SP2 (32- and 64-bit), in the Boot Options field enter: brokenmodules=ahci brokenmodules=isci
■For SLES 12, press e to edit installation parameters. Then append the following parameter to the end of the line that begins with linuxefi: brokenmodules=ahci
11. Optional: To see detailed status information during the installation, add the following parameter to the line that begins with linuxefi: splash=verbose
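For illustration only (the kernel path varies by SLES media and architecture), a SLES 12 linuxefi line with both parameters appended might read:

```
linuxefi /boot/x86_64/loader/linux splash=verbose brokenmodules=ahci
```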
12. Do one of the following actions:
■For SLES 11, press F6 for the driver and choose Yes.
■For SLES 12, press Ctrl+x to start the installation.
13. Do one of the following actions:
■For SLES 11: If you prepared the dud.img file on a physical disk, insert the USB thumb drive into the target server, select the A:/ drive, and press Enter.
■For SLES 11: If you mapped the dud.img file as a virtual disk in step 5, choose the location of the virtual disk, and then press Enter to choose Installation.
■For SLES 12: The installer automatically finds the LSI driver in the dud-<driver version>.img file that you provided. With verbose status messages enabled, you see the driver being installed when LSI MegaRAID SW RAID Module is listed.
14. Follow the SLES installation wizard to complete the installation. Verify installation of the driver when you reach the Suggested Partitioning screen:
a. On the Suggested Partitioning screen, select Expert Partitioner.
b. Navigate to Linux > Hard disks and verify that there is a device listed for the LSI - LSI MegaSR driver. The device might be listed under a name other than sda. For example:
If no device is listed, the driver did not install properly. In that case, repeat the steps above.
15. When installation is complete, reboot the target server.
For more information about using the LSI utilities, see their built-in help documentation.
Full Avago Technologies/LSI documentation is also available:
■For hardware SAS MegaRAID— Avago Technologies/LSI 12 Gb/s MegaRAID SAS Software User’s Guide, Rev. F
■For embedded software MegaRAID— LSI Embedded MegaRAID Software User Guide
■ Cisco UCS S3260 Storage Server Installation and Service Guide
■ Regulatory Compliance and Safety Information For Cisco UCS S-Series Hardware
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The following information is for FCC compliance of Class A devices: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio-frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case users will be required to correct the interference at their own expense.
The following information is for FCC compliance of Class B devices: This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If the equipment causes interference to radio or television reception, which can be determined by turning the equipment off and on, users are encouraged to try to correct the interference by using one or more of the following measures:
Reorient or relocate the receiving antenna.
Increase the separation between the equipment and receiver.
Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.
Consult the dealer or an experienced radio/TV technician for help.
Modifications to this product not authorized by Cisco could void the FCC approval and negate your authority to operate the product.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED “AS IS” WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
All printed copies and duplicate soft copies are considered uncontrolled copies; refer to the original online version for the latest version.
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices.