Maintaining the System

This chapter describes how to use LEDs to diagnose system problems and how to install or replace supported hardware components:

Status LEDs and Buttons

This section describes the location and meaning of LEDs and buttons and includes the following topics:

Front-Panel LEDs

Figure 3-1 shows the front-panel LEDs. Table 3-1 defines the front-panel LED states.

Figure 3-1 Front-Panel LEDs

 

1. System Power button and power status LED
2. System unit identification button and LED
3. System status LED
4. Fan status LED
5. Temperature status LED
6. Power supply status LED
7. Network link activity LED
8. Internal-drive status LEDs

 

Table 3-1 Front-Panel LED States

1. System Power button/power status LED

  • Off—There is no AC power to the system (all power cords are disconnected).
  • Amber—Both server nodes are powered off. Pressing the button powers on both server nodes.
  • Green—At least one server node is powered on. Pressing the button powers off both server nodes, returning the LED to amber.

2. System unit identification button/LED

  • Off—The unit identification LED is not in use.
  • Blue, blinking—The unit identification LED is activated.

3. System status LED

  • Green—The system is running in normal operating condition.
  • Green, blinking—The system is performing system initialization and memory check.
  • Amber, steady—The system is in a degraded operational state. For example:
    - Power supply redundancy is lost (a power supply is unplugged or has failed).
    - CPUs are mismatched.
    - At least one CPU is faulty.
    - At least one DIMM is faulty.
    - At least one drive in a RAID configuration has failed.
  • Amber, blinking—The system is in a critical fault state. For example:
    - Boot failed.
    - A fatal CPU or bus error was detected.
    - The system is in an over-temperature condition.

4. Fan status LED

  • Green—All fan modules are operating properly.
  • Amber, steady—One fan module has failed.
  • Amber, blinking—Critical fault; two or more fan modules have failed.

5. Temperature status LED

  • Green—The system is operating at normal temperature.
  • Amber, steady—One or more temperature sensors have exceeded a warning threshold.
  • Amber, blinking—One or more temperature sensors have exceeded a critical threshold.

6. Power supply status LED

  • Green—All power supplies are operating normally.
  • Amber, steady—One or more power supplies are in a degraded operational state. (An event warning threshold has been reached, but the power supply continues to operate.)
  • Amber, blinking—One or more power supplies are in a critical fault state. (A critical fault threshold has been reached, causing the power supply to shut down.)

7. Network link activity LED

  • Off—The Ethernet link is idle.
  • Green—One or more Ethernet LOM ports are link-active.
  • Green, blinking—One or more Ethernet LOM ports are traffic-active.

8. Internal-drive status LEDs

Use these LEDs to find the general location of a failing drive, then open the system cover and identify the exact drive by the fault LEDs on the drive trays.

  • The two columns of LEDs correspond to the two halves of the internal drive compartment (under either the right- or left-side top cover).
  • The four numbered rows of LEDs correspond to the four horizontal rows of drive bays (14 drive bays in each row).

See Figure 3-8 for an example. In that example, the amber LED indicates that the failing drive is in the right half of the internal drive compartment, in row 3.
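If you cannot inspect the front-panel LEDs in person, much of the same health information can be polled remotely. The following is a minimal sketch, assuming IPMI over LAN is enabled in the Cisco IMC and the ipmitool utility is installed on your workstation; the IP address and credentials are placeholders:

# Query fan, temperature, and power supply sensor states
ipmitool -I lanplus -H 10.0.0.10 -U admin -P password sdr type Fan
ipmitool -I lanplus -H 10.0.0.10 -U admin -P password sdr type Temperature

# List logged hardware events (failed DIMMs, drives, and so on)
ipmitool -I lanplus -H 10.0.0.10 -U admin -P password sel elist

# Blink the unit identification LED for 60 seconds to locate the chassis
ipmitool -I lanplus -H 10.0.0.10 -U admin -P password chassis identify 60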

Rear-Panel LEDs and Buttons

Figure 3-2 shows the rear-panel LEDs and buttons for an S3260 system. This example is shown with a C3X60 M4 server node and an optional four-drive expansion module. Table 3-2 defines the rear-panel LED states.

Figure 3-2 S3260 System Rear-Panel LEDs and Buttons

 

1. Server node Power button/LED (on each server node)
2. Server node unit identification button/LED (on each server node)
3. 1 Gb Ethernet port link speed LED (on each M5 server node only)
4. 1 Gb Ethernet port link activity LED (on each M5 server node only)
5. Power supply status LED (on each power supply)
6. QSFP port link speed LED (on each port). Actual ports vary, depending on which SIOC version is used or which adapter card is installed in a Version 03 SIOC.
7. QSFP port link activity LED (on each port)
8. Not used at this time
9. 10/100/1000 dedicated management port link activity LED (on each SIOC)
10. 10/100/1000 dedicated management port link speed LED (on each SIOC)
11. Solid state drive activity LED (each drive bay)
12. Solid state drive fault LED (each drive bay)
13. Not used at this time
14. SIOC health LED (on each Version 03 SIOC only)

 

Table 3-2 Rear-Panel LED States

1. Server node Power button/LED

  • Off—There is no AC power to the server node. It is safe to remove the server node from the chassis.
  • Amber—The server node is in standby power mode. Power is supplied only to the Cisco IMC. It is safe to remove the server node from the chassis.
  • Green—The server node is in main power mode. Power is supplied to all server node components.

2. Server node unit identification button/LED

  • Off—The identification LED is not in use.
  • Blue—The identification LED is activated.

3. 1 Gb Ethernet port link speed LED (on each M5 server node only)

  • Off—Link speed is 10/100 Mbps.
  • Green—Link speed is 1 Gbps.

4. 1 Gb Ethernet port link activity LED (on each M5 server node only)

  • Off—No link is present.
  • Green—Link is present, no activity.
  • Green, blinking—Link is present, with activity.

5. Power supply status LED (on each power supply unit)

AC power supplies:

  • Off—No AC power input.
  • Green, steady—The power supply is operating normally and supplying DC power to the system.
  • Green, blinking—AC power is OK; DC output is not enabled (standby mode).
  • Amber, blinking—An event warning threshold has been reached, but the power supply continues to operate.
  • Amber, steady—A critical fault threshold has been reached, causing the power supply to shut down.

DC power supplies:

  • Off—No DC input (12 V main power off, 12 V standby power off).
  • Green, steady—The power supply is operating normally and supplying DC power to the system (12 V main power on).
  • Green, blinking—DC output is not enabled (12 V main power off; 12 V standby power on).
  • Amber, blinking—An event warning threshold has been reached, but the power supply continues to operate with 12 V main power.
  • Amber, steady—A critical fault threshold has been reached, causing the power supply to shut down (for example, an over-current, over-voltage, or over-temperature failure).

6. QSFP port link LED

  • Off—No link.
  • Green—Link present.

7. QSFP port activity LED

  • Off—No link is present.
  • Green—Link is present, but no activity.
  • Green, blinking—Link is present, with activity.

8. Not used at this time.

9. 10/100/1000 dedicated management port link activity LED

  • Off—No link is present.
  • Green—Link is present, no activity.
  • Green, blinking—Link is present, with activity.

10. 10/100/1000 dedicated management port link speed LED

  • Off—Link speed is 10/100 Mbps.
  • Green—Link speed is 1 Gbps.

11. Solid state drive activity LED

  • Off—There is no drive connected.
  • Green—Drive connected, no activity.
  • Green, blinking—The drive is reading or writing data.

12. Solid state drive fault LED

  • Off—The drive is operating properly.
  • Amber—The drive has failed (or the Locate function has been activated for the drive).

13. Not used at this time.

14. SIOC health LED

  • Off—System is idle.
  • Green—SIOC is working normally.
  • Green, blinking—There is a problem with the SIOC or a component in the SIOC.

If your system has an optional drive expander module, it has one module status LED:

Table 3-3 Drive Expander Module Status LED

LED Status   P3V3   P5V    HDD Present   HDD Status
Off          None   None   None          None
Green        On     On     Plug-in       Good
Amber        On     On     No plug-in    NA
Amber        On     Fail   No plug-in    NA
Amber        On     On     Plug-in       Fail

Internal Diagnostic LEDs

This section contains the following topics:

Diagnostic LEDs in the Main Chassis

The diagnostic LEDs inside the main chassis compartments can be viewed while the system is powered on. See Figure 3-3 for the locations of these internal LEDs. See Table 3-4 for definitions of the LED states.

Figure 3-3 Internal Diagnostic LED Locations in the Main Chassis

 

1. Drive fault LED (on each drive carrier)
2. Drive activity LED (on each drive carrier)
3. Fan module fault LED (on each fan module)

Table 3-4 Internal Diagnostic LED States

1. SAS drive fault LED

  • Off—The drive is operating properly.
  • Amber—This drive has failed.
  • Amber, blinking—The drive is rebuilding.

2. SAS drive activity LED

  • Off—There is no drive in the drive tray (no access, no fault).
  • Green—The drive is ready.
  • Green, blinking—The drive is reading or writing data.

3. Fan fault LEDs

  • Off—The fan module is functioning normally.
  • Amber—The fan module has failed.

Diagnostic LEDs in the Server Node

For the locations of the internal diagnostic LEDs inside the server node, see the service note for your server node:

Preparing for System Component Installation or Replacement

This section describes how to prepare for component installation, and it includes the following topics:

Required Equipment For Maintenance Procedures

The following equipment is used to perform the procedures in this chapter:

  • Number 1 Phillips-head screwdriver (for SIOC cover screws)
  • 1/4-inch or equivalent flat-head screwdriver (for separating the optional I/O expander from the server node)
  • Electrostatic discharge (ESD) strap or other grounding equipment such as a grounded mat
  • The tools required for CPU replacement are supplied with a CPU spare.

Shutting Down and Powering Off the S3260 System

You can invoke a graceful shutdown or a hard shutdown of the entire S3260 system by using either the Cisco IMC interface or the system power button on the front panel.

Caution: To completely remove power from the system, you must disconnect power cords from all power supplies.

You can also shut down an individual server node rather than the entire system. See Shutting Down an Individual Server Node.

To use the system power button to shut down the chassis, follow these steps:


Step 1. Check the color of the system power status LED (see Front-Panel LEDs).

  • Green—At least one server node is powered on. Go to Step 2.
  • Amber—Both server nodes are powered off. Go to Step 3.

Step 2. Invoke either a graceful shutdown or a hard shutdown:

Caution: To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system.

  • Graceful shutdown—Press and release the Power button. The operating system performs a graceful shutdown of both server nodes.
  • Emergency shutdown—Press and hold the Power button for 4 seconds to force main power off on both server nodes.

Step 3. Disconnect the power cords from all power supplies in your system to completely remove AC power and power off the system chassis.
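If you manage the system remotely, the same graceful and hard shutdowns can be scripted. This is a sketch only, assuming IPMI over LAN is enabled in the Cisco IMC and ipmitool is installed on your workstation; the IP address and credentials are placeholders:

# Check the current power state
ipmitool -I lanplus -H 10.0.0.10 -U admin -P password chassis power status

# Graceful shutdown (roughly equivalent to pressing and releasing the Power button)
ipmitool -I lanplus -H 10.0.0.10 -U admin -P password chassis power soft

# Hard power off (roughly equivalent to holding the Power button for 4 seconds)
ipmitool -I lanplus -H 10.0.0.10 -U admin -P password chassis power off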


 

Shutting Down an Individual Server Node

You can invoke a graceful shutdown or a hard shutdown of a server node by using either the Cisco Integrated Management Controller (Cisco IMC) interface, or the power button that is on the face of the server node.

Shutting Down a Server Node By Using the Cisco IMC GUI

To use the Cisco IMC GUI to shut down the server node, follow these steps:


Step 1. Use a browser and the management IP address of the system to log in to the Cisco IMC GUI.

Step 2. In the Navigation pane, click the Chassis menu.

Step 3. In the Chassis menu, click Summary.

Step 4. In the toolbar above the work pane, click the Host Power link.

The Server Power Management dialog opens. This dialog lists all servers that are present in the system.

Step 5. In the Server Power Management dialog, select one of the following buttons for the server that you want to shut down:

Caution: To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system. Do not power off a server if any firmware or BIOS updates are in progress.

  • Shut Down—Performs a graceful shutdown of the operating system.
  • Power Off—Powers off the chosen server, even if tasks are running on that server.

It is safe to remove the server node from the chassis when the Chassis Status pane shows the Power State as Off for the server node that you are removing.

The physical power button on the server node face also turns amber when it is safe to remove the server node from the chassis.


 

Shutting Down a Server Node By Using the Power Button on the Server Node

To use the physical server node power button to shut down the server node only, follow these steps:


Step 1. Check the color of the server node power status LED:

  • Green—The server node is powered on. Go to Step 2.
  • Amber—The server node is powered off. It is safe to remove the server node from the chassis.

Step 2. Invoke either a graceful shutdown or a hard shutdown:

Caution: To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system. Do not power off a server if any firmware or BIOS updates are in progress.

  • Graceful shutdown—Press and release the Power button. The software performs a graceful shutdown of the server node.
  • Emergency shutdown—Press and hold the Power button for 4 seconds to force main power off on the server node.

When the server node power button turns amber, it is safe to remove the server node from the chassis.

Removing Chassis and Component Covers

This section contains the following topics:

Opening the Main Chassis Top Covers

This system has three hinged top covers on the main chassis. Opening these covers gives access to the internal-drives compartment and the fan module compartment.

Note: The internal drives and cooling fans in the system are hot-swappable and are accessed by opening the top covers. When you rack and cable the system, be sure to allow enough slack in the power and other cables so that the system can be pulled out on the slide rails far enough to allow clearance for opening the top covers.



Step 1. Open the left or right internal-drive compartment cover to access the hot-swappable internal drives:

a. For either the right or left side cover, pull the latch release buttons on both latches toward the outer edges of the chassis. The spring-loaded latches pop up.

b. With both latches open, swing open the hinged cover from the center toward the outside.

c. To secure the cover, close it down flat and then push both latches flat until they click and lock.

Step 2. Open the fan compartment cover to access the hot-swappable fan modules:

a. Push both latch-buttons toward the center.

b. While holding both latch-buttons, open the hinged cover from the center toward the rear.

c. To secure the cover, hold both latch-buttons while you close the cover flat. Release the latch-buttons.

Figure 3-4 Opening the Main Chassis Top Covers

 

1. Latch-release buttons for left internal-drives compartment
2. Latch-release buttons for right internal-drives compartment
3. Latch-release buttons for fan compartment

 

 

Removing a Server Node Cover

To remove a server node cover, see the service note for your server node:

Removing an I/O Expander Cover (C3X60 M4 and S3260 M5 Server Nodes Only)

To remove an I/O expander cover, see the C3X60 M4 server node service note:

Removing an I/O Expander From a Node (C3X60 M4 and S3260 M5 Server Nodes Only)

The I/O expander attaches to the top of the server node. To remove an I/O expander from a server node so that you can access the components inside the server node, see the server node service note:

Removing the System I/O Controller (SIOC) Cover

Note: You do not have to slide the system out of the rack to remove the SIOC from the rear of the system.



Step 1. Power off the connected server nodes based on the scenario:

a. If there is one or no NVMe SSD in the SIOC you are removing, power off the server node that is paired with the SIOC that you are removing (SIOC 1 is paired with server node 1; SIOC 2 is paired with server node 2).

b. If there are two NVMe SSDs in the SIOC you are removing, power off both server nodes.

See Shutting Down an Individual Server Node.

Step 2. Remove the SIOC from the system:

a. Loosen the single captive thumbscrew on the SIOC and then open its two hinged levers to evenly disengage the SIOC from its midplane connector.

b. Pull the SIOC from the system and set it on an antistatic work surface.

Step 3. Remove the SIOC cover:

a. Use a #1 Phillips-head screwdriver to remove the screws that secure the cover. The Version 02 SIOC cover has four securing screws; the Version 03 SIOC cover has six. See Figure 3-5 or Figure 3-6.

b. Lift the cover straight up off the SIOC.

Step 4. Reinstall the SIOC cover:

a. Set the cover back in place.

b. Replace the screws that secure the cover (four on the Version 02 SIOC; six on the Version 03 SIOC).

Step 5. Reinstall the SIOC in the system:

a. Push the SIOC into its bay until it stops against the internal midplane.

b. Close the two levers on the SIOC to fully engage the SIOC connector with the midplane.

c. Tighten the thumbscrew on the SIOC levers.

Step 6. Power on the server node that you powered off in the first step.

Figure 3-5 Removing the SIOC Cover (Version 02 SIOC, UCS-S3260-SIOC)

 

1. SIOC cover
2. Cover screws, four (two on each side of the SIOC)

Figure 3-6 Removing the SIOC Cover (Version 03 SIOC, UCS-S3260-PCISIOC)

 

1. SIOC cover, including ejector levers
2. Two securing screws on top of cover
3. Four securing screws on sides of cover (two on each side)

 

 


 

Installing or Replacing System Components

Warning: Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the system unless all cards, faceplates, front covers, and rear covers are in place. Statement 1029


Caution: This system weighs approximately 190 pounds (86 kilograms) when fully loaded with components. We recommend that you use a mechanical lift when lifting the system. Attempting this procedure alone could result in personal injury or equipment damage. Consider temporarily removing components such as hard drives while you move the system.

This section contains the following topics:

  • Replacing HDDs or SSDs in the Internal Drive Compartment
  • Replacing Hard Drives in the Optional Disk Expansion Tray
  • Replacing SAS/SATA Solid State Drives in the Rear Panel Bays

Replacing Hard Drives or Solid State Drives

This section includes the following topics:

4K Sector Format Drives Considerations

  • You must boot 4K sector format drives in UEFI mode, not legacy mode. See Setting Up Booting in UEFI Mode in the BIOS Setup Utility or Setting Up Booting in UEFI Mode in the Cisco IMC GUI.
  • Do not configure 4K sector format and 512-byte sector format drives as part of the same RAID volume.
  • Operating system support on 4K sector format drives is as follows: Windows (Windows Server 2012 and Windows Server 2012 R2) and Linux (RHEL 6.5, 6.6, 6.7, 7.0, and 7.2; SLES 11 SP3; SLES 12). VMware ESXi is not supported.
  • The 6 TB drives have 4096-byte sectors. VMware ESXi does not support 4K sector format drives and therefore does not work with the 6 TB drives. (A sector-size check from the host OS is shown below.)
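To confirm which sector format a drive actually reports, you can check from a running Linux host. A minimal sketch; the device name /dev/sdb is a placeholder:

# List logical and physical sector sizes for all block devices
lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC

# Or query a single drive directly
blockdev --getss --getpbsz /dev/sdb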

Setting Up Booting in UEFI Mode in the BIOS Setup Utility


Step 1. Use a web browser and the management IP address of the system to log in to the Cisco IMC GUI management interface.

Step 2. Select Compute and then Server 1.

Step 3. Click Launch KVM to open a virtual KVM window for the server node.

Step 4. In the Launch KVM dialog, select Server 1 and click Launch.

Step 5. Reboot server node 1 and watch the KVM window for the prompt to press F2.

Step 6. Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 7. Go to the Boot Options tab.

Step 8. Set UEFI Boot Options to Enabled.

Step 9. Under Boot Option Priorities, set your OS installation media (such as a virtual DVD) as Boot Option #1.

Step 10. Go to the Advanced tab.

Step 11. Select LOM and PCIe Slot Configuration.

Step 12. Set the PCIe Slot ID: HBA Option ROM to UEFI Only.

Step 13. Press F10 to save changes and exit the BIOS setup utility. Allow the server to reboot.

Step 14. After the server reboots and the OS installs, verify the installation:

a. Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

b. Go to the Boot Options tab.

c. Under Boot Option Priorities, verify that the OS you installed is listed as Boot Option #1.
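You can also confirm from the installed OS itself that it booted in UEFI mode rather than legacy mode. A quick check, assuming a Linux OS with the efibootmgr utility available:

# This directory exists only when the OS was booted through UEFI firmware
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy BIOS boot"

# On a UEFI system, efibootmgr lists the firmware boot entries and boot order
efibootmgr -v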


 

Setting Up Booting in UEFI Mode in the Cisco IMC GUI


Step 1. Use a web browser and the management IP address of the system to log in to the Cisco IMC GUI management interface.

Step 2. Select Compute and then Server 1.

Step 3. Select the BIOS tab.

Step 4. Under BIOS Properties, set Configured Boot Order to UEFI.

Step 5. Click Save Changes.

Step 6. Click Configure Boot Order.

Step 7. Select the Advanced tab.

Step 8. Click Add Local HDD.

Step 9. In the Add Local Disk dialog, enter the information for the 4K sector format drive. Enter a name and specify Slot M.

Step 10. Click Save Changes.

Step 11. Click Add Virtual Media.

Step 12. In the Add Virtual Media dialog, enter a name for your OS installation virtual media.

Step 13. Click Save Changes.

Step 14. Click Close.

Step 15. Click Launch KVM to open a virtual KVM window for the server node.

Step 16. In the Launch KVM dialog, select Server 1 and click Launch.

Step 17. Activate virtual media: pull down the Virtual Media menu on the KVM window and select Activate Virtual Devices.

Step 18. Reboot the server node.

Step 19. Press F6 during the boot to enter the boot device menu.

Step 20. Select UEFI: Cisco vKVM-Mapped vDVD and press Enter.

Step 21. Proceed with the installation of your OS.

After the OS installs and the system reboots, your OS is listed as a boot option.


 

Replacing HDDs or SSDs in the Internal Drive Compartment

This section contains the following topics:

Internal Drive Population Guidelines

The system has 56 internal drive bays in the main chassis. Figure 3-7 shows the internal drive bay numbering. For optimal performance, follow these guidelines when populating internal drives:

  • Populate HDDs starting from the lowest-numbered bays to the highest. Populate bays 1 through 14 (row 1), then bays 15 through 28 (row 2), and so on.
  • Populate internal SSDs starting from the highest-numbered bays to the lowest. Populate bay 56, then bay 55, and so on.
  • The four colored boxes shown in Figure 3-7 represent the four power groups in which power is distributed to the drive bays.

For example, a system with 36 HDDs and 2 SSDs should have the HDDs in bays 1 through 36 and the SSDs in bays 56 and 55.

Figure 3-7 Internal Drive Bay Numbering

 


Identifying a Faulty Internal Drive

The system has internal-drive fault LEDs on the right-front handle (see Figure 3-1). Use these LEDs to get an indication of the location of a failing drive.


Step 1. Observe the internal-drive fault LEDs on the right-front handle.

  • The two columns of LEDs correspond to the two halves of the internal drive compartment (under either the right- or left-side top cover).
  • The four numbered rows of LEDs correspond to the four horizontal rows of drive bays (14 drive bays in each row).

See Figure 3-8 for an example. In this example, the amber LED indicates that the failing drive is in the right half of the internal drive compartment, in row 3.

Step 2. Open the right- or left-side cover and look at the fault LEDs on the drive trays. A solid amber fault LED indicates a failed drive.

Figure 3-8 Internal-Drive Status LED Example

 

1. Internal-drive fault LEDs on right-front handle of system, indicating a faulty drive in the right side of row 3
2. Fault LED on drive carrier
3. Front of system
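You can also cross-check drive state, and control the Locate LED, from the node's operating system. The following is a sketch only, assuming a Broadcom-based RAID controller managed with the storcli64 utility; the controller (/c0), enclosure (e8), and slot (s2) numbers are placeholders for your topology:

# Show all physical drives and their states (Onln, UGood, Flt, and so on)
storcli64 /c0 /eall /sall show

# Blink the locate LED on a specific drive, then turn it off again
storcli64 /c0/e8/s2 start locate
storcli64 /c0/e8/s2 stop locate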

 

 

Replacing Internal Drives

Note: Cisco PID UCSC-C3X60-HD4TB (Toshiba model MG03SCA400) has reached end of life and should be replaced if any problems occur. This Cisco PID now uses a newer HGST model.


Note: SAS HDDs and SSDs are hot-swappable and can be replaced without removing power from the system.



Step 1. Slide the system out the front of the rack far enough so that you can open the top cover.

Caution: If you cannot safely view and access the component, remove the system from the rack.

Step 2. Identify a failing drive as described in Identifying a Faulty Internal Drive.

Step 3. Open the internal-drive compartment cover.

Step 4. Remove the faulty drive:

a. Press the release button on the drive carrier. The drive lever pops up.

b. Lift the drive lever to the fully open, 90-degree position, then lift the drive straight up out of its bay.

Spare drives are already installed in a carrier, so it is not necessary to remove the old drive from its carrier.

Step 5. Install a new drive:

Note: Observe the drive population guidelines in Internal Drive Population Guidelines.

a. Align the new drive with the empty bay. Orient the drive so that its connector aligns with the connector on the board.

b. Lower the drive until it touches the board connector and the drive lever begins to close.

c. Press the drive lever down flat until it clicks and locks.

Step 6. Close the chassis cover and push the system back into the rack.

Figure 3-9 Internal Drive Carrier Features

 

1. Drive lever
2. Release button
3. Drive fault LED
4. Drive activity LED


 

Replacing Hard Drives in the Optional Disk Expansion Tray

This section contains the following topics:

Disk Expansion Tray Drives Population Guidelines

The optional disk expansion tray can hold up to four 3.5-inch hard drives. Drive numbering is shown in Figure 3-10. When populating these drives, follow this guideline:

  • Populate drives starting from the lowest-numbered bay to the highest.

Figure 3-10 Disk Expansion Tray and Solid State Drive Numbering

 


Identifying a Faulty Disk Expansion Tray Drive

Each drive has a fault LED that lights solid amber to indicate a failing drive.

Replacing Drive Expander Module Drives

Note: SAS/SATA drives are hot-swappable and can be replaced without removing power from the system.



Step 1. Identify a failing drive as described in Identifying a Faulty Disk Expansion Tray Drive.

Step 2. Remove the faulty drive:

a. Press the release button on the drive carrier. The drive lever pops up.

b. Lift the drive lever to the fully open position, then pull the drive straight up out of its bay.

Spare drives are already installed in a carrier, so it is not necessary to remove the old drive from its carrier.

Step 3. Install a new drive:

a. Align the new drive with the empty bay and then push the drive in until it touches the board connector and the drive lever begins to close.

b. Press the drive lever down flat until it clicks and locks.

Figure 3-11 Drive Expander Module Drive Carrier Features

 

1. Drive lever
2. Release button
3. Drive fault LED
4. Drive activity LED


 

Replacing SAS/SATA Solid State Drives in the Rear Panel Bays

This section contains the following topics:

Rear-Panel SAS/SATA Solid State Drive Population Guidelines

There are four bays for SAS/SATA SSDs in the rear panel. Drive numbering is shown in Figure 3-12. When populating these drives, follow these guidelines:

  • Server 1 SSD1 and SSD2 can be managed by server node 1. See also Management Architecture.
  • Server 2 SSD1 and SSD2 can be managed by server node 2.
  • Populate drives starting with the lowest-numbered bay to the highest within each server-controlled pair.
  • If a pair of SSDs is not present, keep the blanking panel over the two empty SSD bays to ensure proper airflow (see Figure 3-12).

Identifying a Faulty Solid State Drive

Each solid state drive bay has a fault LED that lights solid amber to indicate a failing drive (see Rear-Panel LEDs and Buttons).

Replacing Solid State Drives

Note: SAS/SATA solid state drives are hot-swappable and can be replaced without removing power from the system.



Step 1. Remove a faulty solid state drive:

a. Grasp and pinch the release latch toward the center.

b. Pull the solid state drive straight out of the bay.

Step 2. Install a new solid state drive:

Note: Observe the drive population guidelines in Rear-Panel SAS/SATA Solid State Drive Population Guidelines.

a. Align the new drive with the empty bay (with the label facing up) and then push the drive in until it touches the board connector.

b. Grasp and pinch the release latch toward the center while you push the drive fully into the bay, and then release the latch.

Figure 3-12 Solid State Drive Bay Features

 

1. SSD release latch
2. Solid state drive fault LED (each bay)
3. Solid state drive activity LED (each bay)
4. Blanking panel over lower bays (when not in use)


 

Replacing Fan Modules

Each fan module contains two fans. See Figure 3-13 for the fan numbering.

You do not have to shut down or power off the system to replace fan modules because they are hot-swappable. Do not operate the system with a fan module removed for more than one minute.

Tip: Each fan module has a fault LED that lights amber if the fan module fails.



Step 1. Slide the system out the front of the rack far enough so that you can open the fan compartment cover.

Caution: If you cannot safely view and access the component, remove the system from the rack.

Step 2. Open the fan compartment cover as described in Opening the Main Chassis Top Covers.

Step 3. Remove the fan module:

a. Grasp the two latches on the top of the fan module and pinch them toward the center.

b. Lift the fan module straight out of the bay.

Step 4. Install a new fan module:

Note: The arrow on the fan module that indicates the air flow direction should point toward the rear of the system.

a. Align the fan module with the bay so that the connector on the bottom of the fan module is aligned with the socket on the floor of the chassis.

b. Lower the fan module until it touches the socket, then push down firmly until the latch locks.

Step 5. Close the fan compartment cover and then push the system back into the rack.

Figure 3-13 Fan Modules (Top View)

 

1. Fan module fault LED
2. Fan module release latches
3. Air flow direction arrow pointing toward rear of system

 

 


 

Replacing a Server Node

The system can support one or two server nodes.

  • Cisco IMC releases earlier than 2.0(13): If your S3260 system has only one server node, it must be installed in bay 1.
  • Cisco IMC releases 2.0(13) and later: If your S3260 system has only one server node, it can be installed in either server bay.
Note: Whichever bay a server node is installed in, it must have a corresponding SIOC: a server node in server bay 1 must be paired with a SIOC in SIOC bay 1, and a server node in server bay 2 must be paired with a SIOC in SIOC bay 2. See Overview of Cisco UCS S3260 Architecture for more information.


To replace a server node, including exporting and importing its configuration, see the service note for your server node:

Replacing a Disk Expansion Tray

The system can support one optional disk expansion tray, in server bay 2 only.

Tip: The tray contains one status LED (see Figure 3-14 for the location and Table 3-3 for definitions of the LED states).


Figure 3-14 Disk Expansion Tray External Features (Four-Drive Version Shown)

 

1. Disk expansion tray status LED
2. Disk expansion tray ejector levers (two)

Note: The disk expansion tray is hot-swappable, which means that you can remove it without shutting down system power.


The disk expansion tray is accessed from the rear of the system, so you do not have to pull the system out from the rack.


Step 1. Remove the disk expansion tray from the system:

a. Grasp the two ejector levers and pinch their latches to release the levers (see Figure 3-14).

b. Rotate both levers to the outside at the same time to evenly disengage the tray from the midplane connectors.

c. Pull the tray straight out from the system.

Step 2. Remove any drives from the old tray and move them to your new disk expansion tray. Install each drive to the same position that it occupied in the old tray.

Step 3. Install the new disk expansion tray (server bay 2 only):

a. With the two ejector levers open, align the new tray with the empty bay.

b. Push the tray into the bay until it engages with the midplane connectors.

c. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the tray.


 

Replacing a System I/O Controller (SIOC)

The system can support up to two system I/O controllers (SIOCs).

For information about the management architecture and the chassis management controller (CMC) that is in each SIOC, see Management Architecture.

This section contains the following topics:

Note: If you are replacing a Version 02 SIOC (UCS-S3260-SIOC) with a Version 03 SIOC (UCS-S3260-PCISIOC), use the procedure in Migrating From Version 02 SIOC to Version 03 SIOC. Version 03 SIOCs require Cisco IMC 4.0(1n) or later and, for UCS Manager-controlled systems, Cisco UCS Manager 4.0(1n) or later.

Note: The Version 03 SIOC is supported only with S3260 M5 or later server nodes.

Note: Do not mix different versions of SIOCs in the same system.

Note: If your system has two Version 03 SIOCs, they must have identical adapter cards. Do not mix adapter cards between Version 03 SIOCs in a system.

Note: A Version 01 SIOC from a Cisco C3160 system cannot be installed in a Cisco S3260 system.

Note: If you move a SIOC from one chassis to another, the SIOC's CMC configuration is treated as incompatible and is automatically deleted. The CMC then syncs with the active CMC configuration.


Figure 3-15 SIOC External Mechanical Features (Version 03 SIOC shown)

 

1. Lever thumbscrew
2. Right ejector lever
3. Left ejector lever

 

 

Replacing a SIOC in a Single-SIOC System

You do not have to slide the system out of the rack to remove the SIOC from the rear of the system. The SIOC can be replaced without powering off the chassis.


Step 1. Export the CMC configuration from the SIOC.

Step 2. Export the adapter configuration from the SIOC.

Step 3. Power off the connected server nodes based on the scenario:

a. If there is one or no NVMe SSD in the SIOC you are removing, power off the server node that is paired with the SIOC that you are removing (SIOC 1 is paired with server node 1; SIOC 2 is paired with server node 2).

b. If there are two NVMe SSDs in the SIOC you are removing, power off both server nodes.

See Shutting Down an Individual Server Node.

Step 4. Remove the SIOC from the system:

a. Loosen the single captive thumbscrew on the SIOC and then open its two hinged ejector levers to evenly disengage the SIOC from its midplane connector.

b. Pull the SIOC from the system.

Step 5. Version 03 SIOC only: If desired, transfer the adapter card and any NVMe SSDs from the existing Version 03 SIOC to the new Version 03 SIOC:

a. Remove the cover from the existing Version 03 SIOC as described in Removing the System I/O Controller (SIOC) Cover.

b. Remove the adapter card as described in Replacing a PCIe Adapter Card Inside the S3260 SIOC (Version 03 SIOC Only).

c. Remove the NVMe SSDs as described in Replacing an NVMe SSD Inside the S3260 SIOC (Version 03 SIOC Only).

d. Remove the cover from the replacement Version 03 SIOC.

e. Install the adapter card to the replacement SIOC as described in Replacing a PCIe Adapter Card Inside the S3260 SIOC (Version 03 SIOC Only).

f. Install the NVMe SSDs to the replacement SIOC as described in Replacing an NVMe SSD Inside the S3260 SIOC (Version 03 SIOC Only).

g. Install the cover to the replacement SIOC.

Step 6. Install the new SIOC:

a. Push the SIOC into its bay until it stops against the internal midplane.

b. Close the two ejector levers on the SIOC to fully engage its connector with the midplane connector.

c. Tighten the thumbscrew on the SIOC ejector levers.

Step 7. Wait for the CMC in the new SIOC to finish booting.

Step 8. Import the CMC configuration that you exported earlier to the new SIOC's CMC.

Step 9. Import the adapter configuration that you exported earlier to the new SIOC's CMC.

Step 10. Power on the server node that you powered off earlier.

Step 11. Update the SIOC firmware using the Host Upgrade Utility to ensure that the firmware version on the SIOC is compatible with the latest system firmware.

See the Cisco Host Upgrade Utility User Guide For S3260 Storage Servers for instructions on updating the firmware.

Note: If you are using Cisco Card NIC mode, which uses the SIOC's uplink interfaces to manage the system, you might need to configure the uplink port speed to either 4x10 Gbps or 40 Gbps to match your network installation. See Setting SIOC QSFP Port Speed.


Replacing a SIOC in a Dual-SIOC System

You do not have to slide the system out of the rack to remove the SIOC from the rear of the system. The SIOC can be replaced without powering off the chassis.


Step 1. Set the CMC in the SIOC as the standby CMC, if it is not already.

Step 2. Export the adapter configuration from the SIOC.

Step 3. Power off the connected server nodes based on the scenario:

a. If there is one or no NVMe SSD in the SIOC you are removing, power off the server node that is paired with the SIOC that you are removing (SIOC 1 is paired with server node 1; SIOC 2 is paired with server node 2).

b. If there are two NVMe SSDs in the SIOC you are removing, power off both server nodes.

See Shutting Down an Individual Server Node.

Step 4. Remove the SIOC from the system:

a. Loosen the single captive thumbscrew on the SIOC and then open its two hinged ejector levers to evenly disengage the SIOC from its midplane connector.

b. Pull the SIOC from the system.

Step 5. Version 03 SIOC only: If desired, transfer the adapter card and any NVMe SSDs from the old Version 03 SIOC to the new Version 03 SIOC:

a. Remove the cover from the old Version 03 SIOC as described in Removing the System I/O Controller (SIOC) Cover.

b. Remove the adapter card as described in Replacing a PCIe Adapter Card Inside the S3260 SIOC (Version 03 SIOC Only).

c. Remove the NVMe SSDs as described in Replacing an NVMe SSD Inside the S3260 SIOC (Version 03 SIOC Only).

d. Remove the cover from the new Version 03 SIOC.

e. Install the adapter card to the replacement SIOC as described in Replacing a PCIe Adapter Card Inside the S3260 SIOC (Version 03 SIOC Only).

f. Install the NVMe SSDs to the replacement SIOC as described in Replacing an NVMe SSD Inside the S3260 SIOC (Version 03 SIOC Only).

g. Install the cover to the replacement SIOC.

Step 6. Install the new SIOC:

a. Push the SIOC into its bay until it stops against the internal midplane.

b. Close the two ejector levers on the SIOC to fully engage the SIOC connector with the midplane.

c. Tighten the thumbscrew on the SIOC ejector levers.

Step 7. Wait for the CMC in the new SIOC to finish booting.

Step 8. Power on the server node that you powered off earlier.

The configuration in the active CMC is automatically synced with the standby CMC in the SIOC that you just installed.

Step 9. Update the SIOC firmware using the Host Upgrade Utility to ensure that the firmware version on the SIOC is compatible with the latest system firmware.

See the Cisco Host Upgrade Utility User Guide For S3260 Storage Servers for instructions on updating the firmware.

Note: If you are using Cisco Card NIC mode, which uses the SIOC's uplink interfaces to manage the system, you might need to configure the uplink port speed to either 4x10 Gbps or 40 Gbps to match your network installation. See Setting SIOC QSFP Port Speed.

Step 10. Import the adapter configuration that you exported earlier to the new SIOC's CMC.


 

Setting SIOC QSFP Port Speed

Optional: Set the port speeds for the SIOC QSFP ports:


Step 1. Connect a keyboard and console to the system, or log in remotely and view a virtual KVM window.

Step 2. Reboot the system and press F8 when prompted to launch the Cisco IMC Configuration Utility.

Step 3. After the first utility screen is displayed, press F1 twice to go to the third utility screen, which has the settings for Adapter Port Speeds.

Step 4. Set the desired port speeds. "Adapter-1" is SIOC 1; "Adapter-2" is SIOC 2, if present.

Step 5. Press F10 to save your changes and exit the utility.


 

Updating SIOC Adapter Firmware in a System With Dual SIOCs But Only One Server Node

The recommended method for updating SIOC adapter firmware is to run the Cisco Host Upgrade Utility (HUU) on the server node that is associated with the SIOC. However, if a system has only one server node and two SIOCs, this method does not work to upgrade firmware on the SIOC that is not associated with a server node (SIOC 1 is associated with a server node in bay 1; SIOC 2 is associated with a server node in bay 2).

In this case, you must manually upgrade the adapter firmware on the unassociated SIOC:

Using the Cisco IMC GUI Interface to Update SIOC Adapter Firmware


Step 1. Install the SIOC adapter firmware update:

a. In the Admin menu, click Firmware Management.

b. In the Component column, select Adapter-SIOC1 or Adapter-SIOC2.

c. Click Update. The Update Firmware dialog opens.

d. Browse to select the firmware for the SIOC adapter.

e. Click Install Firmware to begin download and installation.

Step 2. Activate the firmware:

a. In the Component column, select Adapter-SIOC1 or Adapter-SIOC2.

b. Click Activate.

Note: You must reset the adapter to make the activated firmware version the running version. Resetting the adapter also causes the host to reset.

Step 3. Reset the adapter in the SIOC:

a. In the Navigation pane, click the Networking menu.

b. Click the Adapter Card tab. The General tab appears.

c. Select Adapter-SIOC1 or Adapter-SIOC2.

d. In the Actions area of the General tab, click Reset and then click Yes to confirm.

The adapter and host reset. The activated firmware becomes the running version.


 

Using the Cisco IMC CLI Interface to Update SIOC Adapter Firmware


Step 1. Update and activate the adapter firmware by using the following commands:

Server# scope chassis
Server/chassis# update-adapter-fw <protocol> <remote server IP address> <image file path> <activate|no-activate> [SIOC slot number]
 

For example:

server1/chassis# update-adapter-fw tftp nnn.nnn.nnn.nnn /fw-image.bin activate 1
Adapter firmware update has started. Please check the status using "show adapter detail". You have chosen to automatically activate the new firmware image. Please reset your adapter after the update finishes.
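The update runs in the background. As the message indicates, you can poll its progress from the same chassis scope before resetting the adapter:

server/chassis# show adapter detail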
 

Step 2. Reset the adapter:

Note: You must reset the adapter to make the activated firmware version the running version. Resetting the adapter also causes the host to reset.


server/chassis# adapter-reset <SIOC1 or SIOC2>
 

For example:

server/chassis# adapter-reset SIOC1
 

The adapter and host reset. The activated firmware becomes the running version.


 

Replacing a Power Supply

The system requires four power supplies, which are redundant as 3+1.

Note: Do not mix power supply types in the server. All installed power supplies must be identical.


This section contains these procedures:

Replacing an AC Power Supply

To replace or install an AC power supply, follow these steps:

Note: You do not have to power off the system to replace a single power supply because the power supplies are 3+1 redundant.



Step 1. Remove the power supply that you are replacing, or remove a blank panel from an empty bay (see Figure 3-16):

a. Remove the power cord from the power supply that you are replacing.

b. Grasp the power supply handle while pinching the release lever toward the handle.

c. Pull the power supply out of the bay.

Step 2. Install a new power supply:

a. Grasp the power supply handle and insert the new power supply into the empty bay.

b. Push the power supply into the bay until the release lever locks.

c. Connect the power cord to the new power supply.

d. If you powered off the system, press and hold the system Power button for four seconds to return the system to main power mode.

Figure 3-16 Removing and Replacing Power Supplies

 

1. Power supply release lever
2. Power supply handle
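To confirm that a newly installed power supply is recognized and healthy without inspecting its LED, you can read the PSU sensors remotely. A sketch under the same assumptions as the earlier ipmitool examples (IPMI over LAN enabled in the Cisco IMC; placeholder address and credentials):

# List power supply sensor records and their current states
ipmitool -I lanplus -H 10.0.0.10 -U admin -P password sdr type "Power Supply"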


 

Installing a DC Power Supply (First-Time Installation)

Warning: A readily accessible two-poled disconnect device must be incorporated in the fixed wiring. Statement 1022

Warning: This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations. Statement 1045

Warning: Installation of the equipment must comply with local and national electrical codes. Statement 1074


Caution: Before beginning this wiring procedure, turn off the DC power source from your facility's circuit breaker to avoid electric shock hazard.


Step 1. Turn off the DC power source from your facility's circuit breaker to avoid electric shock hazard.

Note: The required DC input cable is Cisco part CAB-48DC-40A-8AWG. This 3-meter cable has a 3-pin connector on one end that is keyed to the DC input socket on the power supply, as shown in Figure 3-17. The other end of the cable has no connector so that you can wire it to your facility's DC power.

Step 2. Wire the non-terminated end of the cable to your facility's DC power input source.

Step 3. Connect the terminated end of the cable to the socket on the power supply. The connector is keyed so that the wires align for correct polarity and ground, as shown in Figure 3-17.

Step 4. Return DC power from your facility's circuit breaker.

Figure 3-17 1050 W, –48 VDC Power Supply and Cable

 

1. Keyed cable connector (CAB-48DC-40A-8AWG)
2. Keyed DC input socket
3. PSU status LED (see Rear-Panel LEDs and Buttons for details)
 

 


 

Replacing a DC Power Supply

Note: You do not have to power off the system to replace a single power supply because the power supplies are 3+1 redundant.



Step 1. Remove the power supply that you are replacing, or remove a blank panel from an empty bay:

a. Remove the CAB-48DC-40A-8AWG keyed cable connector from the power supply that you are replacing.

b. Grasp the power supply handle while pinching the release lever toward the handle.

c. Pull the power supply out of the bay.

Step 2. Install a new power supply:

a. Grasp the power supply handle and insert the new power supply into the empty bay.

b. Push the power supply into the bay until the release lever locks.

c. Connect the CAB-48DC-40A-8AWG keyed cable connector to the new power supply.

d. If you powered off the system, press and hold the system Power button for four seconds to return the system to main power mode.


 

Replacing DIMMs Inside a Server Node

To replace DIMMs inside a server node, see the service note for your server node:

Replacing CPUs and Heatsinks Inside a Server Node

To replace CPUs inside a server node, see the service note for your server node:

Replacing a Storage Controller Card Inside the Server Node

To replace a storage controller card inside a server node, see the service note for your server node:

Replacing an SSD Inside the Server Node (C3X60 M4 or S3260 M5 Only)

To replace a solid state drive (SSD) inside a C3X60 M4 or S3260 M5 server node, see the service note for your server node:

Replacing an RTC Battery Inside the Server Node

To replace an RTC battery inside a server node, see the service note for your server node:

Replacing an Internal USB Drive Inside the Server Node (C3X60 M3 Only)

For instructions on replacing an internal USB drive inside a C3X60 M3 server node, and for enabling or disabling the USB port, see the C3X60 M3 service note:

Installing a Trusted Platform Module (TPM) Inside the Server Node

To install a TPM inside a server node, see the service note for your server node:

Replacing an I/O Expander (C3X60 M4 or S3260 M5 Server Nodes Only)

The C3X60 M4 or S3260 M5 server node might have an optional I/O expander attached to its top. To replace an I/O expander, see the service note for your server node:

Replacing a PCIe Card Inside the I/O Expander (C3X60 M4 or S3260 M5 Server Nodes Only)

To replace a PCIe card inside an optional I/O expander, see the service note for your server node:

Replacing a Storage Controller Card Inside the I/O Expander (C3X60 M4 Server Nodes Only)

To replace a storage controller card inside an optional I/O expander, see the service note for the Cisco UCS C3000 M4 server node:

  • Cisco UCS C3000 M4 Server Node For Cisco UCS C3260 System Service Note
Note: The storage controllers that are supported with the S3260 M5 server node are not supported in the I/O expander.


Replacing an NVMe SSD Inside the I/O Expander (C3X60 M4 or S3260 M5 Server Nodes Only)

To replace an NVMe SSD inside an optional I/O expander, see the service note for your server node:

Replacing an RTC Battery Inside the S3260 SIOC

The real-time clock (RTC) battery retains settings when the SIOC is disconnected from power. The battery type in the SIOC is CR1632. Cisco supports the industry-standard CR1632 battery, which can be purchased from most electronics stores.

Note: When the RTC battery is removed or completely loses power, the settings that were stored in the CMC of the SIOC are lost. You must reconfigure the CMC settings after installing a new battery.



Step 1. Power off the connected server nodes based on the scenario:

a. If there is one or no NVMe SSD in the SIOC you are removing, power off the server node that is paired with the SIOC that you are removing (SIOC 1 is paired with server node 1; SIOC 2 is paired with server node 2).

b. If there are two NVMe SSDs in the SIOC you are removing, power off both server nodes.

See Shutting Down an Individual Server Node.

Step 2. Remove the SIOC from the system:

a. Loosen the single captive thumbscrew on the SIOC and then open its two hinged ejector levers to evenly disengage the SIOC from its midplane connector.

b. Pull the SIOC from the system.

Step 3. Remove the SIOC cover as described in Removing the System I/O Controller (SIOC) Cover.

Step 4. Gently pry under the battery to lift it from its socket on the SIOC board.

Note: The battery socket is in a different location in the Version 02 and Version 03 SIOCs. See the following figures.

Step 5. Insert the new battery into the socket and then press down until it sits flat.

Step 6. Replace the cover to the SIOC.

Step 7. Replace the SIOC in the system:

a. Push the SIOC into its bay until it stops against the internal midplane.

b. Close the two levers on the SIOC to fully engage the SIOC connector with the midplane.

c. Tighten the thumbscrew on the SIOC levers.

Step 8. Power on the server node that you powered off earlier.

Step 9. Reconfigure the settings of the CMC in this SIOC.

Figure 3-18 RTC Battery Location Inside the Version 02 S3260 SIOC (UCS-S3260-SIOC)

 

1. Battery socket on Version 02 SIOC board, with CR1632 battery

 

 

Figure 3-19 RTC Battery Location Inside the Version 03 S3260 SIOC (UCS-S3260-PCISIOC)

1. Battery socket on Version 03 SIOC board, with CR1632 battery

 

 


 

Replacing a PCIe Adapter Card Inside the S3260 SIOC (Version 03 SIOC Only)

The Version 03 SIOC uses a removable PCIe adapter card that plugs into a horizontal socket. (The Version 02 SIOC uses an embedded chip and so does not have a removable adapter card.)

Cisco VIC Cards Supported in Version 03 SIOC (UCS-S3260-PCISIOC)

At this time, the following Cisco Virtual Interface Cards (VICs) are supported in the Version 03 SIOC:

  • Cisco UCS VIC 1455 (UCSC-PCIE-C25Q-04)
  • Cisco UCS VIC 1495 (UCSC-PCIE-C100-04)
Note: If your S3260 system has two Version 03 SIOCs, the adapter card in each must be identical. Do not mix adapter cards between Version 03 SIOCs.



Step 1. Power off the connected server nodes based on the scenario:

a. If there is one or no NVMe SSD in the SIOC you are removing, power off the server node that is paired with the SIOC that you are removing (SIOC 1 is paired with server node 1; SIOC 2 is paired with server node 2).

b. If there are two NVMe SSDs in the SIOC you are removing, power off both server nodes.

See Shutting Down an Individual Server Node.

Step 2. Remove the SIOC from the system:

a. Loosen the single captive thumbscrew on the SIOC and then open its two hinged ejector levers to evenly disengage the SIOC from its midplane connector.

b. Pull the SIOC from the system.

Step 3. Remove the SIOC cover as described in Removing the System I/O Controller (SIOC) Cover.

Step 4. Remove the existing adapter card:

a. Remove the SIOC side panel that is closest to the adapter card. Turn the SIOC upside down and remove the three screws that secure the side panel from the underside of the SIOC.

b. Turn the SIOC over.

c. Slide the adapter card horizontally to disconnect its edge connector from the socket.

Step 5. Install the new adapter card:

a. Set the card in place on the SIOC floor. Align the card's edge connector with the socket.

b. Gently slide the card into the socket, pushing evenly on both ends of the card. Stop when the card is fully in the socket and its rear-panel tab sits flat.

c. Return the side panel to the SIOC. Set the side panel in place so that its edge sits on top of the SIOC floor and its three screw holes align with those on the SIOC.

d. Turn the SIOC upside down and install the three screws that secure the side panel.

Step 6. Turn the SIOC over and reinstall the SIOC cover. Install the six screws that secure the cover.

Step 7. Replace the SIOC in the system:

a. Push the SIOC into its bay until it stops against the internal midplane.

b. Close the two levers on the SIOC to fully engage the SIOC connector with the midplane.

c. Tighten the thumbscrew on the SIOC levers.

Step 8. Power on the server node that you powered off earlier.

Figure 3-20 Adapter Card in Version 03 SIOC (UCS-S3260-PCISIOC)

306579.jpg

1  Horizontal socket for adapter card
2  Removable SIOC side panel (three screws on underside)
3  Location of the three screws that secure the side panel (access from the underside of the SIOC)

Replacing an NVMe SSD Inside the S3260 SIOC (Version 03 SIOC Only)

SIOC NVMe SSD Population Rules

The Version 03 SIOC can contain up to two 2.5-inch NVMe SSDs in a sled assembly. You must remove the sled when installing or replacing the SSDs.

Each NVMe SSD slot in a SIOC is numbered for the server that supports it (SERVER 1 NVME or SERVER 2 NVME).

  • Populate an NVMe SSD in SERVER 1 NVME only if you have a server node in upper server bay 1.
  • Populate an NVMe SSD in SERVER 2 NVME only if you have a server node in lower server bay 2.

Note: If your system configuration has an M5 server node in lower server bay 2 and an I/O expander in upper server bay 1, you cannot populate an NVMe SSD in the SERVER 1 NVME slot of either SIOC. See Figure 3-21 and Figure 3-22 for illustrations of the system connectivity and labeling.


Figure 3-21 NVMe SSD Connectivity in Version 03 SIOC (UCS-S3260-PCISIOC)

306805.jpg

Procedure


Step 1 Power off the connected server nodes, as appropriate for your configuration:

a. If the SIOC that you are removing contains one NVMe SSD or none, power off the server node that is paired with that SIOC (SIOC 1 is paired with server node 1; SIOC 2 is paired with server node 2).

b. If the SIOC that you are removing contains two NVMe SSDs, power off both server nodes.

See Shutting Down an Individual Server Node.

Step 2 Remove the SIOC from the system:

a. Loosen the single captive thumbscrew on the SIOC and then open its two hinged ejector levers to evenly disengage the SIOC from its midplane connector.

b. Pull the SIOC from the system.

Step 3 Remove the SIOC cover as described in Removing the System I/O Controller (SIOC) Cover.

Note: The NVMe SSD sled (UCS-S3260-NVMSLD2) in the Version 03 SIOC is not interchangeable with the NVMe SSD sled (UCS-S3260-NVMSLD1) in the S3260 M5 server node.

Step 4 Remove the NVMe SSD sled assembly from the SIOC.

Loosen the two captive thumbscrews that secure the sled to the SIOC floor and then pull the sled so that the connectors on any installed SSDs disengage from the two sockets on the SIOC board.

Figure 3-22 NVMe SSDs in Version 03 SIOC (UCS-S3260-PCISIOC)

306580.jpg

1  NVMe sled-assembly thumbscrews
2  SERVER 1 NVME slot. Populate this slot only when you have a server node in upper server bay 1.
3  SERVER 2 NVME slot. Populate this slot only when you have a server node in lower server bay 2.

Step 5 Remove the three screws that secure an SSD to the sled. Two screws enter from the outside edge and one screw enters from the underside of the sled.

Figure 3-23 NVMe SSD Securing Screws on Sled

306581.jpg

1  Screws that secure the SERVER 1 NVME SSD to the sled assembly (three); two enter from the side and one from the underside.
2  Screws that secure the SERVER 2 NVME SSD to the sled assembly (three); two enter from the side and one from the underside.

Step 6 Install a new NVMe SSD:

Note: Before installing, see the SIOC NVMe SSD Population Rules.

a. Install the new SSD on the sled: set the SSD in place so that its connector end faces away from the bracket.

b. Install the three screws that secure the SSD to the sled.

c. Set the sled into the SIOC so that the connectors on the two SSDs align with the two sockets on the SIOC board.

d. Slide the SSD connectors into the sockets as you tilt the sled down flat.

e. Tighten the two thumbscrews on the sled assembly to secure it to the SIOC board standoffs.

Step 7 Reinstall the SIOC cover.

Step 8 Reinstall the SIOC in the system:

a. Push the SIOC into its bay until it stops against the internal midplane.

b. Close the two levers on the SIOC to fully engage the SIOC connector with the midplane.

c. Tighten the thumbscrew on the SIOC levers.

Step 9 Power on the server node or nodes that you powered off earlier.


 

Replacing an S3260 Chassis

Use this procedure to replace an S3260 chassis and transfer all components to the new chassis.

Procedure


Step 1 Export the component configurations (CMC, BMC, and VIC adapters) and save them to a local computer by using the import-export command mode.

Refer to the Cisco UCS Integrated Management Controller CLI Configuration Guide for S3260 Storage Servers: see the chapter "Server Utilities" and the section "Exporting and Importing the Cisco IMC and BMC Configuration."
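
For example, an export of the Cisco IMC configuration over TFTP from the import-export command mode looks roughly like the following. This is an illustrative sketch based on the guide cited above; the server address and file path are placeholders, and newer Cisco IMC releases also support other protocols (such as FTP, SFTP, SCP, and HTTP) and may prompt for a passphrase:

    Server# scope cimc
    Server /cimc # scope import-export
    Server /cimc/import-export # export-config tftp 192.0.2.34 /backups/s3260-config.xml

Repeat the applicable export for each component configuration that you will need to restore later.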

Step 2 Perform a graceful shutdown of all server nodes.

See Shutting Down an Individual Server Node for more information.

Step 3 Disconnect all power cords from all power supplies to fully remove power from the chassis.

Step 4 Remove each server node and install it in the same bay position in the new chassis.

If a server node has an I/O expander attached, move the node and the expander together as one unit.

See the service note for your server node version for more information: Server Node Service Notes.

Step 5 If the system has a disk expansion tray in place of a server node, remove the tray with its disks in place and install it in the same bay position in the new chassis.

See Replacing a Disk Expansion Tray for more information.

Step 6 Remove each SIOC and install it in the same bay position in the new chassis.

See Replacing a System I/O Controller (SIOC) for more information.

Step 7 Remove each rear-panel boot SSD and install it in the same bay position in the new chassis.

See Replacing SAS/SATA Solid State Drives in the Rear Panel Bays for more information.

Step 8 Remove each power supply and install it in the same bay position in the new chassis.

See Replacing a Power Supply for more information.

Step 9 Open the main chassis top covers on the old and new chassis.

See Opening the Main Chassis Top Covers for more information.

Step 10 Remove each top-loading hard drive and install it in the same bay position in the new chassis.

See Replacing HDDs or SSDs in the Internal Drive Compartment for more information.

Step 11 Open the fan compartment cover on the old and new chassis.

Step 12 Remove each cooling fan and install it in the same bay position in the new chassis.

See Replacing Fan Modules for more information.

Step 13 If you have not done so already, remove the empty chassis from your rack.

Step 14 Close all covers on the new chassis.

Step 15 Install the new chassis in your rack.

Caution: This system weighs approximately 190 pounds (86 kilograms) when fully loaded with components. We recommend that you use a mechanical lift when lifting the system. Attempting this procedure without a lift could result in personal injury or equipment damage. Although it is not required, consider temporarily removing components such as hard drives while you move the system.

See Installing the System in a Rack for instructions.

Note: When a SIOC is moved to a new chassis, its CMC settings are reset to the factory defaults, so you must perform initial setup as you would for a new system.


Step 16 Connect cables to the system, power it on, and log in to the Cisco IMC Configuration Utility as described in Connecting the System and Opening the Setup Utility.

Step 17 Configure the network settings for the system as described in Setting Up the System Using the Cisco IMC Configuration Utility.

Step 18 After the system is connected to the network, use the Host Upgrade Utility to update the system to your desired firmware level and to ensure that all components in the system are running compatible software levels.

See the Cisco Host Upgrade Utility User Guide for S3260 Storage Servers for instructions.

Step 19 Import the component configurations that you exported in Step 1.

Refer to the Cisco UCS Integrated Management Controller CLI Configuration Guide for S3260 Storage Servers: see the chapter "Server Utilities" and the section "Exporting and Importing the Cisco IMC and BMC Configuration."
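
The import mirrors the export in Step 1. A sketch under the same assumptions (placeholder address and path; verify the exact syntax against the cited guide for your release):

    Server# scope cimc
    Server /cimc # scope import-export
    Server /cimc/import-export # import-config tftp 192.0.2.34 /backups/s3260-config.xml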

Step 20 Verify that all configurations imported successfully.

Step 21 Verify that chassis health is optimal.
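
One way to spot-check chassis health is from the Cisco IMC CLI. The following is an illustrative sketch using standard C-Series command names (the exact fields and subcommands vary by release); review the output for faults, power state, and component status:

    Server# scope chassis
    Server /chassis # show detail
    Server /chassis # show psu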


 

Service Headers on the Server Node Board

The server node includes headers that you can jumper for certain service functions.

For information about the service headers, see the service note for your server node: