Cisco UCS S3260 System Storage Management

Storage Server Features and Components Overview

Storage Server Features

The following table summarizes the Cisco UCS S3260 system features:

Table 1. Cisco UCS S3260 System Features

Feature

Description

Chassis

Four rack unit (4RU) chassis

Processors

  • Cisco UCS S3260 M3 server nodes: Two Intel Xeon E5-2600 v2 Series processors inside each server node.

  • Cisco UCS S3260 M4 server nodes: Two Intel Xeon E5-2600 v4 Series processors inside each server node.

  • Cisco UCS S3260 M5 server nodes: Two Intel Skylake 2S-EP processors inside each server node.

Memory

Up to 16 DIMMs inside each server node.

Multi-bit error protection

This system supports multi-bit error protection.

Storage

The system has the following storage options:

  • Up to 56 top-loading 3.5-inch drives

  • Up to four 3.5-inch, rear-loading drives in the optional drive expander module

  • Up to four 2.5-inch, rear-loading SAS solid state drives (SSDs)

  • One 2.5-inch NVMe drive inside the server node

    Note

     

    This is applicable for S3260 M4 servers only.

  • Two 7 mm NVMe drives inside the server node

    Note

     

    This is applicable for S3260 M5 servers only.

  • Two 15 mm NVMe drives supported in the I/O expander

Disk Management

The system supports up to two storage controllers:

  • One dedicated mezzanine-style socket for a Cisco storage controller card inside each server node

RAID Backup

The supercap power module (SCPM) mounts to the RAID controller card.

PCIe I/O

The optional I/O expander provides two x8 PCIe Gen 3 expansion slots.

Release 3.2(3) and later supports the following for S3260 M5 servers:

  • Intel X550 dual-port 10GBase-T

  • QLogic QLE2692 dual-port 16G Fibre Channel HBA

  • N2XX-AIPCI01 Intel X520 Dual Port 10Gb SFP+ Adapter

Network and Management I/O

The system can have one or two system I/O controllers (SIOCs). These provide rear-panel management and data connectivity.

  • Two 40-Gb SFP+ ports on each SIOC.

  • One 10/100/1000 Ethernet dedicated management port on each SIOC.

The server nodes each have one rear-panel KVM connector that can be used with a KVM cable, which provides two USB, one VGA DB-15, and one serial DB-9 connector.

Power

Two or four power supplies, 1050 W each (hot-swappable and redundant as 2+2).

Cooling

Four hot-swappable internal fan modules that provide front-to-rear cooling. Each fan module contains two fans.

In addition, there is one fan in each power supply.

Front Panel Features

The following image shows the front panel features for the Cisco UCS S3260 system:

Figure 1. Front Panel Features


  1. Operations panel

  2. System Power button/LED

  3. System unit identification button/LED

  4. System status LED

  5. Fan status LED

  6. Temperature status LED

  7. Power supply status LED

  8. Network link activity LED

  9. Pull-out asset tag (not visible under front bezel)

  10. Internal-drive status LEDs

Rear Panel Features

The following image shows the rear panel features for the Cisco UCS S3260 system:

Figure 2. Rear Panel Features



  1. Server bay 1:

     • (Optional) I/O expander, as shown (with Cisco UCS S3260 M4 and M5 server node only)

     • (Optional) server node

     • (Optional) drive expansion module

  2. Server bay 2:

     • (Optional) server node (Cisco UCS S3260 M4 and M5 shown)

     • (Optional) drive expansion module

  3. System I/O controller (SIOC):

     • SIOC 1 is required if you have a server node in server bay 1

     • SIOC 2 is required if you have a server node in server bay 2

  4. Power supplies (four, redundant as 2+2)

  5. 40-Gb SFP+ ports (two on each SIOC)

  6. Chassis Management Controller (CMC) Debug Firmware Utility port (one on each SIOC)

  7. 10/100/1000 dedicated management port, RJ-45 connector (one on each SIOC)

  8. Not used at this time

  9. Not used at this time

  10. Solid state drive bays (up to four 2.5-inch SAS SSDs):

      • SSDs in bays 1 and 2 require a server node in server bay 1

      • SSDs in bays 3 and 4 require a server node in server bay 2

  11. Cisco UCS S3260 M4 server node label (M4 SVRN)

      Note: This label identifies a Cisco UCS S3260 M4 or M5 server node. The Cisco UCS S3260 M3 server node does not have a label.

  12. KVM console connector (one on each server node). Used with a KVM cable that provides two USB, one VGA, and one serial connector.

  13. Server node unit identification button/LED

  14. Server node power button

  15. Server node reset button (resets the chipset in the server node)

Storage Server Components

Server Nodes

The Cisco UCS S3260 system consists of one or two server nodes, each with two CPUs; 128, 256, or 512 GB of DIMM memory; and either a RAID card with up to 4 GB of cache or a pass-through controller. The server nodes can be one of the following:

  • Cisco UCS S3260 M3 Server Node

  • Cisco UCS S3260 M4 Server Node—This node might include an optional I/O expander module that attaches to the top of the server node.

  • Cisco UCS S3260 M5 Server Node—This node might include an optional I/O expander module that attaches to the top of the server node.

Disk Slots

The Cisco UCS S3260 chassis has 4 rows of 14 disk slots on the HDD motherboard and 4 additional disk slots on the HDD expansion tray. The following image shows the disk arrangement for the 56 top-accessible, hot-swappable 3.5-inch 6 TB or 4 TB 7200-rpm NL-SAS HDD drives. Each disk slot has two SAS ports, and each port is connected to a SAS expander in the chassis.

Figure 3. Cisco UCS S3260 Top View

The following image shows the Cisco UCS S3260 chassis with the 4 additional disk slots on the HDD expansion tray.

Figure 4. Cisco UCS S3260 with the HDD Expansion Tray (Rear View)


If you have two server nodes with two SIOCs, you will have the following functionality:

  1. The top server node works with the left SIOC (Server Slot 1 with SIOC 1).

  2. The bottom server node works with the right SIOC (Server Slot 2 with SIOC 2).

If you have one server node with two SIOCs, you can enable the Server SIOC Connectivity functionality. Beginning with release 3.1(3), the Cisco UCS S3260 system supports Server SIOC Connectivity. Using this functionality, you can configure the data path through both the primary and auxiliary SIOCs when the chassis has a single server and dual SIOCs.

SAS Expanders

The Cisco UCS S3260 system has two SAS expanders that run in redundant mode and connect the disks at the chassis level to storage controllers on the servers. The SAS expanders provide two paths between a storage controller and each disk, enabling high availability. They provide the following functionality:

  • Manage the pool of hard drives.

  • Zone the hard drives to the storage controllers on the servers.

Beginning with release 3.2(3a), Cisco UCS Manager can enable single-path access to a disk by configuring a single DiskPort per disk slot. This ensures that the server discovers only a single device and avoids a multipath configuration.

The following table describes how the ports in each SAS expander are connected to the disks based on the type of deployment.

Port range   Connectivity
1-56         Top-accessible disks
57-60        Disks in the HDD expansion tray


Note


The number of SAS uplinks between the storage controller and the SAS expander can vary based on the type of controller equipped in the server.


Storage Enclosures

A Cisco UCS S3260 system has the following types of storage enclosures:

Chassis Level Storage Enclosures
  • HDD motherboard enclosure—The 56 dual-port disk slots in the chassis comprise the HDD motherboard enclosure.

  • HDD expansion tray—The 4 additional dual-port disk slots in the Cisco UCS S3260 system comprise the HDD expansion tray.


    Note


    The HDD expansion tray is a field-replaceable unit (FRU). The disks remain unassigned upon insertion and can be assigned to storage controllers. For detailed steps on how to perform disk zoning, see Disk Zoning Policies.
Server Level Storage Enclosures

Server level storage enclosures are dedicated enclosures that are pre-assigned to the server. These can be one of the following:

  • Rear Boot SSD enclosure—This enclosure contains two 2.5-inch disk slots on the rear panel of the Cisco UCS S3260 system. Each server has two dedicated disk slots. These disk slots support SATA SSDs.

  • Server board NVMe enclosure—This enclosure contains one PCIe NVMe controller.


Note


In the Cisco UCS S3260 system, even though disks can be physically present on the two types of enclosures described above, the host OS views all the disks as part of one SCSI enclosure. They are connected to SAS expanders that are configured to run as a single SES enclosure.


Storage Controllers

Mezzanine Storage Controllers

The following table lists the storage controller type, firmware type, modes, sharing and OOB support for the various storage controllers.

Table 2.

Storage Controller Type   Firmware Type      Modes           Sharing   OOB Support
UCSC-S3X60-R1GB           Mega RAID          HW RAID, JBOD   No        Yes
UCS-C3K-M4RAID            Mega RAID          HW RAID, JBOD   No        Yes
UCSC-S3X60-HBA            Initiator Target   Pass through    Yes       Yes
UCS-S3260-DHBA            Initiator Target   Pass through    Yes       Yes
UCS-S3260-DRAID           Mega RAID          HW RAID, JBOD   No        Yes

Other storage controllers
SW RAID Controller—The servers in the Cisco UCS S3260 system support two dedicated internal SSDs embedded into the PCIe riser that is connected to the SW RAID Controller. This controller is supported on the Cisco C3000 M3 servers.

NVMe Controller—This controller is used by servers in the Cisco UCS S3260 system for inventory and firmware updates of NVMe disks.

For more details about the storage controllers supported in the various server nodes, see the related service note.

Cisco UCS S3260 Storage Management Operations

You can perform the following storage management operations with the Cisco UCS Manager integrated Cisco UCS S3260 system.

Disk Sharing for High Availability

The SAS expanders in the Cisco UCS S3260 system can manage the pool of drives at the chassis level. To share disks for high availability, perform the following:

  1. Create disk zoning policies.

  2. Create disk slots and assign ownership.

  3. Associate disk zoning policies to the chassis profile.

See the "Disk Zoning Policies" section in this guide.

Storage Profiles, Disk Groups, and Disk Group Configuration Policies

You can use Cisco UCS Manager storage profiles and disk group policies to define storage disks, disk allocation, and management in the Cisco UCS S3260 system.

See the "Storage Profiles" section in the Cisco UCS Manager Storage Management Guide, Release 3.2.

Storage Enclosure Operations

You can swap the HDD expansion tray with a server node, or remove the tray if it was previously inserted.

See the "Removing Chassis Level Storage Enclosures" section in this guide.

Disk Sharing for High Availability

Disk Zoning Policies

You can assign disk drives to the server nodes using disk zoning. Disk zoning can be performed on the controllers in the same server or on the controllers on different servers. Disk ownership can be one of the following:
Unassigned

Unassigned disks are those not visible to the server nodes.

Dedicated

If this option is selected, you will need to set the values for the Server, Controller, Drive Path, and Slot Range for the disk slot.


Note


A disk is visible only to the assigned controller.


Beginning with release 3.2(3a), Cisco UCS Manager can enable single-path access to a disk by configuring a single DiskPort per disk slot for Cisco UCS S3260 M5 and higher servers. Setting a single-path configuration ensures that the server discovers the disk drive only through the single drive path chosen in the configuration. Single-path access is supported only with the Cisco UCS S3260 Dual Pass Through Controller (UCS-S3260-DHBA).

Once single-path access is enabled, you cannot downgrade to any release earlier than 3.2(3a). To downgrade, disable this feature and assign all the disk slots to both disk ports by configuring the disk path of the disk slots to Path Both in the disk zoning policy.

Shared

Shared disks are those assigned to more than one controller. They are specifically used when the servers are running in a cluster configuration, and each server has its storage controllers in HBA mode.


Note


Shared mode cannot be used under certain conditions when dual HBA controllers are used.


Chassis Global Hot Spare

If this option is selected, you will need to set the value for the Slot Range for the disk.


Important


Disk migration and claiming orphan LUNs: To migrate a disk zoned to a server (Server 1) to another server (Server 2), you must mark the virtual drive (LUN) as transport ready or perform a hide virtual drive operation. You can then change the disk zoning policy assigned for that disk. For more information on virtual drive management, see the Disk Groups and Disk Configuration Policies section of the Cisco UCS Manager Storage Management Guide.


Creating a Disk Zoning Policy

Procedure


Step 1

In the Navigation pane, click Chassis.

Step 2

Expand Policies > root.

Step 3

Right-click Disk Zoning Policies and choose Create Disk Zoning Policy.

Step 4

In the Create Disk Zoning Policy dialog box, complete the following:

Name Description

Name field

The name of the policy.

This name can be between 1 and 16 alphanumeric characters. You cannot use spaces or any special characters other than - (hyphen), _ (underscore), : (colon), and . (period), and you cannot change this name after the object is saved.

Description field

A description of the policy. Cisco recommends including information about where and when to use the policy.

Enter up to 256 characters. You can use any characters or spaces except ` (accent mark), \ (backslash), ^ (carat), " (double quote), = (equal sign), > (greater than), < (less than), or ' (single quote).

Preserve Config check box

If this check box is selected, it preserves all configuration-related information for the disks, such as slot number, ownership, server assigned, controller assigned, and controller type.

Note

 

By default the Preserve Config check box remains unchecked.

In the Disk Zoning Information area, complete the following:

Name Description

Name column

The name for the disk slot.

Slot Number column

The slot number for the disk.

Ownership column

The slot ownership value. This can be one of the following:

  • Unassigned—This option is selected by default. You can set the slot number in the Slot Range field.

  • Dedicated—If this option is selected, you will need to set the values for the Server, Controller, Drive Path, and Slot Range for the disk slot.

    Beginning with release 3.2(3a), Cisco UCS Manager can enable single-path access to a disk by configuring a single DiskPort per disk slot. This ensures that the server discovers only a single device and avoids a multipath configuration.

    Drive Path options are:

    • Path Both (Default) - Drive path is zoned to both the SAS expanders.

    • Path 0 - Drive path is zoned to SAS expander 1.

    • Path 1 - Drive path is zoned to SAS expander 2.

  • Shared—If this option is selected, you will need to set the values for the Slot Range and controller information such as server assigned, controller assigned, and controller type for the disk slot.

    Note

     

    Shared mode cannot be used under certain conditions when dual HBA controllers are used. To view the conditions for Shared mode for Dual HBA controller, see Table 3.

  • Chassis Global Hot Spare—If this option is selected, you will need to set the value for the Slot Range for the disk.

Assigned to Server column

The ID of the server to which the disk is assigned.

Assigned to Controller column

The ID of the controller to which the disk is assigned.

Note

 

In a Dual RAID setup, to migrate the disk from the first controller to the second, change Assigned to Controller to the second controller.

Controller Type column

The type for the controller. If the disk is either dedicated or shared, the controller type is always SAS.

Table 3. Limitations for Shared Mode for Dual HBA Controller

Server            HDD Tray   Controller   Shared Mode Support
Cisco UCS S3260   No         Dual HBA     Not Supported
Cisco UCS S3260   HDD Tray   Dual HBA     Not Supported
Pre-Provisioned   HDD Tray   Dual HBA     Not Supported
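The policy created in the procedure above can also be created programmatically. The following is a sketch only, using the Cisco UCS Python SDK: the mometa class name (LstorageDiskZoningPolicy) and the preserve_config property are assumptions drawn from the UCS Manager object model and should be verified against your SDK release.

    # Sketch only: creating the disk zoning policy object from Step 4 with ucsmsdk.
    # LstorageDiskZoningPolicy and preserve_config are assumed names; verify them.
    from ucsmsdk.ucshandle import UcsHandle
    from ucsmsdk.mometa.lstorage.LstorageDiskZoningPolicy import LstorageDiskZoningPolicy

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    policy = LstorageDiskZoningPolicy(
        parent_mo_or_dn="org-root",            # create the policy under the root organization
        name="S3260-zoning",                   # 1-16 alphanumeric characters, as noted above
        descr="Disk zoning for the S3260 chassis",
        preserve_config="no",                  # assumed property behind the Preserve Config check box
    )
    handle.add_mo(policy, modify_present=True)  # add or update the managed object
    handle.commit()
    handle.logout()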


Creating Disk Slots and Assigning Ownership

After you create a disk zoning policy, you must create the disk slots, and assign ownership.

Procedure


Step 1

In the Navigation pane, click Chassis.

Step 2

Expand Policies > root > Disk Zoning Policies, and select the disk zoning policy to which you want to add disk slots.

Step 3

In the Work pane, under Actions, click Add Slots to Policy.

Step 4

In the Add Slots to Policy dialog box, complete the following:

Name Description

Ownership check box

The ownership for the disk slot. This can be one of the following:

  • Unassigned—This option is selected by default. You can set the slot number in the Slot Range field.

  • Dedicated—If this option is selected, you will need to set the values for the Server, Controller, and Slot Range for the disk slot.

  • Shared—If this option is selected, you will need to set the values for the Slot Range and controller information such as server assigned, controller assigned, and controller type for the disk slot.

    Note

     

    Shared mode cannot be used under certain conditions when dual HBA controllers are used. To view the conditions for Shared mode for Dual HBA controller, see Table 3.

  • Chassis Global Hot Spare—If this option is selected, you will need to set the value for the Slot Range for the disk.

Step 5

Click OK.
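The slot ownership assignments from the dialog above can be expressed programmatically in a similar way. In this hedged sketch, the LstorageDiskSlot and LstorageControllerRef class names, the "dedicated" ownership string, and the policy DN format are assumptions based on the UCS Manager object model; check them against your ucsmsdk mometa tree before use.

    # Sketch only: zoning disk slots 1-14 as dedicated to the controller in server 1.
    from ucsmsdk.ucshandle import UcsHandle
    from ucsmsdk.mometa.lstorage.LstorageDiskSlot import LstorageDiskSlot
    from ucsmsdk.mometa.lstorage.LstorageControllerRef import LstorageControllerRef

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    policy_dn = "org-root/disk-zoning-policy-S3260-zoning"   # assumed DN of the policy created earlier

    for slot_id in range(1, 15):
        slot = LstorageDiskSlot(parent_mo_or_dn=policy_dn,
                                id=str(slot_id),
                                ownership="dedicated")       # assumed ownership value string
        # Dedicated ownership also needs the server/controller reference, as in the dialog above.
        LstorageControllerRef(parent_mo_or_dn=slot,
                              server_id="1",
                              controller_id="1",
                              controller_type="SAS")
        handle.add_mo(slot, modify_present=True)

    handle.commit()
    handle.logout()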


Associating Disk Zoning Policies to Chassis Profile

Procedure


Step 1

In the Navigation pane, click Chassis.

Step 2

Expand Chassis > Chassis Profiles.

Step 3

Expand the node for the organization where you want to create the chassis profile.

If the system does not include multi tenancy, expand the root node.

Step 4

Right-click the organization and select Create Chassis Profile.

Step 5

In the Identify Chassis Profile page, specify the name for the chassis profile, and click Next.

Step 6

(Optional) In the Maintenance Policy page, specify the name for the maintenance policy, and click Next.

Step 7

In the Chassis Assignment page, select Select existing Chassis under Chassis Assignment, and then select the chassis that you want to associate with this chassis profile. Click Next.

Step 8

In the Disk Zoning page, specify the disk zoning policy that you want to associate with this chassis profile.

Step 9

Click Finish.
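For completeness, a heavily hedged sketch of the same association through ucsmsdk follows. The EquipmentChassisProfile class name and the disk_zoning_policy_name property are guesses at the underlying chassis profile object and must be verified; chassis assignment (Step 7) is omitted because its binding object is not shown here.

    # Sketch only: creating a chassis profile that references the disk zoning policy.
    # EquipmentChassisProfile and disk_zoning_policy_name are assumed names; verify them.
    from ucsmsdk.ucshandle import UcsHandle
    from ucsmsdk.mometa.equipment.EquipmentChassisProfile import EquipmentChassisProfile

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    profile = EquipmentChassisProfile(
        parent_mo_or_dn="org-root",
        name="S3260-chassis-1",
        disk_zoning_policy_name="S3260-zoning",   # guessed property for the policy reference
    )
    handle.add_mo(profile, modify_present=True)
    handle.commit()
    handle.logout()
    # Chassis assignment (Step 7 above) is performed through a separate binding
    # object in the object model and is omitted from this sketch.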


Disk Migration

Before you can migrate a disk zoned from one server to another, you must mark the virtual drive (LUN) as transport ready or perform a hide virtual drive operation. This ensures that all references from the service profile have been removed prior to disk migration. For more information on virtual drives, see the "Virtual Drives" section in the Cisco UCS Manager Storage Management Guide, Release 3.2.

Note


In a Dual RAID setup, to migrate the disk from the first controller to the second, change Assigned to Controller to the second controller in the disk zoning policy. See Creating a Disk Zoning Policy.


Procedure


Step 1

In the Navigation pane, click Equipment > Chassis > Servers.

Step 2

Choose the server where you want to perform disk migration.

Step 3

In the Work pane, click the Inventory tab.

Step 4

Click the Storage subtab.

Step 5

Click the LUNs subtab.

Step 6

Choose the storage controller where you want to prepare the virtual drives for migration to another server.

Step 7

Choose the disk that you want to migrate.

Step 8

In the Actions area, choose one of the following:

Name

Description

Rename

Click on this link to rename your disk.

Delete

Click on this link to delete your disk.

Set Transportation Ready

Click on this link for the safe migration of the virtual drive from one server to another.

Note

 

All virtual drives on a disk group must be marked as transport ready before migrating or unassigning the disks from a server node.

Clear Transportation Ready

Click on this link to set the state of the virtual drive to no longer be transport ready.

Hide Virtual Drive

Click on this option for the safe migration of the virtual drive from one server to another.

Note

 

All virtual drives on a disk group must be marked as hidden before migrating or unassigning the disks from a server node.

Unhide Virtual Drive

Click on this link to unhide the virtual drive and enable IO operations.
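Before marking a virtual drive as transport ready or hidden, it can help to inventory the LUNs that the controller currently owns. The following query sketch assumes that storageVirtualDrive is the class ID behind the LUNs subtab; the printed attributes are common UCS Manager properties.

    # Sketch only: listing virtual drives (LUNs) prior to migration.
    # "storageVirtualDrive" is assumed to be the class ID behind the LUNs subtab.
    from ucsmsdk.ucshandle import UcsHandle

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    for vd in handle.query_classid("storageVirtualDrive"):
        print(vd.dn, vd.name)   # the DN shows which server and controller own the LUN

    handle.logout()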


Storage Enclosure Operations

Removing Chassis Level Storage Enclosures

You can remove the storage enclosure corresponding to the HDD expansion tray in Cisco UCS Manager after it is physically removed. You cannot remove server level or any other chassis level storage enclosures.

Procedure


Step 1

In the Navigation pane, click Equipment.

Step 2

Expand Chassis > Servers > Storage Enclosures.

Step 3

Choose the storage enclosure that you want to remove.

Step 4

In the Actions area, click Remove Enclosure.
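If you script inventory checks before the removal, the enclosure objects can be listed with a class query, as in the hedged sketch below; the storageEnclosure class ID is an assumption and should be confirmed in your environment.

    # Sketch only: listing storage enclosure objects before removing one in the GUI.
    # "storageEnclosure" is an assumed class ID; confirm it in your environment.
    from ucsmsdk.ucshandle import UcsHandle

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    for enclosure in handle.query_classid("storageEnclosure"):
        print(enclosure.dn)

    handle.logout()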


Sas Expander Configuration Policy

Creating Sas Expander Configuration Policy

Procedure


Step 1

In the Navigation pane, click Chassis.

Step 2

Expand Chassis > Policies.

Step 3

Expand the node for the organization where you want to create the policy.

If the system does not include multi tenancy, expand the root node.

Step 4

Right-click Sas Expander Configuration Policies and choose Create Sas Expander Configuration Policy.

Step 5

In the Create Sas Expander Configuration Policy dialog box, complete the following fields:

Name Description

Name field

The name of the policy.

This name can be between 1 and 16 alphanumeric characters. You cannot use spaces or any special characters other than - (hyphen), _ (underscore), : (colon), and . (period), and you cannot change this name after the object is saved.

Description field

A description of the policy. Cisco recommends including information about where and when to use the policy.

Enter up to 256 characters. You can use any characters or spaces except ` (accent mark), \ (backslash), ^ (carat), " (double quote), = (equal sign), > (greater than), < (less than), or ' (single quote).

6G-12G Mixed Mode field

This can be one of the following:

  • Disabled—Connection Management is disabled in this policy and the SAS expander uses only 6G speeds even if 12G is available.

  • Enabled—Connection Management is enabled in this policy and it intelligently shifts between 6G and 12G speeds based on availability.

    After 6G-12G Mixed Mode is enabled, you cannot downgrade to any release earlier than 3.2(3a). To downgrade, disable this mode.

  • No Change (Default) —Pre-existing configuration is retained.

Note

 

Enabling or disabling 6G-12G Mixed Mode causes a system reboot.

The 6G-12G Mixed Mode field is available only for Cisco UCS S3260 M5 and higher servers.

Step 6

Click OK.
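A hedged ucsmsdk sketch of the same creation follows. The LstorageSasExpanderConfigPolicy class name and the property backing the 6G-12G Mixed Mode field are assumptions modeled on the dialog above and must be verified against your SDK release.

    # Sketch only: creating a SAS expander configuration policy. The class name and
    # the connection_management property (6G-12G Mixed Mode) are assumptions.
    from ucsmsdk.ucshandle import UcsHandle
    from ucsmsdk.mometa.lstorage.LstorageSasExpanderConfigPolicy import (
        LstorageSasExpanderConfigPolicy,
    )

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    policy = LstorageSasExpanderConfigPolicy(
        parent_mo_or_dn="org-root",
        name="sas-expander-default",
        descr="Leave 6G-12G Mixed Mode unchanged",
        connection_management="no-change",   # assumed values: disabled, enabled, no-change
    )
    handle.add_mo(policy, modify_present=True)
    handle.commit()
    handle.logout()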

Deleting a Sas Expander Configuration Policy

Procedure


Step 1

In the Navigation pane, click Chassis.

Step 2

Expand Chassis > Policies.

Step 3

Expand the node for the organization containing the policy.

If the system does not include multi tenancy, expand the root node.

Step 4

Expand Sas Expander Configuration Policies.

Step 5

Right-click the Sas Expander Configuration policy you want to delete and choose Delete.

Step 6

If a confirmation dialog box displays, click Yes.
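Deleting the same policy programmatically is a query-and-remove operation, as in the hedged sketch below; the DN format shown is an assumption.

    # Sketch only: deleting a SAS expander configuration policy by DN.
    # The DN format is an assumption; query_dn returns None if no object exists.
    from ucsmsdk.ucshandle import UcsHandle

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    mo = handle.query_dn("org-root/sas-expander-config-policy-sas-expander-default")
    if mo is not None:
        handle.remove_mo(mo)
        handle.commit()

    handle.logout()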