Cisco UCS X215c M8 Compute Node Overview
The Cisco UCS X215c M8 is a single-slot compute node with two CPU sockets that support up to two Fourth Gen AMD EPYC™ Processors, each with up to 96 cores and up to 384 MB of Level 3 cache. The minimum system configuration requires one CPU installed in the CPU 1 slot.
Additionally, the compute node supports the following features with one CPU or two identical CPUs:
- 24 total DIMMs, 12 channels per CPU socket, 1 DIMM per channel.
- Up to 6 TB of main memory with a maximum of 24 x 256 GB DDR5 5600 MT/s or DDR5 4800 MT/s DIMMs (a worked capacity example follows this list).
- RAS is supported.
- One front mezzanine module, which can support the following:
  - A front storage module, which supports multiple storage device configurations:
    - Up to six hot-pluggable SAS/SATA/U.3 NVMe 2.5-inch SSDs (slots 1-6).
    - SAS, SATA, and U.3 NVMe drives can coexist on the front mezzanine module. RAID volumes are restricted to drives of the same type; for example, a RAID 1 volume must use all SATA, all SAS, or all U.3 NVMe drives (see the validation sketch after this list).
    For additional information, see Front Mezzanine Options.
- 1 modular LAN on motherboard (mLOM/VIC) module supporting a maximum of 200G traffic, 100G to each fabric. For more information, see mLOM and Rear Mezzanine Slot Support.
- 1 rear mezzanine module (UCSX-V4-PCIME or UCSX-ME-V5Q50G).
- A mini-storage module with slots for up to two M.2 drives and optional hardware RAID. Two mini-storage options exist:
  - One supporting M.2 SATA drives with a RAID controller (UCSX-M2-HWRD-FPS).
  - One supporting M.2 NVMe drives direct-attached to CPU 1 through a pass-through controller (UCSX-M2-PT-FPN).
- Local console connectivity through an OCuLink connector.
- Connection with a paired UCS PCIe module, such as the Cisco UCS X440p PCIe node, to support GPU offload and acceleration. For more information, see Optional Hardware Configuration.
- Up to eight UCS X215c M8 compute nodes can be installed in a Cisco UCS X9508 modular system.
- Through the Cisco UCS X9508 modular system that hosts the Cisco UCS X215c M8, connections to the following Cisco fabric interconnects are supported:
  - Cisco UCS Fabric Interconnect 6454
  - Cisco UCS Fabric Interconnect 64108
  - Cisco UCS Fabric Interconnect 6536
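The maximum memory capacity listed above follows directly from the DIMM topology. The minimal Python sketch below works through that arithmetic; the channel, DIMM, and capacity figures come from the list above, while the function and constant names are illustrative assumptions rather than any Cisco tooling. The single-CPU figure assumes only the DIMM slots attached to the populated socket are usable.

```python
# Illustrative only: derives the maximum memory capacity of the compute node
# from the DIMM topology listed above (12 channels per CPU socket, 1 DIMM per
# channel, 256 GB maximum DIMM size). These names are not a Cisco API.

CHANNELS_PER_CPU = 12      # memory channels per CPU socket
DIMMS_PER_CHANNEL = 1      # DIMMs per channel
MAX_DIMM_GB = 256          # largest supported DDR5 DIMM, in GB

def max_memory_gb(cpu_count: int) -> int:
    """Maximum installable memory, in GB, for 1 or 2 populated CPU sockets."""
    dimm_slots = cpu_count * CHANNELS_PER_CPU * DIMMS_PER_CHANNEL
    return dimm_slots * MAX_DIMM_GB

print(max_memory_gb(2))  # 24 DIMMs x 256 GB = 6144 GB (6 TB), two-CPU maximum
print(max_memory_gb(1))  # 12 DIMMs x 256 GB = 3072 GB, single-CPU maximum
```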
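Because RAID volumes on the front mezzanine module must use drives of a single type, a provisioning script might validate a proposed drive set before creating a volume. The sketch below only illustrates that rule, under the assumption that drive types are tracked as simple labels; the function name and labels are not part of any Cisco API.

```python
# Illustrative only: checks that a proposed RAID volume uses drives of a single
# type (SAS, SATA, or U.3 NVMe), mirroring the front mezzanine rule above.

from typing import Iterable

ALLOWED_TYPES = {"SAS", "SATA", "U.3 NVMe"}  # drive types supported in slots 1-6

def validate_raid_drive_set(drive_types: Iterable[str]) -> None:
    """Raise ValueError if the drive set mixes types or uses an unknown type."""
    types = set(drive_types)
    unknown = types - ALLOWED_TYPES
    if unknown:
        raise ValueError(f"Unsupported drive type(s): {sorted(unknown)}")
    if len(types) > 1:
        raise ValueError(f"RAID volumes must use a single drive type, got: {sorted(types)}")

validate_raid_drive_set(["SATA", "SATA"])        # OK: homogeneous RAID 1 pair
# validate_raid_drive_set(["SAS", "U.3 NVMe"])   # would raise: mixed drive types
```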
Compute Node Front Panel
The Cisco UCS X215c M8 front panel contains system LEDs that provide visual indicators of how the overall compute node is operating. An external connector is also supported.
Compute Node Front Panel
| Item | Description |
| --- | --- |
| 1 | Power LED and Power Switch. The LED indicates whether the compute node is on or off. The switch is a push button that powers the compute node on or off. See Front Panel Buttons. |
| 2 | System Activity LED. The LED blinks to show whether data or network traffic is being written to or read from the compute node. If no traffic is detected, the LED is dark. The LED is updated every 10 seconds. |
| 3 | System Health LED. A multifunction LED that indicates the state of the compute node. |
| 4 | Locator LED/Switch. The LED glows solid blue to identify a specific compute node. The switch is a push button that toggles the Locator LED on or off. See Front Panel Buttons. |
| 5 | External optical connector (OCuLink) that supports local console functionality. |
Front Panel Buttons
The front panel has some buttons that are also LEDs. See Compute Node Front Panel.
- The front panel Power button is a multi-function button that controls system power for the compute node. A short sketch summarizing its behavior follows this section.
  - Immediate power up: Quickly pressing and releasing the button, without holding it down, powers up a powered-down compute node.
  - Immediate power down: Pressing the button and holding it down for 7 seconds or longer before releasing it causes a powered-up compute node to power down immediately.
  - Graceful power down: Quickly pressing and releasing the button, without holding it down, causes a powered-up compute node to power down in an orderly fashion.
- The front panel Locator button is a toggle that controls the Locator LED. Quickly pressing the button, without holding it down, toggles the Locator LED on (glowing steady blue) or off (dark). The LED can also be dark if the compute node is not receiving power.
For more information, see Interpreting LEDs.
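The press-duration behavior of the Power button can be summarized in the minimal sketch below. The 7-second threshold and the three actions come from the list above; the function and argument names are illustrative assumptions, not a Cisco interface.

```python
# Illustrative only: maps a Power button press to the resulting action,
# following the front panel behavior described above. Not a Cisco API.

IMMEDIATE_POWER_DOWN_HOLD_SECONDS = 7  # hold threshold from the list above

def power_button_action(node_is_powered_on: bool, hold_seconds: float) -> str:
    """Return the action taken when the Power button is released."""
    if not node_is_powered_on:
        return "immediate power up"
    if hold_seconds >= IMMEDIATE_POWER_DOWN_HOLD_SECONDS:
        return "immediate power down"
    return "graceful power down"

print(power_button_action(node_is_powered_on=False, hold_seconds=0.5))  # immediate power up
print(power_button_action(node_is_powered_on=True, hold_seconds=0.5))   # graceful power down
print(power_button_action(node_is_powered_on=True, hold_seconds=8.0))   # immediate power down
```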
Drive Front Panels
The front drives are installed in the front mezzanine slot of the compute node. SAS/SATA and NVMe drives are supported.
Compute Node Front Panel with SAS/SATA Drives
The compute node front panel contains the front mezzanine module, which can support a maximum of six SAS/SATA drives. The drives have additional LEDs that provide visual indicators of each drive's status.
| Item | Description |
| --- | --- |
| 1 | Drive Health LED |
| 2 | Drive Activity LED |
Compute Node Front Panel with NVMe Drives
The compute node front panel contains the front mezzanine module, which can support a maximum of six 2.5-inch NVMe drives.