Overview

Cisco Nexus Dashboard provides a common platform for deploying Cisco data center applications. These applications provide real-time analytics, visibility, and assurance for policy and infrastructure.

The Cisco Nexus Dashboard server is required to install and host the Cisco Nexus Dashboard application.

The server is orderable in the following version:
  • ND-NODE-L4 — Small form-factor (SFF) drives, with a 10-drive backplane. Supports up to 10 2.5-inch SAS/SATA drives. Drive bays 1 and 2 support NVMe SSDs.

The following PCIe riser combinations are available (a brief configuration sketch follows this list):

  • One half-height riser card in PCIe Riser 1

  • Three half-height riser cards in PCIe Risers 1, 2, and 3

  • Two full-height riser cards in Risers 1 and 3

  • Riser 1—One x16 PCIe slot; supports full-height 3/4-length cards in the 2-riser configuration or half-height 3/4-length cards in the 3-riser configuration, plus NC-SI from Pilot4.

  • Riser 2—One x16 PCIe slot; supports only half-height 3/4-length cards in the 3-riser configuration.

  • Riser 3—Available as Riser 3A or 3B, providing PCIe slot 3 with the following options:

    • Riser 3A—One x16 PCIe slot; supports half-height 3/4-length cards in the 3-riser configuration and NC-SI.

    • Riser 3B—One x16 PCIe slot; supports full-height 3/4-length cards in the 2-riser configuration and NC-SI.

The server also provides:

  • Two 10GBASE-T Ethernet LAN-on-motherboard (LOM) ports for network connectivity, plus one dedicated 1-Gigabit Ethernet management port

  • One mLOM/VIC card for 10G/25G/40G/50G/100G connectivity. The supported card is:

    • Cisco VIC 1455 PCIe quad-port 10/25G SFP28 (UCSC-PCIE-C25Q-04)
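
As a hedged aside (not from this guide): the three allowed riser combinations above are mutually exclusive, which a configuration check can make explicit. A minimal sketch in Python, with hypothetical names and data layout:

    # Illustrative only: the three valid riser configurations listed above,
    # expressed as a validity check. Names and structure are hypothetical.
    VALID_CONFIGS = [
        {"Riser1": "half-height"},
        {"Riser1": "half-height", "Riser2": "half-height", "Riser3": "half-height"},
        {"Riser1": "full-height", "Riser3": "full-height"},
    ]

    def is_valid_riser_config(config: dict) -> bool:
        """Return True only for one of the documented riser combinations."""
        return config in VALID_CONFIGS

    print(is_valid_riser_config({"Riser1": "full-height", "Riser3": "full-height"}))  # True
    print(is_valid_riser_config({"Riser1": "full-height", "Riser2": "half-height"}))  # False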

External Features

This topic shows the external features of the server versions.

Cisco ND-NODE-L4 (SFF Drives) Front Panel Features

The following figure shows the front panel features of the small form-factor (SFF) drive version of the server.

For definitions of LED states, see Front-Panel LEDs.

Figure 1. ND-NODE-L4 (SFF Drives) Front Panel

1. Drive bays:

   UCSC-C225-M6S version—Drive bays 1–10 support SAS/SATA hard disk drives (HDDs) and solid-state drives (SSDs). As an option, drive bays 1–4 can contain up to four NVMe drives. Drive bays 5–10 support only SAS/SATA HDDs or SSDs.

   UCSC-C225-M6N version—Drive bays 1–10 support 2.5-inch NVMe-only SSDs.

2. Unit identification button/LED

3. Power button/power status LED

4. KVM connector (used with a KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)

5. System LED cluster:

   • Fan status LED

   • System status LED

   • Power supply status LED

   • Network link activity LED

   • Temperature status LED

Cisco ND-NODE-L4 Rear Panel Features

The rear panel features can be different depending on the number and type of PCIe cards in the server.

By default, single-CPU servers come with only one half-height riser (Riser 1) installed, and dual-CPU servers support all three half-height risers.

The following figure shows the rear panel features of the server in the three-riser configuration.

For definitions of LED states, see Rear-Panel LEDs.

Figure 2. Cisco ND-NODE-L4 Rear Panel, Three-Riser Configuration

1. PCIe slots. The following PCIe riser combinations are available:

   • One half-height riser card in PCIe Riser 1

   • Three half-height riser cards in PCIe Risers 1, 2, and 3

   • Two full-height riser cards in Risers 1 and 3

   • Riser 1—One x16 PCIe slot; supports full-height 3/4-length cards in the 2-riser configuration or half-height 3/4-length cards in the 3-riser configuration, plus NC-SI from Pilot4.

   • Riser 2—One x16 PCIe slot; supports only half-height 3/4-length cards in the 3-riser configuration.

   • Riser 3—Available as Riser 3A or 3B, providing PCIe slot 3 with the following options:

     • Riser 3A—One x16 PCIe slot; supports half-height 3/4-length cards in the 3-riser configuration and NC-SI.

     • Riser 3B—One x16 PCIe slot; supports full-height 3/4-length cards in the 2-riser configuration and NC-SI.

2. Power supply units (PSUs), two of which can be redundant when configured in 1+1 power mode

3. Modular LAN-on-motherboard (mLOM) card bay (x16 PCIe lane)

4. System identification button/LED

5. USB 3.0 ports (two)

6. Dedicated 1-Gb Ethernet management port

7. COM port (RJ-45 connector)

8. VGA video port (DB-15 connector)

Component Location

This topic shows the locations of the field-replaceable components and service-related items. The view in the following figure shows the server with the top cover removed.

Figure 3. ND-NODE-L4, Serviceable Component Locations

1. Front-loading drive bays 1–10, supporting SAS/SATA/NVMe drives

2. Cisco M6 12G SAS RAID card or Cisco M6 12G SAS HBA controller

3. Eight hot-swappable cooling fan modules

4. SuperCap module mounting bracket. The SuperCap module (not shown) that mounts into this location provides RAID write-cache backup.

5. DIMM sockets on the motherboard, 32 total, 16 per CPU. The sockets are arranged in groups of eight above the top CPU and below the bottom CPU, with 16 sockets between the CPUs.

6. Motherboard CPU socket two (CPU2)

7. Motherboard CPU socket one (CPU1)

8. M.2 module connector. Supports a boot-optimized RAID controller with connectors for up to two SATA M.2 SSDs.

9. Two power supply units (PSUs)

10. PCIe riser slot 2

11. PCIe riser slot 1

12. Modular LOM (mLOM) card bay on the chassis floor (x16 PCIe lane)

Summary of Server Features

The following table lists a summary of server features.

Chassis

One rack-unit (1RU) chassis

Central Processor

Up to two AMD EPYC processors based on the Zen 2/Zen 3 architecture (Rome and Milan series).

Memory

32 DDR4 DIMM sockets on the motherboard (16 per CPU)

Up to 32 DDR4 DIMMs at speeds up to 3200 MHz (1 DPC) or 2933 MHz (2 DPC), with support for RDIMMs and LRDIMMs
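
As a hedged aside (not from this guide): the achievable DIMM speed depends on how many DIMMs populate each memory channel. Assuming eight memory channels per CPU (consistent with 16 sockets per CPU at two DIMMs per channel), a minimal sketch of the speed rule above looks like this; the function name and channel count are illustrative.

    # Illustrative only: maps DIMMs-per-channel (DPC) to the maximum
    # DDR4 speed quoted above (3200 MHz at 1 DPC, 2933 MHz at 2 DPC).
    def max_dimm_speed_mhz(dimms_per_cpu: int, channels_per_cpu: int = 8) -> int:
        dpc = -(-dimms_per_cpu // channels_per_cpu)  # ceiling division
        if dpc == 1:
            return 3200
        if dpc == 2:
            return 2933
        raise ValueError("at most 2 DIMMs per channel are supported")

    print(max_dimm_speed_mhz(8))   # 1 DPC -> 3200 MHz
    print(max_dimm_speed_mhz(16))  # 2 DPC -> 2933 MHz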

Multi-bit error protection

Multi-bit error protection is supported

Video

The Cisco Integrated Management Controller (CIMC) provides video using the Matrox G200e video/graphics controller:

  • Integrated 2D graphics core with hardware acceleration

  • Embedded DDR memory interface supports up to 512 MB of addressable memory (8 MB is allocated by default to video memory)

  • Supports display resolutions up to 1920 x 1200, 16 bpp, at 60 Hz

  • High-speed integrated 24-bit RAMDAC

  • Single-lane PCI Express host interface running at Gen 1 speed

Baseboard management

BMC, running Cisco Integrated Management Controller (Cisco IMC) firmware.

Depending on your Cisco IMC settings, Cisco IMC can be accessed through the 1-Gb dedicated management port, the 1-Gb/10-Gb Ethernet LAN ports, or a Cisco virtual interface card.
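
As an illustrative aside (not from this guide): in addition to its web UI and CLI, Cisco IMC exposes a DMTF Redfish REST interface over the management connection. Assuming the BMC is reachable at a placeholder address with placeholder credentials, a minimal sketch of reading basic system inventory through the standard Redfish schema might look like the following.

    import requests  # third-party HTTP client, assumed installed

    CIMC = "https://192.0.2.10"   # placeholder management-port address
    AUTH = ("admin", "password")  # placeholder credentials

    # Walk the standard Redfish Systems collection exposed by the BMC.
    # verify=False tolerates a self-signed CIMC certificate; supply a
    # proper CA bundle in production.
    resp = requests.get(f"{CIMC}/redfish/v1/Systems", auth=AUTH, verify=False)
    resp.raise_for_status()

    for member in resp.json().get("Members", []):
        system = requests.get(f"{CIMC}{member['@odata.id']}",
                              auth=AUTH, verify=False).json()
        print(system.get("Model"), system.get("SerialNumber"),
              system.get("PowerState"))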

Network and management I/O

Rear panel:

  • One 1-Gb Ethernet dedicated management port (RJ-45 connector)

  • One RS-232 serial port (RJ-45 connector)

  • One VGA video connector port (DB-15 connector)

  • Two USB 3.0 ports

  • One flexible modular LAN on motherboard (mLOM)/OCP 3.0 slot that can accommodate various interface cards


  • Two 1-Gb/10-Gb BASE-T Ethernet LAN ports (RJ-45 connectors)

    The dual LAN ports can support 1 Gbps and 10 Gbps, depending on the link partner capability.

Front panel:

  • One KVM console connector, used with a KVM cable that provides two USB 2.0 connectors, one VGA DB-15 video connector, and one DB-9 serial connector

Modular LAN on Motherboard (mLOM)/OCP 3.0 slot

One dedicated socket (x16 PCIe lane) that can be used to add an mLOM card for additional rear-panel connectivity.

The dedicated mLOM/OCP 3.0 slot on the motherboard can flexibly accommodate the following cards:

  • Cisco Virtual Interface Cards

  • OCP 3.0 network interface card (UCSC-O-ID10GC)

Power

Up to two of the following hot-swappable power supplies:

  • 770 W (AC)

  • 1050 W (AC)

  • 1050 W (DC)

  • 1600 W (AC)

  • 2300 W (AC)

One power supply is mandatory; one more can be added for 1+1 redundancy.

ACPI

The advanced configuration and power interface (ACPI) 4.0 standard is supported.

Front Panel

The front panel controller provides status indications and control buttons.

Cooling

Eight hot-swappable fan modules for front-to-rear cooling.

PCIe I/O

Horizontal PCIe expansion slots are supported by PCIe riser assemblies. The server supports one of the following configurations:

  • One half-height riser card in PCIe Riser 1

  • Three half-height riser cards in PCIe Risers 1, 2, and 3

  • Two full-height riser cards in Risers 1 and 3

See PCIe Slot Specifications for specifications of the slots.

InfiniBand

The PCIe bus slots in this server support the InfiniBand architecture.

Expansion Slots

Three half-height riser slots

  • Riser 1 (controlled by CPU 1): One x16 PCIe Gen4 Slot, (Cisco VIC), half-height, 3/4 length

  • Riser 2 (controlled by CPU 1): One x16 PCIe Gen4 Slot, electrical x8, half-height, 3/4 length

  • Riser 3 (controlled by CPU 1): One x16 PCIe Gen4 Slot, (Cisco VIC), half-height, 3/4 length

Two full-height riser slots

  • Riser 1 (controlled by CPU 1): One x16 PCIe Gen4 Slot, (Cisco VIC), full-height, 3/4 length

  • Riser 3 (controlled by CPU 1): One x16 PCIe Gen4 Slot, (Cisco VIC), full-height, 3/4 length

Interfaces

Rear panel:

  • One 1GBASE-T RJ-45 management port

  • One RS-232 serial port (RJ-45 connector)

  • One DB-15 VGA connector

  • Two USB 3.0 port connectors

  • One flexible modular LAN on motherboard (mLOM) slot that can accommodate various interface cards

Front panel:

  • One KVM console connector (supplies two USB 2.0 connectors, one VGA DB-15 video connector, and one DB-9 serial connector)

Storage, front-panel

The server is orderable in the following version:

  • ND-NODE-L4—Small form-factor (SFF) drives, with a 10-drive backplane. Supports up to 10 2.5-inch SAS/SATA drives. Drive bays 1 and 2 support NVMe SSDs.

Internal Storage Devices

In addition to the front-panel drives, the server has a mini-storage module connector on the motherboard that supports a boot-optimized RAID controller carrier holding up to two SATA M.2 SSDs. Mixing SATA M.2 SSDs of different capacities is not supported. The motherboard also provides a USB 3.0 Type-A connector.

Integrated Management Processor

Baseboard Management Controller (BMC) running Cisco Integrated Management Controller (CIMC) firmware.

Depending on your CIMC settings, the CIMC can be accessed through the 1GE dedicated management port, the 1GE/10GE LOM ports, or a Cisco virtual interface card (VIC).

CIMC manages certain components within the server, such as the Cisco 12G SAS HBA.

Storage Controllers

The Cisco 12G SAS RAID controller or Cisco 12G SAS HBA plugs into a dedicated slot. Only one of these controllers can be used at a time.

  • Cisco 12G SAS RAID controller

    • RAID support (RAID 0, 1, 5, 6, 10, 50, 60, SRAID0, and JBOD mode); see the capacity sketch after this list

    • Supports up to 10 internal SAS/SATA drives

    • Plugs into the drive backplane

  • Cisco 12G SAS HBA

    • No RAID support

    • JBOD/pass-through mode support

    • Supports up to 10 internal SAS/SATA drives

    • Plugs into the drive backplane
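
As a hedged illustration (not part of this guide): usable array capacity depends on which of the RAID levels listed above is chosen. The sketch below computes approximate usable capacity for equal-size drives at a few of those levels; the function and its name are illustrative only.

    # Illustrative only: approximate usable capacity for equal-size drives.
    def usable_capacity_tb(level: str, drives: int, drive_tb: float) -> float:
        if level == "RAID0":                      # striping, no redundancy
            return drives * drive_tb
        if level == "RAID1" and drives == 2:      # mirrored pair
            return drive_tb
        if level == "RAID5" and drives >= 3:      # one drive of parity
            return (drives - 1) * drive_tb
        if level == "RAID6" and drives >= 4:      # two drives of parity
            return (drives - 2) * drive_tb
        if level == "RAID10" and drives >= 4 and drives % 2 == 0:
            return (drives // 2) * drive_tb       # striped mirrors
        raise ValueError(f"unsupported combination: {level} with {drives} drives")

    # Example: all ten front bays with 1.9 TB SAS SSDs in RAID 6.
    print(usable_capacity_tb("RAID6", 10, 1.9))   # -> 15.2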

Modular LAN on Motherboard (mLOM) slot

The dedicated mLOM slot on the motherboard can flexibly accommodate the following cards:

  • Cisco Virtual Interface Cards (VICs)

Intersight

Cisco Intersight provides server management capabilities.