Overview

Cisco Nexus Dashboard provides a common platform for deploying Cisco Data Center applications. These applications provide real-time analytics, visibility, and assurance for policy and infrastructure.

The Cisco Nexus Dashboard server is required for installing and hosting the Cisco Nexus Dashboard application.

The server is orderable in the following version:
  • ND-NODE-L4 — Small form-factor (SFF) drives, with 10-drive backplane. Supports up to 10 2.5-inch SAS/SATA drives. Drive bays 1 and 2 support NVMe SSDs.

The following PCIe riser combinations are available:

  • One half-height riser card in PCIe Riser 1

  • Three half-height riser cards in PCIe Risers 1, 2, and 3

  • Two full-height riser cards in PCIe Risers 1 and 3

  • Riser 1—Supports a single x16 PCIe slot that accepts full-height 3/4-length cards in the 2-riser configuration, or half-height 3/4-length cards in the 3-riser configuration, plus NC-SI from Pilot4.

  • Riser 2—Supports a single x16 PCIe slot that accepts only half-height 3/4-length cards in the 3-riser configuration.

  • Riser 3—Supports Riser 3A or 3B. PCIe slot 3 has the following options:

    • Riser 3A—Supports a single x16 PCIe slot that accepts half-height 3/4-length cards in the 3-riser configuration, plus NC-SI.

    • Riser 3B—Supports a single x16 PCIe slot that accepts full-height 3/4-length cards in the 2-riser configuration, plus NC-SI.

  • Two 10GBASE-T Ethernet LAN-on-motherboard (LOM) ports for network connectivity, plus one 1-Gb dedicated management port

  • One mLOM/VIC card provides 10G/25G/40G/50G/100G connectivity. The supported card is:

    • Cisco VIC 1455 quad-port 10/25G SFP28 PCIe card (UCSC-PCIE-C25Q-04)

External Features

This topic shows the external features of the server versions.

Cisco ND-NODE-L4 (SFF Drives) Front Panel Features

The following figure shows the front panel features of the small form-factor drive versions of the server.

For definitions of LED states, see Front-Panel LEDs.

Figure 1. ND-NODE-L4 (SFF Drives) Front Panel

1. Drive bays 1–10 support SAS/SATA hard disk drives (HDDs) and solid-state drives (SSDs)

2. Drive bays 1 and 2 also support NVMe PCIe SSDs

3. Power button/power status LED

4. Unit identification button/LED

5. System status LED

6. Power supply status LED

7. Fan status LED

8. Network link activity LED

9. Temperature status LED

10. Pull-out asset tag

11. KVM connector (used with the KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB connectors)

Cisco ND-NODE-L4 Rear Panel Features

The rear panel features can be different depending on the number and type of PCIe cards in the server.

By default, single CPU servers come with only one half-height riser 1 installed, and dual CPU servers support all three half-height risers.

The following figure shows the rear panel features of the server with three riser configuration.

For definitions of LED states, see Rear-Panel LEDs.

Figure 2. Cisco ND-NODE-L4 Rear Panel Three Riser Configuration


1. PCIe slots; the server supports the PCIe riser combinations described in Overview

2. Power supply units (PSUs); two PSUs can be redundant when configured in 1+1 power mode

3. Modular LAN-on-motherboard (mLOM) card bay (x16 PCIe lane)

4. System identification button/LED

5. USB 3.0 ports (two)

6. Dedicated 1-Gb Ethernet management port

7. COM port (RJ-45 connector)

8. VGA video port (DB-15 connector)

Component Location

This topic shows the locations of the field-replaceable components and service-related items. The view in the following figure shows the server with the top cover removed.

Figure 3. ND-NODE-L4 , Serviceable Component Locations

1. Front-loading drive bays 1–10 support SAS/SATA drives; drive bays 1 and 2 also support NVMe PCIe SSDs

2. Cooling fan modules (seven, hot-swappable)

3. Supercap unit mounting bracket (RAID backup)

4. DIMM sockets on motherboard (12 per CPU)

5. CPUs and heatsinks (up to two)

6. Mini-storage module socket, with the following options:

   • SD card module with two SD card slots

   • M.2 module with slots for either two SATA M.2 drives or two NVMe M.2 drives

   • Cisco Boot-Optimized M.2 RAID Controller (module with two slots for SATA M.2 drives, plus an integrated SATA RAID controller that can control the two M.2 drives in a RAID 1 array)

7. Chassis intrusion switch (optional)

8. Internal USB 3.0 port on motherboard

9. RTC battery, vertical socket

10. Power supplies (hot-swappable when redundant as 1+1)

11. Trusted platform module (TPM) socket on motherboard (not visible in this view)

12. PCIe riser 2/slot 2 (half-height, x16 lane); includes PCIe cable connectors for front-loading NVMe SSDs (x8 lane)

13. PCIe riser 1/slot 1 (full-height, x16 lane); includes socket for Micro-SD card

14. Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane), not visible in this view

15. Modular RAID (mRAID) riser, which can optionally be a riser that supports either:

   • Hardware RAID controller card

   • Interposer card for embedded SATA RAID

16. PCIe cable connectors for front-loading NVMe SSDs on PCIe riser 2

17. Micro-SD card socket on PCIe riser 1

Summary of Server Features

The following table lists a summary of server features.

Feature

Description

Chassis

One rack-unit (1RU) chassis

Central Processor

Up to two CPUs from the Intel Xeon Scalable processor family, including CPUs from the following series:

  • Intel Xeon Silver 4XXX processors

Up to two AMD Zen 2/Zen 3 architecture (Rome/Milan) processors

Memory

24 DDR4 DIMM sockets on the motherboard (12 per CPU)

DDR4 DIMMs at up to 3200 MHz (1 DPC) or 2933 MHz (2 DPC), with support for RDIMMs and LRDIMMs
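The speed figures above depend on DIMMs per channel (DPC). A minimal sketch of the capacity and speed arithmetic, assuming six memory channels per CPU (two sockets per channel) and a hypothetical 64 GB RDIMM size, neither of which is stated in the table:

```python
# Socket count from the spec above; channel count and DIMM size are assumptions
# used only to illustrate the arithmetic.
SOCKETS_PER_CPU = 12
CHANNELS_PER_CPU = 6                 # assumed: 2 sockets per channel
SPEED_BY_DPC = {1: 3200, 2: 2933}    # MHz at 1 DPC / 2 DPC, per the spec

def max_capacity_gb(cpus: int, dimm_gb: int) -> int:
    """Total memory with every DIMM socket populated."""
    return cpus * SOCKETS_PER_CPU * dimm_gb

def memory_speed_mhz(dimms_per_cpu: int) -> int:
    """Effective speed given how many sockets per CPU are populated."""
    dpc = -(-dimms_per_cpu // CHANNELS_PER_CPU)  # ceiling division
    return SPEED_BY_DPC[dpc]

print(max_capacity_gb(cpus=2, dimm_gb=64))  # 1536
print(memory_speed_mhz(6))                  # 3200 (1 DPC)
print(memory_speed_mhz(12))                 # 2933 (2 DPC)
```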

Multi-bit error protection

Multi-bit error protection is supported

Video

The Cisco Integrated Management Controller (CIMC) provides video using the Matrox G200e video/graphics controller:

  • Integrated 2D graphics core with hardware acceleration

  • Embedded DDR memory interface supports up to 512 MB of addressable memory (8 MB is allocated by default to video memory)

  • Supports display resolutions up to 1920 x 1200 16bpp @ 60Hz

  • High-speed integrated 24-bit RAMDAC

  • Single lane PCI-Express host interface running at Gen 1 speed
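As a quick sanity check of the default allocation, the maximum listed display mode fits comfortably in the 8 MB that is allocated to video memory by default:

```python
# Framebuffer size for the maximum listed mode: 1920 x 1200 at 16 bpp.
width, height, bits_per_pixel = 1920, 1200, 16

framebuffer_bytes = width * height * bits_per_pixel // 8
framebuffer_mib = framebuffer_bytes / (1024 * 1024)

print(framebuffer_bytes)          # 4608000
print(round(framebuffer_mib, 2))  # 4.39 -- well under the default 8 MB allocation
```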

Baseboard management

BMC, running Cisco Integrated Management Controller (Cisco IMC) firmware.

Depending on your Cisco IMC settings, Cisco IMC can be accessed through the 1-Gb dedicated management port, the 1-Gb/10-Gb Ethernet LAN ports or a Cisco virtual interface card.
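Recent Cisco IMC firmware also exposes a DMTF Redfish REST interface over the management port. The sketch below only parses the power state out of a Redfish ComputerSystem payload; the address, system ID, and sample payload are illustrative placeholders, and a live query additionally requires credentials and network reachability:

```python
import json

# Hypothetical CIMC address and system ID -- replace with your server's values.
CIMC_SYSTEM_URL = "https://10.0.0.10/redfish/v1/Systems/WZP12345678"

def parse_power_state(system_json: str) -> str:
    """Extract the PowerState field from a Redfish ComputerSystem resource."""
    return json.loads(system_json)["PowerState"]

# Trimmed, illustrative example of a ComputerSystem payload:
sample = '{"Id": "WZP12345678", "PowerState": "On", "MemorySummary": {"TotalSystemMemoryGiB": 256}}'
print(parse_power_state(sample))  # On

# A live query needs credentials and network reachability, for example:
# import urllib.request, base64
# req = urllib.request.Request(CIMC_SYSTEM_URL)
# token = base64.b64encode(b"admin:password").decode()
# req.add_header("Authorization", "Basic " + token)
# with urllib.request.urlopen(req) as resp:
#     print(parse_power_state(resp.read().decode()))
```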

Network and management I/O

Rear panel:

  • One 1-Gb Ethernet dedicated management port (RJ-45 connector)

  • One RS-232 serial port (RJ-45 connector)

  • One VGA video connector port (DB-15 connector)

  • Two USB 3.0 ports

  • One flexible modular LAN on motherboard (mLOM)/OCP 3.0 slot that can accommodate various interface cards

  • One KVM console connector (supplies two USB 2.0 connectors, one VGA DB15 video connector, and one serial port (RS232) RJ45 connector)

  • Two 1-Gb/10-Gb BASE-T Ethernet LAN ports (RJ-45 connectors)

    The dual LAN ports can support 1 Gbps and 10 Gbps, depending on the link partner capability.

Front panel:

  • One front-panel keyboard/video/mouse (KVM) connector that is used with the KVM cable, which provides two USB 2.0 connectors, one VGA DB-15 video connector, and one DB-9 serial connector

Modular LAN on Motherboard (mLOM)/OCP 3.0 slot

One dedicated socket (x16 PCIe lane) that can be used to add an mLOM card for additional rear-panel connectivity.

The dedicated mLOM/OCP 3.0 slot on the motherboard can flexibly accommodate the following cards:

  • Cisco Virtual Interface Cards

  • OCP 3.0 network interface card (UCSC-O-ID10GC)

Power

Up to two of the following hot-swappable power supplies:

  • 770 W (AC)

  • 1050 W (AC)

  • 1050 W (DC)

  • 1600 W (AC)

  • 2300 W (AC)

One power supply is mandatory; one more can be added for 1 + 1 redundancy.
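The 1+1 arithmetic is simply that the surviving PSU must be able to carry the full load by itself. A minimal sketch, with an illustrative load figure that is not from the spec:

```python
# PSU wattage options from the list above; the load figure is illustrative only.
PSU_OPTIONS_W = (770, 1050, 1600, 2300)

def is_1plus1_redundant(psu_watts: int, peak_load_w: float) -> bool:
    """In 1+1 mode the surviving PSU alone must carry the full peak load."""
    return peak_load_w <= psu_watts

print(is_1plus1_redundant(1050, peak_load_w=900))  # True
print(is_1plus1_redundant(770, peak_load_w=900))   # False
```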

ACPI

The advanced configuration and power interface (ACPI) 4.0 standard is supported.

Front Panel

The front panel controller provides status indications and control buttons.

Cooling

Seven hot-swappable fan modules for front-to-rear cooling.

PCIe I/O

Horizontal PCIe expansion slots are supported by PCIe riser assemblies. See PCIe Slot Specifications for specifications of the slots. The server supports one of the following configurations:

  • One half-height riser card in PCIe Riser 1

  • Three half-height riser cards in PCIe Riser 1, 2, 3

  • Two full-height riser cards

InfiniBand

The PCIe bus slots in this server support the InfiniBand architecture.

Expansion Slots

Three half-height riser slots

  • Riser 1 (controlled by CPU 1): One x16 PCIe Gen4 Slot, (Cisco VIC), half-height, 3/4 length

  • Riser 2 (controlled by CPU 1): One x16 PCIe Gen4 Slot, electrical x8, half-height, 3/4 length

  • Riser 3 (controlled by CPU 1): One x16 PCIe Gen4 Slot, (Cisco VIC), half-height, 3/4 length

Two full-height riser slots

  • Riser 1 (controlled by CPU 1): One x16 PCIe Gen4 Slot, (Cisco VIC), full-height, 3/4 length

  • Riser 3 (controlled by CPU 1): One x16 PCIe Gen4 Slot, (Cisco VIC), full-height, 3/4 length

Interfaces

Rear panel:

  • One 1GBASE-T RJ-45 management port

  • One RS-232 serial port (RJ45 connector)

  • One DB15 VGA connector

  • Two USB 3.0 port connectors

  • One flexible modular LAN on motherboard (mLOM) slot that can accommodate various interface cards

Front panel:

  • One KVM console connector (supplies two USB 2.0 connectors, one VGA DB-15 video connector, and one serial port (RS-232) RJ-45 connector)

Storage, front-panel

The server is orderable in the following version:

  • ND-NODE-L4: Small form-factor (SFF) drives, with 10-drive backplane. Supports up to 10 2.5-inch SAS/SATA drives. Drive bays 1 and 2 support NVMe SSDs.

Internal Storage Devices

In addition to the front-panel drives, the server supports a mini-storage module connector on the motherboard that accepts a boot-optimized RAID controller carrier holding up to two SATA M.2 SSDs. Mixing SATA M.2 SSDs of different capacities is not supported. The motherboard also provides an internal USB 3.0 Type-A connector.

Integrated Management Processor

Baseboard Management Controller (BMC) running Cisco Integrated Management Controller (CIMC) firmware.

Depending on your CIMC settings, the CIMC can be accessed through the 1GE dedicated management port, the 1GE/10GE LOM ports, or a Cisco virtual interface card (VIC).

CIMC manages certain components within the server, such as the Cisco 12G SAS HBA.

Storage Controllers

The Cisco 12G SAS RAID controller or Cisco 12G SAS HBA plugs into a dedicated slot. Only one of these controllers can be used at a time.

  • Cisco 12G SAS RAID controller

    • RAID support (RAID 0, 1, 5, 6, 10, 50, 60, SRAID0, and JBOD mode)

    • Supports up to 10 internal SAS/SATA drives

    • Plugs into drive backplane

  • Cisco 12G SAS HBA

    • No RAID support

    • JBOD/pass-through mode support

    • Supports up to 10 internal SAS/SATA drives

    • Plugs into drive backplane
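The RAID levels listed above trade usable capacity for redundancy in standard ways. A sketch of the usable-capacity arithmetic for equal-size drives (these are the generic RAID formulas, not controller-specific behavior, and the 2 TB drive size is illustrative):

```python
def usable_capacity(level: str, drives: int, drive_tb: float) -> float:
    """Usable capacity for equal-size drives at common RAID levels."""
    if level == "RAID0":
        return drives * drive_tb           # striping, no redundancy
    if level == "RAID1":
        return drive_tb                    # two-drive mirror
    if level == "RAID5":
        return (drives - 1) * drive_tb     # one drive's worth of parity
    if level == "RAID6":
        return (drives - 2) * drive_tb     # two drives' worth of parity
    if level == "RAID10":
        return (drives // 2) * drive_tb    # striped mirrors
    raise ValueError(f"unhandled level: {level}")

# All ten front bays populated with (illustrative) 2 TB drives:
print(usable_capacity("RAID5", drives=10, drive_tb=2.0))  # 18.0
print(usable_capacity("RAID6", drives=10, drive_tb=2.0))  # 16.0
print(usable_capacity("RAID1", drives=2, drive_tb=2.0))   # 2.0
```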

Modular LAN over Motherboard (mLOM) slot

The dedicated mLOM slot on the motherboard can flexibly accommodate the following cards:

  • Cisco Virtual Interface Cards (VICs)

Intersight

Cisco Intersight provides server management capabilities.