Recommended Computing Resources for Cisco Catalyst SD-WAN Control Components Release 20.11.x


Note


Starting from Cisco Catalyst SD-WAN Control Components Release 20.9.x, the recommended computing resources are specified for single-tenant and multitenant deployments according to the instance type definitions. Prior to Cisco Catalyst SD-WAN Control Components Release 20.9.x, the recommended computing resources were specified based on the deployment modes.


Single Tenant (ST)

The supported instance specifications for Cisco vManage, Cisco vBond Orchestrator, and Cisco vSmart Controller instances are as follows:


Note


To achieve the following scale, the controller and device software versions should be the same.


Table 1. Instance Type Definitions

| Instance Type | vCPUs* | RAM* | Storage Size* | Qualified Instance Type (Azure) | Qualified Instance Type (AWS) |
|---|---|---|---|---|---|
| Small | 16 vCPUs | 32 GB | 500 GB | Standard_F16s_v2 | c5.4xlarge |
| Medium | 32 vCPUs | 64 GB | 1 TB | Standard_F32s_v2 | c5.9xlarge |
| Large | 32 vCPUs | 128 GB | 5 TB | Standard_D32ds_v5 | c5.18xlarge and m6i.8xlarge |

* The vCPU, RAM, and storage size numbers are on a per-Cisco vManage basis. The storage size can be increased up to 10 TB for on-premises and customer-cloud-hosted deployments.

Table 2. Instance Types with Number of Devices, Nodes, and Deployment Models

| Devices | Nodes and Deployment Models with Instance Type | Data Processing Factor | Number of Days the Data Can Be Stored | Max Daily Processing Volume | Cisco Cloud | On-Prem (UCS) | Customer Cloud |
|---|---|---|---|---|---|---|---|
| Cisco SD-WAN Application Intelligence Engine (SAIE) Disabled** | | | | | | | |
| <250 | One Node Small Cisco vManage | NA | NA | NA | Yes | Yes | Yes |
| 250-1000 | One Node Medium vManage | NA | NA | NA | Yes | Yes | Yes |
| 1000-1500 | One Node Large vManage | NA | NA | NA | Yes | Yes | Yes |
| 1500-2000 | Three Node Medium vManage Cluster (All Services) | NA | NA | NA | Yes | Yes | Yes |
| 2000-5000 | Three Node Large vManage Cluster (All Services) | NA | NA | NA | Yes | Yes | Yes |
| 5000-10000 | Six Node Large vManage Cluster (3 nodes with ConfigDB; messaging server, stats, and AppServer on all nodes) | NA | NA | NA | Yes | Yes | Yes |
| Cisco SD-WAN Application Intelligence Engine (SAIE) Enabled** | | | | | | | |
| <250 | One Node Medium vManage | 25 GB/Day | 20 Days | 25 GB/Day | Yes | NA | NA |
| <250 | One Node Large vManage | 50 GB/Day | 30 Days | 50 GB/Day | NA | Yes | Yes |
| 250-1000 | One Node Large vManage | 50 GB/Day | 30 Days | 50 GB/Day | Yes | Yes | Yes |
| 1000-4000 | Three Node Large vManage Cluster (All Services) | 100 GB/Day | 14 Days | 300 GB/Day | Yes | Yes | Yes |
| 4000-7000 | Six Node Large vManage Cluster (3 nodes with ConfigDB; messaging server, stats, and AppServer on all nodes) | 100 GB/Day | 14 Days | 2 TB/Day* | Yes | Yes | Yes |
| 7000-10000 | Six Node Large vManage Cluster (3 nodes with ConfigDB; messaging server, stats, and AppServer on all nodes) | 100 GB/Day | 14 Days | 1 TB/Day* | Yes | Yes | Yes |


Note


  • * For a larger dataset per day, run the Stats service on all the servers.

  • ** Along with SAIE, the following statistics are also considered in these recommendations:

    • Approute

    • Performance Monitor


Table 3. Supported Scale on Cisco HyperFlex (HX), SAIE Disabled

| Devices | Nodes and Deployment Models with Instance Types |
|---|---|
| 0-2000 | Three Node Medium Cisco vManage Cluster |
| 2000-5000 | Three Node Large Cisco vManage Cluster |

To achieve scale beyond the numbers mentioned in the tables above, deploy multiple overlays.
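
For quick capacity planning, the following minimal Python sketch encodes the single-tenant, SAIE-disabled rows of Table 2 as a lookup. The function and table names are illustrative, not part of any Cisco tooling, and the boundary handling is an assumption because adjacent device ranges in the table share endpoints.

```python
# Illustrative lookup over the SAIE-disabled rows of Table 2.
# Upper bounds are treated as inclusive; "<250" is encoded as 249.
SAIE_DISABLED_MODELS = [
    (249, "One Node Small Cisco vManage"),
    (1000, "One Node Medium vManage"),
    (1500, "One Node Large vManage"),
    (2000, "Three Node Medium vManage Cluster (All Services)"),
    (5000, "Three Node Large vManage Cluster (All Services)"),
    (10000, "Six Node Large vManage Cluster (3 nodes with ConfigDB)"),
]

def deployment_model(device_count: int) -> str:
    """Return the Table 2 deployment model for a device count (SAIE disabled)."""
    for upper_bound, model in SAIE_DISABLED_MODELS:
        if device_count <= upper_bound:
            return model
    return "Beyond 10000 devices: deploy multiple overlays"

print(deployment_model(1200))  # One Node Large vManage
```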


Note


  • The number of days the data can be stored in Cisco SD-WAN Manager depends on the per-day processing volume of the device nodes. To store the data for a longer time, or to accommodate an increase in the per-day processing volume, use the following formulas to calculate the required Cisco SD-WAN Manager disk size (see the sketch following this note):

  • Formula for a single-node deployment: (data per day × number of days) + 500 GB buffer. For example, if the data per day is 100 GB and the data must be stored for 10 days, the required Cisco SD-WAN Manager disk size is 1.5 TB.

  • Formula for a cluster deployment: (data per day × number of days × 3) + 500 GB buffer. For example, if the data per day is 100 GB and the data must be stored for 10 days, the required Cisco SD-WAN Manager disk size is 3.5 TB.
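
As a worked illustration of these formulas, here is a minimal Python sketch. The function name is hypothetical; the 500 GB buffer and the 3x multiplier for cluster deployments come directly from the formulas above.

```python
# Minimal sketch of the disk-sizing formulas in the note above.
def required_disk_gb(data_per_day_gb: float, days: int, cluster: bool = False) -> float:
    """(data per day x number of days [x 3 for a cluster]) + 500 GB buffer."""
    multiplier = 3 if cluster else 1
    return data_per_day_gb * days * multiplier + 500

print(required_disk_gb(100, 10))                # 1500.0 GB = 1.5 TB (single node)
print(required_disk_gb(100, 10, cluster=True))  # 3500.0 GB = 3.5 TB (cluster)
```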



Note


The maximum tested disk size for on-premises deployments is 10 TB per instance.



Note


Starting from Cisco vManage Release 20.6.1, you can achieve the above-mentioned storage size numbers by modifying the aggregated SAIE size. The aggregated SAIE size is unidimensional and varies when the deployment includes edge devices that run a mix of releases (Cisco SD-WAN Release 20.6.x and earlier releases). The aggregated SAIE size also varies when on-demand troubleshooting is enabled for the devices.

Ensure that both the SAIE and aggregated SAIE index sizes are configured to enable on-demand troubleshooting.

To modify the aggregated SAIE value:

  1. From the Cisco SD-WAN Manager menu, choose Administration > Settings.

  2. Click Edit next to Statistics Database Configuration.

  3. Modify the Aggregated SAIE size to the desired value based on your SAIE traffic. The default disk size allocation is 5 GB.



Note


When SAIE is enabled, you must set the Statistics Collection timer to 30 minutes or higher.

To set the Statistics Collection timer:

  1. From the Cisco SD-WAN Manager menu, choose Administration > Settings.

  2. Click Edit next to Statistics Configuration.

  3. Modify the Collection Interval minutes to the desired value based on your SAIE traffic. The default collection interval is 30 minutes.

  4. Click Save.


Table 4. Cisco SD-WAN Validator Recommended Computing Resources

| Devices | Number of Cisco vBond Orchestrators | vCPU | RAM | OS Volume | vNICs | Azure | AWS |
|---|---|---|---|---|---|---|---|
| <1000 | 2 | 2 | 4 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F2s_v2 | c5.large |
| 1000-4000 | 2 | 4 | 8 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F4s_v2 | c5.xlarge |
| 4000-8000 | 4 | 4 | 8 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F4s_v2 | c5.xlarge |
| 8000-10000 | 6 | 4 | 8 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F4s_v2 | c5.xlarge |

Table 5. Cisco SD-WAN Controller Recommended Computing Resources

| Devices | Number of Cisco vSmart Controllers | vCPU | RAM | OS Volume | vNICs | Azure | AWS |
|---|---|---|---|---|---|---|---|
| <250 | 2 | 4 | 8 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F4s_v2 | c5.xlarge |
| 250-1000 | 2 | 4 | 16 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_D4ds_v5 | c5.2xlarge |
| 1000-2500 | 2 | 8 | 16 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F8s_v2 | c5.2xlarge |
| 2500-5000 | 4 | 8 | 16 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F8_v2 | c5.2xlarge |
| 5000-7500 | 6 | 8 | 16 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F8_v2 | c5.2xlarge |
| 7500-10000 | 8 | 8 | 16 GB | 10 GB | 2 (one for tunnel interface, one for management) | Standard_F8_v2 | c5.2xlarge |


Note


  • The tested and recommended limit of supported Cisco SD-WAN Validator instances in a single Cisco Catalyst SD-WAN overlay is eight; similarly, the maximum number of tested Cisco SD-WAN Controller instances is twelve.

  • The number of vCPUs and the amount of RAM required for Cisco SD-WAN Validator and Cisco SD-WAN Controller instances in Cisco Cloud Hosted overlays are determined by Cisco Cloud Ops and provisioned accordingly.

  • The number of Cisco SD-WAN Validator and Cisco SD-WAN Controller instances recommended in the tables above assumes a deployment with Cisco SD-WAN Control Components in two locations (that is, data centers) designed for redundancy, with half the controllers in one data center and half in the other. In other words, the tables above already account for 1:1 redundancy in the number of Cisco SD-WAN Validator and Cisco SD-WAN Controller instances recommended across the two data centers, without considering any Cisco SD-WAN Controller group/affinity configuration.

    If you are deploying Cisco SD-WAN Validator and Cisco SD-WAN Controller instances under a different set of assumptions, for example, across three data centers, or if you are using Cisco SD-WAN Controller groups/affinity within your deployment, see the Points to Consider chapter for additional guidance.


Table 6. Testbed Specifications for UCS Platforms

| Hardware SKU | Specifications |
|---|---|
| UCSC-C240-M5SX | UCS C240 M5 24 SFF + 2 rear drives, without CPU, memory cards, hard disk, PCIe, and PS |
| UCS-MR-X16G1RT-H | 16GB DDR4-2933-MHz RDIMM/1Rx4/1.2v |
| UCS-CPU-I6248R | Intel 6248R 3GHz/205W 24C/35.75MB DDR4 2933MHz; Intel(R) Xeon(R) Gold 6330N CPU @ 2.20GHz |
| UCS-SD16T123X-EP | 1.6TB 2.5in Enterprise Performance 12G SAS SSD (3X endurance) |


Note


  • Any UCS platform (fifth generation, sixth generation, and above) with hardware specifications equal to or higher than those listed in the table above supports Cisco SD-WAN Controllers at scale numbers similar to those mentioned in this document.

  • The CPU specifications are not tied to any brand; both AMD and Intel CPUs that meet or exceed the specifications above are supported.


Table 7. Testbed Specifications for HX Platforms

| Hardware SKU | Specifications |
|---|---|
| HXAF240-M5SX | Cisco HyperFlex HX240c M5 All Flash Node |
| HX-MR-X32G2RT-H | 32GB DDR4-2933-MHz RDIMM/2Rx4/1.2v |
| HX-CPU-I6248 | Intel 6248 2.5GHz/150W 20C/24.75MB 3DX DDR4 2933 MHz |
| HX-SD38T61X-EV | 3.8TB 2.5 inch Enterprise Value 6G SATA SSD |
| HX-NVMEXPB-I375 | 375GB 2.5 inch Intel Optane NVMe Extreme Performance SSD |


Note


  • The tested replication factor is three.

  • The default compression on the HX system applies in all cases; it is determined automatically by the system and cannot be configured.


Multitenant (MT)

The supported instance specifications for Cisco vManage, Cisco vBond Orchestrator, and Cisco vSmart Controller instances are as follows:

Table 8. Instance Type Definitions

| Instance Type | vCPUs | RAM | Storage Size | Qualified Instance Type (Azure) | Qualified Instance Type (AWS) |
|---|---|---|---|---|---|
| Large | 32 vCPUs* | 128 GB | 5 TB | Standard_F64s_v2 | c5.18xlarge and m6i.8xlarge |

* 64 vCPUs are required for multitenant deployments beyond 2500 devices, as reflected in the Cisco vManage Specifications table.

Table 9. Cisco vManage Specifications

| Max Tenants (T) and Devices (D) | Nodes and Deployment Models with Instance Type | Data Processing Factor | Number of Days the Data Can Be Stored | Cisco Cloud | On-Prem (UCS) | Customer Cloud |
|---|---|---|---|---|---|---|
| 75 (T) and 2500 (D)* | Three Node Large vManage | 100 GB/Day | 14 Days | Yes | Yes | Yes |
| 150 (T) and 7500 (D)* | Six Node Large vManage (64 vCPUs required) | 100 GB/Day | 14 Days | No | Yes | Yes |


Note


* indicates that a pair of Cisco vSmart Controllers supports 24 tenants and 1000 devices across all the tenants.


Table 10. Cisco vBond Orchestrators Recommended Computing Resources

| Devices | Number of Cisco vBond Orchestrators | vCPU | RAM | OS Volume | vNICs | AWS | Azure |
|---|---|---|---|---|---|---|---|
| <1000 | 2 | 2 | 4 GB | 10 GB | 2 (one for tunnel interface, one for management) | c5.large | Standard_F2s_v2 |
| 1000-4000 | 2 | 4 | 8 GB | 10 GB | 2 (one for tunnel interface, one for management) | c5.xlarge | Standard_F4s_v2 |
| 4000-7500 | 4 | 4 | 8 GB | 10 GB | 2 (one for tunnel interface, one for management) | c5.xlarge | Standard_F4s_v2 |

Table 11. Cisco vSmart Controllers Recommended Computing Resources

| Devices | vCPU | RAM | OS Volume | vNICs | AWS | Azure |
|---|---|---|---|---|---|---|
| <250 | 4 | 8 GB | 10 GB | 2 (one for tunnel interface, one for management) | c5.xlarge | Standard_F4s_v2 |
| 250-2500 | 8 | 16 GB | 10 GB | 2 (one for tunnel interface, one for management) | c5.2xlarge | Standard_F8_v2 |
| 2500-5000 | 8 | 16 GB | 10 GB | 2 (one for tunnel interface, one for management) | c5.2xlarge | Standard_F8_v2 |
| 5000-7500 | 8 | 16 GB | 10 GB | 2 (one for tunnel interface, one for management) | c5.2xlarge | Standard_F8_v2 |

Table 12. Cisco vBond and vSmart Specifications

| Devices | Number of Cisco vBond Orchestrators Required | Number of Cisco vSmart Controllers Required |
|---|---|---|
| 75 Tenants or 2500 Devices | 2 | A pair for every 24 tenants |
| 150 Tenants or 7500 Devices | 2 (additional 2 if the deployment goes beyond 4000 devices) | A pair for every 24 tenants |


Note


  • A pair of Cisco vSmart Controllers supports 24 tenants and 1000 devices across all the tenants. For example, 24 tenants require 2 vSmart Controllers, 50 tenants require 6 vSmart Controllers, and 150 tenants require 14 vSmart Controllers (see the sketch after this note).

  • The SAIE numbers are for the entire multitenant (cluster) deployment; there is no per-tenant SAIE limitation.

  • If SAIE is enabled, we recommend that the aggregated SAIE data (across all Cisco vManage nodes and all tenants in the multitenant system) not exceed 350 GB per day. If the SAIE data exceeds 350 GB per day, increase the hard disk capacity of each Cisco vManage node up to 10 TB.

  • A tenant can add a maximum of 1000 devices.

  • The tested and recommended limit of supported Cisco vBond Orchestrator instances in a single Cisco SD-WAN overlay is eight.
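
To make the pairing arithmetic above concrete, here is a minimal Python sketch. The helper name is hypothetical, and it assumes the binding constraint is whichever of the two limits (24 tenants per pair, 1000 devices per pair) requires more pairs; the printed results match the examples in the note.

```python
# Illustrative arithmetic: one pair of Cisco vSmart Controllers supports
# 24 tenants and 1000 devices across all tenants.
import math

def vsmart_controllers_required(tenants: int, devices: int) -> int:
    pairs_for_tenants = math.ceil(tenants / 24)
    pairs_for_devices = math.ceil(devices / 1000)
    return 2 * max(pairs_for_tenants, pairs_for_devices)

print(vsmart_controllers_required(24, 1000))   # 2
print(vsmart_controllers_required(50, 2000))   # 6
print(vsmart_controllers_required(150, 6000))  # 14
```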