Virtualization for Cisco Unified Communications Manager (CUCM) - Supplemental Information


Version 12.5(1) - SU1


ALERT! This page has been updated to reflect compatibility with the latest M5 hardware. Click here to download support for M4 and older hardware.

Co-residency support: Full
Supported Versions of VMware vSphere ESXi: 6.5, 6.7 (For application fresh installs on ESXi 6.5 (VMFS 5 or 6) and/or vCenter 6.5, use minimum OVA file version cucm_12.5_vmv13_v1.0.ova. Does not apply to ESXi version upgrades.)
For IOPS data, see "IOPS and Storage System Performance Requirements" below.

VM Configuration Requirements (click to download the OVA file for this version) and Supported Hardware (Latest), including UCS Tested Reference Configurations and UCS or 3rd-party Specs-based on Intel Xeon (click here to download support of older/non-orderable servers):

  • 150 users (see notes): 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): Yes (ESXi 6.7 only) / Yes / Yes / Yes / Yes / Yes / Yes
  • 1,000 users (see notes): 2 vCPU, 6 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / Yes / Yes / Yes / Yes / Yes / Yes
  • 2,500 users (see notes): 1 vCPU (see notes), 6 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Yes / Yes / No
  • 7,500 users (see notes): 2 vCPU, 8 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Yes / Yes / No
  • 10,000 users (see notes): 4 vCPU, 8 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Yes / Yes / No

 


Version 12.0(x)


ALERT! This page has been updated to reflect compatibility with the latest M5 hardware. Click here to download support for M4 and older hardware.

Co-residency support: Full
Supported Versions of VMware vSphere ESXi: 5.0 U1, 5.1, 5.5, 6.0, and 6.5 (For application fresh installs on ESXi 6.5 (VMFS 5 or 6) and/or vCenter 6.5, use minimum OVA file version cucm_12.0_vmv8_v1.0.ova. Does not apply to ESXi version upgrades.); HX TRC supports ESXi 6.0 only.
For IOPS data, see "IOPS and Storage System Performance Requirements" below.

VM Configuration Requirements (click to download the OVA file for this version) and Supported Hardware (Latest), including UCS Tested Reference Configurations and UCS or 3rd-party Specs-based on Intel Xeon (click here to download support of older/non-orderable servers):

  • 150 users (see notes): 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): Yes (except ESXi 6.5) / Yes / Yes / Yes / Only C240 M5SX TRC#2 / Yes / Yes
  • 1,000 users (see notes): 2 vCPU, 6 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / Yes / Yes / Yes / Only C240 M5SX TRC#2 / Yes / Yes
  • 2,500 users (see notes): 1 vCPU (see notes), 6 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Only C240 M5SX TRC#2 / Yes / No
  • 7,500 users (see notes): 2 vCPU, 8 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Only C240 M5SX TRC#2 / Yes / No
  • 10,000 users (see notes): 4 vCPU, 8 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Only C240 M5SX TRC#2 / Yes / No

 


Version 11.5


ALERT! This page has been updated to reflect compatibility with the latest M5 hardware. Click here to download support for M4 and older hardware.

Co-residency support: Full
Supported Versions of VMware vSphere ESXi: 5.0 U1, 5.1, 5.5, 6.0, 6.5, 6.7 (For application fresh installs on ESXi 6.5 (VMFS 5 or 6) and/or vCenter 6.5, use minimum OVA file version cucm_11.5_vmv8_v1.1.ova. Does not apply to ESXi version upgrades.)
Supported VMware Virtual Machine Hardware Version: Version 11.5 SU10 requires virtual machine hardware version 13 or later.
For IOPS data, see "IOPS and Storage System Performance Requirements" below.

VM Configuration Requirements (click to download the OVA file) and Supported Hardware (Latest), including UCS Tested Reference Configurations and UCS or 3rd-party Specs-based on Intel Xeon (click here to download support of older/non-orderable servers):

  • 150 users (see notes): 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): Yes (except ESXi 6.7) / Yes / Yes / Yes / Yes / Yes / Yes
  • 1,000 users (see notes): 2 vCPU, 6 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / Yes / Yes / Yes / Yes / Yes / Yes
  • 2,500 users (see notes): 1 vCPU (see notes), 6 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Yes / Yes / No
  • 7,500 users (see notes): 2 vCPU, 8 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Yes / Yes / No
  • 10,000 users (see notes): 4 vCPU, 8 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / Yes / Yes / Yes / No



Version 11.0(x)


ALERT! This page has been updated to reflect compatibility with the latest M5 hardware. Click here to download support for M4 and older hardware.

Co-residency support: Full
Supported Versions of VMware vSphere ESXi: 5.0 U1, 5.1, 5.5, and 6.0
For IOPS data, see "IOPS and Storage System Performance Requirements" below.

VM Configuration Requirements (click to download the OVA file) and Supported Hardware (Latest), including UCS Tested Reference Configurations and UCS or 3rd-party Specs-based on Intel Xeon (click here to download support of older/non-orderable servers):

  • 150 users (see notes): 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): Yes / No / No / No / No / No / No
  • 1,000 users (see notes): 2 vCPU, 6 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / No / No
  • 2,500 users (see notes): 1 vCPU (see notes), 6 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / No / No
  • 7,500 users (see notes): 2 vCPU, 8 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / No / No
  • 10,000 users (see notes): 4 vCPU, 8 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / No / No



Version 10.5(2)


ALERT! This page has been updated to reflect compatibility with the latest M5 hardware. Click here to download support for M4 and older hardware.

Co-residency support: Full
Supported Versions of VMware vSphere ESXi: 4.0 U4, 4.1 U2, 5.0 U1, 5.1, 5.5, 6.0, 6.5 (For application fresh installs on ESXi 6.5 (VMFS 5 or 6) and/or vCenter 6.5, use minimum OVA file version cucm_10.5_vmv8_v2.0.ova. Does not apply to ESXi version upgrades.); for HX TRC, see Collaboration Virtualization Hardware.
For IOPS data, see "IOPS and Storage System Performance Requirements" below.

VM Configuration Requirements (click to download the OVA file) and Supported Hardware (Latest), including UCS Tested Reference Configurations and UCS or 3rd-party Specs-based on Intel Xeon (click here to download support of older/non-orderable servers):

  • 1,000 users (see notes): 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): See notes / No / No / No / No / Yes (ESXi 5.5 - 6.5) / Yes (ESXi 5.5 - 6.5)
  • 2,500 users (see notes): 1 vCPU (see notes), 4 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / Yes (ESXi 5.5 - 6.5) / No
  • 7,500 users (see notes): 2 vCPU, 6 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / Yes (ESXi 5.5 - 6.5) / No
  • 10,000 users (see notes): 4 vCPU, 6 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / Yes (ESXi 5.5 - 6.5) / No



Version 10.0(x)


ALERT! This page has been updated to reflect compatibility with the latest M5 hardware. Click here to download support for M4 and older hardware.

Co-residency support: Full
Supported Versions of VMware vSphere ESXi: 4.0 U4, 4.1 U2, 5.0 U1, 5.1, 5.5, and 6.0
For IOPS data, see "IOPS and Storage System Performance Requirements" below.

VM Configuration Requirements (click to download the OVA file) and Supported Hardware (Latest), including UCS Tested Reference Configurations and UCS or 3rd-party Specs-based on Intel Xeon (click here to download support of older/non-orderable servers):

  • 1,000 users (see notes): 2 vCPU, 4 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): See notes / No / No / No / No / No / No
  • 2,500 users (see notes): 1 vCPU (see notes), 4 GB vRAM, 1 x 80 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / No / No
  • 7,500 users (see notes): 2 vCPU, 6 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / No / No
  • 10,000 users (see notes): 4 vCPU, 6 GB vRAM, 1 x 110 GB vDisk, 1 vNIC
    Supported Hardware (Latest): No / No / No / No / No / No / No



Notes on 150 user VM configurations


  • When deployed on a BE6000S server and the version is 11.0 or higher, capacity is limited to 150 users / 300 devices, and the design must follow the BE6000S requirements in www.cisco.com/go/ucsrnd.

UCM cluster nodes require fixed capacity points with fixed-configuration VMs in the Cisco-provided OVA for UCM. For a given capacity point (such as the 10K user VM), the virtual hardware specs represent the minimum for that capacity point. Customers who wish to add vCPU and/or vRAM beyond this minimum to improve performance may do so, but note the following (see the sketch after this list):

  • vCPU/vRAM increases alone do not increase supported capacity, maximum density per cluster node, or maximum scale per cluster. Customers seeking capacity increases should migrate all cluster nodes to a higher fixed capacity point, as described in the design guide and upgrade guide.
  • All cluster nodes must get the same vCPU/vRAM increase. If capacity points are mixed in the same UCM cluster, the scale per cluster and the density per node remain limited to those of the lowest capacity point (as described in the design guide).
  • Performance depends on many factors, including deployment specifics, so vCPU/vRAM increases may or may not improve performance.
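As an illustration of these rules, the following minimal sketch checks a planned cluster: the capacity-point minimums and per-VM phone counts are taken from the 12.5(1) table and the capacity-point notes on this page, while the node names and the example cluster itself are hypothetical.

    # Minimal sketch: validate a planned UCM cluster against the fixed capacity points above.
    # Capacity-point minimums use the 12.5(1) values; node names are hypothetical.

    CAPACITY_POINTS = {
        "1000-user": {"vcpu": 2, "vram_gb": 6, "phones_per_vm": 1_000},
        "2500-user": {"vcpu": 1, "vram_gb": 6, "phones_per_vm": 2_500},
        "7500-user": {"vcpu": 2, "vram_gb": 8, "phones_per_vm": 7_500},
        "10000-user": {"vcpu": 4, "vram_gb": 8, "phones_per_vm": 10_000},
    }

    def check_cluster(nodes):
        """nodes: list of dicts with 'name', 'capacity_point', 'vcpu', 'vram_gb'."""
        problems = []
        for n in nodes:
            cp = CAPACITY_POINTS[n["capacity_point"]]
            # Each node must meet at least the minimum virtual hardware for its capacity point.
            if n["vcpu"] < cp["vcpu"] or n["vram_gb"] < cp["vram_gb"]:
                problems.append(f"{n['name']}: below minimum for {n['capacity_point']}")
        # All nodes should carry the same vCPU/vRAM configuration (same increase, if any).
        if len({(n["vcpu"], n["vram_gb"]) for n in nodes}) > 1:
            problems.append("nodes do not all use the same vCPU/vRAM configuration")
        # Mixing capacity points limits per-node density to the lowest capacity point in the cluster.
        lowest = min(CAPACITY_POINTS[n["capacity_point"]]["phones_per_vm"] for n in nodes)
        return problems, lowest

    cluster = [  # hypothetical example
        {"name": "cucm-pub", "capacity_point": "7500-user", "vcpu": 2, "vram_gb": 8},
        {"name": "cucm-sub1", "capacity_point": "7500-user", "vcpu": 2, "vram_gb": 8},
    ]
    issues, max_phones_per_node = check_cluster(cluster)
    print(issues or "cluster configuration looks consistent",
          f"effective max phones per node: {max_phones_per_node}")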


Notes on 1,000 user VM configurations


  • When deployed on a BE6000S server and the version is 10.5(2), capacity is limited to 150 users / 300 devices, and the design must follow the BE6000S requirements in www.cisco.com/go/ucsrnd.
  • When deployed on other supported servers, use for publishers, subscribers, standalone TFTP, standalone multicast MOH nodes, ELM or PAWS-M.
    • User count is based on the following (see the sizing sketch after this list):
      • 1 device per user
      • 1K phones per VM
      • 4K max phones per cluster
      • 6K max BHCC per VM
      • 24K max BHCC per cluster
      • Your design/results may vary
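
The per-VM and per-cluster limits above translate into a quick sizing check. This is a minimal sketch, assuming one device per user and using the 1,000-user figures quoted above; the deployment numbers in the example are hypothetical, and the actual design should follow www.cisco.com/go/ucsrnd.

    import math

    # Limits quoted above for the 1,000-user capacity point (1 device per user).
    PHONES_PER_VM = 1_000
    MAX_PHONES_PER_CLUSTER = 4_000
    MAX_BHCC_PER_VM = 6_000
    MAX_BHCC_PER_CLUSTER = 24_000

    def size_cluster(phones, bhcc):
        """Return the subscriber VM count needed, or None if the cluster limits are exceeded."""
        if phones > MAX_PHONES_PER_CLUSTER or bhcc > MAX_BHCC_PER_CLUSTER:
            return None  # needs a larger capacity point (see the other notes sections)
        vms_for_phones = math.ceil(phones / PHONES_PER_VM)
        vms_for_bhcc = math.ceil(bhcc / MAX_BHCC_PER_VM)
        return max(vms_for_phones, vms_for_bhcc)

    # Hypothetical example: 3,200 phones generating 15,000 BHCC.
    print(size_cluster(3_200, 15_000))  # -> 4 subscriber VMs at the 1,000-user capacity point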

UCM cluster nodes require fixed capacity points with fixed-configuration VMs in the Cisco-provided OVA for UCM. For a given capacity point (such as the 10K user VM), the virtual hardware specs represent the minimum for that capacity point. Customers who wish to add vCPU and/or vRAM beyond this minimum to improve performance may do so, but note the following:

  • vCPU/vRAM increases alone do not increase supported capacity, maximum density per cluster node, or maximum scale per cluster. Customers seeking capacity increases should migrate all cluster nodes to a higher fixed capacity point, as described in the design guide and upgrade guide.
  • All cluster nodes must get the same vCPU/vRAM increase. If capacity points are mixed in the same UCM cluster, the scale per cluster and the density per node remain limited to those of the lowest capacity point (as described in the design guide).
  • Performance depends on many factors, including deployment specifics, so vCPU/vRAM increases may or may not improve performance.


Notes on 2,500 user VM configurations


Note: The 2,500 user VM configuration with one vCPU may exhibit performance issues during CPU/IO-intensive operations (such as installs, upgrades, backups, CDR writes, or significant CTI usage), or if your deployment has certain characteristics such as a large quantity of TFTP files. Changing the VM configuration to 2 vCPU is recommended as a preventive measure. Otherwise, you may deploy with 1 vCPU and remain TAC-supported, but if the root cause of a performance issue is found to be insufficient vCPU, Cisco TAC will ask you to change to 2 vCPU. If your deployment is not experiencing any issues, you are not required to change to 2 vCPU and may remain on 1 vCPU.

To change the VM configuration to 2 vCPU, increase the CPU count (sockets), not the number of cores; a sketch of this change follows.
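
The following is a minimal sketch of that change using pyVmomi (the vSphere Python SDK). It is not Cisco-provided tooling; the vCenter host, credentials, and the VM name cucm-sub1 are placeholders. The VM should be shut down gracefully before it is reconfigured.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholders: replace with your vCenter/ESXi host, credentials, and CUCM VM name.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "cucm-sub1")  # hypothetical VM name

    # Raise the total vCPU count to 2 while keeping 1 core per socket,
    # so the change adds a CPU (socket) rather than extra cores.
    spec = vim.vm.ConfigSpec(numCPUs=2, numCoresPerSocket=1)
    task = vm.ReconfigVM_Task(spec=spec)
    # (wait for the task to complete before powering the VM back on)

    Disconnect(si)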

  • Use for publishers, subscribers, standalone TFTP, standalone multicast MOH nodes, ELM or PAWS-M.
  • User count based on:
    • 1 device per user
    • 2.5K phones per VM
    • 10K max phones per cluster
    • 15K max BHCC per VM
    • 60K max BHCC per cluster
    • Your design/results may vary
Cisco Hosted Collaboration Solutions has an alternative 2,500 user VM configuration. For more details, see www.cisco.com/c/en/us/support/unified-communications/hosted-collaboration-solution-hcs/tsd-products-support-series-home.html (specifically the OVA Requirements of the Compatibility Matrix for the HCS solution release desired).

UCM cluster nodes require fixed capacity points with fixed-configuration VMs in the Cisco-provided OVA for UCM. For a given capacity point (such as the 10K user VM), the virtual hardware specs represent the minimum for that capacity point. Customers who wish to add vCPU and/or vRAM beyond this minimum to improve performance may do so, but note the following:

  • vCPU/vRAM increases alone do not increase supported capacity, maximum density per cluster node, or maximum scale per cluster. Customers seeking capacity increases should migrate all cluster nodes to a higher fixed capacity point, as described in the design guide and upgrade guide.
  • All cluster nodes must get the same vCPU/vRAM increase. If capacity points are mixed in the same UCM cluster, the scale per cluster and the density per node remain limited to those of the lowest capacity point (as described in the design guide).
  • Performance depends on many factors, including deployment specifics, so vCPU/vRAM increases may or may not improve performance.


Notes on 7,500 user VM configurations


  • Use for publishers, subscribers, standalone TFTP, standalone multicast MOH nodes, ELM or PAWS-M.
  • User count based on:
    • 1 device per user
    • 7.5K phones per VM
    • 30K max phones per cluster
    • 45K max BHCC per VM
    • 180K max BHCC per cluster
    • Your design/results may vary
  • vDisk configurations (see the sketch after this list):
    • New installs of 9.1 and above must use a 1 x 110 GB vDisk. Older versions used 2 x 80 GB or other configurations, as shown in the tables above.
    • VM configurations with 2 virtual disks use them as follows:
      • vDisk 1 = Operating System + app binaries
      • vDisk 2 = Logs
    • Upgrades from 8.x VMs with 2 x 80 GB vDisks to 9.1 or higher are not required to change the vDisk layout. 9.1 and higher support these two-vDisk options: 2 x 80 GB, and 1 x 80 GB + 1 x 110 GB (where the auto-grow COP file was used on the second vDisk).
    • Customers who want to use less storage space must use the 1 x 110 GB vDisk; a reinstall plus a restore from backup is required to change from 2 x 80 GB to 1 x 110 GB.
    • Note that vDisk partition alignment is affected only by not using the Cisco-provided OVA for the initial install or reinstall of a VM. Partition alignment is not affected by upgrading the UCM application or by the choice of vDisk configuration.
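
To confirm which vDisk layout an existing VM uses (for example, 1 x 110 GB versus 2 x 80 GB), the virtual disks can be listed through the vSphere API. This is a minimal pyVmomi sketch, not Cisco-provided tooling; the connection details and the VM name cucm-sub1 are placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)  # placeholders
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "cucm-sub1")  # hypothetical VM name

    # Each vim.vm.device.VirtualDisk reports its capacity in KB.
    disks = [d for d in vm.config.hardware.device if isinstance(d, vim.vm.device.VirtualDisk)]
    for d in disks:
        print(d.deviceInfo.label, round(d.capacityInKB / (1024 * 1024)), "GB")

    Disconnect(si)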

UCM cluster nodes require fixed capacity points with fixed-configuration VMs in the Cisco-provided OVA for UCM. For a given capacity point (such as the 10K user VM), the virtual hardware specs represent the minimum for that capacity point. Customers who wish to add vCPU and/or vRAM beyond this minimum to improve performance may do so, but note the following:

  • vCPU/vRAM increases alone do not increase supported capacity, maximum density per cluster node, or maximum scale per cluster. Customers seeking capacity increases should migrate all cluster nodes to a higher fixed capacity point, as described in the design guide and upgrade guide.
  • All cluster nodes must get the same vCPU/vRAM increase. If capacity points are mixed in the same UCM cluster, the scale per cluster and the density per node remain limited to those of the lowest capacity point (as described in the design guide).
  • Performance depends on many factors, including deployment specifics, so vCPU/vRAM increases may or may not improve performance.


Notes on 10,000 user VM configurations


  • Use for publishers, subscribers, standalone TFTP, standalone multicast MOH nodes, ELM or PAWS-M.
  • User count based on:
    • 1 device per user
    • 10K phones per VM
    • 40K max phones per cluster
    • 60K max BHCC per VM
    • 240K max BHCC per cluster
    • Your design/results may vary
  • vDisk configurations:
    • New installs of 9.1 and above must use a 1 x 110 GB vDisk. Older versions used 2 x 80 GB or other configurations, as shown in the tables above.
    • VM configurations with 2 virtual disks use them as follows:
      • vDisk 1 = Operating System + app binaries
      • vDisk 2 = Logs
    • Upgrades from 8.x VMs with 2 x 80 GB vDisks to 9.1 or higher are not required to change the vDisk layout. 9.1 and higher support these two-vDisk options: 2 x 80 GB, and 1 x 80 GB + 1 x 110 GB (where the auto-grow COP file was used on the second vDisk).
    • Customers who want to use less storage space must use the 1 x 110 GB vDisk; a reinstall plus a restore from backup is required to change from 2 x 80 GB to 1 x 110 GB.
    • Note that vDisk partition alignment is affected only by not using the Cisco-provided OVA for the initial install or reinstall of a VM. Partition alignment is not affected by upgrading the UCM application or by the choice of vDisk configuration.

UCM cluster nodes require fixed capacity points with fixed-configuration VMs in the Cisco-provided OVA for UCM. For a given capacity point (such as the 10K user VM), the virtual hardware specs represent the minimum for that capacity point. Customers who wish to add vCPU and/or vRAM beyond this minimum to improve performance may do so, but note the following:

  • vCPU/vRAM increases alone do not increase supported capacity, maximum density per cluster node, or maximum scale per cluster. Customers seeking capacity increases should migrate all cluster nodes to a higher fixed capacity point, as described in the design guide and upgrade guide.
  • All cluster nodes must get the same vCPU/vRAM increase. If capacity points are mixed in the same UCM cluster, the scale per cluster and the density per node remain limited to those of the lowest capacity point (as described in the design guide).
  • Performance depends on many factors, including deployment specifics, so vCPU/vRAM increases may or may not improve performance.


IOPS and Storage System Performance Requirements


This section provides IOPS data for a Cisco Unified Communications Manager system under load. These values are per active VM. Which VMs are active, and how many are active simultaneously, depends on how the CUCM cluster nodes are set up with respect to service activation, redundancy groups, and so on (see www.cisco.com/go/ucsrnd for details).

93-98% of total I/O is sequential writes with an I/O block size of 4 kilobytes.

  • Active call processing: as a reference, the following steady-state IOPS were observed at various loads, expressed in Busy Hour Call Attempts (a rough estimation sketch follows this list):
    • 10K BHCA produces ~35 IOPS
    • 25K BHCA produces ~50 IOPS
    • 50K BHCA produces ~100 IOPS
    • 100K BHCA produces ~150 IOPS
  • Software upgrades during business hours generate 800 to 1,200 IOPS in addition to steady-state IOPS.
  • CDR/CMR via CDR Analysis and Reporting (CAR):
    • CUCM sending CDR/CMR to an external billing server does not incur any additional IOPS.
    • Enabling CAR continuous loading results in around 300 IOPS average on the system.
    • Scheduled uploads are around 250 IOPS, on the Publisher's VM only.
  • Trace collection is 100 IOPS (occurs on all VMs for which tracing is enabled).
  • Nightly backup (usually on the Publisher's VM only) is 50 IOPS.
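
As a rough planning aid, the reference points above can be interpolated to estimate steady-state IOPS for a given call load and then combined with the situational figures (upgrade, CAR, trace, backup). Linear interpolation between the published points is an assumption made here, not something stated on this page; the sketch below is illustrative only.

    # Reference points from above: (BHCA, steady-state IOPS per active VM).
    REFERENCE = [(10_000, 35), (25_000, 50), (50_000, 100), (100_000, 150)]

    def steady_state_iops(bhca):
        """Estimate steady-state IOPS by linear interpolation between the published points."""
        if bhca <= REFERENCE[0][0]:
            return REFERENCE[0][1]
        for (x0, y0), (x1, y1) in zip(REFERENCE, REFERENCE[1:]):
            if bhca <= x1:
                return y0 + (y1 - y0) * (bhca - x0) / (x1 - x0)
        return REFERENCE[-1][1]  # beyond 100K BHCA this page gives no figure

    def peak_iops(bhca, upgrading=False, car_continuous=False, tracing=True, backup=False):
        """Add the situational loads listed above to the steady-state estimate."""
        iops = steady_state_iops(bhca)
        if upgrading:
            iops += 1_200      # worst case for a business-hours upgrade (800 to 1,200)
        if car_continuous:
            iops += 300
        if tracing:
            iops += 100
        if backup:
            iops += 50         # nightly backup, usually on the Publisher only
        return iops

    # Example: 40K BHCA with tracing enabled during a business-hours upgrade.
    print(round(peak_iops(40_000, upgrading=True)))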


CUCM in Cisco Mobile Ready Net (MRN)


  • Click here for CUCM authorized platforms when part of Cisco MRN.
  • From a technical perspective, CUCM 8.6 or higher is allowed on these platforms, but customers must verify that the desired CUCM version meets certification requirements (JITC, etc.).