To prevent resource allocation issues, all VMs used in the system must have the same CPU size and the same memory size. To balance performance across all interfaces, make sure that the service ports and DI ports have the same throughput capabilities.
To verify the hardware configuration for all cards or a specific card, use the show cloud hardware [card_number] command. Sample output from this command on card 1 (CF) is shown here:
[local]s1# show cloud hardware 1
Card 1:
CPU Nodes : 1
CPU Cores/Threads : 8
Memory : 16384M (qvpc-di-medium)
Hugepage size : 2048kB
cpeth0 :
Driver : virtio_net
loeth0 :
Driver : virtio_net
Sample output from this command on card 3 (SF) is shown here:
[local]s1# show cloud hardware 3
Card 3:
CPU Nodes : 1
CPU Cores/Threads : 8
Memory : 16384M (qvpc-di-medium)
Hugepage size : 2048kB
cpeth0 :
Driver : vmxnet3
port3_10 :
Driver : vmxnet3
port3_11 :
Driver : vmxnet3
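Because the uniform-sizing requirement above applies to every card, it can be worth comparing these values programmatically on systems with many VMs. The following Python sketch is illustrative only: it assumes you capture the per-card show cloud hardware output through your own CLI automation, and the regular expressions simply match the field labels shown in the samples above.

import re

def parse_card(output):
    # Extract the CPU core/thread count and memory size (in MB) from one
    # card's "show cloud hardware" output.
    cores = re.search(r"CPU Cores/Threads\s*:\s*(\d+)", output)
    memory = re.search(r"Memory\s*:\s*(\d+)M", output)
    return (int(cores.group(1)), int(memory.group(1)))

def check_uniform(card_outputs):
    # Return True when every card reports the same CPU and memory sizing.
    sizes = {card: parse_card(text) for card, text in card_outputs.items()}
    for card, (cores, mem) in sorted(sizes.items()):
        print("Card %d: %d cores, %dM memory" % (card, cores, mem))
    return len(set(sizes.values())) == 1

# Using the two sample outputs above (cards 1 and 3):
outputs = {
    1: "CPU Cores/Threads : 8\nMemory : 16384M (qvpc-di-medium)",
    3: "CPU Cores/Threads : 8\nMemory : 16384M (qvpc-di-medium)",
}
print(check_uniform(outputs))  # True; mismatched sizing would print False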
To display the optimum configuration of the underlying VM hardware, use the show cloud hardware optimum command. To compare your current VM configuration to the optimum configuration, use the show cloud hardware test command. Any parameters that are not set to the optimum value are flagged with an asterisk, as shown in this sample output. In this example, the CPU cores/threads and memory are not configured optimally.
[local]s1# show cloud hardware test 1
Card 1:
CPU Nodes : 1
* CPU Cores/Threads : 8 Optimum value is 4
* Memory : 8192M (qvpc-di-medium) Optimum value is 16384
Hugepage size : 2048kB
cpeth0 :
Driver : virtio_net
loeth0 :
Driver : virtio_net
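Because non-optimum parameters are flagged with a leading asterisk, auditing many cards can be as simple as scanning the captured output for flagged lines. This is a minimal Python sketch, again assuming the output is captured by your own tooling; the sample text mirrors the output above.

TEST_OUTPUT = """\
Card 1:
  CPU Nodes : 1
* CPU Cores/Threads : 8  Optimum value is 4
* Memory : 8192M (qvpc-di-medium)  Optimum value is 16384
  Hugepage size : 2048kB
"""

def flagged_params(output):
    # Return the lines that the CLI marked with an asterisk (non-optimum).
    return [line.strip() for line in output.splitlines()
            if line.lstrip().startswith("*")]

for line in flagged_params(TEST_OUTPUT):
    print("non-optimum:", line)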
To display the configuration file on the config disk or local flash, use the show cloud configuration card_number command. The location of the parameters file on flash memory is defined during installation. The config disk is usually created by the orchestrator and then attached to the card. Sample output from this command is shown here for card 1:
[local]s1# show cloud configuration 1
Card 1:
Config Disk Params:
-------------------------
No config disk available
Local Params:
-------------------------
CARDSLOT=1
CARDTYPE=0x40010100
CPUID=0
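The parameters shown here are plain key=value pairs, so they are easy to pull into your own tooling. A short illustrative Python sketch follows; the keys (CARDSLOT, CARDTYPE, CPUID) are taken from the sample output, and how you capture the output is up to you.

LOCAL_PARAMS = """\
CARDSLOT=1
CARDTYPE=0x40010100
CPUID=0
"""

# Split each line at the first "=" to build a parameter dictionary.
params = dict(line.split("=", 1) for line in LOCAL_PARAMS.splitlines() if "=" in line)
print(params["CARDTYPE"])  # -> 0x40010100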
To display the IFTASK configuration for all cards or a specific card, use the show cloud hardware iftask command. By default, the cores are configured to be used for both PMD and VNPU. Sample output from this command on card
4 is shown here:
[local]mySystem# show cloud hardware iftask 4
Card 4:
Total number of cores on VM: 24
Number of cores for PMD only: 0
Number of cores for VNPU only: 0
Number of cores for PMD and VNPU: 3
Number of cores for MCDMA: 4
Hugepage size: 2048 kB
Total hugepages: 16480256 kB
NPUSHM hugepages: 0 kB
CPU flags: avx sse sse2 ssse3 sse4_1 sse4_2
Poll CPU's: 1 2 3 4 5 6 7
KNI reschedule interval: 5 us
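For a quick summary of how the IFTASK cores are divided, the counts in this output can be parsed the same way. The sketch below is illustrative; the field labels come from the sample above, and the shared-core check reflects the default (cores used for both PMD and VNPU) described earlier.

import re

IFTASK_OUTPUT = """\
Total number of cores on VM: 24
Number of cores for PMD only: 0
Number of cores for VNPU only: 0
Number of cores for PMD and VNPU: 3
Number of cores for MCDMA: 4
"""

# Collect the per-role core counts keyed by role name.
fields = dict(re.findall(r"Number of cores for ([^:]+):\s*(\d+)", IFTASK_OUTPUT))
total = int(re.search(r"Total number of cores on VM:\s*(\d+)", IFTASK_OUTPUT).group(1))
shared = int(fields["PMD and VNPU"])
dedicated = int(fields["PMD only"]) + int(fields["VNPU only"])
print("%d cores total; %d shared PMD/VNPU, %d dedicated, %d MCDMA"
      % (total, shared, dedicated, int(fields["MCDMA"])))
if shared and not dedicated:
    print("Default configuration: cores are shared between PMD and VNPU")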