Cisco and Hitachi Adaptive Solution VDI for VMware Horizon 8 VMware vSphere 8.0 U2


Bias-Free Language

The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.



Published: May 2024

In partnership with: Hitachi

About the Cisco Validated Design Program

The Cisco Validated Design (CVD) program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, go to: http://www.cisco.com/go/designzone.

Executive Summary

Cisco Validated Designs (CVDs) consist of systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of our customers.

This document details the design of the Cisco and Hitachi Adaptive Solution Virtual Desktop Infrastructure for VMware Horizon 8 on VMware vSphere 8.0 U2, a validated Converged Infrastructure (CI) jointly developed by Cisco and Hitachi.

This document explains the deployment of a predesigned, best-practice data center architecture built with:

     VMware Horizon and VMware vSphere.

     Cisco Unified Computing System (Cisco UCS) incorporating the Cisco UCS X-Series modular platform.

     Cisco Nexus 9000 family of switches.

     Cisco MDS 9000 family of Fibre Channel switches.

     Hitachi Virtual Storage Platform E1090 (VSP E1090) supporting Fibre Channel storage access.

The Cisco Intersight cloud platform delivers monitoring, orchestration, workload optimization, and lifecycle management capabilities for the solution.

When deployed, the architecture presents a robust infrastructure viable for a wide range of application workloads implemented as a Virtual Desktop Infrastructure (VDI).

Additional Cisco Validated Designs created in a partnership between Cisco and Hitachi can be found here: https://www.cisco.com/c/en/us/solutions/design-zone/data-center-design-guides/data-center-design-guides-all.html#Hitachi

Solution Overview

This chapter contains the following:

     Audience

     Purpose of this Document

     What’s New in this Release?

The current industry trend in data center design is towards shared infrastructures. By using virtualization along with pre-validated IT platforms, enterprise customers have embarked on the journey to the cloud by moving away from application silos and toward shared infrastructure that can be quickly deployed, thereby increasing agility and reducing costs. Cisco, Hitachi, and VMware have partnered to deliver this Cisco Validated Design, which uses best-of-breed storage, server, and network components to serve as the foundation for desktop virtualization workloads, enabling efficient architectural designs that can be quickly and confidently deployed.

Audience

The intended audience for this document includes, but is not limited to, IT architects, sales engineers, field consultants, professional services, IT managers, IT engineers, partners, and customers who are interested in learning about and deploying Virtual Desktop Infrastructure (VDI).

Purpose of this Document

This document provides a step-by-step design, configuration, and implementation guide for the Cisco Validated Design as follows:

     Large-scale VMware Horizon 8 VDI

     Hitachi VSP E1090 Storage System

     Cisco UCS X210c M7 Blade Servers running VMware vSphere 8.0 U2

     Cisco Nexus 9000 Series Ethernet Switches

     Cisco MDS 9100 Series Multilayer Fibre Channel Switches

What’s New in this Release?

Highlights for this design include:

     Support for the Cisco UCS X9508 chassis with Cisco UCS X210c M7 compute nodes

     Support for Hitachi Virtual Storage Platform E1090 (VSP E1090) storage system with Hitachi code version 93-07-21-80/00

     Hitachi Ops Center 10.9.3

     VMware Horizon 8 2312 (Horizon 8 version 8.12)

     Support for VMware vSphere 8.0 U2

     Support for Cisco UCS Manager 4.2

     Support for VMware vCenter 8.0 U2 to set up and manage the virtual infrastructure as well as integration of the virtual environment with Cisco Intersight software

     Support for the Cisco Intersight platform to deploy, maintain, and support the Cisco and Hitachi Adaptive Solution components

     Support for Cisco Intersight Assist virtual appliance to help connect the Hitachi Storage and VMware vCenter with the Cisco Intersight platform

     Support for Microsoft Server 2022 OS for Remote Desktop Session Host (RDSH) server multi-sessions deployment

     Support for Microsoft Windows 11 OS for instant-clone (non-persistent) and full-clone (persistent) VDI virtual machine deployments

These factors have led to the need for predesigned computing, networking, and storage building blocks optimized to lower the initial design cost, simplify management, and enable horizontal scalability and high levels of utilization.

The use cases include:

     Enterprise Data Center

     Service Provider Data Center

     Large Commercial Data Center

Technology Overview

This chapter contains the following:

     Cisco and Hitachi Adaptive Solutions

     Cisco Unified Computing System

     Cisco UCS Fabric Interconnect

     Cisco UCS Virtual Interface Cards (VICs)

     Cisco Switches

     VMware Horizon

     VMware Horizon Remote Desktop Session Host (RDSH) Sessions and Windows 11 Desktops

     VMware vSphere 8.0 Update 2

     Cisco Intersight Assist Device Connector for VMware vCenter and Hitachi Storage

     Hitachi Virtual Storage Platform

     Hitachi Storage Virtualization Operating System RF

     Hitachi Ops Center

     Hitachi Virtual Storage Platform Sustainability

Cisco and Hitachi have partnered to deliver several Cisco Validated Designs, which use best-in-class storage, server, and network components to serve as the foundation for virtualized workloads such as Virtual Desktop Infrastructure (VDI), enabling efficient architectural designs that you can deploy quickly and confidently.

Cisco and Hitachi Adaptive Solutions

Cisco and Hitachi jointly developed the Cisco and Hitachi Adaptive Solution architecture. All components are integrated, allowing customers to deploy the solution quickly and economically while eliminating many of the risks associated with researching, designing, building, and deploying similar solutions from the ground up. One of the main benefits of the Cisco and Hitachi Adaptive Solution is its ability to maintain consistency at scale. Each of the component families shown in Figure 1 (Cisco UCS, Cisco Nexus, Cisco MDS, and Hitachi VSP) offers platform and resource options to scale up or scale out the infrastructure while supporting the same features and functions.

Figure 1.          Cisco and Hitachi Adaptive Solution Components


Cisco Unified Computing System

Cisco Unified Computing System (Cisco UCS) is a next-generation data center platform that integrates computing, networking, storage access, and virtualization resources into a cohesive system designed to reduce total cost of ownership and increase business agility. The system integrates a low-latency, lossless 10-100 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform with a unified management domain for managing all resources.

Cisco Unified Computing System consists of the following subsystems:

     Compute: The compute portion of the system incorporates servers based on fourth- and fifth-generation Intel Xeon Scalable processors. Servers are available in blade and rack form factors and are managed by Cisco UCS Manager.

     Network: The integrated network fabric in the system provides a low-latency, lossless, 10/25/40/100 Gbps Ethernet fabric. Networks for LAN, SAN and management access are consolidated within the fabric. The unified fabric uses the innovative Single Connect technology to lower costs by reducing the number of network adapters, switches, and cables. This in turn lowers the power and cooling needs of the system.

     Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtual environments to support evolving business needs.

     Storage access: Cisco UCS system provides consolidated access to both SAN storage and Network Attached Storage over the unified fabric. This provides customers with storage choices and investment protection. Also, the server administrators can pre-assign storage-access policies to storage resources, for simplified storage connectivity and management leading to increased productivity.

     Management: The system uniquely integrates compute, network, and storage access subsystems, enabling it to be managed as a single entity through Cisco UCS Manager software. Cisco UCS Manager increases IT staff productivity by enabling storage, network, and server administrators to collaborate on Service Profiles that define the desired physical configurations and infrastructure policies for applications. Service Profiles increase business agility by enabling IT to automate and provision resources in minutes instead of days.

Cisco UCS Differentiators

Cisco Unified Computing System is revolutionizing the way servers are managed in the datacenter. The following are the unique differentiators of Cisco Unified Computing System and Cisco UCS Manager:

     Embedded Management: In Cisco UCS, the servers are managed by the embedded firmware in the Fabric Interconnects, eliminating the need for any external physical or virtual devices to manage the servers.

     Unified Fabric: In Cisco UCS, from blade server chassis or rack servers to FI, there is a single Ethernet cable used for LAN, SAN, and management traffic. This converged I/O results in reduced cables, SFPs and adapters – reducing capital and operational expenses of the overall solution.

     Auto Discovery: By simply inserting the blade server in the chassis or connecting the rack server to the fabric interconnect, discovery and inventory of compute resources occurs automatically without any management intervention. The combination of unified fabric and auto-discovery enables the wire-once architecture of Cisco UCS, where compute capability of Cisco UCS can be extended easily while keeping the existing external connectivity to LAN, SAN, and management networks.

     Policy Based Resource Classification: Once Cisco UCS Manager discovers a compute resource, it can be automatically classified to a given resource pool based on policies defined. This capability is useful in multi-tenant cloud computing. This CVD showcases the policy-based resource classification of Cisco UCS Manager.

     Combined Rack and Blade Server Management: Cisco UCS Manager can manage Cisco UCS B-Series blade servers and Cisco UCS C-Series rack servers under the same Cisco UCS domain. This feature, along with stateless computing, makes compute resources truly hardware-form-factor agnostic.

     Model-based Management Architecture: The Cisco UCS Manager architecture and management database is model-based and data-driven. An open XML API is provided to operate on the management model. This enables easy and scalable integration of Cisco UCS Manager with other management systems.

     Policies, Pools, Templates: The management approach in Cisco UCS Manager is based on defining policies, pools, and templates, instead of cluttered configuration, which enables a simple, loosely coupled, data driven approach in managing compute, network, and storage resources.

     Loose Referential Integrity: In Cisco UCS Manager, a service profile, port profile, or policy can refer to other policies or logical resources with loose referential integrity. A referred policy need not exist at the time the referring policy is authored, and a referred policy can be deleted even though other policies refer to it. This allows different subject matter experts to work independently of each other and provides great flexibility, enabling experts from different domains, such as network, storage, security, server, and virtualization, to work together to accomplish a complex task.

     Policy Resolution: In Cisco UCS Manager, a tree structure of organizational unit hierarchy can be created that mimics real-life tenant and/or organizational relationships. Various policies, pools, and templates can be defined at different levels of the organization hierarchy. A policy referring to another policy by name is resolved in the organizational hierarchy with the closest policy match. If no policy with the specified name is found in the hierarchy up to the root organization, the special policy named "default" is used. This policy resolution practice enables automation-friendly management APIs and provides great flexibility to owners of different organizations (a minimal sketch of this resolution order follows this list).

     Service Profiles and Stateless Computing: A service profile is a logical representation of a server, carrying its various identities and policies. This logical server can be assigned to any physical compute resource as long as it meets the resource requirements. Stateless computing enables procurement of a server within minutes, which used to take days in legacy server management systems.

     Built-in Multi-Tenancy Support: The combination of policies, pools, and templates, loose referential integrity, policy resolution in the organizational hierarchy, and a service profile-based approach to compute resources makes Cisco UCS Manager inherently friendly to multi-tenant environments typically observed in private and public clouds.

     Extended Memory: The enterprise-class Cisco UCS Blade server extends the capabilities of the Cisco Unified Computing System portfolio in a half-width blade form factor. It harnesses the power of the latest Intel Xeon Scalable Series processor family CPUs and Intel Optane DC Persistent Memory (DCPMM) with up to 18TB of RAM (using 256GB DDR4 DIMMs and 512GB DCPMM).

     Simplified QoS: Even though Fibre Channel and Ethernet are converged in the Cisco UCS fabric, built-in support for QoS and lossless Ethernet makes it seamless. Network Quality of Service (QoS) is simplified in Cisco UCS Manager by representing all system classes in one GUI panel.
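As a concrete illustration of the policy-resolution behavior described above, the following minimal Python sketch (not Cisco UCS Manager code; the class, organization, and policy names are hypothetical) walks the organization hierarchy from the referring organization toward the root, returns the closest match, and falls back to the root "default" policy when no named match exists.

class Org:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.policies = name, parent, {}

root_org = Org("root")
tenant_a = Org("tenant-a", parent=root_org)
root_org.policies["default"] = "root-default-bios-policy"
tenant_a.policies["gold-bios"] = "tenant-a-gold-bios-policy"

def resolve_policy(org, name):
    """Return the closest policy with the given name, walking toward the root
    organization; fall back to the special 'default' policy at the root."""
    node = org
    while node is not None:
        if name in node.policies:
            return node.policies[name]
        node = node.parent
    return root_org.policies.get("default")

print(resolve_policy(tenant_a, "gold-bios"))    # closest match: tenant-a-gold-bios-policy
print(resolve_policy(tenant_a, "silver-bios"))  # no match in the hierarchy: root 'default' policy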

Cisco Intersight

Cisco Intersight is a lifecycle management platform for your Cisco UCS ecosystem as well as the Hitachi VSP, regardless of where it resides. In your enterprise data center, at the edge, in remote and branch offices, at retail and industrial sites—all these locations present unique management challenges and have typically required separate tools. Cisco Intersight Software as a Service (SaaS) unifies and simplifies your experience of the Cisco Unified Computing System (Cisco UCS).

Cisco Intersight software delivers a new level of cloud-powered intelligence that supports lifecycle management with continuous improvement. It is tightly integrated with the Cisco Technical Assistance Center (TAC).

Expertise and information flow seamlessly between Cisco Intersight and IT teams, providing global management of Cisco infrastructure, anywhere. Remediation and problem resolution are supported with automated upload of error logs for rapid root-cause analysis.

Figure 2.          Cisco Intersight


     Automate your infrastructure.

Cisco has a strong track record for management solutions that deliver policy-based automation to daily operations. Intersight SaaS is a natural evolution of our strategies. Cisco designed Cisco UCS to be 100 percent programmable. Cisco Intersight simply moves the control plane from the network into the cloud. Now you can manage your Cisco UCS and infrastructure wherever it resides through a single interface.

     Deploy your way.

If you need to control how your management data is handled, comply with data locality regulations, or consolidate the number of outbound connections from servers, you can use the Cisco Intersight Virtual Appliance for an on-premises experience. Cisco Intersight Virtual Appliance is continuously updated just like the SaaS version, so regardless of which approach you implement, you never have to worry about whether your management software is up to date.

     DevOps ready.

If you are implementing DevOps practices, you can use the Cisco Intersight API with either the cloud-based or virtual appliance offering. Through the API you can configure and manage infrastructure as code: you are not merely configuring an abstraction layer; you are managing the real thing. Through the API's support for cloud-based RESTful calls, Terraform providers, Microsoft PowerShell scripts, and Python software, you can automate the deployment of settings and software for both physical and virtual layers. Using the API, you can simplify infrastructure lifecycle operations and increase the speed of continuous application delivery. A brief Python sketch follows this list.

     Pervasive simplicity.

Simplify the user experience by managing your infrastructure regardless of where it is installed.

     Actionable intelligence.

     Use best practices to enable faster, proactive IT operations.

     Gain actionable insight for ongoing improvement and problem avoidance.

     Manage anywhere.

     Deploy in the data center and at the edge with massive scale.

     Get visibility into the health and inventory detail for your Intersight Managed environment on-the-go with the Cisco Intersight Mobile App.
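For example, the open-source Cisco Intersight Python SDK (the intersight package) can be used to query inventory through the Intersight REST API. The following is a minimal sketch only; the API key ID, secret key file, and the choice of signing algorithm are placeholders or assumptions, and exact attribute names may vary between SDK versions, so treat it as an illustration rather than a definitive implementation.

import intersight
from intersight.api import compute_api

# Placeholder credentials: generate an API key ID and secret key file in the
# Intersight UI and reference them here. The ECDSA signing algorithm assumes a
# v3 (EC) API key; an RSA (v2) key would use ALGORITHM_RSASSA_PKCS1v15 instead.
configuration = intersight.Configuration(
    host="https://intersight.com",
    signing_info=intersight.signing.HttpSigningConfiguration(
        key_id="<your-api-key-id>",
        private_key_path="./SecretKey.txt",
        signing_scheme=intersight.signing.SCHEME_HS2019,
        signing_algorithm=intersight.signing.ALGORITHM_ECDSA_MODE_DETERMINISTIC_RFC6979,
        hash_algorithm=intersight.signing.HASH_SHA256,
        signed_headers=[
            intersight.signing.HEADER_REQUEST_TARGET,
            intersight.signing.HEADER_HOST,
            intersight.signing.HEADER_DATE,
            intersight.signing.HEADER_DIGEST,
        ],
    ),
)

api_client = intersight.ApiClient(configuration)
compute = compute_api.ComputeApi(api_client)

# List the first ten physical server summaries claimed in the account.
summaries = compute.get_compute_physical_summary_list(top=10)
for server in summaries.results:
    print(server.name, server.model, server.serial)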

For more information about Cisco Intersight and the different deployment options, go to: Cisco Intersight – Manage your systems anywhere.

Cisco UCS Fabric Interconnect

The Cisco UCS Fabric Interconnect (FI) is a core part of the Cisco Unified Computing System, providing both network connectivity and management capabilities for the system. Depending on the model chosen, the Cisco UCS Fabric Interconnect offers line-rate, low-latency, lossless 10 Gigabit, 25 Gigabit, 40 Gigabit, or 100 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel connectivity. Cisco UCS Fabric Interconnects provide the management and communication backbone for the Cisco UCS C-Series, S-Series and HX-Series Rack-Mount Servers, Cisco UCS B-Series Blade Servers, and the Cisco UCS X9508 Chassis. All servers and chassis, and therefore all blades and compute nodes, attached to the Cisco UCS Fabric Interconnects become part of a single, highly available management domain. In addition, by supporting unified fabrics, the Cisco UCS Fabric Interconnects provide both the LAN and SAN connectivity for all servers within their domain.

For networking performance, the Cisco UCS 6536 Series uses a cut-through architecture, supporting deterministic, low latency, line rate 10/25/40/100 Gigabit Ethernet ports, 3.82 Tbps of switching capacity, and 320 Gbps bandwidth per Cisco 9508 blade chassis when connected through the IOM 2208/2408 model. The product family supports Cisco low-latency, lossless 10/25/40/100 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The Fabric Interconnect supports multiple traffic classes over the Ethernet fabric from the servers to the uplinks. Significant TCO savings come from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.

Cisco UCS 6536 Fabric Interconnects

The Cisco UCS Fabric Interconnects (FIs) provide a single point for connectivity and management for the entire Cisco Unified Computing System. Typically deployed as an active/active pair, the system’s FIs integrate all components into a single, highly available management domain controlled by Cisco Intersight. Cisco UCS FIs provide a single unified fabric for the system, with low-latency, lossless, cut-through switching that supports LAN, SAN, and management traffic using a single set of cables.

Figure 3.          Cisco UCS 6536 Fabric Interconnects


The Cisco UCS 6536 utilized in the current design is a 36-port Fabric Interconnect. This one-rack-unit (1RU) device provides 36 10/25/40/100-Gbps Ethernet ports and supports up to 16 8/16/32-Gbps Fibre Channel ports through 128-Gbps-to-4x32-Gbps breakouts on ports 33-36. All 36 ports support breakout cables or QSA interfaces.

Figure 4.          Cisco UCS X210c M7 Compute Node


The Cisco UCS X210c M7 features:

     CPU: Up to 2x 4th Gen Intel Xeon Scalable Processors with up to 60 cores per processor and up to 2.625 MB Level 3 cache per core and up to 112.5 MB per CPU.

     Memory: Up to 8TB of main memory with 32x 256 GB DDR5-4800 DIMMs.

     Disk storage: Up to six hot-pluggable solid-state drives (SSDs) or non-volatile memory express (NVMe) 2.5-inch drives with a choice of enterprise-class redundant array of independent disks (RAID) or pass-through controllers, plus up to two M.2 SATA drives with optional hardware RAID.

     Optional front mezzanine GPU module: The Cisco UCS front mezzanine GPU module is a passive PCIe Gen 4.0 front mezzanine option with support for up to two U.2 NVMe drives and two HHHL GPUs.

     mLOM virtual interface cards:

          Cisco UCS Virtual Interface Card (VIC) 15420 occupies the server's modular LAN on motherboard (mLOM) slot, enabling up to 50 Gbps of unified fabric connectivity to each of the chassis intelligent fabric modules (IFMs) for 100 Gbps connectivity per server.

          Cisco UCS Virtual Interface Card (VIC) 15231 occupies the server's modular LAN on motherboard (mLOM) slot, enabling up to 100 Gbps of unified fabric connectivity to each of the chassis intelligent fabric modules (IFMs) for 100 Gbps connectivity per server.

     Optional mezzanine card:

          Cisco UCS 5th Gen Virtual Interface Card (VIC) 15422 can occupy the server's mezzanine slot at the bottom rear of the chassis. This card's I/O connectors link to Cisco UCS X-Fabric technology. An included bridge card extends this VIC's 2x 50 Gbps of network connections through IFM connectors, bringing the total bandwidth to 100 Gbps per fabric (for a total of 200 Gbps per server).

          Cisco UCS PCI Mezz card for X-Fabric can occupy the server's mezzanine slot at the bottom rear of the chassis. This card's I/O connectors link to Cisco UCS X-Fabric modules and enable connectivity to the Cisco UCS X440p PCIe Node.

     All VIC mezzanine cards also provide I/O connections from the Cisco UCS X210c M7 compute node to the X440p PCIe Node.

     Security: The server supports an optional Trusted Platform Module (TPM). Additional security features include a secure boot FPGA and ACT2 anticounterfeit provisions.

Cisco UCS Virtual Interface Cards (VICs)

The Cisco UCS VIC 15000 series is designed for Cisco UCS X-Series M6/M7 Blade Servers, Cisco UCS B-Series M6 Blade Servers, and Cisco UCS C-Series M6/M7 Rack Servers. The adapters are capable of supporting 10/25/40/50/100/200-Gigabit Ethernet and Fibre Channel over Ethernet (FCoE). They incorporate Cisco’s next-generation Converged Network Adapter (CNA) technology and offer a comprehensive feature set, providing investment protection for future feature software releases.

Cisco UCS VIC 15231

The Cisco UCS VIC 15231 (Figure 5) is a 2x100-Gbps Ethernet/FCoE-capable modular LAN on motherboard (mLOM) adapter designed exclusively for the Cisco UCS X210c Compute Node. The Cisco UCS VIC 15231 enables a policy-based, stateless, agile server infrastructure that can present to the host PCIe standards-compliant interfaces that can be dynamically configured as either NICs or HBAs.

Figure 5.          Cisco UCS VIC 15231


Figure 6.          Cisco UCS VIC 15231


Cisco Switches

Cisco Nexus 93180YC-FX Switches

The Cisco Nexus 93180YC-FX Switch provides a flexible line-rate Layer 2 and Layer 3 feature set in a compact form factor. Designed with Cisco Cloud Scale technology, it supports highly scalable cloud architectures. With the option to operate in Cisco NX-OS or Application Centric Infrastructure (ACI) mode, it can be deployed across enterprise, service provider, and Web 2.0 data centers.

     Architectural Flexibility

          Includes top-of-rack or middle-of-row fiber-based server access connectivity for traditional and leaf-spine architectures.

          Leaf node support for Cisco ACI architecture is provided in the roadmap.

          Increase scale and simplify management through Cisco Nexus 2000 Fabric Extender support.

     Feature Rich

          Enhanced Cisco NX-OS Software is designed for performance, resiliency, scalability, manageability, and programmability.

          ACI-ready infrastructure helps users take advantage of automated policy-based systems management.

          Virtual Extensible LAN (VXLAN) routing provides network services.

          Rich traffic flow telemetry with line-rate data collection.

          Real-time buffer utilization per port and per queue, for monitoring traffic micro-bursts and application traffic patterns.

     Highly Available and Efficient Design

          High-density, non-blocking architecture.

          Easily deployed into either a hot-aisle and cold-aisle configuration.

          Redundant, hot-swappable power supplies and fan trays.

     Simplified Operations

          Power-On Auto Provisioning (POAP) support allows for simplified software upgrades and configuration file installation.

           An intelligent API offers switch management through remote procedure calls (RPCs, JSON, or XML) over an HTTP/HTTPS infrastructure (see the NX-API example at the end of this section).

          Python Scripting for programmatic access to the switch command-line interface (CLI).

          Hot and cold patching, and online diagnostics.

     Investment Protection

A Cisco 40-Gb bidirectional transceiver allows reuse of an existing 10 Gigabit Ethernet multimode cabling plant for 40 Gigabit Ethernet. Support for 1-Gb and 10-Gb access connectivity helps data centers migrate their access switching infrastructure to faster speeds. The following is supported:

     1.8 Tbps of bandwidth in a 1 RU form factor.

     48 fixed 1/10/25-Gbps SFP+ ports.

     6 fixed 40/100-Gbps QSFP28 ports for uplink connectivity.

     Latency of less than 2 microseconds.

     Front-to-back or back-to-front airflow configurations.

     1+1 redundant hot-swappable 80 Plus Platinum-certified power supplies.

     Hot-swappable 3+1 redundant fan trays.

Figure 7.          Cisco Nexus 93180YC-FX Switch

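As an illustration of the programmability described above, the following minimal Python sketch calls the standard NX-API JSON-RPC endpoint on a Cisco Nexus switch. The switch address and credentials are placeholders, and NX-API must first be enabled on the switch with the feature nxapi configuration command.

import requests

# Placeholder management address and credentials for illustration only.
switch_url = "https://192.0.2.10/ins"
payload = [
    {
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": "show interface brief", "version": 1},
        "id": 1,
    }
]

# NX-API accepts JSON-RPC formatted CLI commands over HTTP/HTTPS.
response = requests.post(
    switch_url,
    json=payload,
    headers={"content-type": "application/json-rpc"},
    auth=("admin", "<password>"),
    verify=False,  # lab example only; use a trusted certificate in production
)
print(response.json())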

Cisco MDS 9132T 32-Gb Fibre Channel Switch

The next-generation Cisco MDS 9132T 32-Gb 32-Port Fibre Channel Switch (Figure 8) provides high-speed Fibre Channel connectivity from the server rack to the SAN core. It empowers small, midsize, and large enterprises that are rapidly deploying cloud-scale applications using extremely dense virtualized servers, providing the dual benefits of greater bandwidth and consolidation.

Small-scale SAN architectures can be built from the foundation using this low-cost, low-power, non-blocking, line-rate, and low-latency, bi-directional airflow capable, fixed standalone SAN switch connecting both storage and host ports.

Medium-size to large-scale SAN architectures built with SAN core directors can expand 32-Gb connectivity to the server rack using these switches either in switch mode or Network Port Virtualization (NPV) mode.

Additionally, investing in this switch for the lower-speed (4-, 8-, or 16-Gb) server rack gives you the option to upgrade to 32-Gb server connectivity in the future using the 32-Gb Host Bus Adapters (HBAs) that are available today. The Cisco MDS 9132T 32-Gb 32-Port Fibre Channel switch also provides unmatched flexibility through a unique port expansion module (Figure 9) that provides a robust, cost-effective, field-swappable port upgrade option.

This switch also offers state-of-the-art SAN analytics and telemetry capabilities that have been built into this next-generation hardware platform. This new state-of-the-art technology couples the next-generation port ASIC with a fully dedicated Network Processing Unit designed to complete analytics calculations in real time. The telemetry data extracted from the inspection of the frame headers are calculated on board (within the switch) and, using an industry-leading open format, can be streamed to any analytics-visualization platform. This switch also includes a dedicated 10/100/1000BASE-T telemetry port to maximize data delivery to any telemetry receiver including Cisco Data Center Network Manager.

Figure 8.          Cisco MDS 9132T 32-Gb Fibre Channel Switch


Figure 9.          Cisco MDS 9132T 32-Gb 16-Port Fibre Channel Port Expansion Module


     Features

          High performance: Cisco MDS 9132T architecture, with chip-integrated nonblocking arbitration, provides consistent 32-Gb low-latency performance across all traffic conditions for every Fibre Channel port on the switch.

          Capital Expenditure (CapEx) savings: The 32-Gb ports allow users to deploy them on existing 16- or 8-Gb transceivers, reducing initial CapEx with an option to upgrade to 32-Gb transceivers and adapters in the future.

          High availability: Cisco MDS 9132T switches continue to provide the same outstanding availability and reliability as the previous-generation Cisco MDS 9000 Family switches by providing optional redundancy on all major components such as the power supply and fan. Dual power supplies also facilitate redundant power grids.

           Pay-as-you-grow: The Cisco MDS 9132T Fibre Channel switch provides an option to deploy as few as eight 32-Gb Fibre Channel ports in the entry-level variant, which can grow by 8 ports to 16 ports, and thereafter, with a port expansion module providing sixteen 32-Gb ports, to up to 32 ports. This approach results in lower initial investment and power consumption for entry-level configurations of up to 16 ports compared to a fully loaded switch. Upgrading through an expansion module also reduces the overhead of managing multiple instances of port activation licenses on the switch. This unique combination of port upgrade options allows four possible configurations of 8, 16, 24, and 32 ports.

          Next-generation Application-Specific Integrated Circuit (ASIC): The Cisco MDS 9132T Fibre Channel switch is powered by the same high-performance 32-Gb Cisco ASIC with an integrated network processor that powers the Cisco MDS 9700 48-Port 32-Gb Fibre Channel Switching Module. Among all the advanced features that this ASIC enables, one of the most notable is inspection of Fibre Channel and Small Computer System Interface (SCSI) headers at wire speed on every flow in the smallest form-factor Fibre Channel switch without the need for any external taps or appliances. The recorded flows can be analyzed on the switch and also exported using a dedicated 10/100/1000BASE-T port for telemetry and analytics purposes.

          Intelligent network services: Slow-drain detection and isolation, VSAN technology, Access Control Lists (ACLs) for hardware-based intelligent frame processing, smart zoning, and fabric wide Quality of Service (QoS) enable migration from SAN islands to enterprise-wide storage networks. Traffic encryption is optionally available to meet stringent security requirements.

          Sophisticated diagnostics: The Cisco MDS 9132T provides intelligent diagnostics tools such as Inter-Switch Link (ISL) diagnostics, read diagnostic parameters, protocol decoding, network analysis tools, and integrated Cisco Call Home capability for greater reliability, faster problem resolution, and reduced service costs.

           Virtual machine awareness: The Cisco MDS 9132T provides visibility into all virtual machines logged into the fabric. This feature is available through HBAs capable of priority tagging the Virtual Machine Identifier (VMID) on every FC frame. Virtual machine awareness can be extended to intelligent fabric services such as analytics to visualize the performance of every flow originating from each virtual machine in the fabric.

          Programmable fabric: The Cisco MDS 9132T provides powerful Representational State Transfer (REST) and Cisco NX-API capabilities to enable flexible and rapid programming of utilities for the SAN as well as polling point-in-time telemetry data from any external tool.

          Single-pane management: The Cisco MDS 9132T can be provisioned, managed, monitored, and troubleshot using Cisco Data Center Network Manager (DCNM), which currently manages the entire suite of Cisco data center products.

          Self-contained advanced anticounterfeiting technology: The Cisco MDS 9132T uses on-board hardware that protects the entire system from malicious attacks by securing access to critical components such as the bootloader, system image loader and Joint Test Action Group (JTAG) interface.

VMware Horizon

VMware Horizon is a modern platform for running and delivering virtual desktops and apps across the hybrid cloud. For administrators, this means simple, automated, and secure desktop and app management. For users, it provides a consistent experience across devices and locations.

For more information, go to: VMware Horizon.

VMware Horizon Remote Desktop Session Host (RDSH) Sessions and Windows 11 Desktops

The virtual app and desktop solution is designed for an exceptional experience.

Today's employees spend more time than ever working remotely, causing companies to rethink how IT services should be delivered. To modernize infrastructure and maximize efficiency, many are turning to desktop as a service (DaaS) to enhance their physical desktop strategy, or they are updating on-premises virtual desktop infrastructure (VDI) deployments. Managed in the cloud, these deployments are high-performance virtual instances of desktops and apps that can be delivered from any datacenter or public cloud provider.

DaaS and VDI capabilities provide corporate data protection as well as an easily accessible hybrid work solution for employees. Because all data is stored securely in the cloud or datacenter, rather than on devices, end-users can work securely from anywhere, on any device, and over any network—all with a fully IT-provided experience. IT also gains the benefit of centralized management, so they can scale their environments quickly and easily. By separating endpoints and corporate data, resources stay protected even if the devices are compromised.

As a leading VDI and DaaS provider, VMware provides the capabilities organizations need for deploying virtual apps and desktops to reduce downtime, increase security, and alleviate the many challenges associated with traditional desktop management.

For more information, go to:

https://docs.vmware.com/en/VMware-Horizon/8-2312/rn/vmware-horizon-8-2312-release-notes/index.html

https://customerconnect.vmware.com/en/downloads/info/slug/desktop_end_user_computing/vmware_horizon/2312

VMware Horizon 8 2312


Horizon 8 2312 is an Extended Service Branch (ESB). Approximately once a year, VMware designates one VMware Horizon release as an Extended Service Branch (ESB). An ESB is a parallel release branch to the existing Current Releases (CR) of the product. By choosing to deploy an ESB, customers receive periodic service packs (SP) updates, which include cumulative critical bug fixes and security fixes. Most importantly, there are no new features in the SP updates, so customers can rely on a stable Horizon platform for their critical deployments. For more information on the ESB and the Horizon versions that have been designated an ESB, see VMware Knowledge Base (KB) article 86477.

VMware Horizon 8 version 2312 includes the following new features and enhancements. This information is grouped by installable component.

     This release adds support for the following Linux distributions:

          Red Hat Enterprise Linux (RHEL) Workstation 8.8 and 9.3

          Red Hat Enterprise Linux (RHEL) Server 8.9 and 9.3

          Rocky Linux 8.9 and 9.3

          Debian 12.2

     This release drops support for the following Linux distributions:

          RHEL Workstation 8.4

          RHEL Server 8.4

     This release adds support for VMware Horizon Recording. This feature allows administrators to record desktop and application sessions to monitor user behavior for Linux remote desktops and applications.

     VMware Integrated Printing now supports the ability to add a watermark to printed jobs. Administrators can enable this feature using the printSvc.watermarkEnabled property in /etc/vmware/config (see the configuration example following this list).

     Administrators can store the VMwareBlastServer CA-signed certificate and private key in a BCFKS keystore. Two new configuration options in /etc/vmware/viewagent-custom.conf, SSLCertName and SSLKeyName, can be used to customize the names of the certificate and private key.

     Horizon Client

          For client certificates, Horizon 8 administrators can set the Connection Server enforcement state for Windows Client sessions and specify the minimum desired security setting to restrict client connections.

          The Unlock a Desktop with True SSO and Workspace ONE feature is now supported on Horizon Mac and Linux clients.

     Horizon RESTful APIs

           New REST APIs for Application pool, Application icon, Virtual Center summary statistics, UserOrGroupsSummary, Datastore usage, Session statistics, Machines, and more are available, along with updates to existing APIs, to create robust automations. For more details, refer to the API documentation (a brief sketch of calling the Horizon REST API follows this list).

          Horizon Agent for Linux now supports Real-Time Audio-Video redirection, which includes both audio-in and webcam redirection of locally connected devices from the client system to the remote session.

          VMware Integrated Printing now supports printer redirection from Linux remote desktops to client devices running Horizon Client for Chrome or HTML Access.

          Session Collaboration is now supported on the MATE desktop environment.

          A vGPU desktop can support a maximum resolution of 3840x2160 on one, two, three, or four monitors configured in any arrangement.

          A non-vGPU desktop can support a maximum resolution of 3840x2160 on a single monitor only.

          Agent and Agent RDS levels are now supported for Horizon Agent for Linux.

     Virtual Desktops

          Option to create multiple custom compute profiles (CPU, RAM, Cores per Socket) for a single golden image snapshot during desktop pool creation.

          VMware Blast now detects the presence of a vGPU system and applies higher quality default settings.

          VMware vSphere Distributed Resource Scheduler (DRS) in vSphere 7.0 U3f and later can now be configured to automatically migrate vGPU desktops when entering ESXi host maintenance mode. For more information, see https://kb.vmware.com/s/article/87277

          Forensic quarantine feature that archives the virtual disks of selected dedicated or floating Instant Clone desktops for forensic purposes.

          The VMware Blast CPU controller feature is now enabled by default in scale environments.

          Removed pop-up for suggesting best practices during creation of an instant clone. The information now appears on the first tab/screen of the creation wizard.

          Help Desk administrator roles are now applicable to all access groups, not just to the root access group as with previous releases.

          Administrators can encrypt a session recording file into a .bin format so that it cannot be played from the file system.

          If using an older Web browser, a message appears when launching the Horizon Console indicating that using a modern browser will provide a better user experience.

          Added Support for HW encoding on Intel GPU for Windows:

-       Supports Intel 11th Generation Integrated Graphics and above (Tigerlake+) with a minimum required driver version of 30.0.101.1660. See https://www.intel.com/content/www/us/en/download/19344/727284/intel-graphics-windows-dch-drivers.html.

          VVC-based USB-R is enabled by default for non-desktop (Chrome/HTML Access/Android) Blast clients.

           Physical PC (as Unmanaged Pool) now supports Windows 10 Education and Pro (20H2 and later) and Windows 11 Education and Pro (21H2) with the VMware Blast protocol.

          Storage Drive Redirection (SDR) is now available as an alternative option to USB or CDR redirection for better I/O performance.

     Horizon Connection Server

          Horizon Connection Server now enables you to configure a customized timeout warning and set the timer for the warning to appear before a user is forcibly disconnected from a remote desktop or published application session. This warning is supported with Horizon Client for Windows 2312 and Horizon Client for Mac 2312 or later.

          Windows Hello for Business with certificate authentication is now supported when you Log in as Current User on the Horizon Client for Windows.

          Added limited support for Hybrid Azure AD.

     Horizon Agent

          The Horizon Agent Installer now includes a pre-check to confirm that .NET 4.6.2 is installed if Horizon Performance Tracker is selected.

     Horizon Client

          For information about new features in a Horizon client, including HTML Access, review the release notes for that Horizon client.

     General

          The View Agent Direct-Connection Plug-In product name was changed to Horizon Agent Direct-Connection Plug-In in the documentation set.

          Customers are automatically enrolled in the VMware Customer Experience Improvement Program (CEIP) during installation.
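To illustrate the Horizon Agent for Linux configuration options called out above, the following lines show where the named properties would be set. Only the property names come from the release notes; the values shown are placeholders for illustration, so confirm the exact syntax in the VMware Horizon documentation.

# /etc/vmware/config on the Linux desktop (VMware Integrated Printing watermark)
printSvc.watermarkEnabled=true

# /etc/vmware/viewagent-custom.conf (names of the CA-signed certificate and
# private key stored in the BCFKS keystore for VMwareBlastServer)
SSLCertName=<certificate-alias>
SSLKeyName=<private-key-alias>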
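The Horizon REST APIs mentioned above can be exercised with any HTTP client. The following minimal Python sketch authenticates to a Connection Server and lists machine inventory; the server name, credentials, and the exact resource paths shown are assumptions for illustration and should be verified against the Horizon REST API documentation for your release.

import requests

# Placeholder Connection Server FQDN and service account for illustration only.
cs = "https://horizon-cs.example.com"

# Authenticate and obtain a bearer token.
login = requests.post(
    f"{cs}/rest/login",
    json={"domain": "example", "username": "svc-horizon", "password": "<password>"},
    verify=False,  # lab example only; use a trusted certificate in production
)
token = login.json()["access_token"]

# Query the machine inventory using the bearer token.
machines = requests.get(
    f"{cs}/rest/inventory/v1/machines",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
)
for machine in machines.json():
    print(machine.get("name"), machine.get("state"))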

For more information about VMware vSphere and its components, see: https://www.vmware.com/products/vsphere.html.

VMware vSphere 8.0 Update 2

VMware vSphere is an enterprise workload platform for holistically managing large collections of infrastructures (resources including CPUs, storage, and networking) as a seamless, versatile, and dynamic operating environment. Unlike traditional operating systems that manage an individual machine, VMware vSphere aggregates the infrastructure of an entire data center to create a single powerhouse with resources that can be allocated quickly and dynamically to any application in need.

Note:      VMware vSphere 8 became available in November of 2022.

The VMware vSphere 8 Update 2 release delivered enhanced value in operational efficiency for admins, supercharged performance for higher-end AI/ML workloads, and elevated security across the environment. VMware vSphere 8 Update 2 has now achieved general availability.

For more information about the three key areas of enhancement in VMware vSphere 8 Update 2, go to: VMware vSphere

VMware vSphere vCenter

VMware vCenter Server provides unified management of all hosts and VMs from a single console and aggregates performance monitoring of clusters, hosts, and VMs. VMware vCenter Server gives administrators deep insight into the status and configuration of compute clusters, hosts, VMs, storage, the guest OS, and other critical components of a virtual infrastructure. VMware vCenter manages the rich set of features available in a VMware vSphere environment.

Cisco Intersight Assist Device Connector for VMware vCenter and Hitachi Virtual Storage Platform

Cisco Intersight integrates with VMware vCenter and the Hitachi Virtual Storage Platform (VSP) as follows:

     Cisco Intersight uses the device connector running within Cisco Intersight Assist virtual appliance to communicate with VMware vCenter.

     Cisco Intersight uses the device connector running within a Cisco Intersight Assist virtual appliance to integrate the Hitachi VSP.

For further information and supported VSP models, go to:

https://intersight.com/help/saas/supported_systems#supported_hardware_for_standalone_and_ucsm_managed_mode

Integrating Hitachi Virtual Storage Platform with Cisco Intersight Quick Start Guide (hitachivantara.com)

Figure 10.       Cisco Intersight and vCenter and Hitachi Storage Integration


The device connector provides a safe way for connected targets to send information and receive control instructions from the Cisco Intersight portal using a secure Internet connection. The integration brings the full value and simplicity of Cisco Intersight infrastructure management service to VMware hypervisor and Hitachi VSP environments. The integration architecture enables you to use new management capabilities without compromise in your existing VMware or Hitachi VSP operations. IT users will be able to manage heterogeneous infrastructure from a centralized Cisco Intersight portal. At the same time, the IT staff can continue to use VMware vCenter and the Hitachi VSP dashboard for comprehensive analysis, diagnostics, and reporting of virtual and storage environments. The next section addresses the functions that this integration provides.

Hitachi Virtual Storage Platform

The Hitachi Virtual Storage Platform (VSP) is a highly scalable, true enterprise-class storage system that can virtualize external storage and provide virtual partitioning and quality of service for diverse workload consolidation. With the industry’s only 100 percent data availability guarantee, VSP delivers the highest uptime and flexibility for your block-level storage needs.

Figure 11.       Hitachi VSP


Hitachi Virtual Storage Platform E1090

The Hitachi Virtual Storage Platform E series builds on over 50 years of proven Hitachi engineering experience, offering you a superior range of business continuity options that provide the best reliability in the industry. As a result, 85 percent of Fortune 100 financial services companies trust Hitachi storage systems with their mission-critical data.

The Hitachi Virtual Storage Platform E1090 (VSP E1090) storage system is a high-performance, large-capacity data storage system. The VSP E1090 all-flash arrays (AFAs) support NVMe and SAS solid-state drives (SSDs). The VSP E1090H hybrid models can be configured with both SSDs and hard disk drives (HDDs).

The NVMe flash architecture delivers consistent, low-microsecond latency, which reduces the transaction costs of latency-critical applications and delivers predictable performance to optimize storage resources.

The hybrid architecture allows for greater scalability and provides data-in-place migration support.

The storage systems offer superior performance, resiliency, and agility, featuring response times as low as 41 microseconds, all backed by the industry's first and most comprehensive 100 percent data availability guarantee.

The Hitachi Virtual Storage Platform E series' innovative active-active controller architecture protects your business against local faults and mitigates performance issues, while providing all the enterprise features of the VSP 5000 series in a lower-cost form factor that satisfies midrange customer needs and business requirements.

Hitachi VSP E1090 Key Features

     High performance

          Multiple controller configuration distributes processing across controllers

          High-speed processing facilitated by up to 1,024 GiB of cache

          I/O processing speed increased by NVMe flash drives

          High-speed front-end data transfer up to 32 Gbps for FC and 10 Gbps for iSCSI

           I/O response times as low as 41 microseconds

          Integrated with Hitachi Ops Center to improve IT operational efficiencies.

     High reliability

           Service continuity for all main components due to redundant configuration

          RAID 1, RAID 5, and RAID 6 support (RAID 6 including 14D+2P)

          Data security by transferring data to cache flash memory in case of a power outage.

     Scalability and versatility

          Scalable capacity up to 25.9 PB, 287 PB (external), and 8.4M IOPS

          The hybrid architecture allows for greater scalability and provides data-in-place migration support.

     Performance and Resiliency Enhancements

          Upgraded controllers with 14 percent more processing power than VSP E990 and 53 percent more processing power than VSP F900

          Significantly improved adaptive data reduction (ADR) performance through Compression Accelerator Modules

          An 80 percent reduction in drive rebuild time compared to earlier midsized enterprise platforms.

          Smaller access size for ADR metadata reduces overhead.

          Support for NVMe allows extremely low latency with up to 5 times higher cache miss IOPS per drive.

     Reliability and serviceability

The Virtual Storage Platform E1090 (VSP E1090) storage system is designed to deliver industry-leading performance and availability. The VSP E1090 features a single, flash-optimized Storage Virtualization Operating System (SVOS) image running on 64 processor cores, sharing a global cache of 1 TiB. The VSP E1090 offers higher performance with fewer hardware resources than competitors. The VSP E1090 was upgraded to an advanced Cascade Lake CPU, permitting read response times as low as 41 microseconds, and data reduction throughput was improved up to 2X by the new Compression Accelerator Module. Improvements in reliability and serviceability allow the VSP E1090 to claim an industry-leading 99.9999% availability (on average, about 31.5 seconds of downtime per year).

     Advanced AIOps and easy-to-use management

The Hitachi Virtual Storage Platform E series achieves greater efficiency and agility with Hitachi Ops Center's advanced AIOps, which provide real-time monitoring for VSP E series systems located on-premises or in a colocation facility. Advanced AIOps provides a unique integration of IT analytics and automation that identifies issues and, through automation, quickly resolves them before they impact your critical workloads. Ops Center uses the latest AI and machine learning (ML) capabilities to improve IT operations and simplifies day-to-day administration, optimization, and management orchestration for the VSP E series, freeing you to focus on innovation and strategic initiatives.

Figure 12.       Hitachi Virtual Storage Platform E1090


For more information about Hitachi Virtual Storage Platform E-Series, go to: https://www.hitachivantara.com/en-us/products/storage-platforms/primary-block-storage/vsp-e-series.html

Hitachi Storage Virtualization Operating System RF

Hitachi Storage Virtualization Operating System (SVOS) RF (Resilient Flash) delivers best-in-class business continuity and data availability and simplifies storage management of all Hitachi VSPs by sharing a common operating system. Flash performance is optimized with a patented flash-aware I/O stack to further accelerate data access. Adaptive inline data reduction increases storage efficiency while enabling a balance of data efficiency and application performance. Industry-leading storage virtualization allows Hitachi Storage Virtualization Operating System RF to use third-party all-flash and hybrid arrays as storage capacity, consolidating resources and extending the life of storage investments.

Hitachi Storage Virtualization Operating System RF works with the virtualization capabilities of Hitachi VSP storage systems to provide the foundation for global storage virtualization. SVOS RF delivers software-defined storage by abstracting and managing heterogeneous storage to provide a unified virtual storage layer, resource pooling, and automation. Hitachi Storage Virtualization Operating System RF also offers self-optimization, automation, centralized management, and increased operational efficiency for improved performance and storage utilization. Optimized for flash storage, Hitachi Storage Virtualization Operating System RF provides adaptive inline data reduction to keep response times low as data levels grow, and selectable services enable data-reduction technologies to be activated based on workload benefit.

Hitachi Storage Virtualization Operating System RF integrates with Hitachi’s base and advanced software packages to deliver superior availability and operational efficiency. You gain active-active clustering, data-at-rest encryption, insights from machine learning, and policy-defined data protection with local and remote replication.

Figure 13.       Hitachi Storage Virtualization Operating System RF Features


For more information about Hitachi Storage Virtualization Operating System RF, go to: https://www.hitachivantara.com/en-us/products/storage-platforms/primary-block-storage/virtualization-operating-system.html

Capacity Saving

The capacity saving feature is native to the Hitachi VSP. When enabled, data deduplication and compression are performed to reduce the size of the data to be stored. Capacity saving can be enabled on dynamic provisioning volumes in Hitachi Dynamic Provisioning (HDP) pools and Hitachi Dynamic Tiering (HDT) pools. You can use the capacity saving function on internal flash drives only, including data stored on encrypted flash drives; it does not apply to external storage. Within this guide, the capacity saving function was enabled to show its benefits in conjunction with VDI deployments. The following diagram demonstrates how the Hitachi VSP makes use of the capacity saving function after it is enabled on the DP pool.

[Diagram: capacity saving data flow on an HDP pool]

Hitachi Ops Center

Hitachi Ops Center is an integrated suite of applications that enable you to optimize your data center operations through integrated configuration, analytics, automation, and copy data management. These features allow you to administer, automate, optimize, and protect your Hitachi storage infrastructure.

The following modules are included in the Hitachi Ops Center:

     Ops Center Administrator

     Ops Center Analyzer

     Ops Center Automator

     Ops Center Protector

     Ops Center API Configuration Manager

     Ops Center Analyzer Viewpoint

Figure 14.       Hitachi Ops Center Products


For more Information about Ops Center, go to: https://www.hitachivantara.com/en-us/products/storage-software/ai-operations-management/ops-center.html

Hitachi Ops Center Administrator

Hitachi Ops Center Administrator is an infrastructure management solution that unifies storage provisioning, data protection, and storage management across the entire Hitachi VSP family.   

Figure 15.       Hitachi Ops Center Administrator UI Console

A screenshot of a computerDescription automatically generated

The Hitachi Ops Center Administrator key benefits are:

     Reduction in administration time and effort to efficiently deploy and manage new storage resources using Administrator’s easy-to-use interface.

     Utilization of common configurations to centrally manage multiple Hitachi Virtual Storage Platform storage systems.

     Standards-based APIs that enable fast storage provisioning operations and integration with external tools.

     Use of common administrative workflows to manage highly available storage volumes.

     Integration with Hitachi Ops Center management to incorporate analytics, automation, and data protection.

Ops Center Analyzer

Ops Center Analyzer provides a comprehensive application service-level and storage performance management solution that enables you to quickly identify and isolate performance problems, determine the root cause, and provide solutions. It enables proactive monitoring from the application level through server, network, and storage resources for end-to-end visibility of your monitored environment. It also increases performance and storage availability by identifying problems before they can affect applications.

Figure 16.       Hitachi Ops Center Analyzer UI Console

Related image, diagram or screenshot

The Ops Center Analyzer collects and correlates data from these sources:

     Storage systems

     Fibre Channel switches

     Hypervisors

     Hosts

Ops Center Analyzer Viewpoint

Ops Center Analyzer Viewpoint displays the operational status of data centers around the world in a single window, allowing comprehensive insight into global operations.

Figure 17.       Hitachi Ops Center Analyzer Viewpoint

Related image, diagram or screenshot

With Hitachi Ops Center Analyzer Viewpoint, you can do the following:

     Check the overall status of multiple data centers: By accessing Analyzer viewpoint from a web browser, you can collectively display and view information about supported resources in the data centers. Even for a large-scale system consisting of multiple data centers, you can check the comprehensive status of all data centers.

     Easily analyze problems related to resources: By using the UI, you can display information about resources in a specific data center in a drill-down view and easily identify where a problem occurred. Additionally, because you can launch the Ops Center Analyzer UI from the Analyzer viewpoint UI, you can quickly perform the tasks needed to resolve the problem.

Hitachi Ops Center Protector

With Hitachi Ops Center Protector you can easily configure in-system or remote-replication with the Hitachi VSP. Protector as an enterprise data copy management platform provides business-defined data protection, which simplifies the creation and management of complex, business-defined policies to meet service-level objectives for availability, recoverability, and retention.

Figure 18.       Hitachi Ops Center Protector UI Console

A screenshot of a computerDescription automatically generated

Hitachi Ops Center Automator

Hitachi Ops Center Automator is a software solution that provides automation to simplify end-to-end data management tasks such as storage provisioning for storage and data center administrators. The building blocks of the product are prepackaged automation templates known as service templates which can be customized and configured for other people in the organization to use as a self-service model, reducing the load on traditional administrative staff.

Figure 19.       Hitachi Ops Center Automator UI Console

Related image, diagram or screenshot

Hitachi Ops Center API Configuration Manager

Hitachi Ops Center API Configuration Manager REST is an independent and lightweight binary that enables programmatic management of Hitachi VSP storage systems using RESTful APIs. This component can be deployed stand-alone or as a part of Hitachi Ops Center.

The REST API supports the following storage systems:

     VSP 5000 series

     VSP E Series

     VSP F Series

     VSP G Series

Figure 20.       Hitachi Ops Center API Configuration Manager

A diagram of a serverDescription automatically generated
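
For illustration, a minimal request against the Configuration Manager REST API might look like the following. The hostname, port, and credentials are placeholders (23451 is assumed to be the default HTTPS port of the REST API server), and -k is shown only to accommodate a self-signed lab certificate.

# List the storage systems registered with the Configuration Manager REST API server
curl -k -u <user>:<password> \
  -X GET https://<rest-api-server>:23451/ConfigurationManager/v1/objects/storages

The same resource hierarchy is used for provisioning operations such as creating LDEVs and host groups, which is what enables integration with external automation tools.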

Hitachi Virtual Storage Platform Sustainability

Hitachi Virtual Storage Platform (VSP) storage systems are certified under the Carbon Footprint of Products (CFP) program, a method of “visualizing” CO2-equivalent emissions by converting greenhouse gas emissions from the entire life cycle of a product (goods or service), from the raw material acquisition stage through the disposal and recycling stage. Eco-friendly storage products from Hitachi help reduce CO2 emissions by approximately 30 to 60 percent compared to previous models.

Figure 21.       Carbon Footprint Certification for Hitachi Virtual Storage Platforms

A close-up of a computerDescription automatically generated
For more information, go to: https://ecoleaf-label.jp/en/organization/7

Additionally, the Hitachi VSP brings the following advantages for a greener datacenter:

     Unique hardware-based data compression achieves approximately 60 percent reduction in power consumption.

     Fast data compression processing improves read/write performance by 40 percent.

     Automatic switching enables high performance and energy savings.

     Eliminating the need for data migration saves energy while minimizing waste.

The Hitachi VSP E1090 storage solution is also certified under the USA ENERGY STAR program and is number 1 in its class.

Figure 22.       ENERGY STAR Certification

A screenshot of a computerDescription automatically generated

For more information, go to: https://www.energystar.gov/productfinder/product/certified-data-center-storage/details/2406163/export/pdf

Solution Design

This chapter contains the following:

     Design Considerations for Desktop Virtualization

     Understanding Applications and Data

     Project Planning and Solution Sizing Sample Questions

     Hypervisor Selection

     Storage Considerations

     Hitachi SAN and VMware Configuration Best Practices

Design Considerations for Desktop Virtualization

There are many reasons to consider a virtual desktop solution such as an ever growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own Device (BYOD) to work programs. The first step in designing a virtual desktop solution is to understand the user community and the type of tasks that are required to successfully execute their role. The following user classifications are provided:

     Knowledge Workers today do not just work in their offices all day – they attend meetings, visit branch offices, work from home, and even coffee shops. These anywhere workers expect access to all of their same applications and data wherever they are.

     External Contractors are increasingly part of your everyday business. They need access to certain portions of your applications and data, yet administrators still have little control over the devices they use and the locations they work from. Consequently, IT is stuck making trade-offs on the cost of providing these workers a device vs. the security risk of allowing them access from their own devices.

     Task Workers perform a set of well-defined tasks. These workers access a small set of applications and have limited requirements from their PCs. However, since these workers are interacting with your customers, partners, and employees, they have access to your most critical data.

     Mobile Workers need access to their virtual desktop from everywhere, regardless of their ability to connect to a network. In addition, these workers expect the ability to personalize their PCs, by installing their own applications and storing their own data, such as photos and music, on these devices.

     Shared Workstation users are often found in state-of-the-art university and business computer labs, conference rooms, or training centers. Shared workstation environments have the constant requirement to re-provision desktops with the latest operating systems and applications as the needs of the organization change.

After the user classifications have been identified and the business requirements for each user classification have been defined, it becomes essential to evaluate the types of virtual desktops that are needed based on user requirements. There are five potential desktops environments for each user:

     Traditional PC: A traditional PC is what typically constitutes a desktop environment: a physical device with a locally installed operating system.

     Remote Desktop Session Host (RDSH) Sessions: A hosted, server-based desktop is a desktop where the user interacts through a delivery protocol. With hosted, server-based desktops, a single installed instance of a server operating system, such as Microsoft Windows Server 2022, is shared by multiple users simultaneously. Each user receives a desktop "session" and works in an isolated memory space. A hosted virtual desktop, by contrast, is a virtual desktop running on a virtualization layer (ESXi); the user does not sit in front of the desktop but instead interacts with it through a delivery protocol.

     Published Applications: Published applications run entirely on the VMware RDS server virtual machines and the user interacts through a delivery protocol. With published applications, a single installed instance of an application, such as Microsoft Office, is shared by multiple users simultaneously. Each user receives an application "session" and works in an isolated memory space.

     Streamed Applications: Streamed desktops and applications run entirely on the user‘s local client device and are sent from a server on demand. The user interacts with the application or desktop directly, but the resources may only be available while they are connected to the network.

     Local Virtual Desktop: A local virtual desktop is a desktop running entirely on the user‘s local device and continues to operate when disconnected from the network. In this case, the user’s local device is used as a type 1 hypervisor and is synced with the data center when the device is connected to the network.

Note:      For the purposes of the validation represented in this document, both Single-session OS and Multi-session OS VDAs were validated.

Understanding Applications and Data

When the desktop user groups and sub-groups have been identified, the next task is to catalog group application and data requirements. This can be one of the most time-consuming processes in the VDI planning exercise but is essential for the VDI project’s success. If the applications and data are not identified and co-located, performance will be negatively affected.

The process of analyzing the variety of application and data pairs for an organization will likely be complicated by the inclusion of cloud applications, for example, SalesForce.com. This application and data analysis is beyond the scope of this Cisco Validated Design but should not be omitted from the planning process. There are a variety of third-party tools available to assist organizations with this crucial exercise.

Project Planning and Solution Sizing Sample Questions

The following key project and solution sizing questions should be considered:

     Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications, and data?

     Is there infrastructure and budget in place to run the pilot program?

     Are the required skill sets to execute the VDI project available? Can we hire or contract for them?

     Do we have end user experience performance metrics identified for each desktop sub-group?

     How will we measure success or failure?

     What is the future implication of success or failure?

Below is a short, non-exhaustive list of sizing questions that should be addressed for each user sub-group:

     What is the Single-session OS version?

     32-bit or 64-bit desktop OS?

     How many virtual desktops will be deployed in the pilot? In production?

     How much memory per target desktop group desktop?

     Are there any rich media, Flash, or graphics-intensive workloads?

     Are there any applications installed? What application delivery methods will be used, Installed, Streamed, Layered, Hosted, or Local?

     What is the Multi-session OS version?

     What method will be used for virtual desktop deployment?

     What is the hypervisor for the solution?

     What is the storage configuration in the existing environment?

     Are there sufficient IOPS available for the write intensive VDI workload?

     Will there be storage dedicated and tuned for VDI service?

     Is there a voice component to the desktop?

     Is there a 3rd party graphics component?

     Is anti-virus a part of the image?

     What is the SQL Server version for the database?

     Is user profile management (for example, non-roaming profile based) part of the solution?

     What is the fault tolerance, failover, disaster recovery plan?

     Are there additional desktop sub-group specific questions?

Hypervisor Selection

VMware vSphere 8.0 U2 has been selected as the hypervisor for this VMware Horizon Virtual Desktops and Remote Desktop Session Host (RDSH) Sessions deployment.

VMware vSphere: VMware vSphere comprises the management infrastructure or virtual center server software and the hypervisor software that virtualizes the hardware resources on the servers. It offers features like Distributed Resource Scheduler, vMotion, high availability, Storage vMotion, VMFS, and a multi-pathing storage layer. More information on vSphere can be obtained at the VMware website.

Storage Considerations

Boot from SAN

When utilizing Cisco UCS server technology, it is recommended to configure boot from SAN and store the boot partitions on remote storage. This enables architects and administrators to take full advantage of the stateless nature of service profiles for hardware flexibility across lifecycle management of server hardware generational changes, operating systems/hypervisors, and overall portability of server identity. Boot from SAN also removes the need to populate local server storage, which would otherwise create additional administrative overhead.

Note:      Within this document, M.2 boot drives were used for lab testing, but the directions documented provide the SAN boot configuration.

Hitachi VSP Considerations

Each VSP storage system is comprised of multiple controllers and Fibre Channel adapters that control connectivity to the Fibre Channel fabrics using the Cisco MDS FC switches. Channel boards (CHBs) are used within the VSP E1090 models, which contain two controllers within the storage system. The multiple CHBs within each storage system allow for designing multiple layers of redundancy within the storage architecture, increasing availability and maintaining performance during a failure event.

The VSP E1090 CHBs each contain up to four individual Fibre Channel (FC) ports, allowing for redundant connections to each fabric in the Cisco UCS infrastructure. In this deployment, four ports are configured for the FC-SCSI protocol: one from controller 1 (CL1-A) and one from controller 2 (CL2-A) going into MDS fabric A, as well as one from controller 1 (CL3-A) and one from controller 2 (CL4-A) going into MDS fabric B, for a total of four connections. These connections provide the data path to boot LUNs and VMFS datastores that use the FC-SCSI protocol.

Additionally, four ports are configured for FC-NVMe: one from controller 1 (CL1-B) and one from controller 2 (CL2-B) going into MDS fabric A, and one from controller 1 (CL3-B) and one from controller 2 (CL4-B) going into MDS fabric B, to provide an additional data path for VMFS datastores that use the FC-NVMe protocol.

With the Cisco UCS ability to provide alternate data paths, there are a total of four paths per host, which provide access to the boot LUN as well as to VMFS datastores. If you plan to deploy VMware ESXi hosts, each host’s WWN should be in its own host group. This approach provides granular control over LUN presentation to ESXi hosts and is the best practice for SAN boot environments such as Cisco UCS, because ESXi hosts do not have access to other ESXi hosts’ boot LUNs.

Figure 23.       Logical View of LUNs to VSP E1090 Port Association

Related image, diagram or screenshot

Port Connectivity

The Hitachi VSP E1090 provides up to 32 Gbps FC support, as well as up to 25 Gbps by way of optional iSCSI CHB adapters. Always make sure the correct number of HBAs and the speed of SFPs are included in the original BOM.

Oversubscription

To reduce the impact of an outage or scheduled maintenance downtime, it is good practice when designing fabrics to provide an oversubscription of bandwidth. This enables a similar performance profile during component failure and protects workloads from being impacted by a reduced number of paths during a component failure or maintenance event. Oversubscription can be achieved by increasing the number of physically cabled connections between storage and compute. These connections can then be utilized to deliver performance and reduced latency to the underlying workloads running on the solution.

Topology

When configuring your SAN, it’s important to remember that the more hops you have, the more latency you will see. For best performance, the ideal topology is a “Flat Fabric” where the VSP is only one hop away from any applications being hosted on it.

Hitachi SAN and VMware Configuration Best Practices

A well-designed SAN must be reliable and scalable and must recover quickly in the event of a single device failure. A well-designed SAN also grows easily as the demands of the infrastructure that it serves increase.

Hitachi storage uses Hitachi Storage Virtualization Operating System RF (SVOS RF). The following are specific recommendations for VMware environments using SVOS RF.

LUN and Datastore Provisioning Best Practices

These are the best practices for general VMFS provisioning. Hitachi recommends that you always use the latest VMFS version. Always separate the VMware cluster workload from other workloads.

LUN Size

The following lists the current maximum LUN/datastore size for VMware vSphere and Hitachi storage:

     The maximum LUN size for VMware vSphere is 64 TB. https://configmax.esp.vmware.com/home.  

     The maximum LUN size for Hitachi Virtual Storage Platform E series, F series or VSP G series is 256 TB with replication.

     The LUN must be within a dynamic provisioning pool.

     The maximum VMDK size is 62 TB (vVol-VMDK or RDM).

     Using multiple smaller sized LUNs tends to provide higher aggregated I/O performance by reducing the concentration of load on a single storage processor (MPB). It also reduces the recovery time in the event of a disaster. Take these points into consideration when using larger LUNs. In some environments, the convenience of using larger LUNs might outweigh the relatively minor performance disadvantage.

     Before Hitachi Virtual Storage Platform E series, VSP F series, and VSP G series storage systems, the maximum supported LUN size was limited to 4 TB because of storage replication capability. With current Hitachi Virtual Storage Platform storage systems, this limitation has been removed. Keep in mind that recovery is typically quicker with smaller LUNs, so use the size that maximizes usage of MPB resources per LUN for the workload. Use the VMware integrated adapters or plugins that Hitachi Vantara provides, such as the vSphere Plugin, Microsoft PowerShell cmdlets, or VMware vRealize Orchestrator workflows, to automate datastore and LUN management.

Thin-Provisioned VMDKs on a Thin-Provisioned LUNs from Dynamic Provisioning Pool

     Thin-provisioned VMDKs on thin-provisioned LUNs have become a common storage provisioning configuration for virtualized environments. While eager-zeroed thick (EZT) VMDKs typically saw better latency in older vSphere releases (that is, releases older than vSphere 5), the performance gap between thin and thick VMDKs is now insignificant, and you get the added benefit of in-guest UNMAP for better space efficiency with thin. In vVols, a thin-provisioned VMDK (vVol) is the norm, and it performs even better than a thin VMDK on VMFS because no zeroing is required when allocating blocks (thin vVols are the new EZT). Generally, start with thin VMDKs on VMFS or vVols datastores. The only exception where you might consider migrating to EZT disks is if you have performance-sensitive, heavy-write VM/container workloads, where you can potentially see a low single-digit percentage performance improvement for those initial writes that might not be noticeable to your application.

     In the VSP storage system with Hitachi Dynamic Provisioning, it is also quite common to provision thin LUNs with less physical storage capacity (as opposed to fully allocated LUNs). However, monitor storage usage closely to avoid running out of physical storage capacity.

The following are some storage management and monitoring recommendations:

     Hitachi Infrastructure Management Pack for VMware vRealize Operations provides dashboards and alerting capability for monitoring physical and logical storage capacity.

     Enable automatic UNMAP with VMFS 6 (or a scripted UNMAP command with VMFS 5) to maintain higher capacity efficiency, as shown in the sketch below.
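
The following is a minimal sketch of how space reclamation can be checked and adjusted from the ESXi command line; the datastore name is a placeholder and the default settings are usually appropriate.

# VMFS 6: view and set the automatic UNMAP (space reclamation) configuration for a datastore
esxcli storage vmfs reclaim config get --volume-label VDI-Datastore-01
esxcli storage vmfs reclaim config set --volume-label VDI-Datastore-01 --reclaim-priority low

# VMFS 5: run UNMAP manually (or on a schedule), since automatic reclamation is not available
esxcli storage vmfs unmap --volume-label VDI-Datastore-01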

RDMs and Command Devices

If presenting command devices as RDMs to virtual machines, ensure command devices have all attributes set before presenting them to VMware ESXi hosts.

LUN Distribution

The general recommendation is to distribute LUNs and workloads so that each host has 2-8 paths to each LDEV. This prevents workload pressure on a small set of target ports from becoming a potential performance bottleneck.

It is prudent to isolate your production environment and critical systems to dedicated ports to avoid contention from other hosts' workloads. However, presenting the same LUN to too many target ports can also introduce additional problems with slower error recovery.

Follow these practices while trying to achieve this goal:

     Each host bus adapter physical port (HBA) should only see one instance of each LUN.

     The number of paths should typically not exceed the number of HBA ports for better reliability and recovery.

     Two to four paths to each LUN provide the optimal performance for most workload environments.

     See Recommended Multipath Settings for Hitachi Storage knowledge base article for more information about LUN instances.

HBA LUN Queue Depth

In a general VMware environment, increasing the HBA LUN queue depth will not solve a storage I/O performance issue; it may overload the storage processors on your storage systems. Hitachi recommends keeping queue depth values at the HBA vendor’s default in most cases. See the Broadcom KB article for more details.

In certain circumstances, increasing the queue depth value may increase overall I/O throughput. For example, a LUN hosting as a target for virtual machine backups might require higher throughput during the backup window. Make sure to monitor storage processor usage carefully for queue depth changes.

Slower hosts with read-intensive workloads may request more data than they can remove from the fabric in a timely manner. Lowering the queue depth value can be an effective control mechanism to limit slower hosts.

For a VMware vSphere protocol endpoint (PE) configured to enable virtual volumes (vVols) from Hitachi storage, set a higher queue depth value, such as 128.
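
As a point of reference, the effective per-device queue depth can be inspected from the ESXi command line, and driver-level LUN queue depth is adjusted through the HBA driver module parameters. The sketch below is illustrative only: the device identifier is a placeholder, the Emulex lpfc driver is used purely as an example (the module name and parameter differ by HBA vendor, including the Cisco VIC drivers used in this design), and module parameter changes require a host reboot.

# Display device details, including "Device Max Queue Depth" (device identifier is a placeholder)
esxcli storage core device list -d naa.60060e80xxxxxxxxxxxxxxxxxxxxxxxx

# Example only: set the LUN queue depth for an Emulex (lpfc) HBA driver, then verify
esxcli system module parameters set -m lpfc -p "lpfc_lun_queue_depth=64"
esxcli system module parameters list -m lpfc | grep lun_queue_depth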

Host Group and Host Mode Options

To grant a host access to an LDEV, assign a logical unit number (LUN) within a host group. The following are the settings and LUN mappings for host group configurations.

Fibre Channel Port Settings

If connecting a Fibre Channel port using a SAN switch or director, you must change the following settings:

     Port security — Set the port security to Enable. This allows multiple host groups on the Fibre Channel port.

     Fabric — Set fabric to ON. This allows connection to a Fibre Channel switch or director.

     Connection Type — Set the connection type to P-to-P. This allows a point-to-point connection to a Fibre Channel switch or director. Loop Attachment is deprecated and no longer supported on 16 Gb/s and 32 Gb/s storage channel ports.

     Hitachi recommends that you apply the same configuration to a port in cluster 1 as to a port in cluster 2 in the same location. For example, if you create a host group for a host on port CL1-A, also create a host group for that host on port CL2-A.

One Host Group per VMware ESXi Host Configuration

     If you plan to deploy VMware ESXi hosts, each host’s WWN can be in its own host group. This approach provides granular control over LUN presentation to ESXi hosts. This is the best practice for SAN boot environments, because ESXi hosts do not have access to other ESXi hosts’ boot LUNs. Make sure to reserve LUN ID 0 for boot LUN for easier troubleshooting.

However, in a cluster environment, this approach can be an administrative challenge because keeping track of which WWNs for ESXi hosts are in a cluster can be difficult. When multiple ESXi hosts need to access the same LDEV for clustering purposes, the LDEV must be added to each host group.

One Host Group per Cluster, Cluster Host Configuration

     VMware vSphere features such as vMotion, Distributed Resource Scheduler, High Availability, and Fault Tolerance require shared storage across the VMware ESXi hosts. Many of these features require that the same LUNs be presented to all ESXi hosts participating in these cluster functions.

     For convenience and where granular control is not essential, create host groups with clustering in mind. Place all the WWNs for the clustered ESXi hosts in a single host group. This ensures that when adding LDEVs to the host group, all ESXi hosts see the same LUNs. This creates consistency with LUN presentation across all hosts.

Host Group Options

On Hitachi Virtual Storage Platform storage systems, create host groups using Hitachi Storage Navigator. Change the following host mode and host mode options to enable VMware vSphere Storage APIs for Array Integration (VAAI):

     Host Mode Option 63: Enable the (VAAI) Support Option for vStorage APIs based on T10 standards (this includes EXTENDED COPY from T10, which is why HMO 54 is now redundant or ignored for all SVOS releases).
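
For illustration, the host group configuration described above can also be scripted with Hitachi CCI (raidcom) rather than performed through Storage Navigator or Ops Center Administrator. This is a minimal sketch, assuming a command device and HORCM instance are already set up; the port, host group name, WWN, LDEV ID, and LUN ID are placeholders, and the host mode and host mode option values should be confirmed against the documentation for your SVOS release.

# Create a host group on port CL1-A for a single ESXi host and set the VMware host mode
raidcom add host_grp -port CL1-A -host_grp_name VDI-ESXi-01
raidcom modify host_grp -port CL1-A VDI-ESXi-01 -host_mode VMWARE_EX -host_mode_opt 63

# Register the host vHBA WWN and map the boot LDEV as LUN 0
raidcom add hba_wwn -port CL1-A VDI-ESXi-01 -hba_wwn 20000025b5aa0001
raidcom add lun -port CL1-A VDI-ESXi-01 -ldev_id 1024 -lun_id 0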

Zoning

Use zoning to enable access control in a SAN environment. Through zoning, a SAN administrator can configure which HBA WWPNs on the VMware ESXi host can connect to which WWPNs on the Hitachi storage processors.

The VMware ESXi host port in the Fibre Channel HBA is referred to as the initiator. The storage processor port in the Hitachi storage system is referred to as the target.

You can break zoning down into the following different configurations:

     Single Initiator to Single Target (SI-ST) Zoning — This configuration allows one initiator to be zoned to only one target. This is the most resilient configuration because traffic originating from another initiator on the SAN will have less impact on the initiator in this zone.

     Cisco Smart Zoning – This implementation is preferred in a Cisco environment where NX-OS can eliminate initiator to initiator and target to target communication.

     Single Initiator to Multiple Target (SI-MT) Zoning — This configuration allows one initiator to be zoned to multiple targets in a single zone.

     Multi-Initiator Zoning — This configuration is never recommended. It allows multiple initiators to be zoned to multiple targets in a single zone, which exposes all initiators and targets to all traffic in the zone. Events such as a malfunctioning HBA could affect all initiators and targets in the zone and either negatively affect performance for all or bring down the Fibre Channel network completely.

Hitachi recommends the following:

     For utmost availability with slightly higher administrative cost, Hitachi recommends SI-ST zoning. Cisco Smart Zoning is supported to reduce the administrative burden.

     Each HBA port should only see one instance of each LUN. This is primarily based on years of experience with fabrics: it avoids potential availability issues where host HBA ports can be overrun, leading to performance problems, and it makes error recovery from fabric path issues (transient or otherwise) faster and less impactful to hosts.

     Do the following:

          Use SI-MT Cisco Smart Zoning and follow the same LUN presentation recommendations provided previously (a configuration sketch follows this list).

          Regarding SI-MT, an example is provided within Cisco and Hitachi Adaptive Solutions for Converged Infrastructure and Cisco and Hitachi Adaptive Solutions for SAP HANA Tailored Data Center Integration.

          Zoning is configured as SI-MT with Cisco Smart Zoning to optimize traffic intended to be specific to the initiator (Cisco UCS host vHBA) and the targets (Hitachi Virtual Storage Platform controller ports).

          Using SI-MT zoning provides reduced administrative overhead versus configuring traditional SI-ST zoning, and results in the same SAN switching efficiency when configured with Smart Zoning. See Cisco and Hitachi Adaptive Solutions for Converged Infrastructure Design Guide for more details.
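
The following is a minimal Cisco MDS NX-OS sketch of SI-MT zoning with Smart Zoning, in line with the recommendations above. The VSAN ID, device-alias names, and pWWNs are placeholders for this environment, and the device-alias commit step applies when enhanced device-alias mode is in use.

configure terminal
! Enable Smart Zoning on the fabric A VSAN (VSAN ID is a placeholder)
zone smart-zoning enable vsan 100

! Define device aliases for the host vHBA and the VSP target ports (pWWNs are placeholders)
device-alias database
  device-alias name VDI-Host01-vHBA-A pwwn 20:00:00:25:b5:aa:00:01
  device-alias name VSP-E1090-CL1-A pwwn 50:06:0e:80:08:76:d8:00
  device-alias name VSP-E1090-CL2-A pwwn 50:06:0e:80:08:76:d8:10
device-alias commit

! Single initiator zoned to multiple targets, with roles marked for Smart Zoning
zone name VDI-Host01-Fab-A vsan 100
  member device-alias VDI-Host01-vHBA-A init
  member device-alias VSP-E1090-CL1-A target
  member device-alias VSP-E1090-CL2-A target

! Add the zone to the zoneset and activate it
zoneset name Fabric-A vsan 100
  member VDI-Host01-Fab-A
zoneset activate name Fabric-A vsan 100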

Multipathing

Multipathing allows a VMware ESXi host to use more than one physical path between the ESXi host and the storage array. Multipathing provides load balancing. This is the process of distributing I/O across multiple physical paths, to reduce or remove potential bottlenecks. Multipathing also provides redundancy and fault tolerance in case of a failure of any element in the SAN network, such as an adapter, switch, or cable. The ESXi host can switch to another physical path that does not use the failed component. This process of path switching to avoid failed components is known as path failover.

To support path switching with a Fibre Channel SAN, the ESXi host typically has two or more HBAs available from which the storage array can be reached, and full fault tolerance uses two or more switches. Additionally, for full fault tolerance, two storage processors on Hitachi storage systems should be used so the HBA can use a different path to reach the disk array.

     Available multipathing policies supported by ESXi hosts are Round Robin, Most Recently Used, Fixed, and Hitachi Dynamic Link Manager.

     Hitachi recommends using the Round Robin path selection policy (VMW_PSP_RR) with the default active-active SATP (VMW_SATP_DEFAULT_AA). In a global-active device configuration, the Round Robin PSP with the ALUA SATP (VMW_SATP_ALUA) is the recommended option. This multipathing policy takes advantage of all available paths and bandwidth, assuring maximum performance from the SAN infrastructure.

In a global-active device configuration without ALUA configured, the Fixed policy is the preferred PSP to ensure writes are sent to the preferred side.

     As part of VMware ESXi Round Robin Path Selection Plug-in (PSP), there is an I/O quantity value when a path change is triggered that is known as the limit. After reaching that I/O limit, the PSP selects the next path in the list.

     The default I/O limit is 1000 but can be adjusted if needed to improve performance. Specifically, it can be adjusted to reduce latency seen by the ESXi host when the storage system does not see latency.

     The general recommendation for the PSP limit is to continue to use the default value of 1000 in typical mixed VMware environments with multiple ESXi hosts and multiple datastores. It has been observed that a value of 20 can provide an additional 3-5 percent latency improvement and potentially reduce path error detection time (see the example below).
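
For illustration, these settings can be inspected and adjusted per device from the ESXi command line, or applied to all Hitachi LUNs at claim time through an SATP claim rule. This is a minimal sketch: the naa identifier is a placeholder, and the vendor and model strings used in the claim rule (HITACHI / OPEN-V) are assumptions that should be verified against the devices presented in your environment.

# List NMP devices and their current SATP/PSP assignments
esxcli storage nmp device list

# Set Round Robin on a specific device and lower the IOPS limit from 1000 to 20
esxcli storage nmp device set --device naa.60060e80xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.60060e80xxxxxxxxxxxxxxxxxxxxxxxx --type iops --iops 20

# Alternatively, add a claim rule so newly claimed Hitachi LUNs pick up these settings
esxcli storage nmp satp rule add --satp VMW_SATP_DEFAULT_AA --vendor HITACHI --model "OPEN-V" \
  --psp VMW_PSP_RR --psp-option "iops=20" --description "Hitachi VSP RR iops=20"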

For reference, here is information on various multipath policies:

     Round Robin (VMware) — This policy sends a set number of I/O down the first available path, then sends the same set number of I/O down the next available path. This repeats through all available paths, and then starts over again and repeats. If a path becomes unavailable, it is skipped over until it becomes available again.

     Most Recently Used (VMware) — This policy uses the last successfully used path. If the last successfully used path is not available, then path failover occurs, and the next available path is used. This new path continues to be used until it is no longer available, even if the previously used path becomes available again.

     Fixed (VMware) — This policy has a preferred path that can be set by the VMware vCenter administrator. This path is used until it becomes unavailable. Then, it fails over to the next available path until it becomes unavailable. In which case, the path fails over to the next available path, or until the preferred path becomes available again. If the preferred path does become available again, then the system fails back to the preferred path.

     Hitachi Dynamic Link Manager — VMware ESXi also supports third party path selection policies. Hitachi Dynamic Link Manager is multipathing software from Hitachi that integrates with global-active device on Hitachi Virtual Storage Platform and Hitachi High Availability Manager to provide load balancing and path failover capabilities for servers.

Multiple Fibre Channel Fabrics

     When designing and building a reliable and scalable SAN environment, multiple Fibre Channel fabrics are recommended. For example, with multiple switches, create two separate Fibre Channel fabrics such as Fabric-A and Fabric-B.

     In a VMware vSphere environment, the ESXi hosts should have two or more HBA ports. Allocate at least one HBA port for each Fibre Channel fabric. Not only does this allow for greater I/O throughput on the SAN, because more paths are available when using the Round Robin (VMware) multipathing policy, but multiple HBAs also allow for redundancy and greater reliability in case of a component failure.

     Each VMware ESXi host in a cluster should have an equal number of connections to each Fibre Channel switch. Each Hitachi storage array should have an equal number of connections from each storage processor to each switch.

     This SAN Fibre Channel switch configuration ensures that a single switch failure will not leave an ESXi host unable to connect to a datastore or unable to continue running the virtual machines on those datastores.

Note:      Do not up-link the Fibre Channel switches to each other; keep them as separate Fibre Channel networks. This ensures that conditions on one Fibre Channel network do not affect traffic on another Fibre Channel network, such as would happen with a malfunctioning HBA, and helps ensure system reliability.

For more information about VMware vSphere and Hitachi VSP storage best practices, see: Optimize Hitachi Storage and Compute Platforms in VMware vSphere Environments (hitachivantara.com)

Desktop Virtualization Design Fundamentals

An ever growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own (BYO) device to work programs are prime reasons for moving to a virtual desktop solution.

VMware Horizon Design Fundamentals

VMware Horizon 8 integrates Remote Desktop Session Host (RDSH) sessions and VDI desktop virtualization technologies into a unified architecture that enables a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as a service to a mixed user base.

You can select applications from an easy-to-use “store” that is accessible from tablets, smartphones, PCs, Macs, and thin clients. VMware Horizon delivers a native touch-optimized experience via PCoIP or Blast Extreme high-definition performance, even over mobile networks.

Several components must be deployed to create a functioning Horizon environment to deliver the VDI. These components are referred to as common services and encompass: Domain Controllers, DNS, DHCP, user profile managers, SQL, vCenter Servers, and VMware Horizon Connection Servers.

Figure 24.       VMware Horizon Design Overview

Graphical user interface, applicationDescription automatically generated

Horizon VDI Pool and RDSH Servers Pool

Collections of identical Virtual Machines (VMs) or physical computers are managed as a single entity called a Desktop Pool. The VM provisioning relies on VMware Horizon Connection Server, vCenter Server, and AD components. The Horizon Client then forms a session using PCoIP, Blast, or RDP protocols to a Horizon Agent running in a virtual desktop, RDSH server, or physical computer. In this CVD, virtual machines in the Pools are configured to run either a Windows Server 2022 OS (for RDS Hosted shared sessions using RDP protocol) or a Windows 11 Desktop OS (for pooled VDI desktops using Blast protocol).

Figure 25.       Horizon VDI and RDSH Desktop Delivery Based on Display Protocol (PCoIP/Blast/RDP)

DiagramDescription automatically generated

Multisite Configuration

If you have multiple regional sites, you can use any load balancing tool to direct user connections to the most appropriate site to deliver the desktops and applications to users.

Figure 26 illustrates the logical architecture of the Horizon multisite deployment. Such architecture eliminates any single point of failure that can cause an outage for desktop users.

Figure 26.       Multisite Configuration Overview

Graphical user interface, applicationDescription automatically generated

Based on the requirement and the number of data centers or remote locations, you can choose any of the available load balancing software to increase security and optimize the user experience.

Note:      Multisite configuration is shown as an example and was not used as part of this CVD testing.

Designing a Virtual Desktop Environment for Different Workloads

With VMware Horizon, the method you choose to provide applications or desktops to users depends on the types of applications and desktops you are hosting and available system resources, as well as the types of users and user experience you want to provide.

Table 1.      Desktop type and user experience

Desktop Type

User Experience

Server OS machines

You want: Inexpensive server-based delivery to minimize the cost of delivering applications to a large number of users, while providing a secure, high-definition user experience.

Your users: Perform well-defined tasks and do not require personalization or offline access to applications. Users may include task workers such as call center operators and retail workers, or users that share workstations.

Application types: Any application.

Desktop OS machines

You want: A client-based application delivery solution that is secure, provides centralized management, and supports a large number of users per host server (or hypervisor), while providing users with applications that display seamlessly in high-definition.

Your users: Are internal, external contractors, third-party collaborators, and other provisional team members. Users do not require off-line access to hosted applications.

Application types: Applications that might not work well with other applications or might interact with the operating system, such as .NET framework. These types of applications are ideal for hosting on virtual machines.

Applications running on older operating systems such as Windows XP or Windows Vista, and older architectures, such as 32-bit or 16-bit. By isolating each application on its own virtual machine, if one machine fails, it does not impact other users.

Remote PC Access

You want: Employees with secure remote access to a physical computer without using a VPN. For example, the user may be accessing their physical desktop PC from home or through a public WIFI hotspot. Depending upon the location, you may want to restrict the ability to print or copy and paste outside of the desktop. This method enables BYO device support without migrating desktop images into the data center.

Your users: Employees or contractors that have the option to work from home but need access to specific software or data on their corporate desktops to perform their jobs remotely.

Host: The same as Desktop OS machines.

Application types: Applications that are delivered from an office computer and display seamlessly in high definition on the remote user's device.

For the Cisco Validated Design described in this document, Remote Desktop Session Host (RDSH) sessions using RDS-based Server OS machines and VMware Horizon pooled Instant Clone and Full Clone virtual machine desktops using VDI-based Desktop OS machines were configured and tested.

Deployment Hardware and Software

This chapter contains the following:

     Architecture

     Products Deployed

     Physical Topology

     Logical Architecture

Architecture

This Cisco and Hitachi Adaptive Solution architecture delivers a Virtual Desktop Infrastructure that is redundant and uses the best practices of Cisco and Hitachi VSP storage.

The architecture includes:

     VMware vSphere 8.0.2 hypervisor installed on the Cisco UCS X210c M7 compute nodes, configured for a stateless compute design using a Cisco M.2 SSD card.

     Hitachi VSP E1090 storage systems provide the storage infrastructure required for VMware vSphere hypervisors and the VDI workload delivered by VMware Horizon 8 2312.

     Cisco Intersight provides UCS infrastructure management with lifecycle management capabilities.

The architecture deployed is highly modular. While each customer’s environment might vary in its exact configuration, the reference architecture contained in this document, once built, can easily be scaled as requirements and demands change. This includes scaling both up (adding additional resources within a Cisco UCS Domain) and out (adding additional Cisco UCS Domains and Hitachi storage).

Products Deployed

The following are the products deployed in this solution:

     VMware vSphere ESXi 8.0 U2 hypervisor.

     VMware vCenter 8.0.2 to set up and manage the virtual infrastructure as well as integration of the virtual environment with Cisco Intersight software.

     Microsoft SQL Server 2019.

     Microsoft Windows Server 2022 and Windows 11 64-bit virtual machine Operating Systems.

     VMware Horizon 8 2312 Remote Desktop Session Host (RDSH) Sessions provisioned as Instant Clone RDS Servers and stored on Hitachi VSP E1090.

     VMware Horizon 8 2312 non-persistent Windows 11 Virtual Desktops (VDI) provisioned as Instant Clones virtual machines and stored on Hitachi VSP E1090.

     VMware Horizon 8 2312 Persistent Windows 11 Virtual Desktops (VDI) provisioned as Full Clones virtual machines and stored on Hitachi VSP E1090.

     Microsoft Office 2021 (Office applications) configured for measuring end-user experience in Knowledge Worker workload tests for Windows 11 and Server 2022 RDSH.

     FSLogix for User Profile Management.

Physical Topology

Cisco and Hitachi Adaptive Solution VDI with Cisco UCS X210c M7 Modular System is a Fibre Channel (FC) based storage access design. Hitachi VSP and Cisco UCS are connected through Cisco MDS 9132T switches and storage access uses the FC network.

Figure 27 illustrates the physical connectivity details.

Figure 27.       Cisco Hitachi Adaptive Solution VDI – Physical Topology for FC

Related image, diagram or screenshot

Figure 28 details the physical hardware and cabling deployed to enable this solution:

     Two Cisco Nexus 93180YC-FX Switches in NX-OS Mode.

     Two Cisco MDS 9132T 32-Gb Fibre Channel Switches.

     One Cisco UCS 9508 Chassis with two Cisco UCS-IOM-2408 25GB IOM Modules.

     Eight Cisco UCS X210c M7 blade servers with Intel(R) Xeon(R) Gold 6448 CPU 2.40GHz 32-core processors, 2TB 4800MHz RAM, and one Cisco VIC 15231 mezzanine card, providing N+1 server fault tolerance.

     One Hitachi VSP E1090 storage system with dual redundant controllers, 2 RU, with 61.89 TB of NVMe drives.

Figure 28.       Hitachi VSP E1090 Controller Front and Back View

A screenshot of a computerDescription automatically generated

Table 2 lists the software versions of the primary products installed in the environment.

Table 2.      Software and Firmware Versions

Vendor

Product/Component

Version/Build/Code

Cisco

UCS Component Firmware

4.2(3h)

Cisco

UCS X210c M7 Compute Node

5.3(1.280052)

Cisco

VIC 15231 (Virtual Interface Card)

5.3(1.280046)

Cisco

Cisco Nexus 93180YC-FX Switches

9.3(7a)

Cisco

Cisco MDS 9132T

8.5(1a)

Hitachi                   

VSP E1090

Code 93-07-21-80/00

Hitachi

Hitachi Ops Center

10.9.3

VMware

vCenter Server Appliance

8.0.2

VMware

vSphere 8.0.2

8.0.2

VMware

VMware Horizon 8 2312 Connection server

8.12.0-23148203

VMware

VMware Horizon 8 2312 Agent

8.12.0-23142606 (2312)

Cisco

Intersight Assist

1.0.11-759

Microsoft

FSLogix (User Profile Mgmt.)

 2.9.8784.63912

VMware

Tools

12.3.0.22234872

Logical Architecture

The logical architecture of the validated solution, which is designed to run desktop and RDSH server VMs supporting up to 2800 users on a single chassis containing 8 blades, with physical redundancy for the blade servers for each workload type and a separate vSphere cluster to host management services, is illustrated in Figure 29.

Note:      Separating management components and desktops is a best practice for large environments.

Figure 29.       Logical Architecture Overview

 Related image, diagram or screenshot

Clusters

The following clusters in one vCenter datacenter were used to support the solution and testing environment:

      Cisco and Hitachi Adaptive Solution supporting Horizon VDI

          Infrastructure Cluster: Infra VMs (vCenter, Active Directory Controllers, DNS, DHCP, SQL Server, VMware Horizon Connection Servers, VMware Horizon Replica Servers, VMware vSphere, VSMs, and required analytical or plug-in VMs).

          VDI Workload Cluster: Windows Server 2022 provisioned RDS server VMs with VMware Horizon for Remote Desktop Session Host (RDSH) sessions, and Windows 11 provisioned with VMware Horizon Instant Clone (non-persistent) and Full Clone (persistent) desktops.

     LoginVSI Launcher Cluster configured in a separate vCenter instance

For example, the cluster configured for running the Login VSI workload for measuring VDI end-user experience is LVSI-Launcher-CLSTR. The Login VSI infrastructure cluster consists of the Login VSI data shares, Login VSI web servers, the Login VSI management control VM, and so on. These were connected using the same set of switches but managed by a separate vCenter Appliance instance and hosted on separate hardware servers with local storage, isolating them from the solution's storage connections and usage.

The VMware Horizon solution described in this document provides details for configuring a fully redundant, highly available configuration. Configuration guidelines are provided that refer to which redundant component is being configured with each step, whether that be A or B. For example, Cisco Nexus A and Cisco Nexus B identify the pair of Cisco Nexus switches that are configured. The Cisco UCS Fabric Interconnects are configured similarly. 

Note:      This document demonstrates configuring the VMware Horizon customer environment as a stand-alone solution. 

Solution Configuration

This chapter contains the following:

     Solution Cabling

Solution Cabling

The following sections detail the physical connectivity configuration of the Cisco and Hitachi Adaptive Solution VMware Horizon VDI environment.

The information provided in this section is a reference for cabling the physical equipment in this Cisco Validated Design environment. To simplify cabling requirements, the tables include both local and remote device and port locations.

The tables in this section list the details for the prescribed and supported configuration of the Hitachi VSP E1090 to the Cisco 6536 Fabric Interconnects through Cisco MDS 9132T 32-Gb FC switches.

Note:      This document assumes that out-of-band management ports are plugged into an existing management infrastructure at the deployment site. These interfaces will be used in various configuration steps.

Note:      Be sure to follow the cabling directions in this section. Failure to do so will result in problems with your deployment.

Figure 30 details the cable connections used in the validation lab for Cisco Hitachi Adaptive Solution topology based on the Cisco UCS 6536 fabric interconnect. Four 32Gb uplinks connect as port-channels to each Cisco UCS Fabric Interconnect from the MDS switches, and a total of eight 32Gb links connect the MDS switches to the Hitachi VSP controllers. Four of these have been used for FC-SCSI and the other four to support FC-NVMe.

Figure 30.       Hitachi Solution Cabling

Related image, diagram or screenshot

Configuration and Installation

This chapter contains the following:

     Cisco UCS X210c M7 Configuration – Intersight Managed Mode (IMM)

     Hitachi Ops Center Configuration and Initial VSP Settings

     Cisco MDS 9132T 32-Gb FC Switch Configuration

     Hitachi VSP Storage Configuration

     Hitachi Storage Configuration for FC-NVMe

     Install and Configure VMware ESXi 8.0

     Cisco Intersight Orchestration

Cisco UCS X210c M7 Configuration – Intersight Managed Mode (IMM)

Procedure 1.       Configure Cisco UCS Fabric Interconnects for Intersight Managed Mode

Step 1.      Verify the following physical connections on the fabric interconnect:

          The management Ethernet port (mgmt0) is connected to an external hub, switch, or router.

          The L1 ports on both fabric interconnects are directly connected to each other.

          The L2 ports on both fabric interconnects are directly connected to each other.

Step 2.      Connect to the console port on the first Fabric Interconnect.

Step 3.      Configure Fabric Interconnect A (FI-A). On the Basic System Configuration Dialog screen, set the management mode to Intersight. All the remaining settings are similar to those for the Cisco UCS Manager managed mode (UCSM-Managed).

Cisco UCS Fabric Interconnect A

Procedure 1.       Configure the Cisco UCS for use in Intersight Managed Mode

Step 1.      Connect to the console port on the first Cisco UCS fabric interconnect:

  Enter the configuration method. (console/gui)? console

 

  Enter the management mode. (ucsm/intersight)? intersight

 

  You have chosen to setup a new Fabric interconnect in “intersight” managed mode. Continue? (y/n): y

 

  Enforce strong password? (y/n) [y]: Enter

 

  Enter the password for "admin": <password>

  Confirm the password for "admin": <password>

 

  Enter the switch fabric (A/B) []: A

 

  Enter the system name:  <ucs-cluster-name>

  Physical Switch Mgmt0 IP address : <ucsa-mgmt-ip>

 

  Physical Switch Mgmt0 IPv4 netmask : <ucsa-mgmt-mask>

 

  IPv4 address of the default gateway : <ucsa-mgmt-gateway>

 

  Configure the DNS Server IP address? (yes/no) [n]: y

 

    DNS IP address: <dns-server-1-ip>

 

  Configure the default domain name? (yes/no) [n]: y

 

    Default domain name: <ad-dns-domain-name>

<SNIP>

  Verify and save the configuration.

Step 2.      After applying the settings, make sure you can ping the fabric interconnect management IP address. When Fabric Interconnect A is correctly set up and is available, Fabric Interconnect B will automatically discover Fabric Interconnect A during its setup process as shown in the next step.

Step 3.      Configure Fabric Interconnect B (FI-B). For the configuration method, choose console. Fabric Interconnect B will detect the presence of Fabric Interconnect A and will prompt you to enter the admin password for Fabric Interconnect A. Provide the management IP address for Fabric Interconnect B and apply the configuration.

Cisco UCS Fabric Interconnect B

Enter the configuration method. (console/gui) ? console

 

  Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y

 

  Enter the admin password of the peer Fabric interconnect: <password>

    Connecting to peer Fabric interconnect... done

    Retrieving config from peer Fabric interconnect... done

    Peer Fabric interconnect Mgmt0 IPv4 Address: <ucsa-mgmt-ip>

    Peer Fabric interconnect Mgmt0 IPv4 Netmask: <ucsa-mgmt-mask>

 

    Peer FI is IPv4 Cluster enabled. Please Provide Local Fabric Interconnect Mgmt0 IPv4 Address

 

  Physical Switch Mgmt0 IP address: <ucsb-mgmt-ip>

 

  Local fabric interconnect model (UCS-FI-6536)

  Peer fabric interconnect is compatible with the local fabric interconnect. Continuing with the installer...

 

  Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes

Procedure 2.       Claim a Cisco UCS Fabric Interconnect in the Cisco Intersight Platform

If you do not already have a Cisco Intersight account, you need to set up a new account in which to claim your Cisco UCS deployment. Start by connecting to https://intersight.com.

All information about Cisco Intersight features and configurations can be accessed in the Cisco Intersight Help Center.

Step 1.      Click Create an account.

Step 2.      Sign in with your Cisco ID.

Step 3.      Read, scroll through, and accept the end-user license agreement. Click Next.

Step 4.      Enter an account name and click Create.

Note:      If you have an existing Cisco Intersight account, connect to https://intersight.com and sign in with your Cisco ID, select the appropriate account.

Note:      In this step, a Cisco Intersight organization is created where all Cisco Intersight managed mode configurations including policies are defined.

Step 5.      Log into the Cisco Intersight portal as a user with account administrator role.

Step 6.      From the Service Selector drop-down list, select System.

Step 7.      Navigate to Settings > General > Resource Groups.

A screenshot of a computerDescription automatically generated with low confidence

Step 8.      On Resource Groups panel click + Create Resource Group in the top-right corner.

A screenshot of a computerDescription automatically generated with medium confidence

Step 9.      Provide a name for the Resource Group (for example, Cisco Hitachi -L151-DMZ).

Step 10.  Click Create.

A screenshot of a computerDescription automatically generated

Step 11.  Navigate to Settings > General > Organizations.

A screenshot of a computerDescription automatically generated with low confidence

Step 12.  On Organizations panel click + Create Organization in the top-right corner.

ChartDescription automatically generated with medium confidence

Step 13.  Provide a name for the organization (for example, VDI Stack).

Step 14.  Select the Resource Group created in the last step (for example, VDI Stack-L151-DMZ).

Step 15.  Click Create.

A screenshot of a computerDescription automatically generated

Step 16.  Use the management IP address of Fabric Interconnect A to access the device from a web browser and the previously configured admin password to log into the device.

Step 17.  Under DEVICE CONNECTOR, the current device status will show “Not claimed.” Note or copy the Device ID and Claim Code information for claiming the device in Cisco Intersight.

Graphical user interface, websiteDescription automatically generated

Step 18.  Navigate to Admin > General > Targets.

A screenshot of a phoneDescription automatically generated with medium confidence

Step 19.  On Targets panel click Claim a New Target in the top-right corner.

Related image, diagram or screenshot

Step 20.  Select Cisco UCS Domain (Intersight Managed) and click Start.

A screenshot of a computerDescription automatically generated

Step 21.  Enter the Device ID and Claim Code captured from the Cisco UCS FI.

Step 22.  Select the previously created Resource Group and click Claim.

A screenshot of a computerDescription automatically generated 

Step 23.  When the device is successfully claimed, the Cisco UCS FI appears as a target in Cisco Intersight.

Graphical user interface, applicationDescription automatically generated

Configure a Cisco UCS Domain Profile

A Cisco UCS domain profile configures a fabric interconnect pair through reusable policies, allows configuration of the ports and port channels, and configures the VLANs and VSANs in the network. It defines the characteristics of and configured ports on fabric interconnects. The domain-related policies can be attached to the profile either at the time of creation or later. One Cisco UCS domain profile can be assigned to one fabric interconnect domain.

After the Cisco UCS domain profile has been successfully created and deployed, the policies, including the port policies, are pushed to the Cisco UCS fabric interconnects. A Cisco UCS domain profile can easily be cloned to install additional Cisco UCS systems. When cloning the UCS domain profile, the new UCS domains use the existing policies for consistent deployment of additional Cisco UCS systems at scale.

Procedure 1.       Create a Domain Profile

Step 1.      From the Service Selector drop-down list, select Infrastructure Service. Navigate to Configure > Profiles, to launch the Profiles Table view.

A screenshot of a computerDescription automatically generated with medium confidence

Step 2.      Navigate to the UCS Domain Profiles tab and click Create UCS Domain Profile.

A screenshot of a computerDescription automatically generated

Step 3.      On the Create UCS Domain Profile screen, click Start.

A screenshot of a computerDescription automatically generated

Step 4.      On the General page, select the organization created before and enter a name for your profile (for example, FS-L152-DMZ-K4). Optionally, include a short description and tag information to help identify the profile. Tags must be in the key:value format. For example, Org: IT or Site: APJ. Click Next.

A screenshot of a computerDescription automatically generated

Step 5.      On the Domain Assignment page, assign a switch pair to the Domain profile. Click Next.

Note:      You can also click Assign Later and assign a switch pair to the Domain profile at a later time.

A screenshot of a computerDescription automatically generated

Step 6.      On the VLAN & VSAN Configuration page, attach VLAN and VSAN policies for each switch to the UCS Domain Profile.

Note:      In this step, a single VLAN policy is created for both fabric interconnects and two individual VSAN policies are created because the VSAN IDs are unique for each fabric interconnect.

Step 7.      Click Select Policy next to VLAN Configuration under Fabric Interconnect A.

A screenshot of a computerDescription automatically generated with medium confidence

Step 8.      In the pane on the right, click Create New.

Step 9.      Verify correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L152-DMZ-VLAN). Click Next.

A screenshot of a computerDescription automatically generated

Step 10.  Click Add VLANs.

A screenshot of a computerDescription automatically generated

Step 11.  Provide a name and VLAN ID for the VLAN from your list (for example, 70, 71, 72, 73). Enable Auto Allow On Uplinks. To create the required Multicast policy, under Multicast Policy*, click Select Policy.

A screenshot of a computerDescription automatically generated with medium confidence

Step 12.  In the window on the right, click Create New to create a new Multicast Policy.

Step 13.  Provide a Name for the Multicast Policy (for example, FS-L152-DMZ-McastPol). Provide optional Description and click Next.

A screenshot of a computerDescription automatically generated with medium confidence

Step 14.  Leave defaults selected and click Create.

A screenshot of a computerDescription automatically generated with medium confidence

Step 15.  Click Add to add the VLAN.

A screenshot of a computerDescription automatically generated with medium confidence

Step 16.  Add the remaining VLANs from your list by clicking Add VLANs and entering the VLANs one by one. Reuse the previously created multicast policy for all the VLANs.

The VLANs created during this validation are shown below:

A screenshot of a computerDescription automatically generated

Note:      A VSAN policy is only needed when configuring Fibre Channel and can be skipped when configuring IP-only storage access.

Step 17.  Click Select Policy next to VSAN Configuration under Fabric Interconnect A. Click Create New.

A screenshot of a computer screenDescription automatically generated

Step 18.  Verify correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L152-DMZ-VSAN-A). Click Next.

A screenshot of a computerDescription automatically generated with medium confidence

Step 19.  Click Add VSAN.

A screenshot of a computerDescription automatically generated

Step 20.  Provide a name (for example, VSAN-A), VSAN ID (for example, 100), and associated Fibre Channel over Ethernet (FCoE) VLAN ID (for example, 100) for VSAN A, matching the VSAN configured for Fabric A on the Cisco MDS switch.

Step 21.  Set VLAN Scope as Uplink.

Step 22.  Click Add.

A screenshot of a computerDescription automatically generated with low confidence

Step 23.  Click Create to finish creating VSAN policy for fabric A.

A screenshot of a computerDescription automatically generated

Step 24.  Repeat steps 7 - 23 for Fabric Interconnect B, assigning the VLAN policy created previously and creating a new VSAN policy for VSAN-B. Name the policy to identify the SAN-B configuration (for example, FS-L152-DMZ-VSAN-B) and use the appropriate VSAN and FCoE VLAN for Fabric B (for example, 101).

Step 25.  Verify that a common VLAN policy and two unique VSAN policies are associated with the two fabric interconnects. Click Next.

A screenshot of a computerDescription automatically generated with medium confidence

Step 26.  On the Ports Configuration page, attach port policies for each switch to the UCS Domain Profile.

Note:      Use two separate port policies for the fabric interconnects. Using separate policies provides flexibility when the port configuration (port numbers or speed) differs between the two FIs. When configuring Fibre Channel, two port policies are required because each fabric interconnect uses a unique Fibre Channel VSAN ID.

Step 27.  Click Select Policy for Fabric Interconnect A.

A screenshot of a computerDescription automatically generated

Step 28.  Click Create New.

Step 29.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L152-DMZ-K4-FI-A). Click Next.

A screenshot of a computer screenDescription automatically generated

Step 30.  Move the slider to set up unified ports. In this deployment, the first four ports were selected as Fibre Channel ports. Click Next.

A screenshot of a computerDescription automatically generated

Step 31.  On the Breakout Options page, click Next.

Note:      No Ethernet/Fibre Channel breakouts were used in this validation.

A screenshot of a computerDescription automatically generated

Step 32.  Select the ports that need to be configured as server ports by clicking the ports in the graphics (or select from the list below the graphic). When all ports are selected, click Configure.

A screenshot of a computerDescription automatically generated with medium confidence

Step 33.  From the drop-down list, select Server as the role. Click Save.

A screenshot of a computerDescription automatically generated

Step 34.  Configure the Ethernet uplink port channel by selecting the Port Channel in the main pane and then clicking Create Port Channel.

A screenshot of a computerDescription automatically generated

Step 35.  Select Ethernet Uplink Port Channel as the role and provide a port-channel ID (for example, 11).

Note:      You can create Ethernet Network Group, Flow Control, Link Aggregation, or Link Control policies to define a disjoint Layer-2 domain or to fine-tune port-channel parameters. These policies were not used in this deployment, and system default values were applied.

Step 36.  Scroll down and select uplink ports from the list of available ports (for example, port 49 and 50).

Step 37.  Click Save.

A screenshot of a computerDescription automatically generated with medium confidence

Step 38.  Repeat steps 27 - 37 to create the port policy for Fabric Interconnect B. Use the following values for various parameters:

     Name of the port policy: FS-L152-DMZ-K4-FI-B

     Ethernet port-Channel ID: 12

Step 39.  When the port configuration for both fabric interconnects is complete and verified, click Next.

A screenshot of a computerDescription automatically generated

Step 40.  Under UCS domain configuration, additional policies can be configured to set up NTP, Syslog, DNS settings, SNMP, QoS, and the UCS operating mode (end-host or switch mode). For this deployment, System QoS will be configured.

Step 41.  Click Select Policy next to System QoS* and click Create New to define the System QoS policy.

A screenshot of a computerDescription automatically generated

Step 42.  Verify correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L152-DMZ-QosPol). Click Next.

A screenshot of a computerDescription automatically generated

Step 43.  Change the MTU for the Best Effort class to 9216. Keep the remaining default selections. Click Create.

A screenshot of a computerDescription automatically generated

Step 44.  Click Next.

A screenshot of a computerDescription automatically generated

Step 45.  From the UCS domain profile Summary view, expand the settings and verify all the settings, including the fabric interconnect settings, to make sure the configuration is correct. Click Deploy.

A screenshot of a computerDescription automatically generated with medium confidence

The system will take some time to validate and configure the settings on the fabric interconnects. Log into the console servers to see when the Cisco UCS fabric interconnects have completed configuration and rebooted successfully.

When the Cisco UCS domain profile has been successfully deployed, the Cisco UCS chassis and the blades should be successfully discovered.

It takes a while to discover the blades for the first time. Cisco Intersight provides an ability to view the progress in the Requests page:
IconDescription automatically generated with medium confidence

Step 46.  From the Service Selector drop-down list, select Infrastructure Service. Navigate to Configure > Profiles, select UCS Domain Profiles, and verify that the domain profile has been successfully deployed.

A screenshot of a videoDescription automatically generated 

Step 47.  From the Service Selector drop-down list, select Infrastructure Service. Navigate to Operate > Chassis and verify that the chassis has been discovered.

A screenshot of a videoDescription automatically generated

Step 48.  From the Service Selector drop-down list, select Infrastructure Service. Navigate to Operate > Servers and verify that the servers have been successfully discovered.

 Related image, diagram or screenshot

Configure Cisco UCS Chassis Profile

A Cisco UCS chassis profile in Cisco Intersight allows you to configure various parameters for the chassis, including:

     IMC Access Policy: IP configuration for in-band chassis connectivity. This setting is independent of server IP connectivity and applies only to communication to and from the chassis.

     SNMP Policy, and SNMP trap settings.

     Power Policy to enable power management and power supply redundancy mode.

     Thermal Policy to control the speed of FANs (only applicable to Cisco UCS 9508).

A chassis policy can be assigned to any number of chassis profiles to provide a configuration baseline for a chassis. In this deployment, a chassis profile was created and attached to the chassis with the settings shown in Figure 31.

Graphical user interface, text, applicationDescription automatically generated with medium confidence

Figure 31.       Chassis policy detail

A screenshot of a computerDescription automatically generated with medium confidence      A screenshot of a computerDescription automatically generated with medium confidence      Graphical user interface, text, applicationDescription automatically generated

Configure Server Profiles

In the Cisco Intersight platform, a server profile enables resource management by simplifying policy alignment and server configuration. Server profiles are derived from a server profile template. The server profile template and its associated policies can be created using the server profile template wizard. After creating the server profile template, you can derive multiple consistent server profiles from it.

Note:      The server profile template captured in this deployment guide supports Cisco UCS X210c M7 compute nodes.

Procedure 1.       Create vNIC and vHBA Placement for the Server Profile Template

In this deployment, four vNICs and two vHBAs are configured. These devices are manually placed as listed in Table 3.

Table 3.      vHBA and vNIC Placement for FC Connected Storage

vNIC/vHBA Name     Slot     Switch ID     PCI Order

vHBA-A             MLOM     A             0
vHBA-B             MLOM     B             1
01-vSwitch0-A      MLOM     A             2
02-vSwitch0-B      MLOM     B             3
03-VDS0-A          MLOM     A             4
04-VDS0-B          MLOM     B             5

Note:      Two vHBAs (vHBA-A and vHBA-B) are configured to support FC boot from SAN.

Step 1.      Log into the Cisco Intersight portal as a user with account administrator role.

Step 2.      Navigate to Configure > Templates and click Create UCS Server Profile Template.

Background patternDescription automatically generated

Step 3.      Select the organization from the drop-down list. Provide a name for the server profile template (for example, FS-L151-DMZ-K4-X210c M7) for FI-Attached UCS Server. Click Next.

A screenshot of a computerDescription automatically generated

Step 4.      Click Select Pool under UUID Pool and then click Create New.

A screenshot of a computer screenDescription automatically generated with medium confidence

Step 5.      Verify correct organization is selected from the drop-down list and provide a name for the UUID Pool (for example, FS-L151-DMZ-UUID-Pool). Provide an optional Description and click Next.

Related image, diagram or screenshot

Step 6.      Provide a UUID Prefix (for example, a random prefix of A11A14B6-B193-49C7 was used). Add a UUID block of appropriate size. Click Create.

Related image, diagram or screenshot 

Step 7.      Click Select Policy next to BIOS and in the pane on the right, click Create New.

Step 8.      Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-M6-BIOS-Perf).

Step 9.      Click Next.

Related image, diagram or screenshot

Step 10.  On the Policy Details screen, select appropriate values for the BIOS settings. Click Create.

A screenshot of a computerDescription automatically generated

Note:      In this deployment, the BIOS values were selected based on recommendations in the performance tuning guide for Cisco UCS M6 BIOS: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/performance-tuning-guide-ucs-m6-servers.html.

Table 4.      Cisco Hitachi -L151-DMZ-M7-BIOS-Perf Token Values

BIOS Token                                 Value

Intel Directed IO
Intel VT for Directed IO                   enabled

Memory
Memory RAS Configuration                   maximum-performance

Power And Performance
Core Performance Boost                     Auto
Enhanced CPU Performance                   Auto
LLC Deadline                               disabled
UPI Link Enablement                        1
UPI Power Management                       enabled

Processor
Altitude                                   auto
Boot Performance Mode                      Max Performance
Core Multiprocessing                       all
CPU Performance                            enterprise
Power Technology                           performance
Direct Cache Access Support                enabled
DRAM Clock Throttling                      Performance
Enhanced Intel Speedstep(R) Technology     enabled
Execute Disable Bit                        enabled
IMC Interleaving                           1-way Interleave
Intel HyperThreading Tech                  Enabled
Intel Turbo Boost Tech                     enabled
Intel(R) VT                                enabled
DCU IP Prefetcher                          enabled
Processor C1E                              disabled
Processor C3 Report                        disabled
Processor C6 Report                        disabled
CPU C State                                disabled
Sub Numa Clustering                        enabled
DCU Streamer Prefetch                      enabled

Step 11.  Click Select Policy next to Boot Order and then click Create New.

Step 12.  Verify correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-BootPol). Click Next.

A screenshot of a computerDescription automatically generated

Step 13.  For Configured Boot Mode, select Unified Extensible Firmware Interface (UEFI).

Step 14.  Turn on Enable Secure Boot.

Step 15.  Click Add Boot Device drop-down list and select Virtual Media.

Step 16.  Provide a device name (for example, vKVM-DVD) and then, for the subtype, select KVM Mapped DVD.

For Fibre Channel SAN boot, the four connected FC ports on the Hitachi VSP E1090 storage system controllers would be added as boot options; in this design these are the FC-SCSI ports CL1-A, CL2-A, CL3-A, and CL4-A.

For more information about using SAN boot and its configuration, go to: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/hitachi_adaptive_vmware_vsp.html

Note:      For this Cisco Validated Design, the M.2 SSD was used for the ESXi hypervisor installation.

Step 17.  Verify the order of the boot policies and adjust the boot order as necessary using the arrows next to the delete icon. Click Create.

Related image, diagram or screenshot

Step 18.  Click Select Policy next to Power and in the pane on the right, click Create New.

Step 19.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, UCS-PWR). Click Next.

A screenshot of a computerDescription automatically generated

Step 20.  Enable Power Profiling and select High from the Power Priority drop-down list. Click Create.

A screenshot of a computer screenDescription automatically generated with medium confidence

Step 21.  (Optional) Click Select Policy next to Virtual Media and, in the pane on the right, click Create New.

Step 22.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-Vmedia-Pol). Click Next.

A screenshot of a computerDescription automatically generated

Step 23.  Disable Low Power USB and click Create.

A screenshot of a computerDescription automatically generated

Step 24.  Click Next to go to Management Configuration.

A screenshot of a computerDescription automatically generated with medium confidence

Step 25.  Click Select Policy next to IMC Access and then click Create New.

Step 26.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L152-DMZ-IMCAPol). Click Next.

A screenshot of a computerDescription automatically generated

Note:      You can select in-band management access to the compute node using an in-band management VLAN (for example, VLAN 70) or out-of-band management access using the Mgmt0 interfaces of the FIs. KVM Policies like SNMP, vMedia and Syslog are currently not supported via Out-Of-Band and will require an In-Band IP to be configured.

Step 27.  Click UCS Server (FI-Attached). Enable In-Band Configuration and enter the VLAN ID designated for in-band management (for example, 70).

A computer screen shot of a computer screenDescription automatically generated

Step 28.  Under IP Pool, click Select IP Pool and then click Create New.

ShapeDescription automatically generated with medium confidence

Step 29.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L152-DMZ-ICMA-IP-Pool). Click Next.

A screenshot of a computerDescription automatically generated

Step 30.  Select Configure IPv4 Pool and provide the information to define a pool for KVM IP address assignment including an IP Block.

A screenshot of a computerDescription automatically generated

Note:      The management IP pool subnet should be accessible from the host that is trying to open the KVM connection. In the example shown here, the hosts trying to open a KVM connection would need to be able to route to 10.10.70.0/24 subnet.

A screenshot of a computer screenDescription automatically generated

Step 31.  Click Select Policy next to IPMI Over LAN and then click Create New.

Step 32.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, Enable-IPMIoLAN). Click Next.

A screenshot of a computerDescription automatically generated with medium confidence

Step 33.  Turn on Enable IPMI Over LAN.

Step 34.  From the Privilege Level drop-down list, select admin.

Step 35.  Click Create.

A screenshot of a computer screenDescription automatically generated with medium confidence

Step 36.  Click Select Policy next to Local User and then, in the pane on the right, click Create New.

Step 37.  Verify the correct organization is selected from the drop-down list and provide a name for the policy.

Step 38.  Verify that UCS Server (FI-Attached) is selected.

Step 39.  Verify that Enforce Strong Password is selected.

Graphical user interface, text, applicationDescription automatically generated

Step 40.  Click Add New User and then click + next to the New User.

Step 41.  Provide the username (for example, fpadmin), choose a role (for example, admin), and provide a password.

Graphical user interface, applicationDescription automatically generated

Note:      The username and password combination defined here will be used to log into KVMs. The typical Cisco UCS admin username and password combination cannot be used for KVM access.

Step 42.  Click Create to finish configuring the user.

Step 43.  Click Create to finish configuring local user policy.

Step 44.  Click Next to move to Storage Configuration.

A screenshot of a computerDescription automatically generated with medium confidence

Step 45.  Click Next on the Storage Configuration screen. No local storage configuration is needed for this deployment.

A screenshot of a computerDescription automatically generated

Step 46.  Click Select Policy next to LAN Connectivity and then click Create New.

Note:      LAN connectivity policy defines the connections and network communication resources between the server and the LAN. This policy uses pools to assign MAC addresses to servers and to identify the vNICs that the servers use to communicate with the network. For consistent vNIC placement, manual vNIC placement is utilized.

The FC boot from SAN hosts use four vNICs, configured as listed in Table 5.

Table 5.      vNICs for LAN Connectivity

vNIC           Slot ID     Switch ID     PCI Order     VLANs

vSwitch0-A     MLOM        A             2             FS-InBand-Mgmt_70
vSwitch0-B     MLOM        B             3             FS-InBand-Mgmt_70
VDS0-A         MLOM        A             4             FS-VDI_72, FS-vMotion_73
VDS0-B         MLOM        B             5             FS-VDI_72, FS-vMotion_73

Note:      The PCI order 0 and 1 will be used in the SAN Connectivity policy to create vHBA-A and vHBA-B.
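After the server profiles are deployed and ESXi is installed, it can be useful to confirm that the vNICs enumerate in the intended PCI order and that Consistent Device Naming carries the vNIC names through to the hypervisor. The following is a minimal check from the ESXi shell using standard ESXi CLI commands; the vmnic numbering reported by a given host depends on its adapter inventory:

esxcli network nic list

esxcli network nic get -n vmnic0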

Step 47.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-LAN-Conn-Pol). Click Next.

Step 48.  Under vNIC Configuration, select Manual vNICs Placement.

Step 49.  Click Add vNIC.

A screenshot of a computerDescription automatically generated

Step 50.  Click Select Pool under MAC Address Pool and then click Create New.

Note:      When creating the first vNIC, the MAC address pool has not been defined yet, therefore a new MAC address pool will need to be created. Two separate MAC address pools are configured for each Fabric. MAC-Pool-A will be reused for all Fabric-A vNICs, and MAC-Pool-B will be reused for all Fabric-B vNICs.

Table 6.      MAC Address Pools

Pool Name                  Starting MAC Address     Size     vNICs

FS-L151-DMZ-MAC-Pool-A     00:25:B5:04:0A:00        256*     vSwitch0-A, VDS0-A
FS-L151-DMZ-MAC-Pool-B     00:25:B5:04:0B:00        256*     vSwitch0-B, VDS0-B

Step 51.  Verify the correct organization is selected from the drop-down list and provide a name for the pool from Table 6 depending on the vNIC being created (for example, FS-L151-DMZ-MAC-Pool-A for Fabric A).

Step 52.  Click Next.

A screenshot of a computerDescription automatically generated

Step 53.  Provide the starting MAC address from Table 6 (for example, 00:25:B5:04:0A:00) and the size of the MAC address pool (for example, 256). Click Create to finish creating the MAC address pool.

A screenshot of a computerDescription automatically generated with medium confidence

Step 54.  From the Add vNIC window, provide vNIC Name, Slot ID, Switch ID, and PCI Order information from Table 5.

A screenshot of a computerDescription automatically generated

Step 55.  For Consistent Device Naming (CDN), from the drop-down list, select vNIC Name.

Step 56.  Verify that Failover is disabled because the failover will be provided by attaching multiple NICs to the VMware vSwitch and VDS.

Graphical user interface, applicationDescription automatically generated

Step 57.  Click Select Policy under Ethernet Network Group Policy and then click Create New.

Note:      The Ethernet Network Group policies will be created and reused on applicable vNICs as explained below. The Ethernet Network Group policy defines the VLANs allowed for a particular vNIC, therefore multiple network group policies will be defined for this deployment as listed in Table 7.

Table 7.      Ethernet Group Policy Values

Group Policy Name                   Native VLAN         Apply to vNICs             VLANs

FS-L151-DMZ-vSwitch0-NetGrp-Pol     Native-VLAN (1)     vSwitch0-A, vSwitch0-B     FS-InBand-Mgmt_70
FS-L151-DMZ-vSwitch1-NetGrp-Pol     Native-VLAN (1)     VDS0-A, VDS0-B             FS-VDI_72, FS-vMotion_73

Step 58.  Verify the correct organization is selected from the drop-down list and provide a name for the policy from Table 7 (for example, FS-L151-DMZ-vSwitch0-NetGrp-Pol). Click Next.

A screenshot of a computerDescription automatically generated with medium confidence

Step 59.  Enter the allowed VLANs (for example, 70) and the native VLAN ID (for example, 1) from Table 7. Click Create.

A screenshot of a computerDescription automatically generated with medium confidence

Note:      When ethernet group policies are shared between two vNICs, the ethernet group policy only needs to be defined for the first vNIC. For subsequent vNIC policy mapping, just click Select Policy and pick the previously defined ethernet group policy from the list on the right.

Step 60.  Click Select Policy under Ethernet Network Control Policy and then click Create New.

Note:      The Ethernet Network Control Policy is used to enable Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP) for the vNICs. A single policy will be created and reused for all the vNICs.

Step 61.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-NetCtrl-Pol).

Step 62.  Click Next.

A screenshot of a computerDescription automatically generated with medium confidence

Step 63.  Enable Cisco Discovery Protocol and both Enable Transmit and Enable Receive under LLDP. Click Create.

A screenshot of a computerDescription automatically generated 

Step 64.  Click Select Policy under Ethernet QoS and click Create New.

Note:      The Ethernet QoS policy is used to enable jumbo maximum transmission units (MTUs) for all the vNICs. A single policy will be created and reused for all the vNICs.

Step 65.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-QOS).

Step 66.  Click Next.

A screenshot of a computerDescription automatically generated with medium confidence

Step 67.  Change the MTU Bytes value to 9000. Click Create.

A screenshot of a computer screenDescription automatically generated with medium confidence
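With jumbo frames enabled in both the System QoS policy (MTU 9216) and this Ethernet QoS policy (MTU 9000), the end-to-end jumbo path can be validated once the ESXi hosts are built. A simple check, assuming vMotion runs on a vmkernel interface such as vmk2, is a don't-fragment ping with an 8972-byte payload between two hosts:

vmkping -I vmk2 -d -s 8972 <peer host vMotion IP>

If this ping fails while a standard vmkping succeeds, a device in the path is not passing jumbo frames.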

Step 68.  Click Select Policy under Ethernet Adapter and then click Create New.

Note:      The ethernet adapter policy is used to set the interrupts and the send and receive queues. The values are set according to the best-practices guidance for the operating system in use. Cisco Intersight provides default VMware Ethernet Adapter policy for typical VMware deployments. Optionally, you can configure a tweaked ethernet adapter policy for additional hardware receive queues handled by multiple CPUs in scenarios where there is a lot of vMotion traffic and multiple flows. In this deployment, a modified ethernet adapter policy, FS-L151-DMZ-EthAdapt-VMware-HiTraffic, is created and attached to the VDS0-A and VDS0-B interfaces which handle vMotion.

Table 8.      Ethernet Adapter Policy association to vNICs

Policy Name                               vNICs

FS-L151-DMZ-EthAdapt-VMware               vSwitch0-A, vSwitch0-B
FS-L151-DMZ-EthAdapt-VMware-HiTraffic     VDS0-A, VDS0-B

Step 69.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-EthAdapt-VMware).

Step 70.  Click Select Default Configuration under Ethernet Adapter Default Configuration.

Step 71.  From the list, select VMware. Click Next.

A screenshot of a computerDescription automatically generated

Step 72.  For the FS-L151-DMZ-EthAdapt-VMware policy, click Create and skip the rest of the steps in this section.

A screenshot of a computerDescription automatically generated

Step 73.  For the optional FS-L151-DMZ-EthAdapt-VMware-HiTraffic policy used for VDS interfaces, make the following modifications to the policy:

          Increase Interrupts to 11

          Increase Receive Queue Count to 8

          Increase Completion Queue Count to 9

          Enable Receive Side Scaling

A screenshot of a computerDescription automatically generated

Graphical user interface, applicationDescription automatically generated

Step 74.  Click Create.

A screenshot of a computerDescription automatically generated with medium confidence

Step 75.  Click Create to finish creating the vNIC.

Step 76.  Repeat the vNIC creation steps for the rest of vNICs. Verify all four vNICs were successfully created. Click Create.

A screenshot of a computerDescription automatically generated

Step 77.  Click Select Policy next to SAN Connectivity and then click Create New.

Note:      A SAN connectivity policy determines the network storage resources and the connections between the server and the storage device on the network. This policy enables customers to configure the vHBAs that the servers use to communicate with the SAN.

Table 9.      vHBA for boot from FC SAN

vNIC/vHBA Name     Slot     Switch ID     PCI Order

vHBA-A             MLOM     A             0
vHBA-B             MLOM     B             1

Step 78.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-FC-SAN-Conn-Pol).

A screenshot of a computerDescription automatically generated

Step 79.  Select Manual vHBAs Placement.

Step 80.  Select Pool under WWNN.

A screenshot of a computerDescription automatically generated 

Note:      The WWNN address pool has not been defined yet; therefore, a new WWNN address pool must be defined.

Step 81.  Click Select Pool under WWNN Pool and then click Create New.

A screenshot of a computerDescription automatically generated

Step 82.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-WWN-Pool).

Step 83.  Click Next.

A screenshot of a computerDescription automatically generated

Step 84.  Provide the starting WWNN block address and the size of the pool. Click Create.

A screenshot of a computerDescription automatically generated with medium confidence

Note:      As a best practice, additional information should always be coded into the WWNN address pool for troubleshooting. For example, in the address 20:00:00:25:B5:23:00:00, 23 is the rack ID.  

Step 85.  Click the vHBA icon, enter vHBA-A for the Name, and select fc-initiator from the drop-down list.

Graphical user interface, application, TeamsDescription automatically generated

Note:      The WWPN address pool has not been defined yet; therefore, a WWPN address pool for Fabric A will be defined.

Step 86.  Click Select Pool under WWPN Address Pool and then click Create New.

Graphical user interface, textDescription automatically generated with medium confidence

Step 87.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-WWPN-Pool-A).

A screenshot of a computerDescription automatically generated with medium confidence

Step 88.  Provide the starting WWPN block address for SAN A and the size. Click Create.

Graphical user interface, text, applicationDescription automatically generated

Step 89.  Provide the Switch ID (for example, A) and PCI Order (for example, 0) from Table 9.

A screenshot of a computerDescription automatically generated

Step 90.  Click Select Policy under Fibre Channel Network and then click Create New.

Note:      A Fibre Channel network policy governs the VSAN configuration for the virtual interfaces. In this deployment, VSAN 100 will be used for vHBA-A and VSAN 101 will be used for vHBA-B.

Step 91.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-K4-FCN-A). Click Next.

A screenshot of a computerDescription automatically generated

Step 92.  For the scope, select UCS Server (FI-Attached).

Step 93.  Under VSAN ID, provide the VSAN information (for example, 100).

Step 94.  Click Create.

A screenshot of a computerDescription automatically generated

Step 95.  Click Select Policy under Fibre Channel QoS and then click Create New.

Note:      The Fibre Channel QoS policy assigns a system class to the outgoing traffic for a vHBA. This system class determines the quality of service for the outgoing traffic. The Fibre Channel QoS policy used in this deployment uses default values and will be shared by all vHBAs.

Step 96.  Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-FCQOS-Pol). Click Next.

A screenshot of a computerDescription automatically generated

Step 97.  For the scope, select UCS Server (FI-Attached).

Note:      Do not change the default values on the Policy Details screen.

Step 98.  Click Create.

A screenshot of a computerDescription automatically generated

Step 99.  Click Select Policy under Fibre Channel Adapter and then click Create New.

Note:      A Fibre Channel adapter policy governs the host-side behavior of the adapter, including the way that the adapter handles traffic. This validation uses the default values for the adapter policy, and the policy will be shared by all the vHBAs.

Step 100.                           Verify the correct organization is selected from the drop-down list and provide a name for the policy (for example, FS-L151-DMZ-FC-Adapter-Pol).

A screenshot of a computerDescription automatically generated

Step 101.                           For the scope, select UCS Server (FI-Attached).

Note:      Do not change the default values on the Policy Details screen.

Step 102.                           Click Create.

A screenshot of a computerDescription automatically generated 

Step 103.                           Click Add to create vHBA-A.

A screenshot of a computerDescription automatically generated

Step 104.                           Create vHBA-B using the same steps above, using the WWPN pool and Fibre Channel Network policy for SAN-B.

Step 105.                           Verify both vHBAs are added to the SAN connectivity policy.

A screenshot of a computerDescription automatically generated with medium confidence 
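Once ESXi is running on the derived server profiles, the vHBA WWPNs assigned from the WWPN pools can be confirmed from the ESXi shell with the standard command below; these are the initiator WWPNs that are zoned on the Cisco MDS switches later in this document:

esxcli storage san fc list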

Step 106.                           When the LAN connectivity policy and SAN connectivity policy are created and assigned, click Next to move to the Summary screen.

A screenshot of a computerDescription automatically generated 

Step 107.                           From the Server profile template Summary screen, click Derive Profiles.

Note:      This action can also be performed later by navigating to Templates, clicking “…” next to the template name and selecting Derive Profiles.

A screenshot of a computerDescription automatically generated

Step 108.                           Under Server Assignment, select Assign Now and select the Cisco UCS X210c M7 nodes. You can select one or more servers depending on the number of profiles to be deployed. Click Next.

Cisco Intersight will fill the default information for the number of servers selected.

A screenshot of a computerDescription automatically generated

Step 109.                           Adjust the Prefix and number as needed. Click Next.

A screenshot of a computerDescription automatically generated with medium confidence

Step 110.                           Verify the information and click Derive to create the Server Profiles.

Configure Cisco Nexus 93180YC-FX Switches

This section details the steps for the Cisco Nexus 93180YC-FX switch configuration.

Procedure 1.       Configure Global Settings for Cisco Nexus A and Cisco Nexus B

Step 1.      Log in as admin user into the Cisco Nexus Switch A and run the following commands to set global configurations and jumbo frames in QoS:

conf terminal

policy-map type network-qos jumbo

class type network-qos class-default

mtu 9216

exit

class type network-qos class-fcoe

pause no-drop

mtu 2158

exit

exit

system qos

service-policy type network-qos jumbo

exit

copy running-config startup-config

Step 2.      Log in as admin user into the Cisco Nexus Switch B and run the same commands (above) to set global configurations and jumbo frames in QoS.
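The jumbo-frame policy can be verified on each Cisco Nexus switch before proceeding. The following standard NX-OS show commands are one way to do this; Eth1/51 (an uplink to the fabric interconnects in this topology) is used as an example interface:

show policy-map type network-qos jumbo

show queuing interface ethernet 1/51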

Procedure 2.       Configure VLANs for Cisco Nexus A and Cisco Nexus B Switches

Note:      For this solution, VLANs 70, 71, 72, 73, and 76 were created.

Step 1.      Log in as admin user into the Cisco Nexus Switch A.

Step 2.      Create VLAN 70:

config terminal

VLAN 70

name InBand-Mgmt

no shutdown

exit

copy running-config startup-config

Step 3.      Log in as admin user into the Nexus Switch B and create VLANs.
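The remaining VLANs follow the same pattern as VLAN 70 on both switches. The sketch below uses the VLAN roles referenced by the vNIC policies in this design for VLANs 72 and 73 (VDI and vMotion); the names shown for VLANs 71 and 76 are placeholders only, since their roles are not named elsewhere in this section:

config terminal
VLAN 71
name Infra-Mgmt
no shutdown
exit
VLAN 72
name VDI
no shutdown
exit
VLAN 73
name vMotion
no shutdown
exit
VLAN 76
name Launcher
no shutdown
exit
copy running-config startup-config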

Virtual Port Channel (vPC) Summary for Data and Storage Network

In the Cisco Nexus 93180YC-FX switch topology, a single vPC feature is enabled to provide HA, faster convergence in the event of a failure, and greater throughput. The Cisco Nexus 93180YC-FX vPC configuration, with the vPC domain and the corresponding vPC names and IDs, is listed in Table 10.

Table 10.   vPC Summary

vPC Domain     vPC Name                     vPC ID

50             Peer-Link                    1
50             vPC Port-Channel to FI-A     11
50             vPC Port-Channel to FI-B     12

As listed in Table 10, a single vPC domain with domain ID 50 is created across the two Cisco Nexus 93180YC-FX member switches to define the vPC members that carry specific VLAN network traffic. In this topology, a total of three vPCs were defined:

     vPC ID 1 is defined as the peer link for communication between the two Cisco Nexus switches.

     vPC IDs 11 and 12 are defined for traffic from Cisco UCS fabric interconnects.

Cisco Nexus 93180YC-FX Switch Cabling Details

The following tables list the cabling information.

Table 11.   Cisco Nexus 93180YC-FX-A Cabling Information

Local Device                        Local Port     Connection     Remote Device                       Remote Port

Cisco Nexus 93180YC-FX Switch A     Eth1/51        100 GbE        Cisco UCS fabric interconnect B     Eth1/49
                                    Eth1/52        100 GbE        Cisco UCS fabric interconnect A     Eth1/49
                                    Eth1/1         25 GbE         Cisco Nexus 93180YC-FX B            Eth1/1
                                    Eth1/2         25 GbE         Cisco Nexus 93180YC-FX B            Eth1/2
                                    Eth1/3         25 GbE         Cisco Nexus 93180YC-FX B            Eth1/3
                                    Eth1/4         25 GbE         Cisco Nexus 93180YC-FX B            Eth1/4
                                    MGMT0          1 GbE          GbE management switch               Any

Table 12.   Cisco Nexus 93180YC-FX-B Cabling Information

Local Device                        Local Port     Connection     Remote Device                       Remote Port

Cisco Nexus 93180YC-FX Switch B     Eth1/51        100 GbE        Cisco UCS fabric interconnect B     Eth1/50
                                    Eth1/52        100 GbE        Cisco UCS fabric interconnect A     Eth1/50
                                    Eth1/1         25 GbE         Cisco Nexus 93180YC-FX A            Eth1/1
                                    Eth1/2         25 GbE         Cisco Nexus 93180YC-FX A            Eth1/2
                                    Eth1/3         25 GbE         Cisco Nexus 93180YC-FX A            Eth1/3
                                    Eth1/4         25 GbE         Cisco Nexus 93180YC-FX A            Eth1/4
                                    MGMT0          1 GbE          GbE management switch               Any

Cisco UCS Fabric Interconnect 6536 Cabling

The following tables list the Cisco UCS FI 6536 cabling information.

Table 13.   Cisco UCS Fabric Interconnect (FI) A Cabling Information

Local Device            Local Port     Connection     Remote Device                        Remote Port

Cisco UCS FI-6536-A     FC 1/1         32Gb FC        Cisco MDS 9132T 32-Gb-A              FC 1/05
                        FC 1/2         32Gb FC        Cisco MDS 9132T 32-Gb-A              FC 1/06
                        Eth1/11-14     25 GbE         UCS 9508 Chassis 1, IFM-A            Intelligent Fabric Module 1 Port 1-2
                        Eth1/49        100 GbE        Cisco Nexus 93180YC-FX Switch A      Eth1/52
                        Eth1/50        100 GbE        Cisco Nexus 93180YC-FX Switch B      Eth1/52
                        Mgmt 0         1 GbE          Management switch                    Any
                        L1             1 GbE          Cisco UCS FI - B                     L1
                        L2             1 GbE          Cisco UCS FI - B                     L2

Table 14.   Cisco UCS Fabric Interconnect (FI) B Cabling Information

Local Device            Local Port     Connection     Remote Device                        Remote Port

Cisco UCS FI-6536-B     FC 1/1         32Gb FC        Cisco MDS 9132T 32-Gb-B              FC 1/05
                        FC 1/2         32Gb FC        Cisco MDS 9132T 32-Gb-B              FC 1/06
                        Eth1/17-18     25 GbE         UCS 9508 Chassis 1, IFM-B            Intelligent Fabric Module 1 Port 1-2
                        Eth1/49        100 GbE        Cisco Nexus 93180YC-FX Switch A      Eth1/51
                        Eth1/50        100 GbE        Cisco Nexus 93180YC-FX Switch B      Eth1/51
                        Mgmt 0         1 GbE          Management switch                    Any
                        L1             1 GbE          Cisco UCS FI - A                     L1
                        L2             1 GbE          Cisco UCS FI - A                     L2

Procedure 1.       Create vPC Peer-Link Between the Two Cisco Nexus Switches

Step 1.      Log in as “admin” user into the Cisco Nexus Switch A.

Note:      For the vPC peer link (vPC ID 1, port-channel 10), interfaces Eth1/1-4 were used in this deployment. You may choose the appropriate number of ports for your needs.

Step 2.      Create the necessary port channels between devices by running these commands on both Cisco Nexus switches:

config terminal

feature vpc

feature lacp

vpc domain 50

peer-keepalive destination 173.37.52.104 source 173.37.52.103

exit

interface port-channel 10

description VPC peer-link

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type network

vpc peer-link

interface Ethernet1/1

description VPC to K23-N9K-A

switchport mode trunk

switchport trunk allowed vlan 1,70-76,132

channel-group 10 mode active

no shutdown

exit

 

interface Ethernet1/2

description VPC to K23-N9K-A

switchport mode trunk

switchport trunk allowed vlan 1,70-76,132

channel-group 10 mode active

no shutdown

exit

 

interface Ethernet1/3

description VPC to K23-N9K-A

switchport mode trunk

switchport trunk allowed vlan 1,70-76,132

channel-group 10 mode active

no shutdown

exit

 

interface Ethernet1/4

description VPC to K23-N9K-A

switchport mode trunk

switchport trunk allowed vlan 1,70-76,132

channel-group 10 mode active

no shutdown

exit

copy running-config startup-config

Step 3.      Log in as admin user into the Nexus Switch B and repeat the above steps to configure second Cisco Nexus switch.

Step 4.      Make sure to change the peer-keepalive destination and source IP address appropriately for Cisco Nexus Switch B.

Procedure 2.       Create vPC Configuration Between Cisco Nexus 93180YC-FX and Cisco Fabric Interconnects

Create and configure vPC 11 and 12 for the data network between the Cisco Nexus switches and fabric interconnects.

Note:      Create the necessary port channels between devices, by running the following commands on both Cisco Nexus switches.

Step 1.      Log in as admin user into Cisco Nexus Switch A and enter the following:

config terminal

interface port-channel11

description FI-A-Uplink

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type edge trunk

vpc 11

no shutdown

exit

interface port-channel12

description FI-B-Uplink

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type edge trunk

vpc 12

no shutdown

exit

interface Ethernet1/51

description FI-A-Uplink

switchport mode trunk

switchport trunk allowed vlan 1,70-76

spanning-tree port type edge trunk

mtu 9216

channel-group 11 mode active

no shutdown

exit

interface Ethernet1/52

description FI-B-Uplink

switchport mode trunk

switchport trunk allowed vlan 1,70-76

spanning-tree port type edge trunk

mtu 9216

channel-group 12 mode active

no shutdown

exit

copy running-config startup-config

Step 2.      Log in as admin user into the Nexus Switch B and complete the following for the second switch configuration:

config Terminal

interface port-channel11

description FI-A-Uplink

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type edge trunk

vpc 11

no shutdown

exit

interface port-channel12

description FI-B-Uplink

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type edge trunk

vpc 12

no shutdown

exit

interface Ethernet1/51

description FI-A-Uplink

switchport mode trunk

switchport trunk allowed vlan 1,70-76

spanning-tree port type edge trunk

mtu 9216

channel-group 11 mode active

no shutdown

exit

interface Ethernet1/52

description FI-B-Uplink

switchport mode trunk

switchport trunk allowed vlan 1,70-76

spanning-tree port type edge trunk

mtu 9216

channel-group 12 mode active

no shutdown

exit

copy running-config startup-config

Verify all vPC Status is up on both Cisco Nexus Switches

Figure 32 shows the verification of the vPC status on both Cisco Nexus Switches.

Figure 32.       vPC Description for Cisco Nexus Switch A and B

TextDescription automatically generated TextDescription automatically generated
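The same verification can be performed from the CLI on each Cisco Nexus switch using standard NX-OS show commands. Expect the vPC peer status to report "peer adjacency formed ok," the keepalive status to report "peer is alive," and port-channels 10, 11, and 12 to be up:

show vpc

show vpc peer-keepalive

show port-channel summary

show vpc consistency-parameters global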

Hitachi Ops Center Configuration and Initial VSP Settings

The Hitachi Ops Center VM must be deployed on the Cisco UCS management cluster, and the Ops Center environment must meet the minimum system requirements to support management of the storage systems and servers. For additional details on Hitachi Ops Center, go to https://docs.hitachivantara.com/r/en-us/ops-center-administrator/10.9.x/mk-99adm000

The software can be obtained from your respective Hitachi representative, or for partner access, software can be downloaded here: https://support.hitachivantara.com/en/user/answers/downloads.htm

For additional information, see the Hitachi Ops Center document library: https://docs.hitachivantara.com/r/en-us/ops-center/10.9.x/mk-99ops001

Configure Hitachi Ops Center

Procedure 1.       Initial Configuration of the Hitachi Ops Center

After deploying the OVA template, you can configure Hitachi Ops Center.

Step 1.      Log in with the following credentials:

Username: root

Password: manager

Related image, diagram or screenshot

Step 2.      Run the opsvmsetup command to start the setup tool.

Related image, diagram or screenshot

Step 3.      Enter the Hostname (FQDN), IP Address, Subnet mask, Gateway, DNS, Time zone, and NTP details, as shown below:

Related image, diagram or screenshot

A screenshot of a computerDescription automatically generated

Step 4.      After providing the initial setup information, enter “y” to start the configuration.

Note:      Do not press or enter any key while running the configuration setup. The OS will restart automatically after the configuration is completed.

Related image, diagram or screenshot

Procedure 2.       Access Hitachi Ops Center Administrator

Step 1.      After the Ops Center configuration is completed, open a browser, and enter https://[FQDN or IP Address]/portal/#/inventory/products.

Step 2.      Enter the following credentials and click Log in:

Username: sysadmin

Password: sysadmin

A white background with triangles and red dotsDescription automatically generated

Step 3.      After logging in to the Hitachi Ops Center UI, you will find different product types such as Administrator, Analyzer, Analyzer detail view, Automator, and Protector.

A screenshot of a computerDescription automatically generated

Step 4.      Select the highlighted icon to launch Ops Center Administrator.

A screenshot of a computerDescription automatically generated

Procedure 3.       Onboarding Hitachi VSP to Ops Center Administrator

Onboarding a storage system is the process of associating it with Ops Center Administrator. After onboarding the storage system, you can manage it from the Ops Center Administrator dashboard.

Before you begin, verify the following:

     The service processor (SVP) username used to onboard a storage system in Ops Center Administrator has access to all resource groups on the storage system, including custom resource groups and meta resource groups, to ensure workflows function correctly.

     The user is a member of the Administration Users Group.

Step 1.      On the Ops Center Administrator dashboard, click Storage Systems, and click the plus sign (+) to add a storage system.

A screenshot of a computerDescription automatically generated

Step 2.      In the Onboard Storage System window, enter values for the following parameters:

     IP Address

For a storage system with an SVP, enter the IP address (IPv4) of the SVP for the storage system you want to discover.

Note:      For the VSP E1090, if there is no SVP, you can onboard storage using the IP address of the controllers.

     Username and password

Onboard the VSP system as a user with administrator privileges on the storage system. For example, you can use the following username and password:

          Username: maintenance          

          Password: raid-maintenance

Step 3.      Click Submit.

A screenshot of a computerDescription automatically generated

Step 4.      The dashboard now shows that the number of storage systems has been increased by one. Additionally, when you click Storage Systems, you are redirected to the storage system inventory window where you can see the newly added storage system.

A screenshot of a computerDescription automatically generated

When a storage system is onboarded, the Ops Center Administrator undergoes an initialization process to gather information about the current configuration of the storage system. During this time, you may observe that the ports, volumes, pools, and Parity Groups in the storage system are "Not accessible." After the initialization is completed, you can view information about PARITY GROUPS, POOLS, VOLUMES, PORTS, HOST GROUPS/SCSI TARGETS, and NEW SUBSYSTEMS in the Storage Systems tab.

A screenshot of a computerDescription automatically generated

Procedure 4.       Configure Fibre Channel Ports on the Hitachi VSP from Ops Center Administrator (FC-SCSI)

Before the Ops Center Administrator can create a host storage domain (HSD) on a port, you must change the port security and port attributes settings.

Port security must be enabled for fibre ports. By default, security is disabled on the VSP storage ports. Additionally, for VSP 5000 series systems, you must verify that the port attribute is set to TARGET.

Step 1.      Log in to Hitachi Ops Center Administrator. From the navigation pane, click Storage Systems.

A screenshot of a computerDescription automatically generated

Step 2.      Click the S/N listing of the Storage System.

 A screenshot of a computerDescription automatically generated

Step 3.      Click PORTS to see the configured storage ports for the storage systems.

A screenshot of a computerDescription automatically generated

Step 4.      To modify ports, select one or more Fibre Channel ports, and then click the edit pencil icon in the Actions pane.

  A screenshot of a computerDescription automatically generated

Step 5.      In the Edit Fibre Port dialog box, you can change the security settings and port attributes. Verify that port settings for fibre ports used in FC-SCSI connectivity have SCSI Mode, Enable Security, and Target selected as the port attribute. In the context of this document, these settings apply to fibre ports CL1-A, CL2-A, CL3-A, and CL4-A.

Step 6.      Click OK.

A screenshot of a computerDescription automatically generated

A screenshot of a computerDescription automatically generated

Step 7.      In the HOST GROUP NAME text box, update the host group name to include Fabric A. In the specific context outlined in this document, CL1-A for UCS_ESXi_1 is assigned VSI_x210_M7_01_Fab_A as the host group name. Click Submit.

Related image, diagram or screenshot

Step 8.      Repeat Step 2 through Step 4 for the remaining ports, where CL2-A is Fabric A, CL3-A is Fabric B, and CL4-A is Fabric B. When completed, you can expect to see the following:

A screenshot of a computerDescription automatically generated

 

Cisco MDS 9132T 32-Gb FC Switch Configuration

Table 15 lists the cable connectivity between the Cisco MDS 9132T 32-Gb switch and the Cisco 6536 Fabric Interconnects and Hitachi VSP E1090.

Note:      The Hitachi VSP E1090 connects to the MDS A and B switches using VSAN 100 for Fabric A and VSAN 101 for Fabric B.

Note:      In this solution, two ports (FC1/9 and FC1/10) of MDS Switch A and two ports (FC1/9 and FC1/10) of MDS Switch B are connected to the Hitachi storage system for FC-SCSI, and two ports (FC1/11 and FC1/12) of MDS Switch A and two ports (FC1/11 and FC1/12) of MDS Switch B are connected to the Hitachi storage system for FC-NVMe, as listed in Table 15 and Table 16. All ports connected to the Hitachi storage array carry 32-Gbps FC traffic.

Table 15.   Cisco MDS 9132T-A Cabling Information

Local Device          Local Port     Connection     Remote Device                                 Remote Port

Cisco MDS 9132T-A     FC1/9          32Gb FC        Hitachi E1090 VSP Controller 1 - FC-SCSI      CL1-A
                      FC1/10         32Gb FC        Hitachi E1090 VSP Controller 2 - FC-SCSI      CL2-A
                      FC1/11         32Gb FC        Hitachi E1090 VSP Controller 1 - FC-NVMe      CL1-B
                      FC1/12         32Gb FC        Hitachi E1090 VSP Controller 2 - FC-NVMe      CL2-B
                      FC1/13         32Gb FC        Cisco 6536 Fabric Interconnect-A              FC1/1
                      FC1/14         32Gb FC        Cisco 6536 Fabric Interconnect-A              FC1/2

Table 16.   Cisco MDS 9132T-B Cabling Information

Local Device          Local Port     Connection     Remote Device                                 Remote Port

Cisco MDS 9132T-B     FC1/9          32Gb FC        Hitachi E1090 VSP Controller 1 - FC-SCSI      CL3-A
                      FC1/10         32Gb FC        Hitachi E1090 VSP Controller 2 - FC-SCSI      CL4-A
                      FC1/11         32Gb FC        Hitachi E1090 VSP Controller 1 - FC-NVMe      CL3-B
                      FC1/12         32Gb FC        Hitachi E1090 VSP Controller 2 - FC-NVMe      CL4-B
                      FC1/13         32Gb FC        Cisco 6536 Fabric Interconnect-B              FC1/1
                      FC1/14         32Gb FC        Cisco 6536 Fabric Interconnect-B              FC1/2

Procedure 1.       Configure Features and Names for MDS Switch A and MDS Switch B

Follow these steps on both MDS switches.

Step 1.      Log in as admin user into MDS Switch A:

config terminal

feature npiv

feature telnet

switchname VDIStack-MDS-A

copy running-config startup-config

Step 2.      Log in as admin user into MDS Switch B. Repeat step 1 on MDS Switch B.

Procedure 2.       Configure VSANs for MDS Switch A and MDS Switch B

Step 1.      Log in as admin user into MDS Switch A. Create VSAN 100 for Storage Traffic:

config terminal

VSAN database

vsan 100

exit

zone smart-zoning enable vsan 100

vsan database

vsan 100 interface fc 1/9-16

exit

interface fc 1/9-16

switchport trunk allowed vsan 100

switchport trunk mode off

port-license acquire

no shutdown

exit

copy running-config startup-config

Step 2.      Log in as admin user into MDS Switch B. Create VSAN 101 for Storage Traffic:

config terminal

VSAN database

vsan 101

exit

zone smart-zoning enable vsan 101

vsan database

vsan 101 interface fc 1/9-16

exit

interface fc 1/9-16

switchport trunk allowed vsan 101

switchport trunk mode off

port-license acquire

no shutdown

exit

copy running-config startup-config
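Before moving on to zoning, the VSAN membership and interface state can be checked on each MDS switch with standard show commands (VSAN 100 on Switch A, VSAN 101 on Switch B):

show vsan

show vsan membership

show interface brief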

Procedure 3.       Create and Configure Fibre Channel Zoning

This procedure sets up the Fibre Channel connections between the Cisco MDS 9132T 32-Gb switches, the Cisco UCS Fabric Interconnects, and the Hitachi VSP.

Note:      Before you configure the zoning details, decide how many paths are needed for each LUN and extract the WWPNs for the HBAs from each server. Two HBAs were used for each server: one HBA (HBA-A) is connected to MDS Switch A, and the other HBA (HBA-B) is connected to MDS Switch B.

Step 1.      Log into the Cisco Intersight portal as a user with account administrator role.

Step 2.      From the Service Selector drop-down list, choose Infrastructure Service.

Step 3.      Navigate to Configure > Pools. Filter WWPN type pools.

A screenshot of a computerDescription automatically generated

Step 4.      Select the Usage tab and collect the WWPNs and profiles to which they are assigned.

A screenshot of a computerDescription automatically generated

Step 5.      Connect to the Hitachi VSP embedded UI by navigating to the CTL1, CTL2, or service IP address and select Ports. Note the WWPNs of the VSP FC ports connected to the Cisco MDS switches.

A screenshot of a computerDescription automatically generated 

Procedure 4.       Create Device Aliases for Fibre Channel Zoning for SAN Boot Paths and Datapaths on Cisco MDS Switch A

Step 1.      Log in as admin user and run the following commands from the global configuration mode:

configure terminal

device-alias mode enhanced

device-alias database

device-alias name Host-FCP-1-HBA0 pwwn 20:00:00:25:B5:AA:17:00

device-alias name CL1-A pwwn 50:06:0e:80:23:b1:04:00

device-alias name CL2-A pwwn 50:06:0e:80:23:b1:04:10

exit

device-alias commit

Procedure 5.       Create Device Aliases for Fibre Channel Zoning for SAN Boot Paths and Datapaths on Cisco MDS Switch B

Step 1.      Log in as admin user and run the following commands from the global configuration mode:

configure terminal

device-alias mode enhanced

device-alias database

device-alias name Host-FCP-1-HBA1 pwwn 20:00:00:25:b5:bb:17:00

device-alias name CL3-A pwwn 50:06:0e:80:23:b1:04:20

device-alias name CL4-A pwwn 50:06:0e:80:23:b1:04:30

exit

device-alias commit

Procedure 6.       Create Fibre Channel Zoning for Cisco MDS Switch A for each Service Profile

Step 1.      Log in as admin user and create the zone:

configure terminal

zone name VDIStack-Fabric-A vsan 100

    member device-alias CL1-A target

    member device-alias CL2-A target

    member device-alias Host-FCP-1-HBA0 init

Step 2.      After the zone for the Cisco UCS service profile has been created, create the zone set and add the created zones as members:

configure terminal

zoneset name VDI-Fabric-A vsan 100

    member VDIStack-Fabric-A

Step 3.      Activate the zone set by running following commands:

zoneset activate name VDI-Fabric-A vsan 100

exit

copy running-config startup-config

Procedure 7.       Create Fibre Channel Zoning for Cisco MDS Switch B for each Service Profile

Step 1.      Log in as admin user and create the zone as shown below:

configure terminal

zone name VDIStack-Fabric-B vsan 101

    member device-alias CL3-A target

    member device-alias CL4-A target

    member device-alias Host-FCP-1-HBA1 init

Step 2.      After the zone for the Cisco UCS service profile has been created, create the zone set and add the necessary members:

zoneset name VDI-Fabric-B vsan 101

    member VDIStack-Fabric-B

Step 3.      Activate the zone set by running following commands:

zoneset activate name VDI-Fabric-B vsan 101

exit

copy running-config startup-config
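After the zone sets are activated on both switches, the active zoning can optionally be verified from each MDS switch. A minimal sketch, assuming the VSAN and zone set names used in this design:

show zoneset active vsan 100

show zone status vsan 100

On MDS Switch B, run the same commands with VSAN 101. Once the hosts and VSP ports are online, each zone member should be shown as logged in (marked with an asterisk and its FCID) in the active zone set output.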

Hitachi VSP Storage Configuration

This section contains the following:

     Hitachi Virtual Storage Platform Configuration for FC-SCSI

     Hitachi Storage Configuration for FC-NVMe

The procedures in this section explain the initial configuration for the Hitachi Virtual Storage Platform (VSP).

Hitachi Virtual Storage Platform Configuration for FC-SCSI

Procedure 1.       Initialize Parity Groups with Hitachi Ops Center Administrator

The steps in this procedure assume that Parity Groups have already been created by Hitachi professional services or from Hitachi Device Manager-Storage Navigator. To initialize Parity Groups from Hitachi Ops Center Administrator, complete the following steps:

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane, select Storage Systems.

A screenshot of a computerDescription automatically generated

Step 2.      Select the respective Virtual Storage Platform S/N from the Storage Systems list.

A screenshot of a computerDescription automatically generated

Step 3.      Click the PARITY GROUPS icon under the selected storage system to view the Parity Groups.

A screenshot of a computerDescription automatically generated

Step 4.      Click any Parity Group ID that you want to initialize as parity for creating a pool. From the Actions pane, click Initialize Parity Groups.

A screenshot of a computerDescription automatically generated

Step 5.      Click OK.

A screenshot of a computerDescription automatically generated

Note:      The Created Parity Groups initially have a status of UNINITIALIZED. Upon complete initialization, the status should change to IN_USE.

Procedure 2.       Create a Hitachi Dynamic Provisioning Pool for UCS Server Boot LDEVs from Hitachi Ops Center Administrator

Within the scope of this document, the servers deployed for VDI testing used local M.2 boot drives. For users that require SAN boot, go to: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/hitachi_adaptive_vmware_vsp.html under the section Create a Hitachi Dynamic Provisioning Pool for UCS Server Boot LDEVs from Hitachi Ops Center Administrator.

Procedure 3.       Create FC-SCSI Servers from Hitachi Ops Center Administrator

Hitachi Ops Center Administrator supports provisioning storage from logical containers known as servers. These server objects hold the Cisco UCS server WWNs and the server IP address. After the Cisco UCS servers are onboarded in Ops Center Administrator, boot LDEVs and LUNs for VMFS datastores can be provisioned using these servers. Proceed with the following steps to create servers from Ops Center Administrator.

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane, click Servers.

A screenshot of a computerDescription automatically generated

Step 2.      Click the plus sign (+) to open the Add Server window.

Related image, diagram or screenshot

Step 3.      Click the plus sign (+) under the Fibre Servers and enter the following server information:

          SERVER NAME

          DESCRIPTION

          IP ADDRESS

          WWN LIST: Fabric A and Fabric B WWNs of the Cisco UCS Servers

          OS TYPE: Select the OS TYPE as VMWARE EX

Step 4.      Click Submit to add the server.

A screenshot of a computerDescription automatically generated

Step 5.      Repeat Step 1 through Step 4 for any additional Cisco UCS servers. Upon completion, you can expect the following representation of the Cisco UCS servers:

A screenshot of a computerDescription automatically generated

Procedure 4.       Create FC-SCSI Server Groups from Ops Center Administrator

The Server Groups are created to manage multiple servers and attach volumes using a single workflow.

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane, click Servers.

A screenshot of a computerDescription automatically generated

Step 2.      Select Server Groups. Click the plus sign (+) icon to open the Add Server Group wizard.

Related image, diagram or screenshot

Step 3.      In the Add Server Group wizard, enter the SERVER GROUP NAME (for example, UCS_Cluster_VDI) and select the Cisco UCS servers that are going to be added to the server group. Click Add to move the selected servers from AVAILABLE SERVERS to ASSIGNED SERVERS.

A screenshot of a computerDescription automatically generated

Step 4.      Click Submit.

A screenshot of a computerDescription automatically generated

The Server Group (UCS_Cluster_VDI) was created and you can find it along with the Server Group ID:

A screenshot of a computerDescription automatically generated

After your server group is created, you will see the following:

A screenshot of a computerDescription automatically generated

Procedure 5.       Allocate Boot LUNs to UCS Servers from Hitachi Ops Center Administrator with Multiple LDEV paths

In this document example, the servers deployed for VDI testing used local M.2 boot drives. For users that require the SAN boot, go to: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/hitachi_adaptive_vmware_vsp.html under section Allocate Boot LUNs to UCS Servers from Hitachi Ops Center Administrator with Multiple LDEV paths.

Procedure 6.       Edit Host Groups from Ops Center Administrator for Fabric A and Fabric B Representation

Note:      For those who do not utilize boot from SAN, Procedure 6 Edit Host Groups from Ops Center Administrator for Fabric A and Fabric B Representation must be completed after Procedure 8 Allocate FC-SCSI Shared VMFS LDEV and Adding LDEV Paths from Server Groups.

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane select Servers.

A screenshot of a computerDescription automatically generated

Step 2.      Click the Server ID for UCS_ESXi_1 as shown:

A screenshot of a computerDescription automatically generated

Step 3.      Expand the Volume ID of the boot volume created. Click the Edit pencil icon to edit CL1-A.

A screenshot of a computerDescription automatically generated

Step 4.      In the HOST GROUP NAME text box, update the host group name to include Fabric A. In the specific context outlined in this document, CL1-A for UCS_ESXi_1 is assigned VSI_x210_M7_01_Fab_A as the host group name. Click Submit.

Related image, diagram or screenshot

Step 5.      Repeat Step 2 through Step 4 for the remaining ports, where CL2-A is Fabric A, CL3-A is Fabric B, and CL4-A is Fabric B. When completed, you will see the following:

A screenshot of a computerDescription automatically generated

Procedure 7.       Create a Hitachi Dynamic Provisioning Pool for FC-SCSI VMFS LDEVs for UCS Servers

When creating a pool, use the basic option to leverage tiers of storage available on the VSP, following best practices. By default, the basic option creates a Hitachi Dynamic Provisioning Pool.

For increased flexibility and if best practices are not essential, choose the advanced option. This enables you to select specific Parity Groups and define your pool types as either Tiered, Thin, or Snap.

Step 1.      Log in to Hitachi Ops Center Administrator. In the Dashboard, click Storage Systems to access the inventory of registered storage systems.

A screenshot of a computerDescription automatically generated

Step 2.      Click the S/N listing of the Storage System.

A screenshot of a computerDescription automatically generated

Step 3.      From the Storage Systems, click Pools.

A screenshot of a computerDescription automatically generated

Step 4.      Click the plus sign (+) to open the Create Pool window.

A screenshot of a computerDescription automatically generated

Step 5.      Enter the following details:

          For the POOL NAME enter UCS_Application_Pool (Pool names can be any combination of alphanumeric characters, hyphens, and underscores only. Initial hyphens are not allowed).

          Click an available Tier to view the storage capacity and select the required capacity.

          Review the high and low pool utilization thresholds. By default, the low utilization threshold is set to 70% and the high threshold is set to 80%. You can customize the thresholds to receive notifications based on your specific environment requirements.

          To specify over allocation, you can set the limit to Unlimited.

          Click Submit.

A screenshot of a computerDescription automatically generated

Procedure 8.       Allocate FC-SCSI Shared VMFS LDEV and Adding LDEV Paths from Server Groups

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane, select Servers.

A screenshot of a computerDescription automatically generated

Step 2.      From the Servers tab, select Server Groups, and then select a SERVER GROUP ID. Under the Actions pane select Attach Volumes, and then click Create, Attach and Protect Volumes with Local Replication.

A screenshot of a computerDescription automatically generated

Step 3.      Configure volumes for the specified Storage System. You can switch to another Storage System using the Storage System drop-down list. To allocate LDEVs for use as VMFS datastores:

          For the VOLUME LABEL enter VDI-PROD.

          Select the NUMBER OF VOLUMES.

          Enter the Volume SIZE and select the volume unit: GiB, TiB, or Blocks.

          Select the POOL TYPE as Thin.

          For a Thin pool, select the POOL TIER: Diamond, Platinum, Gold, Silver, or Bronze.

          By default, the POOL is auto selected. Verify that the chosen POOL is UCS_Application_Pool for provisioning VMFS datastores.

          Select Deduplication and Compression under the CAPACITY SAVING drop down.

          Click the plus sign (+) to verify volume settings.

          Click Next.

A screenshot of a computerDescription automatically generated

Step 4.      The HOST MODE and HOST MODE OPTIONS should be selected as follows:

          HOST MODE: VMWARE EX.

          HOST MODE OPTIONS: 63 (VAAI) Support option for vStorage APIs based on T10 standards.

          Select MANDATE LUN ALIGNMENT as Yes; this option determines whether to assign the same LUN number to multiple servers for a volume. If Yes is specified, the same LUN number is always assigned.

          Set AUTO CREATE ZONE as No.

Step 5.      Click Next to explore options for creating and editing LUN paths.

A screenshot of a computerDescription automatically generated

Step 6.      On the Path Settings pane, click the respective server WWNs and map them to the VSP storage ports based on the MDS zoning. When completed, you will see the following:

A screenshot of a computerDescription automatically generated

Step 7.      For the REPLICATION TYPE select None and click Next.

A screenshot of a computerDescription automatically generated

Step 8.      If you need to modify the LUN ID, click LUN settings. If the LUN ID is correct, skip to Step 10.

A screenshot of a computerDescription automatically generated

Step 9.      From the LUN settings window, choose the appropriate LUN ID using the FROM drop-down list, and click OK.

Step 10.  Verify the operation plan and click Submit.

A screenshot of a computerDescription automatically generated
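After the paths are created, the new LUNs can optionally be confirmed from an ESXi host before VMFS datastores are created. A minimal sketch from the ESXi shell; the device names in the output will differ in your environment:

esxcli storage core adapter rescan --all

esxcli storage core device list | grep -i hitachi

The rescan forces the host to discover the newly presented LDEVs, and the device list should then show one entry per VSP LUN with HITACHI as the vendor string.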

Hitachi Storage Configuration for FC-NVMe

Procedure 1.       Configure Fibre Channel Ports on Hitachi Virtual Storage Platform from Ops Center Administrator for FC-NVMe

Note:      This procedure must be completed before provisioning the FC-NVMe VMFS datastore.

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane, select Storage Systems.

A screenshot of a computerDescription automatically generated

Step 2.      Click the S/N listing for the Storage System.

A screenshot of a computerDescription automatically generated

Step 3.      Select Fibre Channel Port (for example, CL1-B). Click the Edit sign icon.

Related image, diagram or screenshot

Step 4.      Select NVMe Mode and Disable Security. Click OK.

Note:      For the VSP E1090 systems, you are not required to modify port attributes.

A screenshot of a computerDescription automatically generated

Step 5.      Repeat Step 1 through Step 4 for the remaining Fibre ports CL2-B, CL3-B, and CL4-B.

After all the ports are configured, you will see the following:

Related image, diagram or screenshot

Procedure 2.       Initialize Parity Groups from Ops Center Administrator for FC-NVMe

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane, select Storage Systems.

A screenshot of a computerDescription automatically generated

Step 2.      Click the S/N listing of the Storage System.

A screenshot of a computerDescription automatically generated

Step 3.      Click the PARITY GROUPS icon, under the selected storage system, to view parity groups.

A screenshot of a computerDescription automatically generated

Step 4.      Select any Parity Group ID you want to initialize as parity for creating the FC-NVMe pool. From the Actions pane, click Initialize Parity Groups.

Related image, diagram or screenshot

Step 5.      Click OK.

A screenshot of a computerDescription automatically generated

Note:      Created Parity Groups initially have a status of UNINITIALIZED. Upon complete initialization, the status should change to IN_USE.

Procedure 3.       Create FC-NVMe Servers from Ops Center Administrator

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane, click Servers.

Related image, diagram or screenshot

Step 2.      Click the plus sign (+) under the FC-NVMe Servers section.

A screenshot of a computerDescription automatically generated 

Step 3.      Enter the SERVER NAME, OS TYPE as VMWARE EX, and HOST NQN, and click Submit.

Related image, diagram or screenshot

Note:      Host NQN must be lowercase. If the ESXi host has a capitalized host name, users must update naming to be all lowercase.
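The host NQN for each ESXi host can be retrieved from the ESXi shell; a minimal sketch (the command is available in ESXi 7.0 and later):

esxcli nvme info get

The Host NQN value returned is what is entered in the HOST NQN field above; confirm that it is entered in all lowercase.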

Step 4.      Repeat Step 1 through Step 3 to add the remaining Cisco UCS servers that use the FC-NVMe protocol.

Procedure 4.       Create FC-NVMe Server Groups from Ops Center Administrator

Step 1.      In the Ops Center Administrator Dashboard, from the navigation pane, click Servers.

Related image, diagram or screenshot

Step 2.      Select the Server Groups tab. Click the plus sign (+).

A screen shot of a computerDescription automatically generated

Step 3.      In the add Server Group wizard, enter the SERVER GROUP NAME.

Step 4.      Select the FC-NVMe Servers from AVAILABLE SERVERS and click Add.

Related image, diagram or screenshot

Step 5.      The FC-NVMe Servers are moved from the AVAILABLE SERVERS to the ASSIGNED SERVERS list. Click Submit.

A screenshot of a computerDescription automatically generated

Procedure 5.       Create a Hitachi Dynamic Provisioning Pool for UCS Servers for FC-NVMe VMFS Volume LDEVs

When creating a pool, use the basic option to take advantage of tiers of storage available on the VSP, following best practices. By default, the basic option will create a Hitachi Dynamic Provisioning Pool. For increased flexibility and if best practices are not essential, choose the advanced option. This enables you to specify Parity Groups and define your pool types as either Tiered, Thin, or Snap.

Step 1.      Log in to Hitachi Ops Center Administrator and from the navigation pane, click Storage Systems to access the inventory of registered storage systems.

A screenshot of a computerDescription automatically generated

Step 2.      Click the S/N listing of the Storage System.

A screenshot of a computerDescription automatically generated

Step 3.      From the Storage System, click Pools.

Related image, diagram or screenshot

Step 4.      Click the plus sign (+) to open the Create Pool window.

A screenshot of a computerDescription automatically generated

Step 5.      Enter the following details:

     For the POOL NAME enter UCS_Application_NVMe_pool (Pool names can be any combination of alphanumeric characters, hyphens, and underscores only. Initial hyphens are not allowed.)

     Click an available Tier to view the storage capacity and select the required capacity.

     Review the high and low pool utilization thresholds. By default, the low utilization threshold is set to 70% and the high threshold is set to 80%. You can customize the thresholds to receive notifications based on your specific environment requirements.

     To specify over allocation, you can set the limit to Unlimited.

     Click Submit.

Related image, diagram or screenshot

Procedure 6.       Allocate FC-NVMe Shared VMFS LDEV and Adding LDEV Paths from Server Groups

Step 1.      Log in to the Hitachi Ops Center Administrator console and click the Servers tab.

Related image, diagram or screenshot

Step 2.      Select the Server Groups under the Servers tab, and then select Server Group ID.

Step 3.      Under Actions, click Attach Volumes, select Create, Attach and Protect Volumes with Local Replication.

A screenshot of a computerDescription automatically generated

Step 4.      Configure volumes for the specified Storage System. Proceed with the following steps to add the volumes to the Cisco UCS ESXi servers that use the FC-NVMe protocol:

          For the VOLUME LABEL enter VDI_NVMe.

          Select the NUMBER OF VOLUMES.

          Enter the Volume SIZE and select the volume unit: GiB, TiB, or Blocks.

          For the POOL TYPE select Thin.

          For a Thin pool, select the POOL TIER: Diamond, Platinum, Gold, Silver, or Bronze

          By default, the POOL is auto selected. Verify that the chosen POOL is UCS_Application_NVMe_pool for provisioning VMFS datastores.

          Select Deduplication and Compression from the CAPACITY SAVING drop-down list.

          Click the plus sign (+) to verify the volume settings.

Step 5.      Click Next.

Related image, diagram or screenshot

Step 6.      For the HOST MODE select VMWARE EX.

Step 7.      Validate the volume values and click Next.

Related image, diagram or screenshot

Step 8.      Under Path Settings, select the VSP ports that are in NVMe mode, and click Next.

Related image, diagram or screenshot

Step 9.      Select None for Replication Type and click Next.

Related image, diagram or screenshot

Step 10.  Validate Selected Servers, Volume Specification, and Create Paths. Click Submit.

Related image, diagram or screenshot

A screenshot of a computerDescription automatically generated
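After the namespaces are allocated and the hosts are rescanned, FC-NVMe connectivity can optionally be verified from the ESXi shell. A minimal sketch; the controller and namespace identifiers will vary by environment:

esxcli nvme controller list

esxcli nvme namespace list

Each VSP NVMe subsystem port zoned to the host should appear as a connected controller, and the VDI_NVMe volumes should be listed as namespaces.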

Install and Configure VMware ESXi 8.0

This section explains how to install VMware ESXi 8.0 Update 2 in this environment.

There are several methods to install ESXi in a VMware environment. These procedures focus on how to use the built-in keyboard, video, mouse (KVM) console and virtual media features in Cisco Intersight to map remote installation media to individual servers. Upon completion of the steps outlined here, the ESXi hosts boot from their corresponding boot device (a local M.2 drive or, if SAN boot is used, a SAN boot LUN).

Download Cisco Custom Image for VMware vSphere ESXi 8.0 U2

To download the Cisco Custom Image for VMware ESXi 8.0 Update 2, go to the following pages and click the Custom ISOs tab:

https://customerconnect.vmware.com/en/downloads/details?downloadGroup=ESXI80U2&productId=1345

https://customerconnect.vmware.com/downloads/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/8_0

Procedure 1.       Install VMware vSphere ESXi 8.0 U2

Step 1.      From the Service Selector drop-down list, select Infrastructure Service. Navigate to Operate > Servers.

Step 2.      Click the … icon for the server being accessed and select Launch vKVM.

Step 3.      Click Boot Device and then select vKVM Mapped vDVD.

Related image, diagram or screenshot

Graphical user interfaceDescription automatically generated

Step 4.      Browse to the ESXi iso image file. Click Map Drive to mount the ESXi ISO image.

A screenshot of a computer programDescription automatically generated

Step 5.      Boot into ESXi installer and follow the prompts to complete installing VMware vSphere ESXi hypervisor.

Step 6.      When selecting a storage device on which to install ESXi, use the M.2 SSD card or, if using SAN boot, select the remote LUN provisioned through the Hitachi VSP and accessed through the FC connection.

Note:      For this Cisco Validated Design, we have used M.2 SSD card to install ESXi hypervisor.

For more information about how to install M.2 Card for ESXi installation, go to:

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/x/hw/x210c-m7/install/b-cisco-ucs-x210c-m7-install-guide.pdf

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/x/hw/x210c-m7/install/b-cisco-ucs-x210c-m7-install-guide/m-servicing-the-compute-node.html

Procedure 2.       Set Up Management Networking for ESXi Hosts

Adding a management network for each VMware host is necessary for managing the host and connecting it to vCenter Server. Select an IP address that can communicate with the existing or new vCenter Server.

Step 1.      After the server has finished rebooting, press F2 to enter the configuration wizard for the ESXi hypervisor.

Step 2.      Log in as root and enter the corresponding password.

Step 3.      Select the Configure the Management Network option and press Enter.

Step 4.      Select the VLAN (Optional) option and press Enter. Enter the in-band management VLAN ID and press Enter.

Step 5.      From the Configure Management Network menu, select IP Configuration and press Enter.

Step 6.      Select the Set Static IP Address and Network Configuration option by using the space bar. Enter the IP address to manage the first ESXi host. Enter the subnet mask for the first ESXi host. Enter the default gateway for the first ESXi host. Press Enter to accept the changes to the IP configuration.

Note:      IPv6 Configuration is set to automatic.

Step 7.      Select the DNS Configuration option and press Enter.

Step 8.      Enter the IP addresses of the primary and secondary DNS servers and enter the hostname.

Step 9.      Enter the DNS suffixes.

Note:      Since the IP address is assigned manually, the DNS information must also be entered manually.

Note:      The steps provided vary based on the configuration. Please make the necessary changes according to your configuration.

Figure 33.       Sample ESXi Configure Management Network

A picture containing graphical user interfaceDescription automatically generated
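The same management network settings can also be applied from the ESXi shell, or scripted in a kickstart file, instead of using the DCUI. A minimal sketch with placeholder values for the VLAN ID, addresses, and FQDN:

esxcli network vswitch standard portgroup set -p "Management Network" -v <mgmt-vlan-id>

esxcli network ip interface ipv4 set -i vmk0 -t static -I <esxi-mgmt-ip> -N <netmask>

esxcfg-route <default-gateway>

esxcli network ip dns server add -s <dns-server-ip>

esxcli system hostname set -f <esxi-fqdn>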

Update Cisco VIC Drivers for ESXi

When ESXi is installed from Cisco Custom ISO, you might have to update the Cisco VIC drivers for VMware ESXi Hypervisor to match the current Cisco Hardware and Software Interoperability Matrix.

Cisco Intersight also incorporates an HCL check.

Figure 34.       Servers HCL Status in Cisco Intersight Infrastructure Services

A screen shot of a computerDescription automatically generated

In this Cisco Validated Design, the following drivers were used (VMware-ESXi-8.0.2-22380479-Custom-Cisco-4.2.2-a):

     Cisco-nenic- 2.0.11.0-1

     Cisco-nfnic- 5.0.0.42-1

Note:      For additional information about how to update Cisco VIC drivers on ESXi see: Cisco UCS Virtual Interface Card Drivers for ESX Installation Guide.
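The driver versions installed on a host can be checked from the ESXi shell before and after any update. A minimal sketch, assuming the nenic and nfnic VIB names used by the Cisco custom image:

esxcli software vib list | grep -E 'nenic|nfnic'

The reported versions should match the Cisco Hardware and Software Interoperability Matrix for the deployed UCS firmware release.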

VMware Clusters

The VMware vSphere Client was configured to support the solution and testing environment as follows:

     Datacenter:  Hitachi VSP E1090 with Cisco UCS.

     Cluster: VDI - Single-session/Multi-session OS VDA (Virtual Desktop Agent) workload.

     Infrastructure: Infrastructure virtual machines (vCenter, Active Directory, DNS, DHCP, SQL Server, VMware Horizon Connection Server and Horizon Replica Servers, and the Login VSI launcher infrastructure and web servers) were connected using the same set of switches but hosted on a separate VMware cluster.

Figure 35.       VMware vSphere WebUI Reporting Cluster Configuration for this Validated Design

A screenshot of a computerDescription automatically generated

Cisco Intersight Orchestration

Cisco Intersight Assist helps you add endpoint devices to Cisco Intersight. The Cisco Intersight environment includes multiple devices that do not connect directly with Cisco Intersight. Any device that is supported by Cisco Intersight, but does not connect directly with it, will need a connection mechanism. Cisco Intersight Assist provides that connection mechanism, and helps you add devices into Cisco Intersight.

Cisco Intersight Assist is available within the Cisco Intersight Virtual Appliance, which is distributed as a deployable virtual machine contained within an Open Virtual Appliance (OVA) file format. You can install the appliance on an ESXi server. For more information, see the Cisco Intersight Virtual Appliance Getting Started Guide.

After claiming Cisco Intersight Assist into Cisco Intersight, you can claim endpoint devices using the Claim Through Intersight Assist option.

Procedure 1.       Configure Cisco Intersight Assist Virtual Appliance

Step 1.      To install Cisco Intersight Assist from an Open Virtual Appliance (OVA) in your VMware management cluster, first download the latest release of the OVA.

Step 2.      Download the OVA from: https://software.cisco.com/download/home/286319499/type/286323047/release/1.0.9-630

Step 3.      To set up the DNS entries for the Cisco Intersight Assist hostname as specified under Before you Begin, go to: https://www.cisco.com/c/en/us/td/docs/unified_computing/Intersight/cisco-intersight-assist-getting-started-guide/m-installing-cisco-intersight-assist.html.

Step 4.      From Hosts and Clusters in the VMware vCenter HTML5 client, right-click the Cisco and Hitachi Adaptive Solution-Management cluster and click Deploy OVF Template.

Step 5.      Specify a URL or browse to the intersight-appliance-installer-vsphere-1.0.9-342.ova file. Click NEXT.

Graphical user interface, text, application, emailDescription automatically generated

Step 6.      Name the Cisco Intersight Assist VM and choose the location. Click NEXT.

Step 7.      Select the VDI-Management cluster and click NEXT.

Step 8.      Review details and click NEXT.

Step 9.      Select a deployment configuration (Tiny recommended) and click NEXT.

Graphical user interface, text, applicationDescription automatically generated

Step 10.  Select the appropriate datastore for storage and select the Thin Provision virtual disk format. Click NEXT.

Step 11.  Select IB-MGMT Network for the VM Network. Click NEXT.

Step 12.  Fill in all values to customize the template. Click NEXT.

Step 13.  Review the deployment information and click FINISH to deploy the appliance.

Step 14.  Once the OVA deployment is complete, right-click the Cisco Intersight Assist VM and click Edit Settings.

Step 15.  Expand CPU and adjust the Cores per Socket so that 2 Sockets are shown. Click OK.

Graphical user interface, text, application, emailDescription automatically generated

Step 16.  Right-click the Cisco Intersight Assist VM and choose Open Remote Console.

Step 17.  Power on the VM.

Step 18.  When you see the login prompt, close the Remote Console, and connect to https://intersight-assist-fqdn.

Note:      It may take a few minutes for https://intersight-assist-fqdn to respond.

Step 19.  Navigate the security prompts and select Intersight Assist. Click Proceed.

Graphical user interfaceDescription automatically generated

Step 20.  From Cisco Intersight, click ADMIN > Devices. Click Claim a New Device. Copy and paste the Device ID and Claim Code shown in the Cisco Intersight Assist web interface to the Cisco Intersight Device Claim Direct Claim window. In Cisco Intersight, click Claim.

Step 21.  In the Cisco Intersight Assist web interface, click Continue.

Note:      The Cisco Intersight Assist software will now be downloaded and installed into the Cisco Intersight Assist VM. This can take up to an hour to complete.

Note:      The Cisco Intersight Assist VM will reboot during the software download process. It will be necessary to refresh the Web Browser after the reboot is complete to follow the status of the download process.

Step 22.  When the software download is complete, navigate the security prompts and a Cisco Intersight Assist login screen will appear. Log into Cisco Intersight Assist with the admin user and the password supplied in the OVA installation. Check the Cisco Intersight Assist status and log out of Intersight Assist.

Procedure 2.       Claim Intersight Assist into Cisco Intersight

Step 1.      To claim the Intersight assist appliance, from the Service Selector drop-down list, select System.

Step 2.      From Cisco Intersight, click ADMIN > Targets. Click Claim a New Target. In the Select Target Type window, select Cisco Intersight Assist under Platform Services and click Start.

A screenshot of a computerDescription automatically generated

Step 3.      Fill in the Intersight Assist information and click Claim.

A screenshot of a computerDescription automatically generated

After a few minutes, Cisco Intersight Assist will appear in the Targets list.

A screenshot of a computerDescription automatically generated

Procedure 3.       Claim vCenter in Cisco Intersight

Step 1.      To claim the vCenter, from Cisco Intersight, click ADMIN > Targets. Click Claim a New Target. In the Select Target Type window, select VMware vCenter under Hypervisor and click Start.

A screenshot of a computerDescription automatically generated

Step 2.      In the VMware vCenter window, make sure the Intersight Assist is correctly selected, fill in the vCenter information, and click Claim.

A screenshot of a computerDescription automatically generated

Step 3.      After a few minutes, the VMware vCenter will appear in the Devices list. It also can be viewed by clicking Intersight Assist in the Devices list.

A screenshot of a computerDescription automatically generated with medium confidence

Step 4.      Detailed information obtained from the vCenter can now be viewed by clicking Virtualization from the Infrastructure service > Operate menu.

A screenshot of a computerDescription automatically generated

Procedure 4.       Claim Hitachi VSP in Cisco Intersight

Before onboarding the Hitachi VSP to Cisco Intersight, the prerequisites outlined in the following document need to be completed. Refer to the Integrating Hitachi Virtual Storage Platform with Cisco Intersight Quick Start Guide: https://docs.hitachivantara.com/v/u/en-us/application-optimized-solutions/mk-sl-220

Begin State  

     Hitachi Virtual Storage Platform should be online and operational, but not claimed by Cisco Intersight.

     Intersight Assist VM should be deployed using the Cisco-provided OVA template.

     Hitachi Ops Center API Configuration Manager (REST) should also be deployed as a VM or server, from a template or with a manual installation, so that Cisco Intersight can communicate with the Hitachi VSP. Hitachi Ops Center API Configuration Manager provides the web API for getting information about, or changing the configuration of, storage systems, and it is required to use Hitachi Virtual Storage Platform with Cisco Intersight.

     Communication must be established between Hitachi Ops Center API Configuration Manager and the REST API client.

End State

     Cisco Intersight is communicating with Intersight Assist.

     Hitachi VSP is onboarded via the Hitachi Ops Center API Configuration Manager REST API server.

     Hitachi VSP is claimed as a target in Cisco Intersight.

Note:      Claiming a Hitachi VSP also requires the use of an Intersight Assist virtual machine.

Procedure 5.       Register Hitachi Virtual Storage Platform to Ops Center Configuration Manager Server 

Step 1.      Open your respective API client.  

Step 2.      Enter the base URL for the deployed Hitachi Ops Center API Configuration Manager IP address. For example, https://[Ops_Center_IP]:23450/ConfigurationManager/v1/objects/storages  

Step 3.      Click the Authorization tab.  

Step 4.      Select Basic from the Select authorization drop-down list. 

Step 5.      Enter the Username and Password for the VSP storage system. 

 A screenshot of a computerDescription automatically generated 

Step 6.      Click the Body tab.  Enter the VSP storage SVP IP, Serial Number, and Model as indicated in the following examples in JSON format. 

Note:      If a midrange VSP storage, it will be CTL1 and CTL2 IP instead of SVP IP. 

The following example uses a VSP 5600:

{
  "svpIp": "192.168.1.10",
  "serialNumber": 60749,
  "model": "VSP 5600"
}

The following example uses a VSP E1090:

{
  "ctl1Ip": "192.168.1.10",
  "ctl2Ip": "192.168.1.11",
  "model": "VSP E1090",
  "serialNumber": 451139
}

Step 7.      Under the HEADERS tab verify Accept: application/json.

A screenshot of a computerDescription automatically generated 

Step 8.      Verify that the REST call is set to POST. Click Submit. 

Step 9.      After successful registration, the response returns a 200 OK status.

A screenshot of a computerDescription automatically generated 

Step 10.  To confirm onboarding, change the REST call to the GET method to retrieve the storage system information and verify the 200 OK status.

A screenshot of a computerDescription automatically generated 
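The same registration can also be performed without a GUI API client. A minimal curl sketch using the VSP E1090 body shown above; the IP addresses and credentials are placeholders, and -k skips certificate verification for a lab environment:

curl -k -u <vsp-username>:<vsp-password> -X POST -H "Accept: application/json" -H "Content-Type: application/json" -d '{"ctl1Ip": "192.168.1.10", "ctl2Ip": "192.168.1.11", "model": "VSP E1090", "serialNumber": 451139}' https://<Ops_Center_IP>:23450/ConfigurationManager/v1/objects/storages

Re-running the same command with -X GET and no request body returns the registered storage systems and provides the same confirmation as Step 10.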

Procedure 6.       Onboarding VSP Storage to Cisco Intersight using Hitachi Ops Center API Configuration Manager

Step 1.      To claim the Hitachi VSP, from Cisco Intersight, click ADMIN > Targets. Click Claim a New Target. In the Select Target Type window, select Hitachi Virtual Storage Platform under Storage and click Start.

A screenshot of a computerDescription automatically generated

Step 2.      Enter the Hitachi VSP target controller IP address, username, and password.  Also define the Hitachi Ops Center API Configuration Manager IP address. Click Claim.

A screenshot of a computerDescription automatically generated

Procedure 7.       Verify Cisco UCS Server HCL Status using Cisco Intersight

Step 1.      From Infrastructure Service, click Operate > Servers; the HCL Status field provides the status overview.

A screenshot of a computerDescription automatically generated

Step 2.      Select a server and click the HCL tab to view validation details.

A screenshot of a computerDescription automatically generated

Build the Virtual Machines and Environment for Workload Testing

This chapter contains the following:

     Prerequisites

     Software Infrastructure Configuration

     Prepare the Master Targets

     Install and Configure VMware Horizon

Prerequisites

Create the necessary DHCP scopes for the environment and set the scope options.

Figure 36.       Example of the DHCP Scopes used in this CVD

Graphical user interface, text, application, WordDescription automatically generated 
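The scopes can also be created with the DhcpServer PowerShell module on the DHCP server instead of through the GUI. A minimal sketch; the scope name, ranges, gateway, DNS server, and domain shown are placeholders only:

Add-DhcpServerv4Scope -Name "VDI-Desktops" -StartRange 10.10.64.10 -EndRange 10.10.71.250 -SubnetMask 255.255.248.0 -State Active

Set-DhcpServerv4OptionValue -ScopeId 10.10.64.0 -Router 10.10.64.1 -DnsServer 10.10.61.30 -DnsDomain <your-domain-fqdn>

Repeat for each desktop VLAN, matching the scope options shown in Figure 36.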

Software Infrastructure Configuration

This section explains how to configure the software infrastructure components that comprise this solution.

Install and configure the infrastructure virtual machines by following the process listed in Table 17.

Table 17.   Test Infrastructure Virtual Machine Configuration

Configuration | Microsoft Active Directory DCs Virtual Machine | vCenter Server Appliance Virtual Machine
--- | --- | ---
Operating system | Microsoft Windows Server 2019 | VCSA – SUSE Linux
Virtual CPU amount | 4 | 16
Memory amount | 8 GB | 32 GB
Network | VMXNET3 k23-Infra-Mgmt-71 | VMXNET3 k23-Infra-Mgmt-71
Disk-1 (OS) size | 60 GB | 698.84 GB (across 13 VMDKs)
Disk-2 size | | 

Configuration | Microsoft SQL Server Virtual Machine | VMware Horizon Connection Server Virtual Machine
--- | --- | ---
Operating system | Microsoft Windows Server 2019 | Microsoft Windows Server 2019
Virtual CPU amount | 4 | 4
Memory amount | 8 GB | 8 GB
Network | VMXNET3 FS-Infra-Mgmt_71 | VMXNET3 FS-Infra-Mgmt_71
Disk-1 (OS) size | 60 GB | 60 GB
Disk-2 size | 100 GB (SQL databases\logs) | 

 

Prepare the Master Targets

This section provides guidance regarding creating the golden (or master) images for the environment. Virtual machines for the master targets must first be installed with the software components needed to build the golden images. Additionally, all available security patches and recommended updates from Microsoft for the Microsoft operating systems and Microsoft Office 2021 were installed as of the date of testing.

The single-session OS and multi-session OS master target virtual machines were configured as detailed in Table 18.

Table 18.   Single-session OS and Multi-session OS Virtual Machines Configurations

Configuration | Single-session OS Virtual Machine | Multi-session OS Virtual Machine
--- | --- | ---
Operating system | Microsoft Windows 11 64-bit 21H2 (22000.2836) | Microsoft Windows Server 2022 (20348.2340)
Virtual CPU amount | 2 | 12
Memory amount | 4 GB | 32 GB
Network | VMXNET3 (FS-VDI_64) | VMXNET3 (FS-VDI_64)
vDisk size | 60 GB | 100 GB
Additional software used for testing | Microsoft Office 2021 (Office Update applied); Login VSI 4.1.40.1 Target Software (Knowledge Worker Workload) | Microsoft Office 2021 (Office Update applied); Login VSI 4.1.40.1 Target Software (Knowledge Worker Workload)
Additional Configuration | Configure DHCP; add to domain; install VMware Tools; install .NET 3.5; activate Office; install Horizon Agent; install FSLogix | Configure DHCP; add to domain; install VMware Tools; install .NET 3.5; activate Office; install Horizon Agent; install FSLogix

Procedure 1.       Prepare the Master Virtual Machines

To prepare the master virtual machines, there are three major steps: installing the operating system and VMware tools, installing the application software, and installing the VMware Horizon Agent.

Note:      For this CVD, the images contain the basics needed to run the Login VSI workload.

Step 1.      During the VMware Horizon Agent installation, select IPv4 for the network protocol.

Graphical user interface, text, applicationDescription automatically generated

Step 2.      On the Custom Setup screen, leave the defaults when preparing the Instant Clone master image. Deselect the VMware Horizon Instant Clone option when preparing the Full Clone master image.

Graphical user interface, text, applicationDescription automatically generated

Step 3.      Enable the Remote Desktop Protocol.

Graphical user interface, text, application, emailDescription automatically generated

Step 4.      During the VMware Horizon Agent installation on the Windows server select RDS Mode.

Graphical user interface, text, application, emailDescription automatically generated

The final step is to optimize the Windows OS. VMware OSOT, the optimization tool, includes customizable templates to enable or disable Windows system services and features using VMware recommendations and best practices across multiple systems. Since most Windows system services are enabled by default, the optimization tool can be used to easily disable unnecessary services and features to improve performance.

Note:      In this CVD, the Windows OS Optimization Tool for VMware Horizon, version 1.2 (2303), was used.

A screenshot of a computerDescription automatically generated


OSOT Template: Default optimization template for Windows 11 21H2 or Windows Server 2022.

Step 5.      To successfully run the Login VSI knowledge worker workload, the ‘Disable animation in web pages – Machine Policy’ option in the Default Template under Programs > Internet Explorer must be disabled.

Graphical user interface, applicationDescription automatically generated

Install and Configure FSLogix

FSLogix, a Microsoft tool, was used to manage user profiles in this validated design.

A Windows user profile is a collection of folders, files, registry settings, and configuration settings that define the environment for a user who logs on with a particular user account. These settings may be customizable by the user, depending on the administrative configuration. Profile management in VDI environments is an integral part of the user experience.

FSLogix allows you to:

     Roam user data between remote computing session hosts.

     Minimize sign in times for virtual desktop environments.

     Optimize file IO between host/client and remote profile store.

     Provide a local profile experience, eliminating the need for roaming profiles.

     Simplify the management of applications and 'Gold Images'.

More information about the tool can be found here.

Procedure 1.       FSLogix Apps Installation

Step 1.      Download the FSLogix file here.

Step 2.      Run FSLogixAppSetup.exe on VDI master image (32 bit or 64 bit depending on your environment).

Step 3.      Click OK to proceed with the default installation folder.

Screenshot of installation option screen

Step 4.      Review and accept the license agreement.

Step 5.      Click Install.

Screenshot of click through license

Step 6.      Reboot.

Procedure 2.       Configure Profile Container Group Policy

Step 1.      Copy "fslogix.admx" to C:\Windows\PolicyDefinitions, and "fslogix.adml" to C:\Windows\PolicyDefinitions\en-US on Active Directory Domain Controllers.

Step 2.      Create FSLogix GPO and apply to the desktops OU:

          Navigate to Computer Configuration > Administrative Templates > FSLogix > Profile Containers.

          Configure the following settings:

          Enabled – Enabled

          VHD location – Enabled, with the path set to \\<FileServer>\<Profiles Directory>

Note:      Consider enabling and configuring FSLogix logging as well as limiting the size of the profiles and excluding additional directories.

Figure 37.       Example of FSLogix Policy

A screenshot of a computerDescription automatically generated
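If you prefer to configure the golden image directly instead of, or in addition to, the GPO, the same Profile Container settings can be written to the registry. A minimal sketch using the same placeholder share path as above:

reg add "HKLM\SOFTWARE\FSLogix\Profiles" /v Enabled /t REG_DWORD /d 1 /f

reg add "HKLM\SOFTWARE\FSLogix\Profiles" /v VHDLocations /t REG_SZ /d "\\<FileServer>\<Profiles Directory>" /f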

Install and Configure VMware Horizon

Procedure 1.       Configure VMware Horizon Connection Server

Step 1.      Download the Horizon Connection Server installer from VMware and run it on the Connection Server Windows Server image. In this study, we used the Horizon 8 2312 Connection Server installer, build 8.12-23148203.exe.

Step 2.      Click Next.

A screenshot of a computerDescription automatically generated

Step 3.      Read and accept the End User License Agreement and click Next.

Graphical user interface, text, application, emailDescription automatically generated

Step 4.      Select the destination folder where you want to install the application and click Next.

Graphical user interface, text, application, emailDescription automatically generated

Step 5.      Select the Standard Server and IPv4 for the IP protocol version.

Graphical user interface, text, applicationDescription automatically generated

Step 6.      Provide the data recovery details.

Graphical user interface, text, applicationDescription automatically generated

Step 7.      Select Configure Windows Firewall automatically. Click Next.

Graphical user interface, text, application, emailDescription automatically generated

Step 8.      Authorize Domain Admins to be VMware Horizon administrators.

Graphical user interface, text, application, emailDescription automatically generated

Step 9.      (Optional) Join the Customer Experience Program.

Graphical user interface, text, applicationDescription automatically generated

Step 10.  Click Install to begin the installation.

Graphical user interface, textDescription automatically generated

Step 11.  Click Next.

Graphical user interface, textDescription automatically generated

Step 12.  Select General for the type of installation. Click Install.

Graphical user interface, text, applicationDescription automatically generated

Step 13.  After Horizon Connection Server installation is complete, click Finish.

A screenshot of a computerDescription automatically generated

Procedure 2.       Install VMware Horizon Replica Server

Step 1.      Click the Connection Server installer based on your Operating System.

Step 2.      Click Next.

A screenshot of a computerDescription automatically generated

Step 3.      Read and accept the End User License Agreement and click Next.

Graphical user interface, text, application, emailDescription automatically generated

Step 4.      Select the destination folder where you want to install the application and click Next.

Graphical user interface, text, application, emailDescription automatically generated

Step 5.      Select the Replica Server and IPv4 for the IP protocol version.

Graphical user interface, text, application, emailDescription automatically generated

Step 6.      Provide the existing Standard View Connection Server’s FQDN or IP address and click Next.

Graphical user interface, text, applicationDescription automatically generated

Step 7.      Select Configure the Windows Firewall automatically.

Graphical user interface, text, application, emailDescription automatically generated

Step 8.      Click Install to begin the installation process.

Graphical user interface, textDescription automatically generated

Step 9.      After installation is complete, click Finish.

A screenshot of a computerDescription automatically generated

VMware Horizon Desktop Configuration

Management of the desktops, application pools, and farms is accomplished in VMware Horizon Console (HTML5) or Horizon Administrator (Flex). We used Horizon Console to administer the VMware Horizon environment in this validated design.

Note:      VMware recommends using Horizon Console, an HTML5-based interface with enhanced security, capabilities, and performance.

Procedure 1.       Configure VMware Horizon Desktop

Step 1.      Log in to Horizon Console 2312 via a web browser using https://<Address or FQDN>/admin/#/login.

A screenshot of a login boxDescription automatically generated

Step 2.      In Horizon Console, expand Settings and click Servers.

A screenshot of a computerDescription automatically generated

Step 3.      Select the vCenter Settings tab and click Add.

A screenshot of a computerDescription automatically generated

Step 4.      Provide the Server Address (IP or FQDN) and the credentials that Horizon will use to log in to vCenter, then click Next.

A screenshot of a computerDescription automatically generated

Step 5.      If you receive a message stating an invalid certificate, click View Certificate.

Graphical user interface, text, application, emailDescription automatically generated

Step 6.      Click Accept.

Graphical user interface, applicationDescription automatically generated

Step 7.      Keep the defaults, select Reclaim VM disk space and Enable Horizon Storage Accelerator with cache size of 1024MB. Click Next.

Graphical user interface, applicationDescription automatically generated

Step 8.      Review the information you provided and click Submit.

Graphical user interface, text, applicationDescription automatically generated

Step 9.      In Horizon Console, expand Settings and click Domains.

Note:      Domain Authentication with Domain name is required for both RDS (RDSH) server sessions and Windows VDI virtual machines Instant Clone deployment.

A screenshot of a computerDescription automatically generated

Step 10.  Select the Instant Clone Engine Domain Accounts tab and click Add.

A screenshot of a computerDescription automatically generated

Step 11.  Provide a domain name and credentials that Horizon will use to log in to AD during Instant Clone management tasks, then click OK.

A screenshot of a login boxDescription automatically generated

Procedure 2.       Create VDI Instant Clone Desktop Pool

Step 1.      In Horizon Console on the left plane, expand Inventory, select Desktops. Click Add.

Graphical user interface, text, application, emailDescription automatically generated

Step 2.      Select Type of Desktop pool to be created. Click Next.

A screenshot of a computerDescription automatically generated

Step 3.      Select the provisioning type for the desktops in the pool (we created Instant Clones and Full Virtual Machines pools in this design). Click Next.

A screenshot of a computerDescription automatically generated

Step 4.      Select the User assignment to be used by the desktop pool. Click Next.

Note:      We used the Floating assignment for the Instant Clone pool.

A screenshot of a computerDescription automatically generated

Step 5.      Select the required option for Storage Policy Management. Click Next.

A screenshot of a computerDescription automatically generated

Step 6.      Provide the Desktop Pool ID and virtual display name. Click Next.

A screenshot of a computerDescription automatically generated

Step 7.      Provide the naming pattern and the number of desktops to be provisioned. Click Next.

Note:      In this Cisco Validated Design, we used:
Single Server pool – 285
Cluster pool – 2000

A screenshot of a computerDescription automatically generated

Step 8.      Provide the parent VM, snapshot and host/cluster info, and data store information for the virtual machines to create. Click Next.

A screenshot of a computerDescription automatically generated

Step 9.      Configure the State and Session Type for Desktop Pool Settings. Click Next.

A screenshot of a computerDescription automatically generated

Step 10.  Configure the Remote Display Protocol. Click Next.

A screenshot of a computerDescription automatically generated

Step 11.  Select the AD Container for desktops to place in a Domain Controller computer location.

A screenshot of a computerDescription automatically generated

Step 12.  Review the deployment specifications and click Submit to complete the deployment.

A screenshot of a computerDescription automatically generated

Step 13.  Select Entitle users after this wizard finishes, to enable desktop user group/users to access this pool.

Procedure 3.       Create VDI Full Clone Desktop Pool

Step 1.      Select Type of Desktop pool to be created. Click Next.

A screenshot of a computerDescription automatically generated 

Step 2.      Select the provisioning type for the desktops in the pool (we created Instant Clones and Full Virtual Machines pools in this design). Click Next.

A screenshot of a computerDescription automatically generated

Step 3.      Select the User assignment to be used by the desktop pool. We used the Dedicated assignment for the Full Clone pool. Click Next.

Graphical user interface, text, application, TeamsDescription automatically generated

Step 4.      On the Storage Optimization screen click Next.

Graphical user interface, application, TeamsDescription automatically generated

Step 5.      Provide the Desktop Pool ID and Display Name. Click Next.

A screenshot of a computerDescription automatically generated

Step 6.      Provide the naming pattern and the number of desktops to be provisioned. Click Next.

Note:      In this validated design, for the VDI pools we used:
Single Server pool – 285
Cluster pool – 2000

A screenshot of a computer screenDescription automatically generated

Step 7.      Provide the parent VM, snapshot and host/cluster information, and data store information for the virtual machines to create.

A screenshot of a computerDescription automatically generated

Note:      A single datastore was used per 8 host pools.
A screenshot of a computerDescription automatically generated

Step 8.      Configure Desktop Pool settings.

A screenshot of a computerDescription automatically generated

Step 9.      Provide the customizations to the remote display protocol that will be used by the desktops in the pool.

We used the defaults in this deployment. For Remote Desktop Settings, desktop connectivity protocol options, customers have a choice to select the user’s session or desktop connectivity protocol applicable from VMware Blast, PCoIP, or Microsoft RDP for the type of deployment as per requirement.

A screenshot of a computerDescription automatically generated

Note:      For Advanced Storage Options, we used defaults in this deployment.

Step 10.  Click Next.

A screenshot of a computerDescription automatically generated

Step 11.  Select the VM Customization Specification to be used during deployment. Click Next.

A screenshot of a computerDescription automatically generated

Step 12.  Review all the deployment specifications and click Submit to complete the deployment.

A screenshot of a computerDescription automatically generated

Note:      The automated pool creation will add AD computer accounts to the Computers OU. Move these accounts according to your policies; in our case, the machine accounts were moved to the Login VSI OU.

Procedure 4.       Create RDSH Farm and Pool

Step 1.      Select the FARM when creating the RDS Pool.

Note:      You can entitle the user access at the RDS Pool level for RDS users/groups who can access the RDS VMs.

Step 2.      Select Type of Farm. We used Automated Farm for the RDS desktops in this design. Click Next.

A screenshot of a computerDescription automatically generated

Step 3.      Select the provisioning type and vCenter Server for the desktops in the pool. Click Next.

A screenshot of a computerDescription automatically generated

Step 4.      On the Storage Optimization screen, click Next.

A screenshot of a computerDescription automatically generated

Step 5.      Provide the ID and Description for RDS-FARM. Select the Display Protocol which is required for users to connect to the RDS Sessions. Click Next.

A screenshot of a computerDescription automatically generated

Step 6.      Select Load Balancing Settings. Click Next.

A screenshot of a computerDescription automatically generated

Step 7.      Provide the naming pattern and number of virtual machines to create. Click Next.

A screenshot of a computerDescription automatically generated

Step 8.      Select the previously created golden image to be used as the RDS host. Select the datastore where RDS hosts will be deployed. Click Next.

A screenshot of a computerDescription automatically generated

Step 9.      Select the AD Container for desktops to place in a Domain Controller computer location.

A screenshot of a computerDescription automatically generated

Step 10.  Review the Farm information and click Submit to complete the RDS Farm creation.

A screenshot of a computerDescription automatically generated

Procedure 5.       Create RDS Pool

After the RDS farm is created, you need to create an RDS pool that consumes the RDS VM farm for further management.

Step 1.      Select type as RDS Desktop Pool.

Graphical user interface, application, TeamsDescription automatically generated

Step 2.      Select type as Automated Farm.

A screenshot of a computerDescription automatically generated

Step 3.      Provide an ID and Display Name for the Pool. Click Next.

A screenshot of a computerDescription automatically generated

Step 4.      Leave the default settings for the Desktop Pool Settings. Click Next.

A screenshot of a computerDescription automatically generated

Step 5.      Select the RDS Farm. Select the farm which was already created for this desktop pool. Click Next.

Graphical user interface, application, TeamsDescription automatically generated

Step 6.      Review the RDS Pool deployment specifications and click Next to complete the RDS pool deployment.

Graphical user interface, applicationDescription automatically generated

Step 7.      Select Entitle users after this wizard finishes, to enable desktop user group/users to access this pool.
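Once the pools and farm are provisioned, an optional sanity check outside the Horizon console is to count the desktops directly in vCenter. The following is a minimal sketch using the Python pyVmomi library; the vCenter hostname, credentials, and the "W11-IC" naming prefix are hypothetical placeholders.

# Minimal sketch: count provisioned desktops whose names match the pool
# naming pattern and report how many are powered on. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab shortcut; use verified certificates in production
si = SmartConnect(host="vcenter.vdilab.local",
                  user="administrator@vsphere.local",
                  pwd="<password>",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    desktops = [vm for vm in view.view if vm.name.startswith("W11-IC")]
    powered_on = [vm for vm in desktops
                  if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
    print(f"{len(desktops)} desktops found, {len(powered_on)} powered on")
    view.Destroy()
finally:
    Disconnect(si)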

VMware Horizon Deployment

With VMware Horizon 8, IT departments can run remote desktops and applications in the data center and deliver these desktops and applications to employees. End users gain a familiar, personalized environment that they can access from any number of devices anywhere throughout the enterprise or from home. Administrators gain centralized control, efficiency, and security by having desktop data in the data center.

The benefits of VMware Horizon 8 include simplicity, security, speed, and scale for delivering virtual desktops and applications with cloud-like economics and elasticity.

Flexible Horizon 8 Deployments

Horizon 8 offers the flexibility of deploying virtual desktops and applications on premises, in a cloud-hosted environment, or a hybrid mix of both. License requirements vary based on the deployment environment.

You can deploy Horizon 8 in the following environments:

     On-premises Deployment

Horizon 8 can be deployed on-premises or in a private cloud. You can use a perpetual, term, or SaaS subscription license for an on-premises deployment. With a SaaS subscription license, you will have access to the Horizon Control Plane and associated services. Internet connectivity is optional so you can deploy Horizon 8 in an air-gapped environment or a sovereign cloud.

     Cloud-hosted Deployment

Horizon 8 can be deployed in a public cloud such as VMware Cloud on AWS or Azure VMware Solutions. You are required to use a SaaS subscription license for deployment in a public cloud. With the SaaS subscription license, you have the option to leverage the SaaS services provided by the Horizon Control Plane.

     Hybrid Deployment

You can deploy Horizon 8 on-premises and in cloud-hosted environments. You can link these deployments in a federation. In this hybrid deployment scenario, you can use the following licenses:

          Use perpetual or term license for your on-premises deployments and use SaaS subscription license for your cloud-hosted deployments.

          Use SaaS subscription license for both your on-premises deployments and your cloud-hosted deployments.

Connect Horizon 8 Deployments to Horizon Control Plane

With the SaaS subscription license, you can connect Horizon 8 pods to the control plane for additional SaaS services.

Horizon Control Plane enabled by the SaaS subscription license provides the following benefits when connected to a Horizon 8 deployment.

     The Horizon Universal Console provides a single unified console that provides additional SaaS features across on-premises and multi-cloud deployments for working with your tenant’s fleet of cloud-connected pods.

     The single pane dashboard and Workspace One Intelligence console gives you the ability to monitor capacity, usage, and health within and across your fleet of cloud-connected pods, regardless of the deployment environments in which those individual pods reside.

Additional SaaS services are enabled on an ongoing basis. For information on available services, see the Horizon Cloud Service documentation.

Just-in-Time Management Platform (JMP)

JMP represents VMware Horizon 8 capabilities for delivering just-in-time virtual desktops and applications that are flexible, fast, and personalized. JMP includes the following VMware technologies.

Instant Clones

Instant clone is a vSphere-based cloning technology that is used to provision thousands of non-persistent virtual desktops from a single golden image. Instant-clone desktops offer the following advantages:

     Rapid provisioning speed that takes 1-2 seconds on average to create a new desktop.

     Delivers a pristine, high performance desktop every time a user logs in.

     Improves security by destroying the desktop every time a user logs out.

     Eliminates the need to have a dedicated desktop for every single user.

     Zero downtime for patching a pool of desktops.

     You can couple instant clones with VMware App Volumes and VMware Dynamic Environment Manager to deliver fully personalized desktops.

VMware App Volumes

VMware App Volumes is an integrated and unified application delivery and user management system for Horizon 8 and other virtual environments. VMware App Volumes offers the following advantages:

     Quickly provision applications at scale.

     Dynamically attach applications to users, groups, or devices, even when users are already logged in to their desktop.

     Provision, deliver, update, and retire applications in real time.

     Provide a user-writable volume, allowing users to install applications that follow across desktops.

VMware Dynamic Environment Manager

VMware Dynamic Environment Manager offers personalization and dynamic policy configuration across any virtual, physical, and cloud-based environment. VMware Dynamic Environment Manager offers the following advantages:

     Provide end users with quick access to a Windows workspace and applications, with a personalized and consistent experience across devices and locations.

     Simplify end user profile management by providing organizations with a single and scalable solution that leverages the existing infrastructure.

     Speed up the login process by applying configuration and environment settings in an asynchronous process instead of all at login.

     Provide a dynamic environment configuration, such as drive or printer mappings, when a user launches an application.

Reliability and Security

Horizon 8, along with the underlying vSphere platform, provides the following reliability and security advantages to your virtual desktop and app deployment:

     Access to data can easily be restricted. Sensitive data can be prevented from being copied onto a remote employee’s home computer.

     RADIUS support provides flexibility when choosing among two-factor authentication vendors. Supported vendors include RSA SecurID, VASCO DIGIPASS, SMS Passcode, and SafeNet, among others.

     Integration with VMware Workspace ONE Access means that end users have on-demand access to remote desktops through the same web-based application catalog they use to access SaaS, Web, and Windows applications. Users can also use this custom app store to access applications inside a remote desktop. With the True SSO feature, users who authenticate using smart cards or two-factor authentication can access their remote desktops and applications without supplying Active Directory credentials.

Unified Access Gateway functions as a secure gateway for users who want to access remote desktops and applications from outside the corporate firewall. Unified Access Gateway is an appliance that is installed in a demilitarized zone (DMZ). Use Unified Access Gateway to ensure that the only traffic entering the corporate data center is traffic on behalf of a strongly authenticated remote user.

     The ability to provision remote desktops with pre-created Active Directory accounts addresses the requirements of locked-down Active Directory environments that have read-only access policies.

     Data backups can be scheduled without considering when end users’ systems might be turned off.

     Remote desktops and applications that are hosted in a data center experience little or no downtime. Virtual machines can reside on high-availability clusters of VMware servers.

Virtual desktops can also connect to back-end physical systems and Microsoft Remote Desktop Services (RDS) hosts.

Rich User Experience

Horizon 8 provides the familiar, personalized desktop environment that end users expect, including the following features:

     A rich selection of display protocols

     Ability to access USB and other devices connected to their local computer.

     Send documents to any printer their local computer can detect.

     Real-time audio/video features

     Authentication with smart cards

     Use of multiple display monitors

     3D graphics support

For more information on VMware Horizon VDI deployment, go to:

https://docs.omnissa.com/bundle/HorizonOverviewDeployment/page/IntroductiontotheHorizonPortfolio.html

https://docs.omnissa.com/bundle/HorizonOverviewDeployment/page/HorizonOverviewandDeploymentPlanning.html

https://docs.omnissa.com/bundle/Desktops-and-Applications-in-HorizonV2312/page/InstantCloneDesktopPools.html

https://docs.omnissa.com/bundle/HorizonAgentDirectConnection/page/Multi-SessionHosts.html

https://docs.vmware.com/en/VMware-Fusion/13/com.vmware.fusion.using.doc/GUID-2A18D9B2-67E4-4821-962F-8BA7F7BF3C0C.html

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-horizon-view-instant-clone-technology.pdf

Test Setup, Configuration, and Load Recommendation

This chapter contains the following:

     Cisco UCS Test Configuration for Single Blade Scalability

     Cisco UCS Test Configuration for Full Scale Testing

     Test Methodology and Success Criteria

Note:      We tested a single Cisco UCS X210c M7 Compute Node to validate the performance of one server, and eight Cisco UCS X210c M7 Compute Nodes as a cluster on a single chassis to illustrate linear scalability for each workload use case tested.

Cisco UCS Test Configuration for Single Blade Scalability

These test cases validate a single blade hosting each of three distinct workloads using VMware Horizon 8 2312 with:

     400 VMware Horizon Remote Desktop Session Host (RDSH) sessions on instant-clone RDS VMs running Microsoft Windows Server 2022 OS.

     285 VMware Horizon VDI-Non persistent Instant clone Single-session OS sessions using Microsoft Windows 11 OS

     285 VMware Horizon VDI-Persistent Full Clone Single-session OS sessions using Microsoft Windows 11 OS

This test case validates the recommended maximum workload per host server using VMware Horizon 8 2312 with 400 VMware Horizon Remote Desktop Session Host (RDSH) sessions hosted across multiple RDS server VMs.

Figure 38.       Test Configuration for Single Server Scalability VMware Horizon 8 2312 VMware Horizon Remote Desktop Session Host (RDSH) multi-session

A screenshot of a computerDescription automatically generated

Figure 39.       Test configuration for Single Server Scalability VMware Horizon 8 2312 Instant Clone and Full Clone (Persistent) independent tests for Single-session OS machine VDAs

A screenshot of a computerDescription automatically generated

Hardware components:

     Cisco UCS 9508 Chassis.

     2 Cisco UCS 6536 5th Gen Fabric Interconnects.

     1 Cisco UCS X210c M7 Compute Node Server with Intel(R) Xeon(R) Gold 6448 CPU 2.40GHz 32-core processors, 2TB 4800MHz RAM.

     2 Cisco Nexus 93180YC-FX Access Switches.

     2 Cisco MDS 9132T 32-Gb 32-Port Fibre Channel Switches.

     Hitachi E1090 Storage System with dual redundant controllers, with 61.89TB NVMe drives.

Software components:

     Cisco UCS firmware 4.2(3h).

     Hitachi storage code 93-07-21-80/00.

     ESXi 8.0 Update 2 for host blades.

     VMware Horizon 8 2312.

     Microsoft SQL Server 2019.

     Microsoft Windows 11 64 bit, 2vCPU, 4 GB RAM, 60/100 GB HDD (master).

     Microsoft Windows Server 2022 12vCPU, 32GB RAM, 100 GB vDisk (master).

     Microsoft Office 2021 64-bit.

     FSLogix 29.8612.600056

     Login VSI 4.1.40 Knowledge Worker Workload (Benchmark Mode).

Cisco UCS Test Configuration for Full Scale Testing

These test cases validate eight blades in a cluster hosting three distinct workloads using VMware Horizon 8 2312 with:

     2800 VMware Horizon Remote Desktop Session Host (RDSH) sessions on instant-clone RDS VMs

     2000 VMware Horizon VDI-Non persistent Instant clone Single-session OS sessions

     2000 VMware Horizon VDI-Persistent Full Clone Single-session OS sessions

Note:      Server N+1 fault tolerance is factored into this solution for each cluster/workload.

Figure 40.       Test Configuration for Full Scale / Cluster Test VMware Horizon 8 2312 VMware Horizon Remote Desktop Session  Host (RDSH) multi-session OS machines.

A screenshot of a computer programDescription automatically generated 

Figure 41.       Test Configuration for Full Scale VMware Horizon 8 2312 Instant Clone (non-persistent) single-session Windows 11 OS machines

A screenshot of a computer programDescription automatically generated

Figure 42.       Test Configuration for Full Scale VMware Horizon 8 2312 Full Clone (Persistent) single-session Windows 11 OS machines

A screenshot of a computerDescription automatically generated

Hardware components:

     Cisco UCS 9508 Chassis.

     2 Cisco UCS 6536 5th Gen Fabric Interconnects.

     8 Cisco UCS X210c M7 Compute Node Servers with Intel(R) Xeon(R) Gold 6448 CPU 2.40GHz 32-core processors, 2TB 4800MHz RAM for all host blades.

     Cisco VIC 15231 CNA (1 per blade).

     2 Cisco Nexus 93180YC-FX Access Switches.

     2 Cisco MDS 9132T 32-Gb 32-Port Fibre Channel Switches.

     Hitachi VSP E1090 with dual redundant controllers, with 61.89 TB NVMe drives.

Software components:

     Cisco UCS firmware 4.2(3h).

     Hitachi VSP E1090 storage code 93-07-21-80/00.

     ESXi 8.0 Update 2 for host blades.

     VMware Horizon 8 2312.

     Microsoft SQL Server 2019.

     Microsoft Windows Server 2022, 8vCPU, 32GB RAM, 100 GB vDisk (master) for Remote Desktop Session Host (RDSH) Server Sessions.

     Microsoft Windows 11 64-bit 2vCPU, 4 GB RAM, 60/100 GB (full clone) HDD (master disk size).

     Microsoft Office 2021 64-bit.

     FSLogix 29.8974.600056

     Login VSI 4.1.40 Knowledge Worker Workload (Benchmark Mode).

Test Methodology and Success Criteria

All validation testing was conducted on-site within the Cisco labs in San Jose, California.

The testing results focused on the entire process of the virtual desktop lifecycle by capturing metrics during the desktop boot-up, user logon and virtual desktop acquisition (also referred to as ramp-up), user workload execution (also referred to as steady state), and user logoff for the RDSH/VDI session under test.

Test metrics were gathered from the virtual desktop, storage, and load generation software to assess the overall success of an individual test cycle. Each test cycle was not considered passing unless all of the planned test users completed the ramp-up and steady state phases (described below) and unless all metrics were within the permissible thresholds as noted as success criteria.

Three successfully completed test cycles were conducted for each hardware configuration and results were found to be relatively consistent from one test to the next.
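A simple way to confirm that consecutive runs were consistent is to compare each run's key metric against the mean of the runs. The Python sketch below is illustrative only; the sample values are invented, and the 1 percent tolerance reflects the variability bar described later under Success Criteria.

# Illustrative sketch: check that a set of test runs stays within +/-1 percent
# of their mean. The sample values are made up for illustration.
def runs_are_consistent(values, tolerance=0.01):
    mean = sum(values) / len(values)
    return all(abs(v - mean) / mean <= tolerance for v in values)

sessions_per_run = [2800, 2795, 2802]          # hypothetical results from three runs
print(runs_are_consistent(sessions_per_run))   # True means the runs meet the consistency bar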

You can obtain additional information and a free test license from http://www.loginvsi.com

Login VSI

Login VSI successfully predicts, validates, and manages the performance of virtualized desktop environments making it easy to load, test, benchmark, and plan capacity to improve end user experience and productivity for even the most complex virtualized desktop environments. Login VSI tests performance using virtual users, so your real users benefit from consistently great performance.

With Login VSI, you can gain performance insights that enable you to:

     Predict the performance impact of necessary updates and upgrades.

     Know the maximum user capacity of your current infrastructure.

     Understand the end users’ perspective on performance.

With agentless installation and minimal infrastructure requirements, Login VSI works in any Windows-based virtualized desktop environment, including VMware Horizon, Citrix XenDesktop, and Microsoft Remote Desktop Services (Terminal Services). Below is an architecture diagram that represents both the Login VSI and environment components.

A diagram of a software development processDescription automatically generated with medium confidence

When used for benchmarking, the product measures the total response times of several specific user operations being performed within a desktop workload, in a scripted loop. The baseline is the measurement of the specific operational response times performed in the desktop workload, measured in milliseconds (ms). Two values are very important: VSIbase and VSImax.

     VSIbase:  A score reflecting the response time of specific operations performed in the desktop workload when there is little or no stress on the system. A low baseline indicates a better user experience and a well-tuned desktop image—resulting in applications responding faster within the environment.

     VSImax:  The maximum number of desktop sessions attainable on the host before experiencing degradation in both host and desktop performance.

Both values, VSIbase and VSImax, offer undeniable proof (vendor independent, industry standard, and easy to understand) to innovative technology vendors of the power, the scalability, and the benefits of their software and hardware solutions, in a virtual desktop environment. Table 19 lists the Login VSI base scores ratings.

Table 19.   Login VSI Base Score Ratings

Login VSI Base Score Range     Rating

0-799                          Very Good

800-1199                       Good

1200-1599                      Fair

1600-1999                      Poor

2000-9999                      Very Poor
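For convenience, the rating bands in Table 19 can be expressed as a small helper function. This is an illustrative Python sketch, not part of the Login VSI tooling.

# Map a Login VSI baseline (in milliseconds) to the rating bands in Table 19.
def vsi_base_rating(baseline_ms: float) -> str:
    if baseline_ms < 800:
        return "Very Good"
    if baseline_ms < 1200:
        return "Good"
    if baseline_ms < 1600:
        return "Fair"
    if baseline_ms < 2000:
        return "Poor"
    return "Very Poor"

print(vsi_base_rating(531))   # "Very Good" (the single-server RDSH baseline reported later)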

Test Procedure

This chapter contains the following:

     Pre-Test Setup for Single and Multi-Blade Testing

     Test Run Protocol

     Success Criteria

     VSImax 4.1.x Description

The following protocol was used for each test cycle in this study to ensure consistent results.

Pre-Test Setup for Single and Multi-Blade Testing

All virtual machines were shut down utilizing the VMware Horizon Console and vCenter.

All Launchers for the test were shut down. They were then restarted in groups of 10 each minute until the required number of launchers was running with the Login VSI Agent at a “waiting for test to start” state.

All VMware ESXi VDI host blades to be tested were restarted prior to each test cycle.

Test Run Protocol

To simulate severe, real-world environments, Cisco requires the log-on and start-work sequence, known as Ramp Up, to complete in 48 minutes. For testing where the user session count exceeds 1000 users, we will now deem the test run successful with up to 1% session failure rate.

Additionally, Cisco requires that the Login VSI Benchmark method be used for all single server and scale testing. This assures that our tests represent real-world scenarios. For each of the three consecutive runs on single server tests, the same process was followed. To do so, follow these steps:

1.    Time 0:00:00 Start PerfMon/ESXTOP logging on the infrastructure and VDI host blades used in the test run.

2.    Start logging on the vCenter instance used in the test run.

3.    Start logging on all infrastructure virtual machines used in the test run (AD, SQL, brokers, image mgmt., and so on).

4.    Time 0:00:10 Start Storage Partner Performance Logging on Storage System.

5.    Time 0:05: Boot Virtual Desktops/RDS Virtual Machines using View Connection server.

6.    The boot rate should be around 10-12 virtual machines per minute per server.

7.    Time 0:06 First machines boot.

8.    Time 0:30 Single Server or Scale target number of desktop virtual machines booted on 1 or more blades.

9.    No more than 30 minutes for boot up of all virtual desktops is allowed.

10.  Time 0:35 Single Server or Scale target number of desktop virtual machines available on the View Connection Server.

11.  Virtual machine settling time.

12.  No more than 60 minutes of rest time is allowed after the last desktop is registered as available in the View Connection Server dashboard. Typically, a 30-45 minute rest period is sufficient.

13.  Time 1:35 Start the Login VSI 4.1.x Office Worker Benchmark Mode Test, setting the auto-logoff time to 15 minutes, with the Single Server or Scale target number of desktop virtual machines utilizing a sufficient number of launchers (20-25 sessions per launcher; see the planning sketch after this list).

14.  Time 2:23 Single Server or Scale target number of desktop virtual machine sessions launched (48-minute benchmark launch rate).

15.  Time 2:25 All launched sessions must become active for a valid test run within this window.

16.  Time 2:40 Login VSI Test Ends (based on Auto Logoff 15 minutes period designated above).

17.  Time 2:55 All active sessions logged off.

18.  Time 2:57 All logging terminated, Test complete.

19.  Time 3:15 Copy all log files off to archive; Set virtual desktops to maintenance mode through broker; Shutdown all Windows machines.

20.  Time 3:30 Reboot all hypervisor hosts.

21.  Time 3:45 Ready for the new test sequence.
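The boot and launch figures in the protocol above (10-12 virtual machines per minute per server, 20-25 sessions per launcher, and a 48-minute ramp) can be turned into a quick planning calculation. The Python sketch below is illustrative; the defaults sit at the conservative end of the stated ranges, and the example inputs correspond to the 2000-user, eight-server full-scale VDI test.

# Illustrative planning sketch using the figures from the protocol above.
import math

def plan(total_sessions, servers, vms_per_min_per_server=10,
         sessions_per_launcher=20, ramp_minutes=48):
    boot_minutes = math.ceil(total_sessions / (vms_per_min_per_server * servers))
    launchers = math.ceil(total_sessions / sessions_per_launcher)
    launch_rate = total_sessions / ramp_minutes       # sessions started per minute
    return boot_minutes, launchers, launch_rate

print(plan(2000, 8))   # approximately (25, 100, 41.7): 25-minute boot, 100 launchers, ~42 sessions/minute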

Success Criteria

Our pass criteria for this testing is as follows:

     Cisco will run tests at a session count level that effectively utilizes the blade capacity measured by CPU utilization, memory utilization, storage utilization, and network utilization. We will use Login VSI to launch version 4.1.x Office Worker workloads. The number of launched sessions must equal active sessions within two minutes of the last session launched in a test as observed on the VSI Management console.

The VMware Horizon Console must be monitored throughout the steady state to confirm the following:

     All running sessions report In Use throughout the steady state.

     No sessions move to unregistered, unavailable, or available state at any time during steady state.

     Within 20 minutes of the end of the test, all sessions on all launchers must have logged out automatically and the Login VSI Agent must have shut down. Stuck sessions define a test failure condition.

     Cisco requires three consecutive runs with results within +/-1% variability to pass the Cisco Validated Design performance criteria. For white papers written by partners, two consecutive runs within +/-1% variability are accepted. (All test data from partner run testing must be supplied along with the proposed white paper.)

We will publish Cisco Validated Designs with our recommended workload following the process above and will note that we did not reach a VSImax dynamic in our testing.

Cisco Hitachi Adaptive Solution Data Center with Cisco UCS and VMware Horizon 8 2312 on VMware ESXi 8.0 Update 2 Test Results

The purpose of this testing is to provide the data needed to validate VMware Horizon Remote Desktop Sessions (RDS) and VMware Horizon Virtual Desktop (VDI) instant-clone and full-clone models using ESXi and vCenter to virtualize Microsoft Windows 11 desktops and Microsoft Windows Server 2022 sessions on Cisco UCS X210c M7 Compute Node Servers using the Hitachi VSP E1090 Storage System.

The information contained in this section provides data points that a customer may reference in designing their own implementations. These validation results are an example of what is possible under the specific environment conditions outlined here, and do not represent the full characterization of VMware products.

Four test sequences, each containing three consecutive test runs generating the same result, were performed to establish single blade performance and multi-blade, linear scalability.

VSImax 4.1.x Description

The philosophy behind Login VSI is different from conventional benchmarks. In general, most system benchmarks are steady state benchmarks. These benchmarks execute one or multiple processes, and the measured execution time is the outcome of the test. Simply put: the faster the execution time or the bigger the throughput, the faster the system is according to the benchmark.

Login VSI is different in approach. Login VSI is not primarily designed to be a steady state benchmark (however, if needed, Login VSI can act like one). Login VSI was designed to perform benchmarks for HSD or VDI workloads through system saturation. Login VSI loads the system with simulated user workloads using well known desktop applications like Microsoft Office, Internet Explorer, and Adobe PDF reader. By gradually increasing the number of simulated users, the system will eventually be saturated. Once the system is saturated, the response time of the applications will increase significantly. This latency in application response times shows a clear indication whether the system is (close to being) overloaded. As a result, by nearly overloading a system it is possible to find out what is its true maximum user capacity.

After a test is performed, the response times can be analyzed to calculate the maximum active session/desktop capacity. Within Login VSI this is calculated as VSImax. When the system is coming closer to its saturation point, response times will rise. When reviewing the average response time, it will be clear the response times escalate at saturation point.

This VSImax is the “Virtual Session Index (VSI).” With Virtual Desktop Infrastructure (VDI) and Terminal Services (RDS) workloads this is valid and useful information. This index simplifies comparisons and makes it possible to understand the true impact of configuration changes on hypervisor host or guest level.

Server-Side Response Time Measurements

It is important to understand why specific Login VSI design choices have been made. An important design choice is to execute the workload directly on the target system within the session instead of using remote sessions. The scripts simulating the workloads are performed by an engine that executes workload scripts on every target system and are initiated at logon within the simulated user’s desktop session context.

An alternative to the Login VSI method would be to generate user actions client side through the remoting protocol. These methods are always specific to a product and vendor dependent. More importantly, some protocols simply do not have a method to script user actions client side.

For Login VSI, the choice has been made to execute the scripts completely server side. This is the only practical and platform independent solution, for a benchmark like Login VSI.

Calculating VSImax v4.1.x

The simulated desktop workload is scripted in a 48-minute loop in which a simulated Login VSI user is logged on and performs generic Office worker activities. After the loop finishes, it restarts automatically. Within each loop, the response times of specific operations are measured at a regular interval: sixteen times within each loop. The response times of five of these operations are used to determine VSImax.

The five operations from which the response times are measured are:

     Notepad File Open (NFO)

Loading and initiating VSINotepad.exe and opening the open file dialog. This operation is handled by the OS and by the VSINotepad.exe itself through execution. This operation seems almost instant from an end-user’s point of view.

     Notepad Start Load (NSLD)

Loading and initiating VSINotepad.exe and opening a file. This operation is also handled by the OS and by the VSINotepad.exe itself through execution. This operation seems almost instant from an end-user’s point of view.

     Zip High Compression (ZHC)

This action copies a random file and compresses it (with 7zip) with high compression enabled. The compression briefly spikes CPU and disk I/O.

     Zip Low Compression (ZLC)

This action copies a random file and compresses it (with 7zip) with low compression enabled. The compression briefly spikes disk I/O and creates some load on the CPU.

     CPU

Calculates a large array of random data and spikes the CPU for a short period of time.

These measured operations within Login VSI hit considerably different subsystems such as CPU (user and kernel), memory, disk, the OS in general, the application itself, print, GDI, and so on. These operations are specifically short by nature. When such operations become consistently long, the system is saturated because of excessive queuing on some resource, and the average response times escalate. This effect is clearly visible to end users. If such operations consistently consume multiple seconds, the user will regard the system as slow and unresponsive.

Figure 43.       Sample of a VSI Max Response Time Graph, Representing a Normal Test

Good-chart.png

Figure 44.       Sample of a VSI Test Response Time Graph with a Performance Issue

Bad-chart.png

When the test is finished, VSImax can be calculated. When the system is not saturated, and it could complete the full test without exceeding the average response time latency threshold, VSImax is not reached, and the number of sessions ran successfully.

The response times are very different per measurement type; for instance, Zip with compression can be around 2800 ms, while the Zip action without compression can take only 75 ms. The response times of these actions are weighted before they are added to the total. This ensures that each activity has an equal impact on the total response time.

In comparison to previous VSImax models, this weighting much better represents system performance. All actions have very similar weight in the VSImax total. The following weighting of the response times is applied.

The following actions are part of the VSImax v4.1.x calculation and are weighted as follows (US notation):

     Notepad File Open (NFO): 0.75

     Notepad Start Load (NSLD): 0.2

     Zip High Compression (ZHC): 0.125

     Zip Low Compression (ZLC): 0.2

     CPU: 0.75

This weighting is applied on the baseline and normal Login VSI response times.
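The weighting can be illustrated with a short Python sketch. The sample response times below are invented for illustration and are not measured values from this validation.

# Apply the published VSImax v4.1.x weights to one loop's measured operations.
VSI_WEIGHTS = {"NFO": 0.75, "NSLD": 0.2, "ZHC": 0.125, "ZLC": 0.2, "CPU": 0.75}

def weighted_response(sample_ms):
    # sample_ms maps each operation name to its response time in milliseconds
    return sum(sample_ms[op] * weight for op, weight in VSI_WEIGHTS.items())

sample = {"NFO": 800, "NSLD": 2500, "ZHC": 2800, "ZLC": 75, "CPU": 300}   # invented values
print(weighted_response(sample))   # weighted total that feeds the VSImax calculation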

With the introduction of Login VSI 4.1.x, a new method was also created to calculate the base phase of an environment. With the new workloads (Taskworker, Powerworker, and so on), enabling 'basephase' for a more reliable baseline has become obsolete. The calculation is explained below. In total, the 15 lowest VSI response time samples are taken from the entire test; the lowest 2 samples are removed, and the 13 remaining samples are averaged. The result is the baseline.

To summarize:

     Take the lowest 15 samples of the complete test

     From those 15 samples remove the lowest 2

     The average of the remaining 13 samples is the baseline (see the sketch below)
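The baseline step summarized above can be expressed directly. This is an illustrative Python sketch, not the Login VSI implementation.

# Baseline: take the 15 lowest weighted samples from the whole test, drop the
# lowest 2, and average the remaining 13.
def vsi_baseline(samples_ms):
    lowest_15 = sorted(samples_ms)[:15]
    remaining_13 = lowest_15[2:]
    return sum(remaining_13) / len(remaining_13)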

The VSImax average response time in Login VSI 4.1.x is calculated on the number of active users that are logged on the system.

The latest 5 Login VSI response time samples, plus 40 percent of the number of active sessions, are used. For example, if there are 60 active sessions, then the latest 5 + 24 (40 percent of 60) = 31 response time measurements are used for the average calculation.

To remove noise (accidental spikes) from the calculation, the top 5 percent and bottom 5 percent of the VSI response time samples are removed from the average calculation, with a minimum of 1 top and 1 bottom sample. As a result, with 60 active users, the last 31 VSI response time samples are taken. From those 31 samples, the top 2 samples are removed, and the lowest 2 results are removed (5 percent of 31 = 1.55, rounded to 2). At 60 users the average is then calculated over the 27 remaining results.

VSImax v4.1.x is reached when the average VSI response time exceeds the VSIbase plus a 1000 ms latency threshold. Depending on the tested system, the VSImax response time can grow to 2-3x the baseline average. In end-user computing, a 3x increase in response time compared to the baseline is typically regarded as the maximum acceptable performance degradation.

In VSImax v4.1.x this latency threshold is fixed at 1000 ms, which allows better and fairer comparisons between two different systems, especially when they have different baseline results. Ultimately, in VSImax v4.1.x, the performance of the system is decided not by the total average response time, but by the latency it has under load. For all systems, this is now 1000 ms (weighted).

The threshold for the total response time is average weighted baseline response time + 1000ms.

When the system has a weighted baseline response time average of 1500ms, the maximum average response time may not be greater than 2500ms (1500+1000). If the average baseline is 3000 the maximum average response time may not be greater than 4000ms (3000+1000).
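The rolling average and threshold check described above can be sketched as follows. This is an illustrative Python approximation of the published VSImax v4.1.x rules, not the Login VSI implementation.

# Average the latest (5 + 40% of active sessions) weighted samples, trimming
# the top and bottom 5 percent (minimum of 1 each), then compare against the
# baseline plus the fixed 1000 ms threshold.
def vsi_average(recent_ms, active_sessions):
    window = 5 + int(active_sessions * 0.40)         # e.g. 60 active sessions -> 31 samples
    samples = sorted(recent_ms[-window:])
    trim = max(1, round(window * 0.05))              # 5% of 31 = 1.55, rounded to 2
    trimmed = samples[trim:len(samples) - trim]      # 31 samples -> 27 remain
    return sum(trimmed) / len(trimmed)

def vsimax_reached(average_ms, baseline_ms, threshold_ms=1000):
    return average_ms > baseline_ms + threshold_ms   # e.g. baseline 1500 ms -> limit 2500 ms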

When the threshold is not exceeded by the average VSI response time during the test, VSImax is not hit, and the number of sessions ran successfully. This approach is fundamentally different in comparison to previous VSImax methods, as it was always required to saturate the system beyond VSImax threshold.

Lastly, VSImax v4.1.x is now always reported with the average baseline VSI response time result. For example: “The VSImax v4.1.x was 125 with a baseline of 1526ms”. This helps considerably in the comparison of systems and gives a more complete understanding of the system. The baseline performance helps to understand the best performance the system can give to an individual user. VSImax indicates what the total user capacity is for the system. These two are not automatically connected and related.

When a server with a very fast dual core CPU, running at 3.6 GHz, is compared to a 10 core CPU, running at 2.26 GHz, the dual core machine will give an individual user better performance than the 10 core machine. This is indicated by the baseline VSI response time. The lower this score is, the better performance an individual user can expect.

However, the server with the slower 10 core CPU will easily have a larger capacity than the faster dual core system. This is indicated by VSImax v4.1.x, and the higher VSImax is, the larger overall user capacity can be expected.

With Login VSI 4.1.x a new VSImax method is introduced: VSImax v4.1.x. This methodology gives much better insight into system performance and scales to extremely large systems.

Single-Server Recommended Maximum Workload

For the VMware Horizon 8 2312 Virtual Desktop and VMware Horizon 8 2312 Remote Desktop Service Hosts (RDSH) use cases, a recommended maximum workload was determined by the Login VSI Knowledge Worker Workload in VSI Benchmark Mode end-user experience measurements and blade server operating parameters.

This recommended maximum workload approach allows you to determine the server N+1 fault tolerance load the blade can successfully support in the event of a server outage for maintenance or upgrade.

Our recommendation is that the Login VSI Average Response and VSI Index Average should not exceed the Baseline plus 2000 milliseconds to ensure that end user experience is outstanding. Additionally, during steady state, the processor utilization should average no more than 90-95 percent.

Note:      Memory should never be oversubscribed for Desktop Virtualization workloads.

Table 20.   Phases of Test Runs

Test Phase: Description

Boot: Start all RDS and VDI virtual machines at the same time.

Idle: The rest time after the last desktop is registered as available in the View Connection Server dashboard (typically 30-45 minutes, less than 60 minutes).

Logon: The Login VSI phase of the test, where sessions are launched and start executing the workload over a 48-minute duration.

Steady state: The phase where all users are logged in and performing various workload tasks such as using Microsoft Office, web browsing, PDF printing, playing videos, and compressing files (typically a 15-minute duration).

Logoff: Sessions finish executing the Login VSI workload and log off.

Test Results

This chapter contains the following:

     Single-Server Recommended Maximum Workload Testing

     Full-Scale Workload Testing

Single-Server Recommended Maximum Workload Testing

This section presents the key performance metrics recorded on the Cisco UCS host blades during the single server testing. The aim was to determine the recommended maximum workload per host server. The single server testing comprised the following three tests:

     400 VMware Horizon Remote Desktop Session Host (RDSH) Instant clone multi-session OS RDS session (Random) using Microsoft Server 2022 OS for RDS server configuration.

     285 VMware Horizon VDI non-persistent Instant clone single OS sessions (Random), using Microsoft Windows 11 OS.

     285 VMware Horizon VDI persistent full clone single OS sessions (Static), also using Microsoft Windows 11 OS.

Single-Server Recommended Maximum Workload for Remote Desktop Session Host (RDSH) Sessions with 400 Users

The recommended maximum workload for a Cisco UCS X210c M7 blade server, equipped with dual Intel® Xeon® Gold 6448 CPU 2.40 GHz 32-core processors and 2TB 4800MHz RAM, is 400 VMware Horizon Remote Desktop Session Host (RDSH) sessions, hosted on Windows Server 2022 RDS virtual machines each configured with 12 vCPU and 32 GB RAM.

For a single Cisco UCS X210c M7 blade server, a Knowledge Worker profile achieved a VSImax of 400 user sessions. These 400 user sessions were deployed as Horizon remote desktop sessions on this host. The test yielded a Login VSI score of 531. The figure below illustrates the graphical output of the test.

The Login VSI performance data is shown below:

Figure 45.       Single Server | VMware Horizon 8 2312 VMware Horizon Remote Desktop Session  Host (RDSH) sessions multi-session OS machines | VSI Score

A screen shot of a graphDescription automatically generated

Performance data for the CPU utilization running the workload is shown below:

Figure 46.       Single Server | VMware Horizon 8 2312 VMware Horizon Remote Desktop Session Host (RDSH) Sessions Multi-session OS machines | Host CPU Utilization

Related image, diagram or screenshot

Figure 47.       Single Server | VMware Horizon 8 2312 VMware Horizon Remote Desktop Session Host (RDSH) sessions multi-session OS machines | Host Memory Utilization

 Related image, diagram or screenshot

Figure 48.       RDS Server Process Utilization% Time for Single server RDSH test (sample from one of RDS servers on the ESXi host)

Related image, diagram or screenshot

Single-Server Recommended Maximum Workload for VMware Horizon Persistent Full Clone Windows 11 Virtual Machines Single-session with 285 Users Dedicated User Assignment.

The recommended maximum workload for a Cisco UCS X210c M7 blade server, equipped with dual Intel® Xeon® Gold 6448 CPU 2.40GHz 32-core processors and 2TB 4800MHz RAM, is 285 Windows 11 64-bit VDI persistent VMware Horizon full clone virtual machines, each with 2 vCPU and 4 GB RAM.

A single Cisco UCS X210c M7 blade server, configured with a Knowledge Worker profile, can provide a VSImax of 285 user sessions. For the test, 285 full clone VDIs, each with 2 vCPU and 4 GB Memory, were deployed as Horizon full clone on this host. The test yielded a Login VSI score of 652.  The figure below illustrates the graphical output of the test.

The Login VSI performance data is as shown below:

Figure 49.       Single Server | VMware Horizon 8 2312 VMware Horizon Persistent Full Clone Single-session OS machines | VSI Score

A screen shot of a graphDescription automatically generated

Performance data for the server running the workload is shown below:

Figure 50.       Single Server Recommended Maximum Workload | VMware Horizon 8 Horizon Persistent Full Clone single-session OS machines | Host CPU Utilization

Related image, diagram or screenshot

Single-Server Recommended Maximum Workload for VMware Horizon Instant Clone Non-Persistent Windows 11 Virtual Machine Single-session OS Floating Assignment with 285 Users

The recommended maximum workload for a Cisco UCS X210c M7 blade server, equipped with dual Intel® Xeon® Gold 6448 CPU 2.40GHz 32-core processors and 2TB 4800MHz RAM, is 285 Windows 11 64-bit VDI non-persistent VMware Horizon instant clone virtual machines, each with 2 vCPU and 4 GB RAM.

The Login VSI data is shown below:

Figure 51.       Single Server Recommended Maximum Workload | VMware Horizon 8 2312 Non-persistent Windows 11 Single-session OS Instant Clone virtual machines | VSI Score

A screenshot of a computerDescription automatically generated

For a single Cisco UCS X210c M7 blade server, a Knowledge Worker profile achieved a VSImax of 285 user sessions. These 285 Instant Clone VDIs, configured with 2vCPU and 4 GB memory, were deployed as Horizon instant clone on this host. The test yielded a Login VSI score of 638. The figure below illustrates the graphical output of the test.

Performance data for the server running the workload is shown below:

Figure 52.       Single Server Recommended Maximum Workload | VMware Horizon 8 2312 Non-persistent Windows 11 Single-session OS Instant Clone virtual machines | Host CPU Utilization

Related image, diagram or screenshot

Full-Scale Workload Testing

This section describes the key performance metrics captured on the Cisco UCS during the full-scale testing. The full-scale testing was conducted with the following workloads using eight Cisco UCS X210c M7 Compute Node Servers, configured in a single ESXi host pool designed to support a single host failure (N+1 fault tolerance):

     2800 VMware Remote Desktop Session Host (RDSH) Multi-OS Server sessions.

     2000 VMware Horizon Instant Clone (Non-persistent) Single-session OS Windows 11 sessions.

     2000 VMware Horizon Full Clone (Persistent) Single-session OS Windows 11 sessions.

To achieve the target, sessions were launched against each workload set separately. According to the Cisco Test Protocol for VDI solutions, all sessions were launched within 48 minutes (using the official Knowledge Worker Workload in VSI Benchmark Mode), and all launched sessions became active within two minutes after the last session logged in.

Full-Scale Recommended Maximum Workload Testing for VMware Horizon Remote Desktop Session Host (RDSH) Multi-OS Sessions for 2800 RDS Users

This section describes the key performance metrics captured on the Cisco UCS and Hitachi VSP E1090 during the full-scale testing with 2800 VMware Horizon Remote Desktop Session Host (RDSH) multi-OS sessions using 8 blades in a single pool.

The workload for the test is 2800 RDSH Multi-OS Server Session users. To achieve the target, sessions were launched against all workload hosts concurrently. As per the Cisco Test Protocol for VDI solutions, all sessions were launched within 48 minutes (using the official Knowledge Worker Workload in VSI Benchmark Mode) and all launched sessions became active within two minutes after the last logged in session.

For eight Cisco UCS X210c M7 blade servers with a Knowledge Worker profile, a VSImax of 2800 user sessions was achieved. The 2800 user sessions were deployed as Horizon RDS sessions across the cluster. A Login VSI score of 507 was obtained for the test. The figure below depicts the graphical output of the test:

The configured system efficiently and effectively delivered the following results:

Figure 53.       Full Scale | 2800 Users | VMware Horizon 8 2312 VMware Horizon Remote Desktop Session Host (RDSH) Sessions multi-OS sessions for 2800 RDS users | VSI Score

A screen shot of a graphDescription automatically generated

Figure 54.       Full Scale | 2800 Users | VMware Horizon 8 2312 Remote Desktop Session Host (RDSH) Multi-OS Sessions for 2800 RDS Users | Host CPU Utilization (for 8 hosts in cluster) 

Related image, diagram or screenshot

Figure 55.       Full Scale | 2800 Users | VMware Horizon 8 2312 Remote Desktop Session Host (RDSH) Multi-OS Sessions for 2800 RDS Users Multi-session OS Machine | Host Memory Utilization (for 8 hosts in Cluster)

Related image, diagram or screenshot

Figure 56.       Full Scale | 2800 Users | VMware Horizon 8 2312 Remote Desktop Session Host (RDSH) Multi-OS Sessions for 2800 RDS Users Multi-session OS Sessions | Host Network Utilization (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 57.       Full Scale | 2800 Users | VMware Horizon 8 2312 VMware Horizon Remote Desktop Session Host (RDSH) Sessions Multi-session OS Machines | RDS Server Total % User Time (4 different RDS servers configured on different ESXi hosts, monitored for RDS server processor time during the cluster/scale test)

Related image, diagram or screenshot

Related image, diagram or screenshot

Figure 58.       Full Scale | 2800 Users | VMware Horizon 8 2312 VMware Horizon Remote Desktop Session Host (RDSH) Sessions Multi-session OS Machines | RDS Server Total % User Time

Related image, diagram or screenshot

Related image, diagram or screenshot

Figure 59.       Full Scale | 2800 Users | VMware Horizon 8 2312 Horizon Remote Desktop Session Host (RDSH) multi-OS sessions for 2800 RDS users | Hitachi E1090 Storage System Latency Chart

Related image, diagram or screenshot

 

Figure 59 displays the Hitachi VSP E1090 average latency, which remained sub-millisecond throughout the test. During peak load the VSP reported a maximum total response time of 0.37 milliseconds, while during steady state and the logon and logoff periods performance improved to 0.2 milliseconds.

The Hitachi VSP E1090 showed a maximum of 50,000 total IOPS during peak testing for the 2800 RDS sessions. During the logon phase, IOPS was between 10,000 and 20,000. The test ran for 68 minutes (48 minutes of logon and 20 minutes of full-load testing).

Figure 60.       Full Scale | 2800 Users | VMware Horizon 8 2312 Remote Desktop Session Host (RDSH) Multi-OS Sessions for 2800 RDS Users HITACHI VSP E1090 System IOPS Chart

Related image, diagram or screenshot

Figure 61.       Full Scale | 2800 Users | VMware Horizon 8 2312 Remote Desktop Session Host (RDSH) multi-OS sessions for 2800 RDS users | HITACHI E1090 System Bandwidth Chart

Related image, diagram or screenshot

The Hitachi VSP E1090 showed a maximum total transfer rate of 50,000 KB/s during the peak load for the 2800 RDS sessions.

Figure 62.       2800 Users 2 Remote Desktop Session Host (RDSH) Tests End User Experience VSI Charts Comparison

A screenshot of a computerDescription automatically generated

Figure 63.       Hitachi VSP E1090 Capacity Usage for 2800 RDSH Session Users Configured with the Capacity Savings Function Enabled

Related image, diagram or screenshot

During RDSH testing, the VSP E1090 capacity saving function provided a 4.86:1 Data Reduction ratio.

Full Scale Recommended Maximum Workload Testing for Persistent Windows 11 Single-session Full Clone OS Virtual Machines with 2000 Users

This section describes the key performance metrics that were captured on the Cisco UCS and Hitachi storage during the persistent desktop full-scale testing with 2000 persistent, single-session Windows 11 OS full clone machines using 8 blades in a single pool.

The workload for the test is 2000 persistent full clone VDI users. To achieve the target, sessions were launched against all workload clusters concurrently. As per the Cisco Test Protocol for VDI solutions, all sessions were launched within 48 minutes (using the official Knowledge Worker Workload in VSI Benchmark Mode) and all launched sessions became active within two minutes after the last session logged in.

For eight Cisco UCS X210c M7 blade servers with a Knowledge Worker profile, a VSImax of 2000 user sessions was achieved. 2000 full clone VDIs with 2 vCPU and 4 GB memory were deployed as Horizon full clones across the cluster. A Login VSI score of 621 was obtained for the test. The figure below depicts the graphical output of the test.

The configured system efficiently and effectively delivered the following results:

Figure 64.       Full Scale | 2000 Users | VMware Horizon 8 2312 persistent Windows 11 single-session Full Clone OS virtual machines with 2000 Users | VSI Score

Related image, diagram or screenshot

Figure 65.       Full Scale | 2000 Users | VMware Horizon 8 2312 persistent Windows 11 Single-session Full Clone OS Virtual Machines with 2000 Users | Host CPU Utilization (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 66.       Full Scale | 2000 Users | VMware Horizon 8 2312 persistent Windows 11 Single-session Full Clone OS Virtual Machines with 2000 Users | Host Memory Utilization (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 67.       Full Scale | 2000 Users | VMware Horizon 8 2312 persistent Windows 11 Single-session Full Clone OS Virtual Machines with 2000 Users | Host Network Utilization (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 68.       Full Scale | 2000 Users | VMware Horizon 8 2312 persistent Windows 11 Single-session Full Clone OS Virtual Machines with 2000 Users | Disk Adapter Mbytes Read/Written (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 69 displays the Hitachi VSP E1090 average latency, which remained sub-millisecond throughout the test. During the peak load period the VSP reported a maximum total response time of 0.22 milliseconds, while during steady state and the logoff period performance improved to 0.20 milliseconds.

Figure 69.       Full Scale | 2000 Users | VMware Horizon 8 2312 Persistent Windows 11 Single-session Full Clone OS Virtual Machines with 2000 Users | HITACHI E1090 System Latency Chart

Related image, diagram or screenshot

 

The Hitachi VSP E1090 showed a maximum of 11,300 total IOPS during the peak load for the 2000 full clone desktop sessions. During the logon and logoff phases, IOPS was between 2,000 and 10,000. The test ran for 68 minutes (48 minutes of logon and 20 minutes of full-load testing).

Figure 70.       Full Scale | 2000 Users | VMware Horizon 8 2312 Persistent Windows 11 Single-session Full Clone OS Virtual Machines with 2000 Users | HITACHI E1090 System IOPS Chart

Related image, diagram or screenshot

Figure 71.       Full Scale | 2000 Users | VMware Horizon 8 2312 persistent Windows 11 Single-session Full  Clone OS Virtual Machines with 2000 Users | HITACHI E1090 System Total Transfer Rate Chart

Related image, diagram or screenshot

The Hitachi VSP E1090 showed a maximum total transfer rate of 180,000 KB/s during the peak load for the 2000 full clone sessions; during the logon and logoff periods the transfer rate ranged between 21,000 KB/s and 180,000 KB/s.

Figure 72.       2000 Users VDI Persistent Full Clone 4 tests comparison Chart

A screenshot of a computerDescription automatically generated

Figure 73.       Full Scale | 2000 Users | VMware Horizon 8 2312 Persistent Windows 11 Single-session Full Clone OS Virtual Machines with 2000 Users | HITACHI E1090 with Capacity Savings Function

A screenshot of a computerDescription automatically generated

During Full Clone testing, the VSP E1090 capacity saving function provided a 48.46:1 Data Reduction ratio.

Full Scale Recommended Maximum Workload for VMware Horizon Non-Persistent Instant Clone Windows 11 Single-session Virtual Machines with 2000 Users

This section describes the key performance metrics that were captured on the Cisco UCS and Hitachi VSP E1090 during the non-persistent single-session Windows 11 OS instant clone scale testing with 2000 desktop sessions using 8 blades configured in a single host pool.

The single-session OS workload for the solution is 2000 users. To achieve the target, sessions were launched against all workload clusters concurrently. As per the Cisco Test Protocol for VDI solutions, all sessions were launched within 48 minutes (using the official Knowledge Worker Workload in VSI Benchmark Mode) and all launched sessions became active within two minutes subsequent to the last logged in session.

For eight Cisco UCS X210c M7 blade servers with a Knowledge Worker profile, a VSImax of 2000 user sessions was achieved. 2000 instant clone VDIs with 2 vCPU and 4 GB memory were deployed as Horizon instant clones across the cluster. A Login VSI score of 640 was obtained for the test. The figure below depicts the graphical output of the test.

The configured system efficiently and effectively delivered the following results:

Figure 74.       Full Scale | 2000 Users | VMware Horizon 8 2312 Horizon Non-persistent Windows 11 Single-session Instant Clone Virtual Machines with 2000 Users| VSI Score

A screenshot of a computerDescription automatically generated

Figure 75.       Full Scale | 2000 Users | VMware Horizon 8 2312 Horizon Non-persistent Windows 11 Single-session Instant Clone Virtual Machines with 2000 Users| Host CPU Utilization (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 76.       Full Scale | 2000 Users | VMware Horizon 8 2312 Non-persistent Windows 11 Single-session Instant Clone Virtual Machines with 2000 Users | Host Memory Utilization (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 77.       Full Scale | 2000 Users | VMware Horizon 8 2312 Non-persistent Windows 11 Single-session Instant Clone Virtual Machines with 2000 Users | Host Network Utilization (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 78.       Full Scale | 2000 Users | VMware Horizon 8 2312 Non-persistent Windows 11 Single-session Instant Clone Virtual Machines with 2000 Users | Host Network Utilization (for 8 hosts in cluster)

Related image, diagram or screenshot

Figure 79 displays the Hitachi VSP E1090 average latency, which remained sub-millisecond throughout the test. During the peak load period the VSP reported a maximum total response time of 0.22 milliseconds, while during steady state and the logoff period performance improved to 0.19 milliseconds.

Figure 79.       Full Scale | 2000 Users | VMware Horizon 8 2312 Non-persistent Windows 11 Single-session Instant Clone Virtual Machines with 2000 Users | Hitachi E1090 Storage System Latency Chart

Related image, diagram or screenshot

 

The Hitachi VSP E1090 showed a maximum of 60,000 total IOPS during the peak load for the 2000 instant clones. During the logon and logoff phases, IOPS was between 10,000 and 20,000. The test ran for 68 minutes (48 minutes of logon and 20 minutes of full-load testing).

Figure 80.       Full Scale | 2000 Users | VMware Horizon 8 2312 Non-persistent Windows 11 Single-session Instant Clone Virtual Machines with 2000 Users | Hitachi E1090 Storage System IOPS Chart

Related image, diagram or screenshot

The Hitachi VSP E1090 showed a maximum total transfer rate of 1,200,000 KB/s during the peak load for the 2000 instant clone sessions. During the full logon state, the transfer rate was between 200,000 KB/s and 400,000 KB/s.

 Related image, diagram or screenshot

Figure 81.       Full Scale | 2000 Users | VMware Horizon 8 2312 Non-persistent Windows 11 Single-session Instant Clone Virtual Machines with 2000 Users | HITACHI E1090 Volume Data Optimization

A screenshot of a computerDescription automatically generated

During Instant Clone testing, the VSP E1090 capacity saving function provided a 3.76:1 Data Reduction ratio.

Figure 82.       2000 Users VDI Non-Persistent Instant Clone 4 tests comparison Chart

A screenshot of a computerDescription automatically generated

Summary

Cisco Hitachi Adaptive Solution is a powerful and reliable platform that has been specifically developed for enterprise end-user computing deployments and cloud data centers. It utilizes a range of innovative technologies, including Cisco UCS Blade and Rack Servers, Cisco Fabric Interconnects, Cisco Nexus 9000 switches, Cisco MDS 9100 Fibre Channel switches, and Hitachi VSP E1090 Storage System, to provide customers with a comprehensive solution that is designed and validated using best practices for compute, network, and storage.

With the introduction of Cisco UCS X210c M7 Series modular platform and Cisco Intersight, Cisco Hitachi Adaptive Solution now offers even more benefits to its users. These technologies enhance the ability to provide complete visibility and orchestration across all elements of the Cisco Hitachi Adaptive Solution datacenter, enabling users to modernize their infrastructure and operations. This means that you can achieve higher levels of efficiency, scalability, and flexibility while also reducing deployment time, project risk, and IT costs.

The Cisco Hitachi Adaptive Solution has been validated using industry-standard benchmarks to ensure that it meets the highest standards of performance, management, scalability, and resilience. This makes it the ideal choice for customers who are looking to deploy enterprise-class VDI and other IT initiatives. With its powerful combination of hardware and software, Cisco Hitachi Adaptive Solution is capable of meeting the demands of the most complex and demanding IT environments, ensuring that users can focus on their core business objectives without having to worry about the underlying infrastructure.

This document demonstrates the deployment of Horizon with vSphere on Cisco UCS X210c M7 along with Hitachi VSP E1090 storage, validated with Login VSI using the Knowledge Worker workload without compromising end-user experience. Here are the key takeaways:

     Full Clone: A Login VSI score of 621 was obtained for 2000 Windows 11 VDIs. The storage delivered sub-millisecond (0.22 ms) response times and up to 11,300 IOPS throughout the test. With deduplication and compression enabled, a total efficiency of 176.72:1 and a data reduction ratio of 48.46:1 were observed.

     Instant Clone: A Login VSI score of 640 was obtained for 2000 Windows 11 VDIs. The storage delivered sub-millisecond (0.22 ms) response times, with a peak of 60000 IOPS observed during the test.

     RDSH Sessions: A Login VSI score of 507 was obtained for 2800 RDSH sessions. The storage delivered sub-millisecond (0.37 ms) response times, with approximately 50000 IOPS observed throughout the test.

Per Login VSI standards, baseline scores in the 0-799 range are rated Very Good. The Login VSI scores achieved fall within the Very Good rating for all three test cases, without compromising the end-user experience.

Get More Business Value with Services

Whether you are planning your next-generation environment, need specialized know-how for a major deployment, or want to get the most from your current storage, Cisco Advanced Services, Hitachi Vantara, and our certified partners can help. We collaborate with you to enhance your IT capabilities through a full portfolio of services for your IT lifecycle with:

     Strategy services to align IT with your business goals.

     Design services to architect your best storage environment.

     Deploy and transition services to implement validated architectures and prepare your storage environment.

     Operations services to deliver continuous operations while driving operational excellence and efficiency.

Additionally, Cisco Advanced Services and Hitachi Storage Support provide in-depth knowledge transfer and education services that give you access to our global technical resources and intellectual property.

About the Author

Ramesh Guduru, Technical Marketing Engineer, Desktop Virtualization and Graphics Solutions, Cisco Systems, Inc.

Ramesh Guduru is a member of Cisco's Computing Systems Product Group, focusing on design, testing, and solution validation, technical content creation, performance testing and benchmarking, and end-user computing. He has years of experience with VMware products, Microsoft server and desktop virtualization, and Virtual Desktop Infrastructure (VDI) in converged and hyperconverged environments.

Ramesh is a subject matter expert on Desktop and Server virtualization, Cisco HyperFlex, Cisco Unified Computing System, Cisco Nexus Switching, and NVIDIA/AMD Graphics.

Acknowledgements

We would like to acknowledge the following people for their support, contribution, and expertise in the design, validation, and creation of this Cisco Validated Design:

     Ramesh Issac, Cisco Systems, Inc.

     Arvin Jami, Hitachi Vantara, LLC

     Iman Shaik, Hitachi Vantara, LLC

     Gilbert Pena, Jr, Hitachi Vantara, LLC

Appendix

This appendix contains the following:

     Appendix A – Switch Configurations

     Appendix B – Cisco UCS Best Practices for VDI

     Appendix C – References used in this guide

Appendix A – Switch Configurations

Cisco Nexus 93180YC-A Configuration

 

version 9.3(3) Bios:version 05.39

switchname K23-N9K-A

policy-map type network-qos jumbo

  class type network-qos class-default

    mtu 9216

vdc K23-N9K-A id 1

  limit-resource vlan minimum 16 maximum 4094

  limit-resource vrf minimum 2 maximum 4096

  limit-resource port-channel minimum 0 maximum 511

  limit-resource u4route-mem minimum 248 maximum 248

  limit-resource u6route-mem minimum 96 maximum 96

  limit-resource m4route-mem minimum 58 maximum 58

  limit-resource m6route-mem minimum 8 maximum 8

feature telnet

feature nxapi

feature bash-shell

cfs eth distribute

feature interface-vlan

feature hsrp

feature lacp

feature dhcp

feature vpc

feature telemetry

no password strength-check

username admin password 5 $5$0BAB7aa4$v07pyr7xw1f5WpD2wZc3qmG3Flb04Wa62aNgxg82hUA role network-admin

ip domain-lookup

system default switchport

ip access-list acl1

  10 permit ip 10.10.71.0/24 any

ip access-list acl_oob

  10 permit ip 10.10.71.0/24 any

system qos

  service-policy type network-qos jumbo

copp profile lenient

snmp-server user admin network-admin auth md5 0x83fa863523d7d94fe06388d7669f62f5 priv 0x83fa863523d7d94fe06388d7669f62f5 localizedkey

snmp-server host 173.37.52.102 traps version 2c public udp-port 1163

snmp-server host 192.168.24.30 traps version 2c public udp-port 1163

rmon event 1 description FATAL(1) owner PMON@FATAL

rmon event 2 description CRITICAL(2) owner PMON@CRITICAL

rmon event 3 description ERROR(3) owner PMON@ERROR

rmon event 4 description WARNING(4) owner PMON@WARNING

rmon event 5 description INFORMATION(5) owner PMON@INFO

ntp server 10.10.50.252 use-vrf default

ntp peer 10.10.50.253 use-vrf default

ntp server 171.68.38.65 use-vrf default

ntp logging

ntp master 8

vlan 1,50-56,70-76

vlan 50

  name Inband-Mgmt-C1

vlan 51

  name Infra-Mgmt-C1

vlan 52

  name StorageIP-C1

vlan 53

  name vMotion-C1

vlan 54

  name VM-Data-C1

vlan 55

  name Launcher-C1

vlan 56

  name Launcher-Mgmt-C1

vlan 70

  name InBand-Mgmt-SP

vlan 71

  name Infra-Mgmt-SP

vlan 72

  name VM-Network-SP

vlan 73

  name vMotion-SP

vlan 74

  name Storage_A-SP

vlan 75

  name Storage_B-SP

vlan 76

  name Launcher-SP

service dhcp

ip dhcp relay

ip dhcp relay information option

ipv6 dhcp relay

vrf context management

  ip route 0.0.0.0/0 173.37.52.1

hardware access-list tcam region ing-racl 1536

hardware access-list tcam region nat 256

vpc domain 50

  role priority 10

  peer-keepalive destination 173.37.52.104 source 173.37.52.103

  delay restore 150

  auto-recovery

interface Vlan1

  no shutdown

interface Vlan50

  no shutdown

  ip address 10.10.50.252/24

  hsrp version 2

  hsrp 50

    preempt

    priority 110

    ip 10.10.50.1

interface Vlan51

  no shutdown

  ip address 10.10.51.252/24

  hsrp version 2

  hsrp 51

    preempt

    priority 110

    ip 10.10.51.1

interface Vlan52

  no shutdown

  ip address 10.10.52.2/24

  hsrp version 2

  hsrp 52

    preempt

    priority 110

    ip 10.10.52.1

interface Vlan53

  no shutdown

  ip address 10.10.53.2/24

  hsrp version 2

  hsrp 53

    preempt

    priority 110

    ip 10.10.53.1

interface Vlan54

  no shutdown

  ip address 10.54.0.2/19

  hsrp version 2

  hsrp 54

    preempt

    priority 110

    ip 10.54.0.1

  ip dhcp relay address 10.10.71.11

  ip dhcp relay address 10.10.71.12

interface Vlan55

  no shutdown

  ip address 10.10.55.2/23

  hsrp version 2

  hsrp 55

    preempt

    priority 110

    ip 10.10.55.1

  ip dhcp relay address 10.10.51.11

  ip dhcp relay address 10.10.51.12

interface Vlan56

  no shutdown

  ip address 10.10.56.2/24

  hsrp version 2

  hsrp 56

    preempt

    ip 10.10.56.1

  ip dhcp relay address 10.10.51.11

  ip dhcp relay address 10.10.51.12

interface Vlan70

  no shutdown

  ip address 10.10.70.2/24

  hsrp version 2

  hsrp 70

    preempt

    priority 110

    ip 10.10.70.1

interface Vlan71

  no shutdown

  ip address 10.10.71.2/24

  hsrp version 2

  hsrp 71

    preempt

    priority 110

    ip 10.10.71.1

interface Vlan72

  no shutdown

  ip address 10.72.0.2/19

  hsrp version 2

  hsrp 72

    preempt

    priority 110

    ip 10.72.0.1

  ip dhcp relay address 10.10.71.11

  ip dhcp relay address 10.10.71.12

interface Vlan73

  no shutdown

  ip address 10.10.73.2/24

  hsrp version 2

  hsrp 73

    preempt

    priority 110

    ip 10.10.73.1

interface Vlan74

  no shutdown

  ip address 10.10.74.2/24

  hsrp version 2

  hsrp 74

    preempt

    priority 110

    ip 10.10.74.1

interface Vlan75

  no shutdown

  ip address 10.10.75.2/24

  hsrp version 2

  hsrp 75

    preempt

    priority 110

    ip 10.10.75.1

interface Vlan76

  no shutdown

  ip address 10.10.76.2/23

  hsrp version 2

  hsrp 76

    preempt

    priority 110

    ip 10.10.76.1

  ip dhcp relay address 10.10.71.11

  ip dhcp relay address 10.10.71.12

interface port-channel10

  description VPC-PeerLink

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type network

  vpc peer-link

interface port-channel11

  description FI-Uplink-K22-B

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  vpc 11

interface port-channel12

  description FI-Uplink-K22-B

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  vpc 12

interface port-channel49

  description FI-Uplink-K23

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  vpc 49

interface port-channel50

  description FI-Uplink-K23

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  vpc 50

interface Ethernet1/1

  description VPC to K23-N9K-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  channel-group 10 mode active

interface Ethernet1/2

  description VPC to K23-N9K-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  channel-group 10 mode active

interface Ethernet1/3

  description VPC to K23-N9K-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  channel-group 10 mode active

interface Ethernet1/4

  description VPC to K23-N9K-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  channel-group 10 mode active

interface Ethernet1/5

interface Ethernet1/6

interface Ethernet1/7

interface Ethernet1/8

interface Ethernet1/9

interface Ethernet1/10

interface Ethernet1/11

interface Ethernet1/12

interface Ethernet1/13

interface Ethernet1/14

interface Ethernet1/15

interface Ethernet1/16

interface Ethernet1/17

interface Ethernet1/18

interface Ethernet1/19

interface Ethernet1/20

interface Ethernet1/21

interface Ethernet1/22

interface Ethernet1/23

interface Ethernet1/24

interface Ethernet1/25

interface Ethernet1/26

interface Ethernet1/27

interface Ethernet1/28

interface Ethernet1/29

interface Ethernet1/30

interface Ethernet1/31

interface Ethernet1/32

interface Ethernet1/33

  switchport access vlan 71

  spanning-tree port type edge

interface Ethernet1/34

  switchport access vlan 71

  spanning-tree port type edge

interface Ethernet1/35

interface Ethernet1/36

interface Ethernet1/37

interface Ethernet1/38

interface Ethernet1/39

interface Ethernet1/40

interface Ethernet1/41

interface Ethernet1/42

interface Ethernet1/43

interface Ethernet1/44

interface Ethernet1/45

  description VLAN 30 access JH

  switchport access vlan 30

  switchport trunk allowed vlan 1,30-36,60-68,132

  speed 1000

interface Ethernet1/46

interface Ethernet1/47

  switchport access vlan 50

  spanning-tree port type edge

interface Ethernet1/48

interface Ethernet1/49

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  mtu 9216

  channel-group 49 mode active

interface Ethernet1/50

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  mtu 9216

  channel-group 50 mode active

interface Ethernet1/51

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  channel-group 11 mode active

interface Ethernet1/52

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  channel-group 12 mode active

interface Ethernet1/53

  switchport mode trunk

  switchport trunk allowed vlan 1,30-36,50-56,60-68,70-76,132

interface Ethernet1/54

interface mgmt0

  vrf member management

  ip address 173.37.52.103/23

line console

line vty

boot nxos bootflash:/nxos.9.3.3.bin

no system default switchport shutdown

telemetry

  certificate /bootflash/home/admin/telemetry-cert.pem localhost

  destination-profile

    use-nodeid timba-640142e96f72612d3459249f

  destination-group timba-640142e96f72612d3459249f-0

    ip address 10.10.71.20 port 443 protocol HTTP encoding JSON

  sensor-group timba-640142e96f72612d3459249f-0

    data-source NX-API

    path "show system resources all-modules" depth 0

  sensor-group timba-640142e96f72612d3459249f-1

    data-source NX-API

    path "show module" depth 0

  sensor-group timba-640142e96f72612d3459249f-2

    data-source NX-API

    path "show environment power" depth 0

  sensor-group timba-640142e96f72612d3459249f-3

    data-source NX-API

    path "show interface fc regex *" depth 0

  sensor-group timba-640142e96f72612d3459249f-4

    data-source DME

    path sys/ch depth 1 query-condition query-target=subtree&target-subtree-class=eqptSensor

  sensor-group timba-640142e96f72612d3459249f-5

    data-source DME

    path sys/ch query-condition query-target=subtree&target-subtree-class=eqptSupC

  sensor-group timba-640142e96f72612d3459249f-6

    data-source DME

    path sys/ch query-condition query-target=subtree&target-subtree-class=eqptFt

  sensor-group timba-640142e96f72612d3459249f-7

    data-source DME

    path sys/intf depth 0 query-condition query-target=subtree&target-subtree-class=ethpmPhysIf filter-condition updated(ethpmPhysIf.operSt)
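
The following NX-OS show commands are a minimal verification sketch (not part of the validated configuration above) that can be run on each Cisco Nexus switch to confirm that the vPC peer link, uplink port channels, HSRP gateways, and VLANs are in the expected state:

show vpc brief

show port-channel summary

show hsrp brief

show vlan brief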

Cisco Nexus 93180YC-B Configuration

version 9.3(3) Bios:version 05.39

switchname K23-N9K-B

policy-map type network-qos jumbo

  class type network-qos class-default

    mtu 9216

vdc K23-N9K-B id 1

  limit-resource vlan minimum 16 maximum 4094

  limit-resource vrf minimum 2 maximum 4096

  limit-resource port-channel minimum 0 maximum 511

  limit-resource u4route-mem minimum 248 maximum 248

  limit-resource u6route-mem minimum 96 maximum 96

  limit-resource m4route-mem minimum 58 maximum 58

  limit-resource m6route-mem minimum 8 maximum 8

feature telnet

feature nxapi

feature bash-shell

cfs eth distribute

feature interface-vlan

feature hsrp

feature lacp

feature dhcp

feature vpc

feature telemetry

no password strength-check

username admin password 5 $5$5TxyL6Rl$7U4nS.UfzkPgXl5mVqiuHoPLHyAZgnNAiKyz7aEVK05  role network-admin

ip domain-lookup

system default switchport

system qos

  service-policy type network-qos jumbo

copp profile lenient

snmp-server user admin network-admin auth md5 0x57cdc0fb04a0dd922046cb694508c9b7 priv 0x57cdc0fb04a0dd922046cb694508c9b7 localizedkey

rmon event 1 description FATAL(1) owner PMON@FATAL

rmon event 2 description CRITICAL(2) owner PMON@CRITICAL

rmon event 3 description ERROR(3) owner PMON@ERROR

rmon event 4 description WARNING(4) owner PMON@WARNING

rmon event 5 description INFORMATION(5) owner PMON@INFO

ntp server 171.68.38.65 use-vrf default

vlan 1,50-56,70-76,132

vlan 50

  name Inband-Mgmt-C1

vlan 51

  name Infra-Mgmt-C1

vlan 52

  name StorageIP-C1

vlan 53

  name vMotion-C1

vlan 54

  name VM-Data-C1

vlan 55

  name Launcher-C1

vlan 56

  name Launcher-Mgmt-C1

vlan 70

  name InBand-Mgmt-SP

vlan 71

  name Infra-Mgmt-SP

vlan 72

  name VM-Network-SP

vlan 73

  name vMotion-SP

vlan 74

  name Storage_A-SP

vlan 75

  name Storage_B-SP

vlan 76

  name Launcher-SP

vlan 132

  name OOB-Mgmt

service dhcp

ip dhcp relay

ip dhcp relay information option

ipv6 dhcp relay

vrf context management

  ip route 0.0.0.0/0 173.37.52.1

vpc domain 50

  role priority 10

  peer-keepalive destination 173.37.52.103 source 173.37.52.104

  delay restore 150

  auto-recovery

interface Vlan1

  no shutdown

interface Vlan50

  no shutdown

  ip address 10.10.50.253/24

  hsrp version 2

  hsrp 50

    preempt

    priority 110

    ip 10.10.50.1

interface Vlan51

  no shutdown

  ip address 10.10.51.253/24

  hsrp version 2

  hsrp 51

    preempt

    priority 110

    ip 10.10.51.1

interface Vlan52

  no shutdown

  ip address 10.10.52.3/24

  hsrp version 2

  hsrp 52

    preempt

    priority 110

    ip 10.10.52.1

interface Vlan53

  no shutdown

  ip address 10.10.53.3/24

  hsrp version 2

  hsrp 53

    preempt

    priority 110

    ip 10.10.53.1

interface Vlan54

  no shutdown

  ip address 10.54.0.3/19

  hsrp version 2

  hsrp 54

    preempt

    priority 110

    ip 10.54.0.1

  ip dhcp relay address 10.10.71.11

  ip dhcp relay address 10.10.71.12

interface Vlan55

  no shutdown

  ip address 10.10.55.3/23

  hsrp version 2

  hsrp 55

    preempt

    priority 110

    ip 10.10.55.1

  ip dhcp relay address 10.10.51.11

  ip dhcp relay address 10.10.51.12

interface Vlan56

  no shutdown

  ip address 10.10.56.3/24

  hsrp version 2

  hsrp 56

    preempt

    ip 10.10.56.1

  ip dhcp relay address 10.10.51.11

  ip dhcp relay address 10.10.51.12

interface Vlan70

  no shutdown

  ip address 10.10.70.3/24

  hsrp version 2

  hsrp 70

    preempt

    priority 110

    ip 10.10.70.1

interface Vlan71

  no shutdown

  ip address 10.10.71.3/24

  hsrp version 2

  hsrp 71

    preempt

    priority 110

    ip 10.10.71.1

interface Vlan72

  no shutdown

  ip address 10.72.0.3/19

  hsrp version 2

  hsrp 72

    preempt

    priority 110

    ip 10.72.0.1

  ip dhcp relay address 10.10.71.11

  ip dhcp relay address 10.10.71.12

interface Vlan73

  no shutdown

  ip address 10.10.73.3/24

  hsrp version 2

  hsrp 73

    preempt

    priority 110

    ip 10.10.73.1

interface Vlan74

  no shutdown

  ip address 10.10.74.3/24

  hsrp version 2

  hsrp 74

    preempt

    priority 110

    ip 10.10.74.1

interface Vlan75

  no shutdown

  ip address 10.10.75.3/24

  hsrp version 2

  hsrp 75

    preempt

    priority 110

    ip 10.10.75.1

interface Vlan76

  no shutdown

  ip address 10.10.76.3/23

  hsrp version 2

  hsrp 76

    preempt

    priority 110

    ip 10.10.76.1

  ip dhcp relay address 10.10.71.11

  ip dhcp relay address 10.10.71.12

interface port-channel10

  description VPC-PeerLink

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type network

  vpc peer-link

interface port-channel11

  description FI-Uplink-K22-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  vpc 11

interface port-channel12

  description FI-Uplink-K22-B

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  vpc 12

interface port-channel49

  description FI-Uplink-K23-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  vpc 49

interface port-channel50

  description FI-Uplink-K23-B

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  vpc 50

interface Ethernet1/1

  description VPC to K23-N9K-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  channel-group 10 mode active

interface Ethernet1/2

  description VPC to K23-N9K-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  channel-group 10 mode active

interface Ethernet1/3

  description VPC to K23-N9K-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  channel-group 10 mode active

interface Ethernet1/4

  description VPC to K23-N9K-A

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  channel-group 10 mode active

interface Ethernet1/5

interface Ethernet1/6

interface Ethernet1/7

interface Ethernet1/8

interface Ethernet1/9

interface Ethernet1/10

interface Ethernet1/11

interface Ethernet1/12

interface Ethernet1/13

interface Ethernet1/14

interface Ethernet1/15

interface Ethernet1/16

interface Ethernet1/17

interface Ethernet1/18

interface Ethernet1/19

interface Ethernet1/20

interface Ethernet1/21

interface Ethernet1/22

interface Ethernet1/23

interface Ethernet1/24

interface Ethernet1/25

interface Ethernet1/26

interface Ethernet1/27

interface Ethernet1/28

interface Ethernet1/29

interface Ethernet1/30

interface Ethernet1/31

interface Ethernet1/32

interface Ethernet1/33

  switchport access vlan 71

  spanning-tree port type edge

interface Ethernet1/34

  switchport access vlan 71

  spanning-tree port type edge

interface Ethernet1/35

interface Ethernet1/36

interface Ethernet1/37

interface Ethernet1/38

interface Ethernet1/39

interface Ethernet1/40

interface Ethernet1/41

interface Ethernet1/42

interface Ethernet1/43

interface Ethernet1/44

interface Ethernet1/45

interface Ethernet1/46

  description K23-HXVDIJH

  switchport access vlan 70

  spanning-tree port type edge

interface Ethernet1/47

interface Ethernet1/48

interface Ethernet1/49

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  mtu 9216

  channel-group 49 mode active

interface Ethernet1/50

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  mtu 9216

  channel-group 50 mode active

interface Ethernet1/51

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  channel-group 11 mode active

interface Ethernet1/52

  switchport mode trunk

  switchport trunk allowed vlan 1,50-56,70-76,132

  spanning-tree port type edge trunk

  mtu 9216

  channel-group 12 mode active

interface Ethernet1/53

  switchport mode trunk

  switchport trunk allowed vlan 1,30-36,50-56,60-68,70-76,132

interface Ethernet1/54

interface mgmt0

  vrf member management

  ip address 173.37.52.104/23

line console

line vty

boot nxos bootflash:/nxos.9.3.3.bin

no system default switchport shutdown

telemetry

  certificate /bootflash/home/admin/telemetry-cert.pem localhost

  destination-profile

    use-nodeid timba-640143f86f72612d345931c3

  destination-group timba-640143f86f72612d345931c3-0

    ip address 10.10.71.20 port 443 protocol HTTP encoding JSON

  sensor-group timba-640143f86f72612d345931c3-0

    data-source NX-API

    path "show system resources all-modules" depth 0

  sensor-group timba-640143f86f72612d345931c3-1

    data-source NX-API

    path "show module" depth 0

  sensor-group timba-640143f86f72612d345931c3-2

    data-source NX-API

    path "show environment power" depth 0

  sensor-group timba-640143f86f72612d345931c3-3

    data-source NX-API

    path "show interface fc regex *" depth 0

  sensor-group timba-640143f86f72612d345931c3-4

    data-source DME

    path sys/ch depth 1 query-condition query-target=subtree&target-subtree-class=eqptSensor

  sensor-group timba-640143f86f72612d345931c3-5

    data-source DME

    path sys/ch query-condition query-target=subtree&target-subtree-class=eqptSupC

  sensor-group timba-640143f86f72612d345931c3-6

    data-source DME

    path sys/ch query-condition query-target=subtree&target-subtree-class=eqptFt

  sensor-group timba-640143f86f72612d345931c3-7

    data-source DME

    path sys/intf depth 0 query-condition query-target=subtree&target-subtree-class=ethpmPhysIf filter-condition updated(ethpmPhysIf.operSt)

Cisco MDS 9132T-A Configuration

version 8.4(2d)

power redundancy-mode redundant

feature npiv

feature fport-channel-trunk

role name default-role

  description This is a system defined role and applies to all users.

  rule 5 permit show feature environment

  rule 4 permit show feature hardware

  rule 3 permit show feature module

  rule 2 permit show feature snmp

  rule 1 permit show feature system

no password strength-check

username admin password 5 $5$Dcs72Ao/$8lHyVrotTm4skqb/84BC793tgdly/yWf9IoMx2OEg6C role network-admin

ip domain-lookup

ip name-server 10.10.61.30

ip host ADD16-MDS-A 10.29.164.238

aaa group server radius radius

snmp-server user admin network-admin auth md5 0x616758aed4f07bab2d24f3d594ebd649 priv 0x616758aed4f07bab2d24f3d594ebd649 localizedkey

snmp-server host 10.24.30.91 traps version 2c public udp-port 1163

snmp-server host 10.24.46.67 traps version 2c public udp-port 1163

snmp-server host 10.24.66.169 traps version 2c public udp-port 1163

snmp-server host 10.24.72.119 traps version 2c public udp-port 1165

rmon event 1 log trap public description FATAL(1) owner PMON@FATAL

rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL

rmon event 3 log trap public description ERROR(3) owner PMON@ERROR

rmon event 4 log trap public description WARNING(4) owner PMON@WARNING

rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO

ntp server 10.81.254.131

ntp server 10.81.254.202

vsan database

  vsan 100 name "Cisco Hitachi Adaptive Solution-VCC-CVD-Fabric-A"

device-alias database

  device-alias name E1090-CL1-A pwwn 50:06:0e:80:23:b1:04:00

  device-alias name E1090-CL2-A pwwn 50:06:0e:80:23:b1:04:10

  device-alias name VCC-WLHost01-M7 pwwn 20:00:00:25:b5:aa:17:00

  device-alias name VCC-WLHost02-M7 pwwn 20:00:00:25:b5:aa:17:01

  device-alias name VCC-WLHost03-M7 pwwn 20:00:00:25:b5:aa:17:02

  device-alias name VCC-WLHost04-M7 pwwn 20:00:00:25:b5:aa:17:03

  device-alias name VCC-WLHost05-M7 pwwn 20:00:00:25:b5:aa:17:04

  device-alias name VCC-WLHost06-M7 pwwn 20:00:00:25:b5:aa:17:05

  device-alias name VCC-WLHost07-M7 pwwn 20:00:00:25:b5:aa:17:06

  device-alias name VCC-WLHost08-M7 pwwn 20:00:00:25:b5:aa:17:07

  device-alias commit

fcdomain fcid database

  vsan 100 wwn 20:03:00:de:fb:92:8d:00 fcid 0x300000 dynamic

  vsan 100 wwn 52:4a:93:75:dd:91:0a:02 fcid 0x300020 dynamic

    !          [X70-CT0-FC2]

  vsan 100 wwn 52:4a:93:75:dd:91:0a:17 fcid 0x300040 dynamic

  vsan 100 wwn 52:4a:93:75:dd:91:0a:06 fcid 0x300041 dynamic

    !          [X70-CT0-FC8]

  vsan 100 wwn 52:4a:93:75:dd:91:0a:07 fcid 0x300042 dynamic

  vsan 100 wwn 52:4a:93:75:dd:91:0a:16 fcid 0x300043 dynamic

    !          [X70-CT1-FC8]

  vsan 100 wwn 20:00:00:25:b5:aa:17:3e fcid 0x300060 dynamic

    !          [VCC-Infra02-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:07 fcid 0x300061 dynamic

    !          [VCC-WLHost04-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:06 fcid 0x300062 dynamic

    !          [VCC-WLHost04-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:3a fcid 0x300063 dynamic

    !          [VCC-WLHost29-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:29 fcid 0x300064 dynamic

    !          [VCC-WLHost20-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:13 fcid 0x300065 dynamic

    !          [VCC-WLHost10-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:1c fcid 0x300066 dynamic

    !          [VCC-WLHost15-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:32 fcid 0x300067 dynamic

    !          [VCC-WLHost25-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:17 fcid 0x300068 dynamic

    !          [VCC-WLHost12-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:2e fcid 0x300069 dynamic

    !          [VCC-WLHost23-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:1f fcid 0x30006a dynamic

    !          [VCC-Infra01-HBA2]

    !          [VCC-WLHost17-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:00 fcid 0x30007b dynamic

    !          [VCC-WLHost01-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:04 fcid 0x30007c dynamic

    !          [VCC-WLHost03-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:03 fcid 0x30007d dynamic

    !          [VCC-WLHost02-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:0f fcid 0x30007e dynamic

    !          [VCC-WLHost08-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:14 fcid 0x30008e dynamic

    !          [VCC-WLHost11-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:11 fcid 0x30008f dynamic

    !          [VCC-WLHost09-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:10 fcid 0x300090 dynamic

    !          [VCC-WLHost09-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:05 fcid 0x300091 dynamic

    !          [VCC-WLHost03-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:0e fcid 0x300092 dynamic

    !          [VCC-WLHost08-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:0d fcid 0x300093 dynamic

    !          [VCC-WLHost07-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:0c fcid 0x300094 dynamic

    !          [VCC-WLHost07-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:1e fcid 0x300095 dynamic

    !          [VCC-Infra01-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:0b fcid 0x300096 dynamic

    !          [VCC-WLHost06-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:28 fcid 0x300097 dynamic

    !          [VCC-WLHost20-HBA0]

  vsan 100 wwn 20:00:00:25:b5:aa:17:37 fcid 0x300098 dynamic

    !          [VCC-WLHost27-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:3b fcid 0x300099 dynamic

    !          [VCC-WLHost29-HBA2]

  vsan 100 wwn 20:00:00:25:b5:aa:17:09 fcid 0x30009a dynamic

  vsan 100 wwn 20:02:00:de:fb:92:8d:00 fcid 0x3000a0 dynamic

  vsan 100 wwn 20:04:00:de:fb:92:8d:00 fcid 0x3000c0 dynamic

  vsan 100 wwn 20:01:00:de:fb:92:8d:00 fcid 0x3000e0 dynamic

  vsan 100 wwn 52:4a:93:75:dd:91:0a:00 fcid 0x300044 dynamic

    !          [X70-CT0-FC0]

!Active Zone Database Section for vsan 100

zone name VCCStack-VCC-CVD-WLHost01 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:00

    !           [VCC-WLHost01-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:01

    !           [VCC-WLHost01-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name VCCStack-VCC-CVD-WLHost02 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:02

    !           [VCC-WLHost02-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:03

    !           [VCC-WLHost02-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name VCCStack-VCC-CVD-WLHost03 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:04

    !           [VCC-WLHost03-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:05

    !           [VCC-WLHost03-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name VCCStack-VCC-CVD-WLHost04 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:06

    !           [VCC-WLHost04-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:07

    !           [VCC-WLHost04-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name VCCStack-VCC-CVD-WLHost05 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:08

    !           [VCC-WLHost05-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:09

    !           [VCC-WLHost05-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name VCCStack-VCC-CVD-WLHost06 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:0a

    !           [VCC-WLHost06-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:0b

    !           [VCC-WLHost06-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name VCCStack-VCC-CVD-WLHost07 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:0c

    !           [VCC-WLHost07-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:0d

    !           [VCC-WLHost07-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name VCCStack-VCC-CVD-WLHost08 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:0e

    !           [VCC-WLHost08-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:0f

    !           [VCC-WLHost08-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zoneset name Cisco Hitachi Adaptive Solution-VCC-CVD vsan 100

    member HITACHI-VCC-CVD-WLHost01

    member HITACHI-VCC-CVD-WLHost02

    member HITACHI-VCC-CVD-WLHost03

    member HITACHI-VCC-CVD-WLHost04

    member HITACHI-VCC-CVD-WLHost05

    member HITACHI-VCC-CVD-WLHost06

    member HITACHI-VCC-CVD-WLHost07

    member HITACHI-VCC-CVD-WLHost08

    zoneset activate name Cisco Hitachi Adaptive Solution-VCC-CVD vsan 100

do clear zone database vsan 100

!Full Zone Database Section for vsan 100

zone name HITACHI-VCC-CVD-WLHost01 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:00

    !           [VCC-WLHost01-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:01

    !           [VCC-WLHost01-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name HITACHI-VCC-CVD-WLHost02 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:02

    !           [VCC-WLHost02-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:03

    !           [VCC-WLHost02-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name HITACHI-VCC-CVD-WLHost03 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:04

    !           [VCC-WLHost03-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:05

    !           [VCC-WLHost03-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name HITACHI-VCC-CVD-WLHost04 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:06

    !           [VCC-WLHost04-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:07

    !           [VCC-WLHost04-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name HITACHI-VCC-CVD-WLHost05 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:08

    !           [VCC-WLHost05-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:09

    !           [VCC-WLHost05-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name HITACHI-VCC-CVD-WLHost06 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:0a

    !           [VCC-WLHost06-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:0b

    !           [VCC-WLHost06-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name HITACHI-VCC-CVD-WLHost07 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:0c

    !           [VCC-WLHost07-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:0d

    !           [VCC-WLHost07-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zone name HITACHI-VCC-CVD-WLHost08 vsan 100

    member pwwn 20:00:00:25:b5:aa:17:0e

    !           [VCC-WLHost08-HBA0]

    member pwwn 20:00:00:25:b5:aa:17:0f

    !           [VCC-WLHost08-HBA2]

    member pwwn 52:4a:93:71:56:84:09:00

    !           [VCC-CT0-FC0]

    member pwwn 52:4a:93:71:56:84:09:10

    !           [VCC-CT1-FC0]

zoneset name Cisco Hitachi Adaptive Solution-VCC-CVD vsan 100

    member HITACHI-VCC-CVD-WLHost01

    member HITACHI-VCC-CVD-WLHost02

    member HITACHI-VCC-CVD-WLHost03

    member HITACHI-VCC-CVD-WLHost04

    member HITACHI-VCC-CVD-WLHost05

    member HITACHI-VCC-CVD-WLHost06

    member HITACHI-VCC-CVD-WLHost07

    member HITACHI-VCC-CVD-WLHost08

interface mgmt0

  ip address 10.29.164.238 255.255.255.0

vsan database

  vsan 400 interface fc1/1

  vsan 400 interface fc1/2

  vsan 400 interface fc1/3

  vsan 400 interface fc1/4

  vsan 400 interface fc1/5

  vsan 400 interface fc1/6

  vsan 400 interface fc1/7

  vsan 400 interface fc1/8

  vsan 100 interface fc1/9

  vsan 100 interface fc1/10

  vsan 100 interface fc1/11

  vsan 100 interface fc1/12

  vsan 100 interface fc1/13

  vsan 100 interface fc1/14

  vsan 100 interface fc1/15

  vsan 100 interface fc1/16

clock timezone PST 0 0

clock summer-time PDT 2 Sun Mar 02:00 1 Sun Nov 02:00 60

switchname ADD16-MDS-A

cli alias name autozone source sys/autozone.py

line console

line vty

boot kickstart bootflash:/m9100-s6ek9-kickstart-mz.8.3.1.bin

boot system bootflash:/m9100-s6ek9-mz.8.3.1.bin

interface fc1/4

  switchport speed auto

interface fc1/1

interface fc1/2

interface fc1/3

interface fc1/5

interface fc1/6

interface fc1/7

interface fc1/8

interface fc1/9

interface fc1/10

interface fc1/11

interface fc1/12

interface fc1/13

interface fc1/14

interface fc1/15

interface fc1/16

interface fc1/4

interface fc1/1

  port-license acquire

  no shutdown

interface fc1/2

  port-license acquire

  no shutdown

interface fc1/3

  port-license acquire

  no shutdown

interface fc1/4

  port-license acquire

  no shutdown

interface fc1/5

  no port-license

interface fc1/6

  no port-license

interface fc1/7

  no port-license

interface fc1/8

  no port-license

interface fc1/9

  switchport trunk allowed vsan 100

  switchport trunk mode off

  port-license acquire

 no shutdown

interface fc1/10

  switchport trunk allowed vsan 100

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/11

  switchport trunk allowed vsan 100

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/12

  switchport trunk allowed vsan 100

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/13

  switchport trunk allowed vsan 100

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/14

  switchport trunk allowed vsan 100

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/15

  switchport trunk allowed vsan 100

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/16

  switchport trunk allowed vsan 100

  switchport trunk mode off

  port-license acquire

  no shutdown

ip default-gateway 10.29.164.1
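
The following show commands are a minimal verification sketch (not part of the validated configuration above) that can be run on each Cisco MDS switch to confirm fabric logins, device aliases, and the active zoneset; substitute VSAN 101 on the Fabric B switch:

show flogi database

show fcns database vsan 100

show device-alias database

show zoneset active vsan 100

show zone status vsan 100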

Cisco MDS 9132T-B Configuration

version 8.4(2d)

power redundancy-mode redundant

feature npiv

feature fport-channel-trunk

role name default-role

  description This is a system defined role and applies to all users.

  rule 5 permit show feature environment

  rule 4 permit show feature hardware

  rule 3 permit show feature module

  rule 2 permit show feature snmp

  rule 1 permit show feature system

no password strength-check

username admin password 5 $5$1qs42bIH$hp2kMO3FA/4Zzg6EekVHWpA8lA7Mc/kBsFZVU8q1uU7  role network-admin

ip domain-lookup

ip host ADD16-MDS-B  10.29.164.239

aaa group server radius radius

snmp-server user admin network-admin auth md5 0x6fa97f514b0cdf3638e31dfd0bd19c71 priv 0x6fa97f514b0cdf3638e31dfd0bd19c71 localizedkey

snmp-server host 10.155.160.97 traps version 2c public udp-port 1164

snmp-server host 10.24.66.169 traps version 2c public udp-port 1164

snmp-server host 10.24.72.119 traps version 2c public udp-port 1166

snmp-server host 10.29.164.250 traps version 2c public udp-port 1163

rmon event 1 log trap public description FATAL(1) owner PMON@FATAL

rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL

rmon event 3 log trap public description ERROR(3) owner PMON@ERROR

rmon event 4 log trap public description WARNING(4) owner PMON@WARNING

rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO

ntp server 10.81.254.131

ntp server 10.81.254.202

vsan database

  vsan 101 name "Cisco Hitachi Adaptive Solution-VCC-CVD-Fabric-B"

device-alias database

  device-alias name E1090-CL3-A pwwn 50:06:0e:80:23:b1:04:20

  device-alias name E1090-CL4-A pwwn 50:06:0e:80:23:b1:04:30

  device-alias name VCC-Infra01-HBA1 pwwn 20:00:00:25:b5:bb:17:1e

  device-alias name VCC-Infra01-HBA3 pwwn 20:00:00:25:b5:bb:17:1f

  device-alias name VCC-Infra02-HBA1 pwwn 20:00:00:25:b5:bb:17:3e

  device-alias name VCC-Infra02-HBA3 pwwn 20:00:00:25:b5:bb:17:3f

  device-alias name VCC-WLHost01-HBA1 pwwn 20:00:00:25:b5:bb:17:00

  device-alias name VCC-WLHost02-HBA1 pwwn 20:00:00:25:b5:bb:17:02

  device-alias name VCC-WLHost03-HBA1 pwwn 20:00:00:25:b5:bb:17:04

  device-alias name VCC-WLHost04-HBA1 pwwn 20:00:00:25:b5:bb:17:06

  device-alias name VCC-WLHost05-HBA1 pwwn 20:00:00:25:b5:bb:17:08

  device-alias name VCC-WLHost06-HBA1 pwwn 20:00:00:25:b5:bb:17:0a

  device-alias name VCC-WLHost07-HBA1 pwwn 20:00:00:25:b5:bb:17:0c

  device-alias name VCC-WLHost08-HBA1 pwwn 20:00:00:25:b5:bb:17:0e

  device-alias commit

fcdomain fcid database

  vsan 101 wwn 20:03:00:de:fb:90:a4:40 fcid 0xc40000 dynamic

  vsan 101 wwn 52:4a:93:75:dd:91:0a:17 fcid 0xc40020 dynamic

    !          [X70-CT1-FC9]

  vsan 101 wwn 52:4a:93:75:dd:91:0a:07 fcid 0xc40040 dynamic

    !          [X70-CT0-FC9]

  vsan 101 wwn 52:4a:93:75:dd:91:0a:16 fcid 0xc40021 dynamic

  vsan 101 wwn 52:4a:93:75:dd:91:0a:13 fcid 0xc40041 dynamic

    !          [X70-CT1-FC3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:3e fcid 0xc40060 dynamic

    !          [VCC-Infra02-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:07 fcid 0xc40061 dynamic

    !          [VCC-WLHost04-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:3c fcid 0xc40062 dynamic

    !          [VCC-WLHost30-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:11 fcid 0xc40063 dynamic

    !          [VCC-WLHost09-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:01 fcid 0xc40064 dynamic

    !          [VCC-WLHost01-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:00 fcid 0xc40065 dynamic

    !          [VCC-WLHost01-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:13 fcid 0xc40066 dynamic

    !          [VCC-WLHost10-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:04 fcid 0xc40067 dynamic

    !          [VCC-WLHost03-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:17 fcid 0xc40068 dynamic

    !          [VCC-WLHost12-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:16 fcid 0xc40069 dynamic

    !          [VCC-WLHost12-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:30 fcid 0xc4006a dynamic

    !          [VCC-WLHost24-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:21 fcid 0xc4006b dynamic

    !          [VCC-WLHost16-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:1f fcid 0xc4006c dynamic

    !          [VCC-Infra01-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:1a fcid 0xc4006d dynamic

    !          [VCC-WLHost14-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:3f fcid 0xc4006e dynamic

    !          [VCC-Infra02-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:0a fcid 0xc4006f dynamic

    !          [VCC-WLHost06-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:38 fcid 0xc40070 dynamic

    !          [VCC-WLHost28-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:19 fcid 0xc40071 dynamic

    !          [VCC-WLHost13-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:22 fcid 0xc40072 dynamic

    !          [VCC-WLHost17-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:2f fcid 0xc40073 dynamic

    !          [VCC-WLHost23-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:1b fcid 0xc40074 dynamic

    !          [VCC-WLHost14-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:3b fcid 0xc40075 dynamic

    !          [VCC-WLHost29-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:2a fcid 0xc40076 dynamic

    !          [VCC-WLHost21-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:29 fcid 0xc40077 dynamic

    !          [VCC-WLHost20-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:1c fcid 0xc40078 dynamic

    !          [VCC-WLHost15-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:0b fcid 0xc40079 dynamic

    !          [VCC-WLHost06-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:0d fcid 0xc4007a dynamic

    !          [VCC-WLHost07-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:37 fcid 0xc4007b dynamic

    !          [VCC-WLHost27-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:31 fcid 0xc4007c dynamic

    !          [VCC-WLHost24-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:08 fcid 0xc4007d dynamic

    !          [VCC-WLHost05-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:10 fcid 0xc4007e dynamic

    !          [VCC-WLHost09-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:34 fcid 0xc4007f dynamic

    !          [VCC-WLHost26-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:25 fcid 0xc40080 dynamic

    !          [VCC-WLHost18-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:3d fcid 0xc40081 dynamic

    !          [VCC-WLHost30-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:15 fcid 0xc40082 dynamic

    !          [VCC-WLHost11-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:23 fcid 0xc40083 dynamic

    !          [VCC-WLHost17-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:3a fcid 0xc40084 dynamic

    !          [VCC-WLHost29-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:28 fcid 0xc40085 dynamic

    !          [VCC-WLHost20-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:32 fcid 0xc40086 dynamic

    !          [VCC-WLHost25-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:0f fcid 0xc40087 dynamic

    !          [VCC-WLHost08-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:0c fcid 0xc40088 dynamic

    !          [VCC-WLHost07-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:2e fcid 0xc40089 dynamic

    !          [VCC-WLHost23-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:03 fcid 0xc4008a dynamic

    !          [VCC-WLHost02-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:02 fcid 0xc4008b dynamic

    !          [VCC-WLHost02-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:2b fcid 0xc4008c dynamic

    !          [VCC-WLHost21-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:35 fcid 0xc4008d dynamic

    !          [VCC-WLHost26-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:2c fcid 0xc4008e dynamic

    !          [VCC-WLHost22-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:27 fcid 0xc4008f dynamic

    !          [VCC-WLHost19-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:18 fcid 0xc40090 dynamic

    !          [VCC-WLHost13-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:14 fcid 0xc40091 dynamic

    !          [VCC-WLHost11-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:0e fcid 0xc40092 dynamic

    !          [VCC-WLHost08-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:1e fcid 0xc40093 dynamic

    !          [VCC-Infra01-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:06 fcid 0xc40094 dynamic

    !          [VCC-WLHost04-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:09 fcid 0xc40095 dynamic

    !          [VCC-WLHost05-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:26 fcid 0xc40096 dynamic

    !          [VCC-WLHost19-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:24 fcid 0xc40097 dynamic

    !          [VCC-WLHost18-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:20 fcid 0xc40098 dynamic

    !          [VCC-WLHost16-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:1d fcid 0xc40099 dynamic

    !          [VCC-WLHost15-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:33 fcid 0xc4009a dynamic

    !          [VCC-WLHost25-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:36 fcid 0xc4009b dynamic

    !          [VCC-WLHost27-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:39 fcid 0xc4009c dynamic

    !          [VCC-WLHost28-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:2d fcid 0xc4009d dynamic

    !          [VCC-WLHost22-HBA3]

  vsan 101 wwn 20:00:00:25:b5:bb:17:12 fcid 0xc4009e dynamic

    !          [VCC-WLHost10-HBA1]

  vsan 101 wwn 20:00:00:25:b5:bb:17:05 fcid 0xc4009f dynamic

    !          [VCC-WLHost03-HBA3]

  !Active Zone Database Section for vsan 101

zone name HITACHI-VCC-CVD-WLHost01 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:00

    !           [VCC-WLHost01-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:01

    !           [VCC-WLHost01-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost02 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:02

    !           [VCC-WLHost02-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:03

    !           [VCC-WLHost02-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost03 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:04

    !           [VCC-WLHost03-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:05

    !           [VCC-WLHost03-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost04 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:06

    !           [VCC-WLHost04-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:07

    !           [VCC-WLHost04-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost05 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:08

    !           [VCC-WLHost05-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:09

    !           [VCC-WLHost05-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost06 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:0a

    !           [VCC-WLHost06-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:0b

    !           [VCC-WLHost06-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost07 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:0c

    !           [VCC-WLHost07-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:0d

    !           [VCC-WLHost07-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost08 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:0e

    !           [VCC-WLHost08-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:0f

    !           [VCC-WLHost08-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zoneset name Cisco Hitachi Adaptive Solution-VCC-CVD vsan 101

    member HITACHI-VCC-CVD-WLHost01

    member HITACHI-VCC-CVD-WLHost02

    member HITACHI-VCC-CVD-WLHost03

    member HITACHI-VCC-CVD-WLHost04

    member HITACHI-VCC-CVD-WLHost05

    member HITACHI-VCC-CVD-WLHost06

    member HITACHI-VCC-CVD-WLHost07

    member HITACHI-VCC-CVD-WLHost08

    zoneset activate name Cisco Hitachi Adaptive Solution-VCC-CVD vsan 101

do clear zone database vsan 101

!Full Zone Database Section for vsan 101

zone name HITACHI-VCC-CVD-WLHost01 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:00

    !           [VCC-WLHost01-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:01

    !           [VCC-WLHost01-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost02 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:02

    !           [VCC-WLHost02-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:03

    !           [VCC-WLHost02-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost03 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:04

    !           [VCC-WLHost03-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:05

    !           [VCC-WLHost03-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost04 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:06

    !           [VCC-WLHost04-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:07

    !           [VCC-WLHost04-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost05 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:08

    !           [VCC-WLHost05-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:09

    !           [VCC-WLHost05-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost06 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:0a

    !           [VCC-WLHost06-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:0b

    !           [VCC-WLHost06-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost07 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:0c

    !           [VCC-WLHost07-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:0d

    !           [VCC-WLHost07-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zone name HITACHI-VCC-CVD-WLHost08 vsan 101

    member pwwn 20:00:00:25:b5:bb:17:0e

    !           [VCC-WLHost08-HBA1]

    member pwwn 20:00:00:25:b5:bb:17:0f

    !           [VCC-WLHost08-HBA3]

    member pwwn 52:4a:93:71:56:84:09:02

    !           [VCC-CT0-FC2]

    member pwwn 52:4a:93:71:56:84:09:12

    !           [VCC-CT1-FC2]

zoneset name Cisco Hitachi Adaptive Solution-VCC-CVD vsan 101

    member HITACHI-VCC-CVD-WLHost01

    member HITACHI-VCC-CVD-WLHost02

    member HITACHI-VCC-CVD-WLHost03

    member HITACHI-VCC-CVD-WLHost04

    member HITACHI-VCC-CVD-WLHost05

    member HITACHI-VCC-CVD-WLHost06

    member HITACHI-VCC-CVD-WLHost07

    member HITACHI-VCC-CVD-WLHost08

interface mgmt0

  ip address 10.29.164.239 255.255.255.0

vsan database

  vsan 101 interface fc1/9

  vsan 101 interface fc1/10

  vsan 101 interface fc1/11

  vsan 101 interface fc1/12

  vsan 101 interface fc1/13

  vsan 101 interface fc1/14

  vsan 101 interface fc1/15

  vsan 101 interface fc1/16

clock timezone PST 0 0

clock summer-time PDT 2 Sun Mar 02:00 1 Sun Nov 02:00 60

switchname ADD16-MDS-B

cli alias name autozone source sys/autozone.py

line console

line vty

boot kickstart bootflash:/m9100-s6ek9-kickstart-mz.8.3.1.bin

boot system bootflash:/m9100-s6ek9-mz.8.3.1.bin

interface fc1/1

interface fc1/2

interface fc1/3

interface fc1/4

interface fc1/5

interface fc1/6

interface fc1/7

interface fc1/8

interface fc1/9

interface fc1/10

interface fc1/11

interface fc1/12

interface fc1/13

interface fc1/14

interface fc1/15

interface fc1/16

interface fc1/1

  no port-license

interface fc1/2

  no port-license

interface fc1/3

  no port-license

interface fc1/4

 no port-license

interface fc1/5

  no port-license

interface fc1/6

  no port-license

interface fc1/7

  no port-license

interface fc1/8

  no port-license

interface fc1/9

  switchport trunk allowed vsan 101

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/10

  switchport trunk allowed vsan 101

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/11

  switchport trunk allowed vsan 101

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/12

  switchport trunk allowed vsan 101

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/13

  switchport trunk allowed vsan 101

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/14

  switchport trunk allowed vsan 101

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/15

  switchport trunk allowed vsan 101

  switchport trunk mode off

  port-license acquire

  no shutdown

interface fc1/16

  switchport trunk allowed vsan 101

  switchport trunk mode off

  port-license acquire

  no shutdown

ip default-gateway 10.29.164.1

Appendix B - Cisco UCS Best Practices for VDI

This appendix summarizes the settings we use to ensure VDI workloads run efficiently in Cisco UCS environments. These settings work for all VDI brokers and apply to all Windows VDI workloads.

Power Policy:

     Chassis/Server Power Policy for best performance

          Power Profiling - Disabled

          Power Priority - High

          Power Restore - Always Off

          Power Redundancy - N+1

          Power Save Mode - Disabled

          Dynamic Power Rebalancing - Enabled

          Extended Power Capacity - Enabled

          Power Allocation (Watts) - 0

BIOS settings for AMD Processors (all other settings are platform-default):

A screen shot of a computer programDescription automatically generated

BIOS settings for Intel Processors:

     High performance (CVD settings; all BIOS settings are platform-default except for the following)

          Processor C1E – disabled

          Processor C6 Report – disabled

          Processor EPP Enable – Enabled

          EPP Profile – Performance

          Intel VT for Directed IO – enabled

          Memory RAS Configuration – maximum-performance

          Core Performance Boost – Auto

          Enhanced CPU Performance – Auto

          LLC Dead Line – disabled

          UPI Power Management – enabled

          Altitude – auto

          Boot Performance Mode – Max Performance

          Core Multi Processing – all

          CPU Performance – enterprise

          Power Technology – performance

          Direct Cache Access Support – enabled

          DRAM Clock Throttling – Performance

          Enhanced Intel SpeedStep® Technology – enabled

          Execute Disable Bit – enabled

          IMC Interleaving – 1-way interleave

          Intel HyperThreading Tech – enabled

          Intel Turbo Boost Tech – enabled

          Intel® VT – enabled

          DCU IP Prefetcher – enabled

          Processor C3 Report – disabled

          CPU C State – disabled

          Sub NUMA Clustering – enabled

          DCU Streamer Prefetch – enabled

Note:      These settings, combined with the ESXi Power Management policy set to ‘High Performance,’ give the best results for VDI on Cisco UCS.

     Sustainability (if power savings are required; all other settings are platform-default)

          Processor C1E – Enabled

          Processor C6 Report – Enabled

          Optimized Power Mode – Enabled

Note:      These settings, combined with the ESXi Power Management policy set to ‘Balanced,’ provide significant power savings on the host. Performance may be impacted, but these settings have been validated to deliver acceptable performance levels.
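The ESXi power management policy referenced in the notes above can be applied across all hosts from vCenter rather than host by host. The following Python sketch is illustrative only and is not part of the validated configuration: it uses the open-source pyVmomi library, the vCenter address and credentials are placeholders, and the policy keys come from the vSphere API (1 = High Performance, 2 = Balanced). It also reports whether hyperthreading is active on each host, which should reflect the ‘Intel HyperThreading Tech’ BIOS token listed above.

import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_power_policy(vcenter, user, password, policy_key=1):
    """Apply an ESXi power policy to every host managed by the given vCenter.
    vSphere API policy keys: 1 = High Performance, 2 = Balanced, 3 = Low Power, 4 = Custom."""
    ctx = ssl._create_unverified_context()  # lab use only; supply valid certificates in production
    si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            # Set the host power policy, then report the hyperthreading status the host exposes.
            host.configManager.powerSystem.ConfigurePowerPolicy(key=policy_key)
            ht = host.config.hyperThread
            print(f"{host.name}: power policy key {policy_key}, hyperthreading active = {ht.active}")
    finally:
        Disconnect(si)

# Placeholder vCenter address and credentials.
set_power_policy("vcenter.example.local", "administrator@vsphere.local", "password", policy_key=1)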

Networking:

     Four vNICs (two for normal traffic and two for high traffic)

          VMware Default Traffic Settings

-       Enable Virtual Extensible LAN - Off

-       Enable Network Virtualization using Generic Routing Encapsulation - Off

-       Enable Accelerated Receive Flow Steering - Off

-       Enable Precision Time Protocol - Off

-       Enable Advanced Filter - Off

-       Enable Interrupt Scaling - Off

-       Enable GENEVE Offload - Off

RoCE Settings

-       Enable RDMA over Converged Ethernet - Off

Interrupt Settings

-       Interrupts - 11

-       Interrupt Mode - MSIx

-       Interrupt Timer, us - 125

-       Interrupt Coalescing Type - Min

Receive

-       Receive Queue Count - 8

-       Receive Ring Size - 4096

Transmit

-       Transmit Queue Count - 1

-       Transmit Ring Size - 4096

Completion

-       Completion Queue Count - 9

-       Completion Ring Size - 1

-       Uplink Failback Timeout (seconds) - 5

TCP Offload

-       Enable Tx Checksum Offload - On

-       Enable Rx Checksum Offload - On

-       Enable Large Send Offload - On

-       Enable Large Receive Offload - On

Receive Side Scaling

-       Enable Receive Side Scaling - On

-       Enable IPv4 Hash - On

-       Enable IPv6 Extensions Hash - Off

-       Enable IPv6 Hash - On

-       Enable TCP and IPv4 Hash - Off

-       Enable TCP and IPv6 Extensions Hash - Off

-       Enable TCP and IPv6 Hash - Off

-       Enable UDP and IPv4 Hash - Off

-       Enable UDP and IPv6 Hash - Off

          VMware High Traffic Settings

-       Enable Virtual Extensible LAN - Off

-       Enable Network Virtualization using Generic Routing Encapsulation - Off

-       Enable Accelerated Receive Flow Steering - Off

-       Enable Precision Time Protocol - Off

-       Enable Advanced Filter - Off

-       Enable Interrupt Scaling - Off

-       Enable GENEVE Offload - Off

RoCE Settings

-       Enable RDMA over Converged Ethernet - Off

Interrupt Settings

-       Interrupts - 8

-       Interrupt Mode - MSIx

-       Interrupt Timer, us - 125

-       Interrupt Coalescing Type - Min

Receive

-       Receive Queue Count - 8

-       Receive Ring Size - 4096

Transmit

-       Transmit Queue Count - 1

-       Transmit Ring Size - 4096

Completion

-       Completion Queue Count - 5

-       Completion Ring Size - 1

-       Uplink Failback Timeout (seconds) - 5

TCP Offload

-       Enable Tx Checksum Offload - On

-       Enable Rx Checksum Offload - On

-       Enable Large Send Offload - On

-       Enable Large Receive Offload - On

Receive Side Scaling

-       Enable Receive Side Scaling - On

-       Enable IPv4 Hash - On

-       Enable IPv6 Extensions Hash - Off

-       Enable IPv6 Hash - On

-       Enable TCP and IPv4 Hash - On

-       Enable TCP and IPv6 Extensions Hash - Off

-       Enable TCP and IPv6 Hash - On

-       Enable UDP and IPv4 Hash - Off

-       Enable UDP and IPv6 Hash - Off
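After the server profiles are deployed, the effective adapter settings can be checked from the hypervisor side. The following Python sketch is illustrative only and assumes SSH is enabled on the ESXi hosts and that the paramiko library is installed; the host names, credentials, and vmnic names are placeholders. It reads the RX/TX ring sizes each uplink currently reports (the esxcli namespace shown is available in recent ESXi releases) so they can be compared with the 4096-entry rings defined in the vNIC adapter settings above.

import paramiko

# Placeholder host names, credentials, and uplink names.
HOSTS = ["esxi-01.example.local", "esxi-02.example.local"]
USER, PASSWORD = "root", "password"
VMNICS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]  # four vNICs per host

def run(ssh, cmd):
    """Run a command over SSH and return its stdout as text."""
    _, stdout, _ = ssh.exec_command(cmd)
    return stdout.read().decode().strip()

for host in HOSTS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
    ssh.connect(host, username=USER, password=PASSWORD)
    for nic in VMNICS:
        rings = run(ssh, f"esxcli network nic ring current get -n {nic}")
        print(f"--- {host} {nic} ---")
        print(rings)
    ssh.close()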


Appendix C – References used in this guide

This section provides links to additional information for each partner’s solution components referenced in this document.

     Cisco UCS X210c M7 Compute Node Data Sheet

     Cisco UCS X9508 Chassis Data Sheet

     Cisco UCS X-Series Modular System At-a-Glance

     Cisco UCS X210c M7 Compute Node Spec Sheet

     Cisco UCS Manager Configuration Guides

http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/products-installation-and-configuration-guides-list.html

https://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/products-release-notes-list.html

     Cisco UCS Virtual Interface Cards

https://www.cisco.com/c/en/us/products/interfaces-modules/unified-computing-system-adapters/index.html

     Cisco Nexus Switching References

http://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-736967.html

https://www.cisco.com/c/en/us/products/switches/nexus-93180yc-fx-switch/index.html

     Cisco MDS 9000 Series Switch References

http://www.cisco.com/c/en/us/products/storage-networking/mds-9000-series-multilayer-switches/index.html

http://www.cisco.com/c/en/us/products/storage-networking/product-listing.html

https://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9100-series-multilayer-fabric-switches/datasheet-c78-739613.html

     Cisco Intersight References

https://www.cisco.com/c/en/us/products/cloud-systems-management/intersight/index.html

https://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/intersight/intersight-ds.html

     Cisco and Hitachi Adaptive Solution

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/hitachi_adaptive_vmware_vsp_design.html

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/hitachi_adaptive_vmware_vsp.pdf

     Microsoft References

https://docs.microsoft.com/en-us/fslogix/

     VMware References

https://docs.vmware.com/en/VMware-vSphere/index.html

     Login VSI Documentation

https://www.loginvsi.com/resources/

     Hitachi Storage Reference Documents

https://www.hitachivantara.com/en-us/pdf/datasheet/virtual-storage-platform-e-series-datasheet.pdf

Feedback

For comments and suggestions about this guide and related guides, join the discussion on Cisco Community at https://cs.co/en-cvds.

CVD Program

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS X-Series, Cisco UCS Manager, Cisco UCS Management Software, Cisco Unified Fabric, Cisco Application Centric Infrastructure, Cisco Nexus 9000 Series, Cisco Nexus 7000 Series. Cisco Prime Data Center Network Manager, Cisco NX-OS Software, Cisco MDS Series, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study,  LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trade-marks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. (LDW_P1)

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
