Design and Deployment of Cisco HyperFlex for Virtual Desktop Infrastructure with Citrix XenDesktop 7.13
Last Updated: August 15, 2017
About Cisco Validated Designs
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS Manager, Cisco UCS Management Software, Cisco Unified Fabric, Cisco Application Centric Infrastructure, Cisco Nexus 9000 Series, Cisco Nexus 7000 Series. Cisco Prime Data Center Network Manager, Cisco NX-OS Software, Cisco MDS Series, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2017 Cisco Systems, Inc. All rights reserved.
Table of Contents
Cisco Desktop Virtualization Solutions: Data Center
Cisco Desktop Virtualization Focus
Cisco UCS B-Series Blade Servers
Cisco Unified Computing System
Cisco Unified Computing System Components
Cisco HyperFlex HX-Series Nodes
Cisco UCS 2204XP Fabric Extender
Cisco HyperFlex Converged Data Platform Software
Log-Structured Distributed Objects
Data Replication and Availability
Enhancements for Version 2.1.1
Cisco HyperFlex HX Data Platform Administration Plug-in
Cisco HyperFlex HX Data Platform Controller
Citrix XenApp™ and XenDesktop™ 7.13
Improved Database Flow and Configuration
Multiple Notifications before Machine Updates or Scheduled Restarts
API Support for Managing Session Roaming
API Support for Provisioning VMs from Hypervisor Templates
Support for New and Additional Platforms
Citrix Provisioning Services 7.13
Benefits for Citrix XenApp and Other Server Farm Administrators
Benefits for Desktop Administrators
Citrix Provisioning Services Solution
Citrix Provisioning Services Infrastructure
Understanding Applications and Data
Project Planning and Solution Sizing Sample Questions
Citrix XenDesktop Design Fundamentals
Example XenDesktop Deployments
Distributed Components Configuration
Designing a XenDesktop Environment for a Mixed Workload
Deployment Hardware and Software
Cisco Unified Computing System Configuration
Deploy and Configure HyperFlex Data Platform
Deploy Cisco HyperFlex Data Platform Installer VM
Cisco HyperFlex Cluster Configuration
Cisco HyperFlex Cluster Expansion
Build the Virtual Machines and Environment for Workload Testing
Software Infrastructure Configuration
Install and Configure XenDesktop and XenApp
Install XenDesktop Delivery Controller, Citrix Licensing, and StoreFront
Configure the XenDesktop Site Administrators
Configure additional XenDesktop Controller
Add the Second Delivery Controller to the XenDesktop Site
Install and Configure StoreFront
Additional StoreFront Configuration
Install and Configure Citrix Provisioning Server 7.13
Install Additional PVS Servers
Install XenDesktop Virtual Desktop Agents
Install the Citrix Provisioning Services Target Device Software
Create Citrix Provisioning Services vDisks
Provision Virtual Desktop Machines
Non-Persistent PVS streamed desktops
Non-persistent Random HVD Provisioned using MCS
Persistent Static Provisioned with MCS
Citrix XenDesktop Policies and Profile Management
Configure Citrix XenDesktop Policies
Configuring User Profile Management
Testing Methodology and Success Criteria
Recommended Maximum Workload and Configuration Guidelines
4000 User Full Scale Testing on Thirty-two Node Cisco HyperFlex Cluster
Cisco Nexus 9372 Switch Configuration
Switch B Configuration
Appendix B
To keep pace with the market, you need systems that support rapid, agile development processes. Cisco HyperFlex™ Systems let you unlock the full potential of hyper-convergence and adapt IT to the needs of your workloads. The systems use an end-to-end software-defined infrastructure approach, combining software-defined computing in the form of Cisco HyperFlex HX-Series Nodes, software-defined storage with the powerful Cisco HyperFlex HX Data Platform, and software-defined networking with the Cisco UCS fabric that integrates smoothly with Cisco® Application Centric Infrastructure (Cisco ACI™).
Together with a single point of connectivity and management, these technologies deliver a pre-integrated and adaptable cluster with a unified pool of resources that you can quickly deploy, adapt, scale, and manage to efficiently power your applications and your business.
This document provides an architectural reference and design guide for up to a 4000 user mixed workload on a 32-node Cisco HyperFlex system (16 Cisco HyperFlex HXAF220c-M4SX servers plus 8 Cisco UCS B200 M4 blade servers and 8 Cisco UCS C220 M4 rack servers). We provide deployment guidance and performance data for Citrix XenDesktop virtual desktops running Microsoft Windows 10 with Office 2016, both randomly assigned and persistent, as well as Windows Server 2016 RDS server-based sessions, all on VMware vSphere 6. The solution is a pre-integrated, best-practice data center architecture built on the Cisco Unified Computing System (Cisco UCS), the Cisco Nexus® 9000 family of switches, and Cisco HyperFlex Data Platform software version 2.1.1b.
The solution payload is 100 percent virtualized on Cisco HyperFlex HXAF220c-M4SX hyperconverged nodes, Cisco UCS C220 M4 rack servers, and Cisco UCS B200 M4 blade servers, all booting from on-board FlexFlash controller SD cards and running the VMware vSphere 6.0 U3 hypervisor. Both Citrix Provisioning Services and Machine Creation Services were used for desktop deployment. The virtual desktops are configured with Citrix XenDesktop 7.13, which provides unparalleled scale and management simplicity for Windows 10 pooled or persistent desktops (3000) and hosted Windows Server 2016 server-based sessions (1000) on a 32-node Cisco HyperFlex cluster.
Where applicable, this document provides best practice recommendations and sizing guidelines for customer deployments of this solution.
The solution provides an outstanding virtual desktop end-user experience as measured by the Login VSI 4.1 Knowledge Worker workload running in benchmark mode, with average index response times of one second or less.
A current industry trend in data center design is towards small, granularly expandable hyperconverged infrastructures. By using virtualization along with pre-validated IT platforms, customers of all sizes have embarked on the journey to “just in time capacity” using this new technology. The Cisco HyperFlex hyperconverged solution can be quickly deployed, thereby increasing agility and reducing costs. Cisco HyperFlex uses best-of-breed storage, server and network components to serve as the foundation for desktop virtualization workloads, enabling efficient architectural designs that can be quickly and confidently deployed and scaled out.
The audience for this document includes, but is not limited to: sales engineers, field consultants, professional services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and enable IT innovation.
This document provides a step-by-step design, configuration and implementation guide for the Cisco Validated Design for a Cisco HyperFlex All-Flash system running four (4) different Citrix XenDesktop workloads with Cisco UCS 6248UP Fabric Interconnects and Cisco Nexus 9300 series switches.
This is the first Cisco Validated Design with a Cisco HyperFlex All-Flash system running Virtual Desktop Infrastructure with compute-only nodes included. It incorporates the following features:
· Validation of Cisco Nexus 9000 with Cisco HyperFlex
· Support for the Cisco UCS 3.1(2) release and Cisco HyperFlex v2.1.1
· VMware vSphere 6.0 U3 Hypervisor
· Citrix XenDesktop 7.13 Pooled Desktops, Persistent Desktops and XenApp shared server sessions
· Citrix Provisioning Services (PVS) and Citrix Machine Creation Service virtual machine deployment
The data center market segment is shifting toward heavily virtualized private, hybrid and public cloud computing models running on industry-standard systems. These environments require uniform design points that can be repeated for ease of management and scalability.
These factors have led to the need for predesigned computing, networking and storage building blocks optimized to lower the initial design cost, simplify management, and enable horizontal scalability and high levels of utilization.
The use cases include:
· Enterprise Data Center (small failure domains)
· Service Provider Data Center (small failure domains)
· Commercial Data Center
· Remote Office/Branch Office
· SMB Standalone Deployments
This Cisco Validated Design prescribes a defined set of hardware and software that serves as an integrated foundation for both XenDesktop Microsoft Windows 10 virtual desktops and XenApp RDS server desktop sessions based on Microsoft Server 2016. The mixed workload solution includes Cisco HyperFlex hardware and Data Platform software, Cisco Nexus® switches, the Cisco Unified Computing System (Cisco UCS®), Citrix XenDesktop and VMware vSphere software in a single package. The design is efficient such that the network, compute, and storage components occupy a 34-rack unit footprint in an industry standard 42U rack. Port density on the Cisco Nexus switches and Cisco UCS Fabric Interconnects enables the networking components to accommodate multiple HyperFlex clusters in a single Cisco UCS domain.
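As an illustrative breakdown of that 34-rack-unit footprint (using the standard heights of the components in this design: 1RU each for the HXAF220c-M4SX nodes, C220 M4 rack servers, 6248UP Fabric Interconnects, and Nexus 9372PX switches, and 6RU for the Cisco UCS 5108 chassis that houses the eight B200 M4 blades): 16 + 8 + 6 + 2 + 2 = 34 rack units.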
A key benefit of the Cisco Validated Design architecture is the ability to customize the environment to suit a customer's requirements. This Cisco Validated Design scales easily as requirements and demand change. The unit can be scaled out (adding more Cisco Validated Design units).
The reference architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of a hyper-converged desktop virtualization solution. A solution capable of consuming multiple protocols across a single interface allows for customer choice and investment protection because it truly is a wire-once architecture.
The combination of technologies from Cisco Systems, Inc. and Citrix Inc. produced a highly efficient, robust and affordable desktop virtualization solution for a virtual desktop, hosted shared desktop or mixed deployment supporting different use cases. Key components of the solution include the following:
· More power, same size. Cisco HX-series nodes, Cisco UCS rack servers and Cisco UCS blade servers with dual 14-core 2.6 GHz Intel Xeon (E5-2690v4) processors and 512GB of memory for Citrix XenDesktop support more virtual desktop workloads than the previously released generation processors on the same hardware. The Intel Xeon E5-2690 v4 14-core processors used in this study provided a balance between increased per-server capacity and cost.
· Fault-tolerance with high availability built into the design. The various designs are based on multiple Cisco HX-Series nodes, Cisco UCS rack servers and Cisco UCS blade servers for virtual desktop and infrastructure workloads. The design provides N+1 server fault tolerance for every payload type tested.
· Stress-tested to the limits during aggressive boot scenario. The 4000 user mixed hosted virtual desktop and hosted shared desktop environment booted and registered with the XenDesktop Delivery controllers in 30 minutes, providing our customers with a fast, reliable cold-start desktop virtualization system.
· Stress-tested to the limits during simulated login storms. All 4000 users logged in and started running workloads up to steady state in 48 minutes without overwhelming the processors, exhausting memory or exhausting the storage subsystems, providing customers with a desktop virtualization system that can easily handle the most demanding login and startup storms.
· Ultra-condensed computing for the datacenter. The rack space required to support the initial 4000 user system is 34 rack units, including Cisco Nexus Switching and Cisco Fabric interconnects. Incremental 4000 seat Cisco HyperFlex clusters can be added in 30 rack unit groups, conserving valuable data center floor space.
· 100 percent virtualized: This CVD presents a validated design that is 100 percent virtualized on VMware ESXi 6.0. All of the virtual desktops, user data, profiles, and supporting infrastructure components, including Active Directory, SQL Servers, and the XenDesktop and XenApp infrastructure components and desktops, were hosted as virtual machines. This provides customers with complete flexibility for maintenance and capacity additions, because the entire system runs on the Cisco HyperFlex hyper-converged infrastructure with stateless Cisco UCS HX-series servers. (Infrastructure VMs were hosted on two Cisco UCS C220 M4 Rack Servers outside of the HX cluster to deliver the highest capacity and best economics for the solution.)
· Cisco datacenter management: Cisco maintains industry leadership with the new Cisco UCS Manager 3.1(2) software that simplifies scaling, guarantees consistency, and eases maintenance. Cisco’s ongoing development efforts with Cisco UCS Manager, Cisco UCS Central, and Cisco UCS Director ensure that customer environments are consistent locally, across Cisco UCS Domains and across the globe. The Cisco UCS software suite offers increasingly simplified operational and deployment management, and it continues to widen the span of control for customer organizations’ subject matter experts in compute, storage and network.
· Cisco 10G Fabric: Our 10G unified fabric story gets additional validation on 6200 Series Fabric Interconnects as Cisco runs more challenging workload testing, while maintaining unsurpassed user response times.
· Cisco HyperFlex storage performance: Cisco HyperFlex provides industry-leading hyperconverged storage performance that efficiently handles the most demanding I/O bursts (for example, login storms), delivers high write throughput at low latency, provides simple and flexible business continuity, and helps reduce storage cost per desktop.
· Cisco HyperFlex agility: The Cisco HyperFlex system enables users to seamlessly add, upgrade or remove storage from the infrastructure to meet the needs of the virtual desktops.
· Cisco HyperFlex vCenter integration: The Cisco HyperFlex plug-in for VMware vSphere provides easy-button automation for key storage tasks such as storage provisioning and storage resize, cluster health status, and performance monitoring directly from the vCenter web client in a single pane of glass. Experienced vCenter administrators have a near zero learning curve when HyperFlex is introduced into the environment.
· Citrix XenDesktop and XenApp Advantage: XenApp and XenDesktop are virtualization solutions that give IT control of virtual machines, applications, licensing, and security while providing anywhere access for any device.
· XenApp and XenDesktop allow:
- End users to run applications and desktops independently of the device's operating system and interface.
- Administrators to manage the network and control access from selected devices or from all devices.
- Administrators to manage an entire network from a single data center.
XenApp and XenDesktop share a unified architecture called FlexCast Management Architecture (FMA). FMA's key features are the ability to run multiple versions of XenApp or XenDesktop from a single Site and integrated provisioning.
· Optimized for performance and scale. For hosted shared desktop sessions, the best performance was achieved when the number of vCPUs assigned to the XenApp virtual machines did not exceed the number of hyper-threaded (logical) cores available on the server. In other words, maximum performance is obtained when not overcommitting the CPU resources for the virtual machines running virtualized RDS systems (see the illustrative calculation following this list).
· Provisioning Choices Explored: Citrix provides two core provisioning methods for XenDesktop and XenApp virtual machines: Citrix Provisioning Services for pooled virtual desktops and XenApp virtual servers, and Citrix Machine Creation Services for pooled or persistent virtual desktops. This paper provides guidance on how to use each method and documents the performance of each technology.
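As an illustration of the vCPU guideline above, using the processors in this design: a host with dual 14-core Intel Xeon E5-2690 v4 CPUs exposes 28 physical cores, or 56 logical (hyper-threaded) cores, so the combined vCPU allocation of all XenApp virtual machines on that host should not exceed 56 vCPUs (for example, seven XenApp VMs with 8 vCPUs each). This arithmetic is illustrative only and is not a statement of the exact VM sizing used in testing.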
Today’s IT departments are facing a rapidly evolving workplace environment. The workforce is becoming increasingly diverse and geographically dispersed, including offshore contractors, distributed call center operations, knowledge and task workers, partners, consultants, and executives connecting from locations around the world at all times.
This workforce is also increasingly mobile, conducting business in traditional offices, conference rooms across the enterprise campus, home offices, on the road, in hotels, and at the local coffee shop. This workforce wants to use a growing array of client computing and mobile devices that they can choose based on personal preference. These trends are increasing pressure on IT to ensure protection of corporate data and prevent data leakage or loss through any combination of user, endpoint device, and desktop access scenarios (Figure 1).
These challenges are compounded by desktop refresh cycles to accommodate aging PCs with limited local storage, and by migration to new operating systems and productivity tools, specifically Microsoft Windows 10 and Microsoft Office 2016.
Figure 1 Cisco Data Center Partner Collaboration
Some of the key drivers for desktop virtualization are increased data security, the ability to expand and contract capacity, and reduced TCO through increased control and reduced management costs.
Cisco focuses on three key elements to deliver the best desktop virtualization data center infrastructure: simplification, security, and scalability. The software combined with platform modularity provides a simplified, secure, and scalable desktop virtualization platform.
Cisco UCS and Cisco HyperFlex provide a radical new approach to industry-standard computing and provide the core of the data center infrastructure for desktop virtualization. Among the many features and benefits of Cisco UCS are the drastic reduction in the number of servers needed and in the number of cables used per server, and the capability to rapidly deploy or re-provision servers through Cisco UCS service profiles. With fewer servers and cables to manage and with the streamlined server and virtual desktop provisioning, operations are significantly simplified. Thousands of desktops can be provisioned in minutes with Cisco UCS Manager service profiles and Cisco storage partners’ storage-based cloning. This approach accelerates the time to productivity for end users, improves business agility, and allows IT resources to be allocated to other tasks.
Cisco UCS Manager automates many mundane, error-prone data center operations such as configuration and provisioning of server, network, and storage access infrastructure. In addition, Cisco UCS B-Series Blade Servers, C-Series, and HX-Series Rack Servers with large memory footprints enable high desktop density that helps reduce server infrastructure requirements.
Simplification also leads to more successful desktop virtualization implementation. Cisco and its technology partners like VMware have developed integrated, validated architectures, including predefined hyper-converged architecture infrastructure packages such as HyperFlex. Cisco Desktop Virtualization Solutions have been tested with VMware vSphere.
Although virtual desktops are inherently more secure than their physical predecessors, they introduce new security challenges. Mission-critical web and application servers using a common infrastructure such as virtual desktops are now at a higher risk for security threats. Inter–virtual machine traffic now poses an important security consideration that IT managers need to address, especially in dynamic environments in which virtual machines, using VMware vMotion, move across the server infrastructure.
Desktop virtualization, therefore, significantly increases the need for virtual machine–level awareness of policy and security, especially given the dynamic and fluid nature of virtual machine mobility across an extended computing infrastructure. The ease with which new virtual desktops can proliferate magnifies the importance of a virtualization-aware network and security infrastructure. Cisco data center infrastructure (Cisco UCS and Cisco Nexus Family solutions) for desktop virtualization provides strong data center, network, and desktop security, with comprehensive security from the desktop to the hypervisor. Security is enhanced with segmentation of virtual desktops, virtual machine–aware policies and administration, and network security across the LAN and WAN infrastructure.
The growth of a desktop virtualization solution is accelerating, so a solution must be able to scale, and scale predictably, with that growth. The Cisco Desktop Virtualization Solutions support high virtual-desktop density (desktops per server) and additional servers scale with near-linear performance. Cisco data center infrastructure provides a flexible platform for growth and improves business agility. Cisco UCS Manager service profiles allow on-demand desktop provisioning and make it just as easy to deploy dozens of desktops as it is to deploy thousands of desktops.
Cisco HyperFlex servers provide near-linear performance and scale. Cisco UCS implements the patented Cisco Extended Memory Technology to offer large memory footprints with fewer sockets (with scalability to up to 1 terabyte (TB) of memory with 2- and 4-socket servers). Using unified fabric technology as a building block, Cisco UCS server aggregate bandwidth can scale to up to 80 Gbps per server, and the northbound Cisco UCS fabric interconnect can output 2 terabits per second (Tbps) at line rate, helping prevent desktop virtualization I/O and memory bottlenecks. Cisco UCS, with its high-performance, low-latency unified fabric-based networking architecture, supports high volumes of virtual desktop traffic, including high-resolution video and communications traffic. In addition, Cisco HyperFlex helps maintain data availability and optimal performance during boot and login storms as part of the Cisco Desktop Virtualization Solutions. Recent Cisco Validated Designs based on Citrix XenDesktop on Cisco HyperFlex have demonstrated scalability and performance, with up to 4000 hosted virtual desktops and hosted shared desktops up and running in 15 minutes.
Cisco UCS and Cisco Nexus data center infrastructure provides an excellent platform for growth, with transparent scaling of server, network, and storage resources to support desktop virtualization, data center applications, and cloud computing.
The simplified, secure, scalable Cisco data center infrastructure for desktop virtualization solutions saves time and money compared to alternative approaches. Cisco UCS enables faster payback and ongoing savings (better ROI and lower TCO) and provides the industry’s greatest virtual desktop density per server, reducing both capital expenditures (CapEx) and operating expenses (OpEx). The Cisco UCS architecture and Cisco Unified Fabric also enable much lower network infrastructure costs, with fewer cables per server and fewer ports required. In addition, storage tiering and deduplication technologies decrease storage costs, reducing desktop storage needs by up to 50 percent.
The simplified deployment of Cisco HyperFlex for desktop virtualization accelerates the time to productivity and enhances business agility. IT staff and end users are more productive more quickly, and the business can respond to new opportunities quickly by deploying virtual desktops whenever and wherever they are needed. The high-performance Cisco systems and network deliver a near-native end-user experience, allowing users to be productive anytime and anywhere.
The key measure of desktop virtualization for any organization is its efficiency and effectiveness in both the near term and the long term. The Cisco Desktop Virtualization Solutions are very efficient, allowing rapid deployment, requiring fewer devices and cables, and reducing costs. The solutions are also extremely effective, providing the services that end users need on their devices of choice while improving IT operations, control, and data security. Success is bolstered through Cisco’s best-in-class partnerships with leaders in virtualization and through tested and validated designs and services to help customers throughout the solution lifecycle. Long-term success is enabled through the use of Cisco’s scalable, flexible, and secure architecture as the platform for desktop virtualization.
The ultimate measure of desktop virtualization for any end user is a great experience. Cisco HyperFlex delivers class-leading performance with sub-second baseline response times and index average response times at a full load of just under one second.
· Healthcare: Mobility between desktops and terminals, compliance, and cost
· Federal government: Teleworking initiatives, business continuance, continuity of operations (COOP), and training centers
· Financial: Retail banks reducing IT costs, insurance agents, compliance, and privacy
· Education: K-12 student access, higher education, and remote learning
· State and local governments: IT and service consolidation across agencies and interagency security
· Retail: Branch-office IT cost reduction and remote vendors
· Manufacturing: Task and knowledge workers and offshore contractors
· Microsoft Windows 10 migration
· Graphics-intensive applications
· Security and compliance initiatives
· Opening of remote and branch offices or offshore facilities
· Mergers and acquisitions
Figure 2 shows the Citrix XenDesktop solution built on Cisco Validated Design components and the network connections. The reference architecture reinforces the "wire-once" strategy because as additional storage is added to the architecture, no re-cabling is required from the hosts to the Cisco UCS fabric interconnect.
The Cisco HyperFlex system is composed of a pair of Cisco UCS 6248UP Fabric Interconnects, along with up to 16 HXAF-Series rack mount servers per cluster. In addition, up to 16 compute only servers can be added per cluster. Adding Cisco UCS 5108 Blade chassis allows the use of Cisco UCS B200-M4 blade servers for additional compute resources in a hybrid cluster design. Cisco UCS C240 and C220 servers can also be used for additional compute resources. Up to 8 separate HX clusters can be installed under a single pair of Fabric Interconnects. The Fabric Interconnects both connect to every HX-Series rack mount server, and both connect to every Cisco UCS 5108 blade chassis. Upstream network connections, also referred to as “northbound” network connections are made from the Fabric Interconnects to the customer datacenter network at the time of installation.
For this study, we uplinked the Cisco 6248UP Fabric Interconnects to Cisco Nexus 9372PX switches.
Figure 3 and Figure 4 illustrate the hyperconverged topology and the hybrid (hyperconverged plus compute-only) topology.
Figure 3 Cisco HyperFlex Standard Topology
Figure 4 Cisco HyperFlex Hyperconverged plus Compute-Only Node Topology
Fabric Interconnects (FI) are deployed in pairs, wherein the two units operate as a management cluster, while forming two separate network fabrics, referred to as the A side and B side fabrics. Therefore, many design elements will refer to FI A or FI B, alternatively called fabric A or fabric B. Both Fabric Interconnects are active at all times, passing data on both network fabrics for a redundant and highly available configuration. Management services, including Cisco UCS Manager, are also provided by the two FIs but in a clustered manner, where one FI is the primary, and one is secondary, with a roaming clustered IP address. This primary/secondary relationship is only for the management cluster and has no effect on data transmission.
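As a quick, illustrative verification of this behavior (the Fabric Interconnect hostname UCS-A below is a placeholder, and command output is omitted), the primary/subordinate roles and HA readiness of the management cluster can be checked from the Cisco UCS Manager CLI on either Fabric Interconnect:
UCS-A# connect local-mgmt
UCS-A(local-mgmt)# show cluster state
UCS-A(local-mgmt)# show cluster extended-state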
Fabric Interconnects have the following ports, which must be connected for proper management of the Cisco UCS domain:
· Mgmt: A 10/100/1000 Mbps port for managing the Fabric Interconnect and the Cisco UCS domain via GUI and CLI tools. Also used by remote KVM, IPMI, and SoL sessions to the managed servers within the domain. This is typically connected to the customer management network.
· L1: A cross connect port for forming the Cisco UCS management cluster. This is connected directly to the L1 port of the paired Fabric Interconnect using a standard CAT5 or CAT6 Ethernet cable with RJ45 plugs. It is not necessary to connect this to a switch or hub.
· L2: A cross connect port for forming the Cisco UCS management cluster. This is connected directly to the L2 port of the paired Fabric Interconnect using a standard CAT5 or CAT6 Ethernet cable with RJ45 plugs. It is not necessary to connect this to a switch or hub.
· Console: An RJ45 serial port for direct console access to the Fabric Interconnect. Typically used during the initial FI setup process with the included serial to RJ45 adapter cable. This can also be plugged into a terminal aggregator or remote console server device.
The HX-Series converged servers are connected directly to the Cisco UCS Fabric Interconnects in Direct Connect mode. This option enables Cisco UCS Manager to manage the HX-Series rack-mount Servers using a single cable for both management traffic and data traffic. Both the HXAF220c-M4S and HXAF240c-M4SX servers are configured with the Cisco VIC 1227 network interface card (NIC) installed in a modular LAN on motherboard (MLOM) slot, which has dual 10 Gigabit Ethernet (GbE) ports. The standard and redundant connection practice is to connect port 1 of the VIC 1227 to a port on FI A, and port 2 of the VIC 1227 to a port on FI B (Figure 5). Failure to follow this cabling practice can lead to errors, discovery failures, and loss of redundant connectivity.
Figure 5 HX-Series Server Connectivity
Hybrid HyperFlex clusters also incorporate 1-8 Cisco UCS B200-M4 blade servers for additional compute capacity. Like all other Cisco UCS B-series blade servers, the Cisco UCS B200-M4 must be installed within a Cisco UCS 5108 blade chassis. The blade chassis comes populated with 1-4 power supplies, and 8 modular cooling fans. In the rear of the chassis are two bays for installation of Cisco Fabric Extenders. The Fabric Extenders (also commonly called IO Modules, or IOMs) connect the chassis to the Fabric Interconnects. Internally, the Fabric Extenders connect to the Cisco VIC 1340 card installed in each blade server across the chassis backplane. The standard connection practice is to connect 1-4 10 GbE links from the left side IOM, or IOM 1, to FI A, and to connect the same number of 10 GbE links from the right side IOM, or IOM 2, to FI B (Figure 6). All other cabling configurations are invalid and can lead to errors, discovery failures, and loss of redundant connectivity.
Figure 6 Cisco UCS 5108 Chassis Connectivity
The Cisco HyperFlex system has communication pathways that fall into four defined zones:
· Management Zone: This zone comprises the connections needed to manage the physical hardware, the hypervisor hosts, and the storage platform controller virtual machines (SCVM). These interfaces and IP addresses need to be available to all staff who will administer the HX system, throughout the LAN/WAN. This zone must provide access to Domain Name System (DNS) and Network Time Protocol (NTP) services, and allow Secure Shell (SSH) communication. In this zone are multiple physical and virtual components:
- Fabric Interconnect management ports.
- Cisco UCS external management interfaces used by the servers and blades, which answer through the FI management ports.
- ESXi host management interfaces.
- Storage Controller VM management interfaces.
- A roaming HX cluster management interface.
· VM Zone: This zone comprises the connections needed to service network IO to the guest VMs that will run inside the HyperFlex hyperconverged system. This zone typically contains multiple VLANs that are trunked to the Cisco UCS Fabric Interconnects via the network uplinks and tagged with 802.1Q VLAN IDs. These interfaces and IP addresses need to be available to all staff and other computer endpoints which need to communicate with the guest VMs in the HX system, throughout the LAN/WAN.
· Storage Zone: This zone comprises the connections used by the Cisco HX Data Platform software, ESXi hosts, and the storage controller VMs to service the HX Distributed Data Filesystem. These interfaces and IP addresses need to be able to communicate with each other at all times for proper operation. During normal operation this traffic all occurs within the Cisco UCS domain; however, there are hardware failure scenarios in which this traffic would need to traverse the network northbound of the Cisco UCS domain. For that reason, the VLAN used for HX storage traffic must be able to traverse the network uplinks from the Cisco UCS domain, reaching FI A from FI B, and vice-versa. This zone is primarily jumbo frame traffic, so jumbo frames must be enabled on the Cisco UCS uplinks (see the example uplink configuration following this list). In this zone are multiple components:
- A vmkernel interface used for storage traffic for each ESXi host in the HX cluster.
- Storage Controller VM storage interfaces.
- A roaming HX cluster storage interface.
· VMotion Zone: This zone comprises the connections used by the ESXi hosts to enable vMotion of the guest VMs from host to host. During normal operation this traffic all occurs within the Cisco UCS domain; however, there are hardware failure scenarios in which this traffic would need to traverse the network northbound of the Cisco UCS domain. For that reason, the VLAN used for vMotion traffic must be able to traverse the network uplinks from the Cisco UCS domain, reaching FI A from FI B, and vice-versa.
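The following is a minimal, illustrative NX-OS sketch of the northbound uplink configuration implied by these zones; the VLAN IDs and names (3090-3093) and the port-channel/vPC number (10) are hypothetical placeholders, the vPC domain and member interfaces are omitted for brevity, and jumbo frames must also be enabled within Cisco UCS itself (QoS system classes). The complete, validated switch configurations used in this study appear later in this document.
vlan 3090
  name hx-inband-mgmt
vlan 3091
  name hx-storage-data
vlan 3092
  name hx-vmotion
vlan 3093
  name vm-network
!
interface port-channel10
  description Uplink to Cisco UCS 6248UP Fabric Interconnects (vPC)
  switchport mode trunk
  switchport trunk allowed vlan 3090-3093
  spanning-tree port type edge trunk
  mtu 9216
  vpc 10
The same trunk, MTU, and VLAN settings would be applied on the second Cisco Nexus 9372PX switch so that the storage and vMotion VLANs can reach FI A from FI B, and vice-versa.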
Figure 7 illustrates the logical network design.
Figure 7 Logical Network Design
The reference hardware configuration includes:
· Two Cisco Nexus 9372PX switches
· Two Cisco UCS 6248UP Fabric Interconnects
· Sixteen Cisco HXAF-Series rack servers (HXAF220c-M4SX) running HyperFlex Data Platform version 2.1.1b
· Eight Cisco UCS B200 M4 blade servers and eight Cisco UCS C220 M4 rack servers running HyperFlex Data Platform version 2.1.1b as compute-only nodes
For desktop virtualization, the deployment includes Citrix XenDesktop running on VMware vSphere 6. The design is intended to provide a large-scale building block for both Hosted Shared Desktops (HSD) and persistent/non-persistent Hosted Virtual Desktops (HVD) with the following density per thirty-two-node configuration:
· 1000 HSD server desktop sessions (Windows Server 2016)
· 1000 Windows 10 non-persistent HVD (PVS)
· 1000 Windows 10 non-persistent HVD (MCS)
· 1000 Windows 10 persistent, static HVD virtual desktops (MCS, full copy)
All of the Windows 10 virtual desktops were provisioned with 2GB of memory for this study. Typically, persistent desktop users may desire more memory. If 3GB or more of memory is needed, the third memory channel on the Cisco HXAF220c-M4S HX-Series rack server, Cisco UCS C220 M4 rack server and Cisco UCS B200 M4 servers should be populated.
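As a rough, illustrative calculation only (it ignores per-node placement and other overheads): 3000 Windows 10 desktops at 2 GB each represent approximately 6 TB of desktop memory, or about 188 GB per node if spread evenly across all 32 nodes, leaving headroom within each node's 512 GB for the ESXi hypervisor, the storage controller VM (on converged nodes), and the Windows Server 2016 RDS virtual machines. At 3 GB per desktop the same desktop count would require approximately 281 GB per node; larger allocations or denser placements scale these figures proportionally, which is when the additional memory population noted above becomes relevant.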
The data provided here will allow customers to run HSD server sessions and VDI desktops to suit their environment. For example, additional Cisco HX servers can be deployed as compute-only nodes to increase compute capacity, additional drives can be added to existing servers to improve I/O capability and throughput, and special hardware or software features can be added to introduce new capabilities. This document guides you through the low-level steps for deploying the base architecture. These procedures cover everything from physical cabling to network, compute and storage device configurations.
This document provides details for configuring a fully redundant, highly available configuration for a Cisco Validated Design for various types of virtual desktop workloads on Cisco HyperFlex. Configuration guidelines are provided that refer to which redundant component is being configured with each step. For example, Cisco Nexus A or Cisco Nexus B identifies a member in the pair of Cisco Nexus switches that are configured. Cisco UCS 6248UP Fabric Interconnects are similarly identified. Additionally, this document details the steps for provisioning multiple Cisco UCS and HyperFlex hosts, and these are identified sequentially: VM-Host-Infra-01, VM-Host-Infra-02, VM-Host-RDSH-01, VM-Host-VDI-01 and so on. Finally, to indicate that you should include information pertinent to your environment in a given step, <text> appears as part of the command structure.
This section describes the infrastructure components used in the solution outlined in this study.
Cisco UCS Manager (UCSM) provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System™ (Cisco UCS) and Cisco HyperFlex through an intuitive GUI, a command-line interface (CLI), and an XML API. The manager provides a unified management domain with centralized management capabilities and can control multiple chassis and thousands of virtual machines.
Cisco UCS is a next-generation data center platform that unites computing, networking, and storage access. The platform, optimized for virtual environments, is designed using open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain.
Cisco UCS Manager software manages traditional Cisco UCS Blade and Rack Servers and Cisco HyperFlex hyperconverged servers. It is the unifying fabric that allows for the HyperFlex System to leverage the Cisco UCS family of blade and rack servers as compute only nodes.
The main components of the Cisco UCS are:
· Compute: The system is based on an entirely new class of computing system that incorporates blade, rack and hyperconverged servers based on Intel® Xeon® processor E5-2600/4600 v4 and E7-2800 v4 family CPUs.
· Network: The system is integrated on a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing (HPC) networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables needed, and by decreasing the power and cooling requirements.
· Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
· Storage: The Cisco HyperFlex rack servers provide high performance, resilient storage using the powerful HX Data Platform software. Customers can deploy as few as three nodes (replication factor 2 or 3), depending on their fault tolerance requirements. These nodes form a HyperFlex storage and compute cluster. The onboard storage of each node is aggregated at the cluster level and automatically shared with all of the nodes. Storage resources are managed from the familiar VMware vCenter web client, extending the capability of vCenter administrators.
· Management: Cisco UCS uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager. The manager has an intuitive GUI, a CLI, and a robust API for managing all system configuration processes and operations.
Figure 8 Cisco HyperFlex Family Overview
Cisco UCS and Cisco HyperFlex are designed to deliver:
· Reduced TCO and increased business agility.
· Increased IT staff productivity through just-in-time provisioning and mobility support.
· A cohesive, integrated system that unifies the technology in the data center; the system is managed, serviced, and tested as a whole.
· Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand.
· Industry standards supported by a partner ecosystem of industry leaders.
Cisco UCS Manager provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System across multiple chassis, rack servers, and thousands of virtual machines. Cisco UCS Manager manages Cisco UCS as a single entity through an intuitive GUI, a command-line interface (CLI), or an XML API for comprehensive access to all Cisco UCS Manager Functions.
The Cisco HyperFlex system provides a fully contained virtual server platform, with compute and memory resources, integrated networking connectivity, a distributed high performance log-structured file system for VM storage, and the hypervisor software for running the virtualized servers, all within a single Cisco UCS management domain.
Figure 9 Cisco HyperFlex System Overview
The Cisco UCS 6200 Series Fabric Interconnects are a core part of Cisco UCS, providing both network connectivity and management capabilities for the system. The Cisco UCS 6200 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet, FCoE, and Fibre Channel functions.
The fabric interconnects provide the management and communication backbone for the Cisco UCS B-Series Blade Servers, Cisco UCS C-Series and HX-Series rack servers, and Cisco UCS 5100 Series Blade Server Chassis. All servers attached to the fabric interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6200 Series provides both LAN and SAN connectivity for all blades in the domain.
For networking, the Cisco UCS 6200 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, 1-terabit (Tb) switching capacity, and 160 Gbps of bandwidth per chassis, independent of packet size and enabled services. The product series supports Cisco low-latency, lossless, 10 Gigabit Ethernet unified network fabric capabilities, increasing the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnects support multiple traffic classes over a lossless Ethernet fabric, from the blade server through the interconnect. Significant TCO savings come from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.
Figure 10 Cisco UCS 6200 Series Fabric Interconnect
A Cisco HyperFlex cluster requires a minimum of three HX-Series nodes (with disk storage). Data is replicated across at least two of these nodes, and a third node is required for continuous operation in the event of a single-node failure. Each node that has disk storage is equipped with at least one high-performance SSD drive for data caching and rapid acknowledgment of write requests. Each node also is equipped with up to the platform’s physical capacity of spinning disks for maximum data capacity. At first release, we offer three tested cluster configurations:
This small footprint configuration of Cisco HyperFlex all-flash nodes contains two Cisco Flexible Flash (FlexFlash) Secure Digital (SD) cards that act as the boot drives, a single 120-GB solid-state disk (SSD) data-logging drive, a single 800-GB SSD write-log drive, and up to six 3.8-terabyte (TB) or six 960-GB SATA SSD drives for storage capacity. A minimum of three nodes and a maximum of eight nodes can be configured in one HX cluster.
Figure 11 Cisco UCS HXAF220c-M4S Rack Server Front View
Figure 12 Cisco UCS HXAF220c-M4S Rack Server Rear View
The Cisco UCS HXAF220c-M4S delivers performance, flexibility, and optimization for data centers and remote sites. This enterprise-class server offers market-leading performance, versatility, and density without compromise for workloads ranging from web infrastructure to distributed databases. The Cisco UCS HXAF220c-M4S can quickly deploy stateless physical and virtual workloads with the programmable ease of use of the Cisco UCS Manager software and simplified server access with Cisco® Single Connect technology. Based on the Intel Xeon processor E5-2600 v4 product family, it offers up to 768 GB of memory using 32-GB DIMMs, up to eight disk drives, and up to 20 Gbps of I/O throughput. The Cisco UCS HXAF220c-M4S offers exceptional levels of performance, flexibility, and I/O throughput to run your most demanding applications.
The Cisco UCS HXAF220c-M4S provides:
· Up to two multicore Intel Xeon processor E5-2600 v4 series CPUs for up to 44 processing cores
· 24 DIMM slots for industry-standard DDR4 memory at speeds up to 2400 MHz, and up to 768 GB of total memory when using 32-GB DIMMs
· Eight hot-pluggable SAS and SATA HDDs or SSDs
· Cisco UCS VIC 1227, a 2-port, 20 Gigabit Ethernet and FCoE–capable modular LAN on motherboard (mLOM) adapter
· Cisco FlexStorage local drive storage subsystem, with flexible boot and local storage capabilities from which you can install and boot the hypervisor
· Enterprise-class pass-through RAID controller
· Easily add, change, and remove Cisco FlexStorage modules
This capacity optimized configuration contains a minimum of three nodes, a minimum of fifteen and up to twenty-three 1.2 TB SAS drives that contribute to cluster storage, a single 120 GB SSD housekeeping drive, a single 1.6 TB SSD caching drive, and two FlexFlash SD cards that act as the boot drives.
Figure 13 HXAF240c-M4SX Node
This small footprint configuration contains a minimum of three nodes with six 1.2 terabyte (TB) SAS drives that contribute to cluster storage capacity, a 120 GB SSD housekeeping drive, a 480 GB SSD caching drive, and two Cisco Flexible Flash (FlexFlash) Secure Digital (SD) cards that act as boot drives.
The Cisco UCS Virtual Interface Card (VIC) 1227 is a dual-port Enhanced Small Form-Factor Pluggable (SFP+) 10-Gbps Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) modular LAN-on-motherboard (mLOM) adapter installed in the Cisco UCS HX-Series Rack Servers. The mLOM slot can be used to install a Cisco VIC without consuming a PCIe slot, which provides greater I/O expandability. It incorporates next-generation converged network adapter (CNA) technology from Cisco, providing investment protection for future feature releases. The card enables a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). The personality of the card is determined dynamically at boot time using the service profile associated with the server. The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, bandwidth, and quality-of-service (QoS) policies of the PCIe interfaces are all determined using the service profile.
Figure 16 Cisco VIC 1227 mLOM Card
For workloads that require additional computing and memory resources, but not additional storage capacity, a compute-intensive hybrid cluster configuration is allowed. This configuration requires a minimum of three (up to eight) HyperFlex converged nodes with one to eight Cisco UCS B200-M4 Blade Servers for additional computing capacity. The HX-series Nodes are configured as described previously, and the Cisco UCS B200-M4 servers are equipped with boot drives. Use of the Cisco UCS B200-M4 compute nodes also requires the Cisco UCS 5108 blade server chassis, and a pair of Cisco UCS 2204XP Fabric Extenders.
Figure 17 Cisco UCS B200 M4 Server
The Cisco UCS Virtual Interface Card (VIC) 1340 is a 2-port 40-Gbps Ethernet or dual 4 x 10-Gbps Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) designed exclusively for the M4 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1340 is enabled for two ports of 40-Gbps Ethernet.
The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present over 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1340 supports Cisco® Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment and management.
Figure 19 illustrates the Cisco UCS VIC 1340 Virtual Interface Cards Deployed in the Cisco UCS B-Series B200 M4 Blade Servers.
Figure 19 Cisco UCS VIC 1340 Deployed in the Cisco UCS B200 M4
The Cisco UCS C220 M4 Rack Server is an enterprise-class infrastructure server in a 1RU form factor. It incorporates the Intel Xeon processor E5-2600 v4 and v3 product family, next-generation DDR4 memory, and 12-Gbps SAS throughput, delivering significant performance and efficiency gains. Cisco UCS C220 M4 Rack Server can be used to build a compute-intensive hybrid HX cluster, for an environment where the workloads require additional computing and memory resources but not additional storage capacity, along with the HX-series converged nodes. This configuration contains a minimum of three (up to eight) HX-series converged nodes with one to eight Cisco UCS C220-M4 Rack Servers for additional computing capacity.
Figure 20 Cisco UCS C220 M4 Rack Server
The Cisco UCS C240 M4 Rack Server is an enterprise-class 2-socket, 2-rack-unit (2RU) rack server. It incorporates the Intel Xeon processor E5-2600 v4 and v3 product family, next-generation DDR4 memory, and 12-Gbps SAS throughput that offers outstanding performance and expandability for a wide range of storage and I/O-intensive infrastructure workloads. The Cisco UCS C240 M4 Rack Server can be used to add computing and memory resources to a compute-intensive hybrid HX cluster, along with the HX-series converged nodes. This configuration contains a minimum of three (up to eight) HX-series converged nodes with one to eight Cisco UCS C240-M4 Rack Servers for additional computing capacity.
Figure 21 Cisco UCS C240 M4 Rack Server
The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis. The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.
Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS Fabric Extenders. A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot from each Fabric Extender. The chassis is capable of supporting 40 Gigabit Ethernet standards.
Figure 22 Cisco UCS 5108 Blade Chassis Front and Rear Views
The Cisco UCS 2200 Series Fabric Extenders multiplex and forward all traffic from blade servers in a chassis to a parent Cisco UCS Fabric Interconnect over 10-Gbps unified fabric links. All traffic, even traffic between blades on the same chassis or virtual machines on the same blade, is forwarded to the parent interconnect, where network profiles are managed efficiently and effectively by the fabric interconnect. At the core of the Cisco UCS fabric extender are application-specific integrated circuit (ASIC) processors developed by Cisco that multiplex all traffic.
The Cisco UCS 2204XP Fabric Extender has four 10 Gigabit Ethernet, FCoE-capable, SFP+ ports that connect the blade chassis to the fabric interconnect. Each Cisco UCS 2204XP has sixteen 10 Gigabit Ethernet ports connected through the midplane to each half-width slot in the chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 80 Gbps of I/O to the chassis.
Figure 23 Cisco UCS 2204XP Fabric Extender
The Cisco HyperFlex HX Data Platform is a purpose-built, high-performance, distributed file system with a wide array of enterprise-class data management services. The data platform’s innovations redefine distributed storage technology, exceeding the boundaries of first-generation hyperconverged infrastructures. The data platform has all the features that you would expect of an enterprise shared storage system, eliminating the need to configure and maintain complex Fibre Channel storage networks and devices. The platform simplifies operations and helps ensure data availability. Enterprise-class storage features include the following:
· Replication replicates data across the cluster so that data availability is not affected if single or multiple components fail (depending on the replication factor configured).
· Deduplication is always on, helping reduce storage requirements in virtualization clusters in which multiple operating system instances in client virtual machines result in large amounts of replicated data.
· Compression further reduces storage requirements, reducing costs, and the log-structured file system is designed to store variable-sized blocks, reducing internal fragmentation.
· Thin provisioning allows large volumes to be created without requiring storage to support them until the need arises, simplifying data volume growth and making storage a “pay as you grow” proposition.
· Fast, space-efficient clones rapidly replicate storage volumes so that virtual machines can be replicated simply through metadata operations, with actual data copied only for write operations.
· Snapshots help facilitate backup and remote-replication operations, which are needed in enterprises that require always-on data availability.
The policy for the number of duplicate copies of each storage block is chosen during cluster setup, and is referred to as the replication factor (RF). The default setting for the Cisco HyperFlex HX Data Platform is replication factor 3 (RF=3).
· Replication Factor 3: For every I/O write committed to the storage layer, 2 additional copies of the blocks written will be created and stored in separate locations, for a total of 3 copies of the blocks. Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the same disks, nor on the same nodes of the cluster. This setting can tolerate simultaneous failures of 2 entire nodes without losing data or resorting to restores from backup or other recovery processes.
· Replication Factor 2: For every I/O write committed to the storage layer, 1 additional copy of the blocks written will be created and stored in separate locations, for a total of 2 copies of the blocks. Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the same disks, nor on the same nodes of the cluster. This setting can tolerate the failure of 1 entire node without losing data or resorting to restores from backup or other recovery processes.
Incoming data is distributed across all nodes in the cluster to optimize performance using the caching tier (Figure 24). Effective data distribution is achieved by mapping incoming data to stripe units that are stored evenly across all nodes, with the number of data replicas determined by the policies you set. When an application writes data, the data is sent to the appropriate node based on the stripe unit, which includes the relevant block of information. This data distribution approach in combination with the capability to have multiple streams writing at the same time avoids both network and storage hot spots, delivers the same I/O performance regardless of virtual machine location, and gives you more flexibility in workload placement. This contrasts with other architectures that use a data locality approach that does not fully use available networking and I/O resources and is vulnerable to hot spots.
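The way incoming data is mapped to stripe units and replicated across distinct nodes can be illustrated with a short, conceptual sketch. This is not the HX Data Platform implementation; the four-node cluster, the hash-based placement, and the replication factor shown here are simplifying assumptions used only to show how striping spreads data evenly while keeping copies on different nodes.

import hashlib

NODES = ["node1", "node2", "node3", "node4"]   # assumed four-node cluster
REPLICATION_FACTOR = 3                         # default RF=3

def place_block(block_id):
    """Map a block to a stripe unit, then place RF copies on distinct nodes."""
    stripe_unit = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(NODES)
    # Copies are placed on consecutive nodes starting at the stripe unit's primary node,
    # so no two copies of the same block ever land on the same node.
    return [NODES[(stripe_unit + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

print(place_block("vm42-disk0-block-0001"))    # for example: ['node2', 'node3', 'node4']

Because placement depends only on the block and not on where the virtual machine happens to run, I/O performance is independent of virtual machine location.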
Figure 24 Data is Striped Across Nodes in the Cluster
When moving a virtual machine to a new location using tools such as VMware Dynamic Resource Scheduling (DRS), the Cisco HyperFlex HX Data Platform does not require data to be moved. This approach significantly reduces the impact and cost of moving virtual machines among systems.
The data platform implements a distributed, log-structured file system that changes how it handles caching and storage capacity depending on the node configuration.
In the all-flash-memory configuration, the data platform uses a caching layer in SSDs to accelerate write responses, and it implements the capacity layer in SSDs. Read requests are fulfilled directly from data obtained from the SSDs in the capacity layer. A dedicated read cache is not required to accelerate read operations.
Incoming data is striped across the number of nodes required to satisfy availability requirements—usually two or three nodes. Based on policies you set, incoming write operations are acknowledged as persistent after they are replicated to the SSD drives in other nodes in the cluster. This approach reduces the likelihood of data loss due to SSD or node failures. The write operations are then de-staged to SSDs in the capacity layer in the all-flash memory configuration for long-term storage.
The log-structured file system writes sequentially to one of two write logs (three in case of RF=3) until it is full. It then switches to the other write log while de-staging data from the first to the capacity tier. When existing data is (logically) overwritten, the log-structured approach simply appends a new block and updates the metadata. This layout benefits SSD configurations in which seek operations are not time consuming. It reduces the write amplification levels of SSDs and the total number of writes the flash media experiences due to incoming writes and random overwrite operations of the data.
When data is de-staged to the capacity tier in each node, the data is deduplicated and compressed. This process occurs after the write operation is acknowledged, so no performance penalty is incurred for these operations. A small deduplication block size helps increase the deduplication rate. Compression further reduces the data footprint. Data is then moved to the capacity tier as write cache segments are released for reuse (Figure 25).
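The write-log behavior described above can be summarized in a brief sketch: writes are appended to the active log, the logs switch when one fills, and the full log is deduplicated and compressed while it is de-staged to the capacity tier. This is a conceptual illustration under assumed log sizes and hash/compression functions, not Cisco's actual code.

import hashlib, zlib

LOG_CAPACITY = 4                    # assumed tiny log size, for illustration only
write_logs = [[], []]               # two write logs (an RF=3 cluster keeps a third)
active = 0
capacity_tier = {}                  # fingerprint -> compressed block

def destage(blocks):
    """Deduplicate (by fingerprint) and compress blocks; runs after the write was acknowledged."""
    for b in blocks:
        fp = hashlib.sha256(b).hexdigest()
        if fp not in capacity_tier:               # inline deduplication
            capacity_tier[fp] = zlib.compress(b)  # inline compression

def write(block):
    """Append to the active write log; when it fills, switch logs and de-stage the full one."""
    global active
    write_logs[active].append(block)
    if len(write_logs[active]) == LOG_CAPACITY:
        full, active = active, 1 - active         # switch to the other write log
        destage(write_logs[full])
        write_logs[full].clear()

for i in range(10):
    write(("block-%d" % (i % 3)).encode())        # logical overwrites simply append new blocks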
Figure 25 Data Write Operation Flow through the Cisco HyperFlex HX Data Platform
Hot data sets—data that is frequently or recently read from the capacity tier—are cached in memory. All-Flash configurations, however, do not use an SSD read cache, since there is no performance benefit from such a cache; the persistent data copy already resides on high-performance SSDs. In these configurations, a read cache implemented with SSDs could become a bottleneck and prevent the system from using the aggregate bandwidth of the entire set of SSDs.
The Cisco HyperFlex HX Data Platform provides finely detailed inline deduplication and variable block inline compression that is always on for objects in the cache (SSD and memory) and capacity (SSD or HDD) layers. Unlike other solutions, which require you to turn off these features to maintain performance, the deduplication and compression capabilities in the Cisco data platform are designed to sustain and enhance performance and significantly reduce physical storage capacity requirements.
Data deduplication is used on all storage in the cluster, including memory and SSD drives. Based on a patent-pending Top-K Majority algorithm, the platform uses conclusions from empirical research that show that most data, when sliced into small data blocks, has significant deduplication potential based on a minority of the data blocks. By fingerprinting and indexing just these frequently used blocks, high rates of deduplication can be achieved with only a small amount of memory, which is a high-value resource in cluster nodes (Figure 26).
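The idea of deduplicating against a small index of the most frequently seen blocks can be sketched as follows. This is emphatically not the patented Top-K Majority algorithm itself; it is a simplified frequency-counting illustration, and the index size and fingerprint function are assumptions.

from collections import Counter
import hashlib

INDEX_SIZE = 2                       # assumed small in-memory index (the "K" most common fingerprints)
seen = Counter()                     # fingerprint -> occurrence count

def is_duplicate(block):
    """Track block frequency and report a hit only against the K most frequent fingerprints."""
    fp = hashlib.sha256(block).hexdigest()
    seen[fp] += 1
    top_k = {f for f, _ in seen.most_common(INDEX_SIZE)}
    return fp in top_k and seen[fp] > 1

blocks = [b"zero-fill", b"zero-fill", b"os-header", b"zero-fill", b"unique-data"]
print([is_duplicate(b) for b in blocks])   # repeats of common blocks are flagged as duplicates

Only the small top-K index has to live in memory, which is why high deduplication rates are possible without consuming a large share of a node's RAM.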
Figure 26 Cisco HyperFlex HX Data Platform Optimizes Data Storage with No Performance Impact
The Cisco HyperFlex HX Data Platform uses high-performance inline compression on data sets to save storage capacity. Although other products offer compression capabilities, many negatively affect performance. In contrast, the Cisco data platform uses CPU-offload instructions to reduce the performance impact of compression operations. In addition, the log-structured distributed-objects layer has no effect on modifications (write operations) to previously compressed data. Instead, incoming modifications are compressed and written to a new location, and the existing (old) data is marked for deletion, unless the data needs to be retained in a snapshot.
The data that is being modified does not need to be read prior to the write operation. This feature avoids typical read-modify-write penalties and significantly improves write performance.
In the Cisco HyperFlex HX Data Platform, the log-structured distributed-object store layer groups and compresses data that filters through the deduplication engine into self-addressable objects. These objects are written to disk in a log-structured, sequential manner. All incoming I/O—including random I/O—is written sequentially to both the caching (SSD and memory) and persistent (SSD or HDD) tiers. The objects are distributed across all nodes in the cluster to make uniform use of storage capacity.
By using a sequential layout, the platform helps increase flash-memory endurance. Because read-modify-write operations are not used, there is little or no performance impact of compression, snapshot operations, and cloning on overall performance.
Data blocks are compressed into objects and sequentially laid out in fixed-size segments, which in turn are sequentially laid out in a log-structured manner (Figure 27). Each compressed object in the log-structured segment is uniquely addressable using a key, with each key fingerprinted and stored with a checksum to provide high levels of data integrity. In addition, the chronological writing of objects helps the platform quickly recover from media or node failures by rewriting only the data that came into the system after it was truncated due to a failure.
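A compressed object, keyed and checksummed inside a fixed-size, sequentially written segment, can be modeled roughly as below. The segment size, key scheme, and zlib/CRC functions are illustrative assumptions rather than the platform's actual on-disk format.

import zlib

SEGMENT_SIZE = 3                     # assumed number of objects per segment, for illustration
log = []                             # the log: a list of sealed, fixed-size segments
open_segment = []

def append_object(key, data):
    """Compress data into a self-addressable object (key plus checksum) and lay it out sequentially."""
    compressed = zlib.compress(data)
    obj = {"key": key, "checksum": zlib.crc32(compressed), "data": compressed}
    open_segment.append(obj)
    if len(open_segment) == SEGMENT_SIZE:        # segment full: seal it and start the next one
        log.append(list(open_segment))
        open_segment.clear()

for i in range(7):
    append_object("obj-%d" % i, ("payload-%d" % i).encode())
print(len(log), "sealed segments;", len(open_segment), "objects in the open segment")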
Figure 27 Cisco HyperFlex HX Data Platform Optimizes Data Storage with No Performance Impact
Securely encrypted storage optionally encrypts both the caching and persistent layers of the data platform. Integrated with enterprise key management software, or with passphrase-protected keys, encrypting data at rest helps you comply with HIPAA, PCI-DSS, FISMA, and SOX regulations. The platform itself is hardened to Federal Information Processing Standard (FIPS) 140-1 and the encrypted drives with key management comply with the FIPS 140-2 standard.
The Cisco HyperFlex HX Data Platform provides a scalable implementation of space-efficient data services, including thin provisioning, space reclamation, pointer-based snapshots, and clones—without affecting performance.
The platform makes efficient use of storage by eliminating the need to forecast, purchase, and install disk capacity that may remain unused for a long time. Virtual data containers can present any amount of logical space to applications, whereas the amount of physical storage space that is needed is determined by the data that is written. You can expand storage on existing nodes and expand your cluster by adding more storage-intensive nodes as your business requirements dictate, eliminating the need to purchase large amounts of storage before you need it.
The Cisco HyperFlex HX Data Platform uses metadata-based, zero-copy snapshots to facilitate backup operations and remote replication: critical capabilities in enterprises that require always-on data availability. Space-efficient snapshots allow you to perform frequent online backups of data without needing to worry about the consumption of physical storage capacity. Data can be moved offline or restored from these snapshots instantaneously.
· Fast snapshot updates: When modified-data is contained in a snapshot, it is written to a new location, and the metadata is updated, without the need for read-modify-write operations.
· Rapid snapshot deletions: You can quickly delete snapshots. The platform simply deletes a small amount of metadata that is located on an SSD, rather than performing a long consolidation process as needed by solutions that use a delta-disk technique.
· Highly specific snapshots: With the Cisco HyperFlex HX Data Platform, you can take snapshots on an individual file basis. In virtual environments, these files map to drives in a virtual machine. This flexible specificity allows you to apply different snapshot policies on different virtual machines.
Many basic backup applications read the entire dataset, or the changed blocks since the last backup, at a rate that is usually as fast as the storage or the operating system can handle. Because HyperFlex is built on Cisco UCS with 10GbE networking, this can result in multiple gigabytes per second of backup throughput and affect production workloads. These basic backup applications, such as Windows Server Backup, should be scheduled during off-peak hours, particularly the initial backup if the application lacks some form of change block tracking.
Full-featured backup applications, such as Veeam Backup and Replication v9.5, can limit the amount of throughput the backup application consumes, which protects latency-sensitive applications during production hours. With the release of v9.5 Update 2, Veeam is the first partner to integrate HX native snapshots into the product. HX native snapshots do not suffer the performance penalty of delta-disk snapshots, and they do not require the disk-I/O-intensive consolidation during snapshot deletion.
Particularly important for SQL administrators is Veeam Explorer for SQL Server, which can provide transaction-level recovery within the Microsoft VSS framework. The three ways Veeam Explorer for SQL Server works to restore SQL Server databases include: from the backup restore point, from a log replay to a point in time, and from a log replay to a specific transaction – all without taking the VM or SQL Server offline.
In the Cisco HyperFlex HX Data Platform, clones are writable snapshots that can be used to rapidly provision items such as virtual desktops and applications for test and development environments. These fast, space-efficient clones rapidly replicate storage volumes so that virtual machines can be replicated through just metadata operations, with actual data copying performed only for write operations. With this approach, hundreds of clones can be created and deleted in minutes. Compared to full-copy methods, this approach can save a significant amount of time, increase IT agility, and improve IT productivity.
Clones are deduplicated when they are created. When clones start diverging from one another, data that is common between them is shared, with only unique data occupying new storage space. The deduplication engine eliminates data duplicates in the diverged clones to further reduce the clone’s storage footprint.
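The metadata-only nature of clones can be shown with a short copy-on-write sketch. This is a conceptual illustration, not HX Data Platform code; the dictionary-based block map and the class below are assumptions made only to keep the example small.

import copy

class Volume:
    """A volume is modeled as metadata: a map of logical block number -> data reference."""
    def __init__(self, block_map=None):
        self.block_map = block_map or {}

    def clone(self):
        # Cloning copies only the metadata; the underlying data blocks stay shared.
        return Volume(copy.copy(self.block_map))

    def write(self, lbn, data):
        # New data is stored only on write, so clones diverge lazily and cheaply.
        self.block_map[lbn] = data

golden = Volume({0: b"os-image", 1: b"applications"})
desktop = golden.clone()                 # instant, metadata-only clone
desktop.write(1, b"user-changes")        # only the changed block occupies new space
print(golden.block_map[1], desktop.block_map[1])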
In the Cisco HyperFlex HX Data Platform, the log-structured distributed-object layer replicates incoming data, improving data availability. Based on policies that you set, data that is written to the write cache is synchronously replicated to one or two other SSD drives located in different nodes before the write operation is acknowledged to the application. This approach allows incoming writes to be acknowledged quickly while protecting data from SSD or node failures. If an SSD or node fails, the replica is quickly re-created on other SSD drives or nodes using the available copies of the data.
The log-structured distributed-object layer also replicates data that is moved from the write cache to the capacity layer. This replicated data is likewise protected from SSD or node failures. With two replicas, or a total of three data copies, the cluster can survive uncorrelated failures of two SSD drives or two nodes without the risk of data loss. Uncorrelated failures are failures that occur on different physical nodes. Failures that occur on the same node affect the same copy of data and are treated as a single failure. For example, if one disk in a node fails and subsequently another disk on the same node fails, these correlated failures count as one failure in the system. In this case, the cluster could withstand another uncorrelated failure on a different node. See the Cisco HyperFlex HX Data Platform system administrator’s guide for a complete list of fault-tolerant configurations and settings.
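The distinction between correlated and uncorrelated failures reduces to counting failures per node: with three data copies, the cluster tolerates up to two failed nodes, and multiple failures inside one node count as a single failure. The sketch below uses hypothetical component lists only to illustrate that rule.

def tolerated(failed_components, max_failed_nodes=2):
    """Count failures by node: several failures on the same node form one correlated failure."""
    failed_nodes = {node for node, _ in failed_components}
    return len(failed_nodes) <= max_failed_nodes

# Two disks failing in the same node are one failure domain, so the cluster survives.
print(tolerated([("node1", "disk2"), ("node1", "disk5")]))                      # True
# Failures spread across three different nodes exceed what three data copies can absorb.
print(tolerated([("node1", "disk2"), ("node2", "disk1"), ("node3", "disk4")]))  # False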
If a problem occurs in the Cisco HyperFlex HX controller software, data requests from the applications residing in that node are automatically routed to other controllers in the cluster. This same capability can be used to upgrade or perform maintenance on the controller software on a rolling basis without affecting the availability of the cluster or data. This self-healing capability is one of the reasons that the Cisco HyperFlex HX Data Platform is well suited for production applications.
In addition, native replication transfers consistent cluster data to local or remote clusters. With native replication, you can snapshot and store point-in-time copies of your environment in local or remote environments for backup and disaster recovery purposes.
A distributed file system requires a robust data rebalancing capability. In the Cisco HyperFlex HX Data Platform, no overhead is associated with metadata access, and rebalancing is extremely efficient. Rebalancing is a non-disruptive online process that occurs in both the caching and persistent layers, and data is moved at a fine level of specificity to improve the use of storage capacity. The platform automatically rebalances existing data when nodes and drives are added or removed or when they fail. When a new node is added to the cluster, its capacity and performance are made available to new and existing data. The rebalancing engine distributes existing data to the new node and helps ensure that all nodes in the cluster are used uniformly from capacity and performance perspectives. If a node fails or is removed from the cluster, the rebalancing engine rebuilds and distributes copies of the data from the failed or removed node to available nodes in the clusters.
Cisco HyperFlex HX-Series systems and the HX Data Platform support online upgrades so that you can expand and update your environment without business disruption. You can easily expand your physical resources; add processing capacity; and download and install BIOS, driver, hypervisor, firmware, and Cisco UCS Manager updates, enhancements, and bug fixes.
The Cisco HyperFlex system has several new capabilities and enhancements in version 2.1.1.
Figure 28 Addition of HX All-Flash Nodes in 2.1.1
· New All-Flash HX server models are added to the Cisco HyperFlex product family, offering all-flash storage that uses SSDs as the persistent storage devices.
· Enhancements to cluster scaling.
· Cisco HyperFlex now supports the latest generation of Cisco UCS software, Cisco UCS Manager 3.1(2g) and beyond. For new All-Flash deployments, verify that Cisco UCS Manager 3.1(2g) or later is installed.
· Support for adding external storage (iSCSI or Fibre Channel) adapters to HX nodes during HX Data Platform software installation, which simplifies the process to connect external storage arrays to the HX domain.
· Support for Self-Encrypting Drives
· Support for adding HX nodes to an existing Cisco UCS-FI domain.
· Support for Cisco HyperFlex Sizer — A new end-to-end sizing tool for compute, capacity, and performance.
The Cisco HyperFlex HX Data Platform is administered through a VMware vSphere web client plug-in. Through this centralized point of control for the cluster, administrators can create volumes, monitor the data platform health, and manage resource use. Administrators can also use this data to predict when the cluster will need to be scaled. For customers who prefer a lightweight web interface, there is a tech preview URL management interface, available by opening a browser to the IP address of the HX cluster interface. Additionally, there is an interface to assist in running CLI commands through a web browser.
Figure 29 HyperFlex Web Client Plug-in
To access the Tech Preview Web UI, connect to the HX controller cluster IP:
Figure 30 HyperFlex Tech Preview UI
To run CLI commands via HTTP, connect to HX controller cluster IP:
A Cisco HyperFlex HX Data Platform controller resides on each node and implements the distributed file system. The controller runs in user space within a virtual machine and intercepts and handles all I/O from guest virtual machines. The platform controller VM uses the VMDirectPath I/O feature to provide PCI pass-through control of the physical server’s SAS disk controller. This method gives the controller VM full control of the physical disk resources, utilizing the SSD drives as a read/write caching layer, and the HDDs as a capacity layer for distributed storage. The controller integrates the data platform into VMware software through the use of two preinstalled VMware ESXi vSphere Installation Bundles (VIBs):
· IO Visor: This VIB provides a network file system (NFS) mount point so that the ESXi hypervisor can access the virtual disks that are attached to individual virtual machines. From the hypervisor’s perspective, it is simply attached to a network file system.
· VMware API for Array Integration (VAAI): This storage offload API allows vSphere to request advanced file system operations such as snapshots and cloning. The controller implements these operations through manipulation of metadata rather than actual data copying, providing rapid response, and thus rapid deployment of new environments.
The Cisco Nexus 9372PX/9372PX-E Switches have 48 1/10-Gbps Small Form Pluggable Plus (SFP+) ports and 6 Quad SFP+ (QSFP+) uplink ports. All ports are line rate, delivering 1.44 Tbps of throughput in a 1-rack-unit (1RU) form factor. Cisco Nexus 9372PX benefits are listed below:
Architectural Flexibility
· Includes top-of-rack or middle-of-row fiber-based server access connectivity for traditional and leaf-spine architectures
· Leaf-node support for the Cisco ACI architecture is on the roadmap
· Increase scale and simplify management through Cisco Nexus 2000 Fabric Extender support
Feature Rich
· Enhanced Cisco NX-OS Software is designed for performance, resiliency, scalability, manageability, and programmability
· ACI-ready infrastructure helps users take advantage of automated policy-based systems management
· Virtual Extensible LAN (VXLAN) routing provides network services
· Cisco Nexus 9372PX-E supports IP-based endpoint group (EPG) classification in ACI mode
Highly Available and Efficient Design
· High-density, non-blocking architecture
· Easily deployed into either a hot-aisle or a cold-aisle configuration
· Redundant, hot-swappable power supplies and fan trays
Simplified Operations
· Power-On Auto Provisioning (POAP) support allows for simplified software upgrades and configuration file installation
· An intelligent API offers switch management through remote procedure calls (RPCs, JSON, or XML) over an HTTP/HTTPS infrastructure
· Python Scripting for programmatic access to the switch command-line interface (CLI)
· Hot and cold patching, and online diagnostics
Investment Protection
A Cisco 40 Gb bidirectional transceiver allows reuse of an existing 10 Gigabit Ethernet multimode cabling plant for 40 Gigabit Ethernet. Support for 1 Gb and 10 Gb access connectivity is provided for data centers migrating access switching infrastructure to faster speeds. The following is supported:
· 1.44 Tbps of bandwidth in a 1 RU form factor
· 48 fixed 1/10-Gbps SFP+ ports
· 6 fixed 40-Gbps QSFP+ for uplink connectivity that can be turned into 10 Gb ports through a QSFP to SFP or SFP+ Adapter (QSA)
· Latency of 1 to 2 microseconds
· Front-to-back or back-to-front airflow configurations
· 1+1 redundant hot-swappable 80 Plus Platinum-certified power supplies
· Hot swappable 2+1 redundant fan tray
Figure 32 Cisco Nexus 9372PX Switch
VMware provides virtualization software. VMware’s enterprise software hypervisors for servers—VMware ESX, VMware ESXi, and VMware vSphere—are bare-metal hypervisors that run directly on server hardware without requiring an additional underlying operating system. VMware vCenter Server for vSphere provides central management and complete control and visibility into clusters, hosts, virtual machines, storage, networking, and other critical elements of your virtual infrastructure.
VMware vSphere 6.0 introduces many enhancements to vSphere Hypervisor, VMware virtual machines, vCenter Server, virtual storage, and virtual networking, further extending the core capabilities of the vSphere platform.
vSphere 6.0 introduces a number of new features in the hypervisor:
· Scalability Improvements
ESXi 6.0 dramatically increases the scalability of the platform. With vSphere Hypervisor 6.0, clusters can scale to as many as 64 hosts, up from 32 in previous releases. With 64 hosts in a cluster, vSphere 6.0 can support 8000 virtual machines in a single cluster. This capability enables greater consolidation ratios, more efficient use of VMware vSphere Distributed Resource Scheduler (DRS), and fewer clusters that must be separately managed. Each vSphere Hypervisor 6.0 instance can support up to 480 logical CPUs, 12 terabytes (TB) of RAM, and 1024 virtual machines. By using the newest hardware advances, ESXi 6.0 enables the virtualization of applications that previously had been thought to be non-virtualizable.
· Security Enhancements
- ESXi 6.0 offers these security enhancements:
o Account management: ESXi 6.0 enables management of local accounts on the ESXi server using new ESXi CLI commands. The capability to add, list, remove, and modify accounts across all hosts in a cluster can be centrally managed using a vCenter Server system. Previously, the account and permission management functions for ESXi hosts were available only for direct host connections. The setup, removal, and listing of local permissions on ESXi servers can also be centrally managed.
o Account lockout: ESXi Host Advanced System Settings have two new options for the management of failed local account login attempts and account lockout duration. These parameters affect Secure Shell (SSH) and vSphere Web Services connections, but not ESXi direct console user interface (DCUI) or console shell access.
o Password complexity rules: In previous versions of ESXi, password complexity changes had to be made by manually editing the /etc/pam.d/passwd file on each ESXi host. In vSphere 6.0, an entry in Host Advanced System Settings enables changes to be centrally managed for all hosts in a cluster.
o Improved auditability of ESXi administrator actions: Prior to vSphere 6.0, actions at the vCenter Server level by a named user appeared in ESXi logs with the vpxuser username: for example, [user=vpxuser]. In vSphere 6.0, all actions at the vCenter Server level for an ESXi server appear in the ESXi logs with the vCenter Server username: for example, [user=vpxuser: DOMAIN\User]. This approach provides a better audit trail for actions run on a vCenter Server instance that conducted corresponding tasks on the ESXi hosts.
o Flexible lockdown modes: Prior to vSphere 6.0, only one lockdown mode was available. Feedback from customers indicated that this lockdown mode was inflexible in some use cases. With vSphere 6.0, two lockdown modes are available:
· In normal lockdown mode, DCUI access is not stopped, and users on the DCUI access list can access the DCUI.
· In strict lockdown mode, the DCUI is stopped.
- Exception users: vSphere 6.0 offers a new function called exception users. Exception users are local accounts or Microsoft Active Directory accounts with permissions defined locally on the host to which these users have host access. These exception users are not recommended for general user accounts, but they are recommended for use by third-party applications—for service accounts, for example—that need host access when either normal or strict lockdown mode is enabled. Permissions on these accounts should be set to the bare minimum required for the application to perform its task; where possible, the account should be granted only read-only permissions on the ESXi host.
- Smart card authentication to DCUI: This function is for U.S. federal customers only. It enables DCUI login access using a Common Access Card (CAC) and Personal Identity Verification (PIV). The ESXi host must be part of an Active Directory domain.
Enterprise IT organizations are tasked with the challenge of provisioning Microsoft Windows apps and desktops while managing cost, centralizing control, and enforcing corporate security policy. Deploying Windows apps to users in any location, regardless of the device type and available network bandwidth, enables a mobile workforce that can improve productivity. With Citrix XenDesktop 7.13, IT can effectively control app and desktop provisioning while securing data assets and lowering capital and operating expenses.
The XenDesktop 7.13 release offers these benefits:
· Comprehensive virtual desktop delivery for any use case. The XenDesktop 7.13 release incorporates the full power of XenApp, delivering full desktops or just applications to users. Administrators can deploy both XenApp published applications and desktops (to maximize IT control at low cost) or personalized VDI desktops (with simplified image management) from the same management console. Citrix XenDesktop 7.13 leverages common policies and cohesive tools to govern both infrastructure resources and user access.
· Simplified support and choice of BYO (Bring Your Own) devices. XenDesktop 7.13 brings thousands of corporate Microsoft Windows-based applications to mobile devices with a native-touch experience and optimized performance. HDX technologies create a “high definition” user experience, even for graphics-intensive design and engineering applications.
· Lower cost and complexity of application and desktop management. XenDesktop 7.13 helps IT organizations take advantage of agile and cost-effective cloud offerings, allowing the virtualized infrastructure to flex and meet seasonal demands or the need for sudden capacity changes. IT organizations can deploy XenDesktop application and desktop workloads to private or public clouds.
· Protection of sensitive information through centralization. XenDesktop decreases the risk of corporate data loss, enabling access while securing intellectual property and centralizing applications since assets reside in the datacenter.
· Virtual Delivery Agent improvements. Universal print server and driver enhancements and support for the HDX 3D Pro graphics acceleration for Windows 10 are key additions in XenDesktop 7.13.
· Improved high-definition user experience. XenDesktop 7.13 continues the evolutionary display protocol leadership with enhanced Thinwire display remoting protocol and Framehawk support for HDX 3D Pro.
Citrix XenApp and XenDesktop are application and desktop virtualization solutions built on a unified architecture so they're simple to manage and flexible enough to meet the needs of all your organization's users. XenApp and XenDesktop have a common set of management tools that simplify and automate IT tasks. You use the same architecture and management tools to manage public, private, and hybrid cloud deployments as you do for on-premises deployments.
Citrix XenApp delivers:
· XenApp published apps, also known as server-based hosted applications: These are applications hosted from Microsoft Windows servers to any type of device, including Windows PCs, Macs, smartphones, and tablets. Some XenApp editions include technologies that further optimize the experience of using Windows applications on a mobile device by automatically translating native mobile-device display, navigation, and controls to Windows applications; enhancing performance over mobile networks; and enabling developers to optimize any custom Windows application for any mobile environment.
· XenApp published desktops, also known as server-hosted desktops: These are inexpensive, locked-down Windows virtual desktops hosted from Windows server operating systems. They are well suited for users, such as call center employees, who perform a standard set of tasks.
· Virtual machine–hosted apps: These are applications hosted from machines running Windows desktop operating systems for applications that can’t be hosted in a server environment.
· Windows applications delivered with Microsoft App-V: These applications use the same management tools that you use for the rest of your XenApp deployment.
· Citrix XenDesktop: Includes significant enhancements to help customers deliver Windows apps and desktops as mobile services while addressing management complexity and associated costs. Enhancements in this release include:
· Unified product architecture for XenApp and XenDesktop: The FlexCast Management Architecture (FMA). This release supplies a single set of administrative interfaces to deliver both hosted-shared applications (RDS) and complete virtual desktops (VDI). Unlike earlier releases that separately provisioned Citrix XenApp and XenDesktop farms, the XenDesktop 7.13 release allows administrators to deploy a single infrastructure and use a consistent set of tools to manage mixed application and desktop workloads.
· Support for extending deployments to the cloud. This release provides the ability for hybrid cloud provisioning from Microsoft Azure, Amazon Web Services (AWS) or any Cloud Platform-powered public or private cloud. Cloud deployments are configured, managed, and monitored through the same administrative consoles as deployments on traditional on-premises infrastructure.
Citrix XenDesktop delivers:
· VDI desktops: These virtual desktops each run a Microsoft Windows desktop operating system rather than running in a shared, server-based environment. They can provide users with their own desktops that they can fully personalize.
· Hosted physical desktops: This solution is well suited for providing secure access to powerful physical machines, such as blade servers, from within your data center.
· Remote PC access: This solution allows users to log in to their physical Windows PC from anywhere over a secure XenDesktop connection.
· Server VDI: This solution is designed to provide hosted desktops in multitenant, cloud environments.
· Capabilities that allow users to continue to use their virtual desktops: These capabilities let users continue to work while not connected to your network.
This product release includes the following new and enhanced features:
Some XenDesktop editions include the features available in XenApp.
Deployments that span widely-dispersed locations connected by a WAN can face challenges due to network latency and reliability. Configuring zones can help users in remote regions connect to local resources without forcing connections to traverse large segments of the WAN. Using zones allows effective Site management from a single Citrix Studio console, Citrix Director, and the Site database. This saves the costs of deploying, staffing, licensing, and maintaining additional Sites containing separate databases in remote locations.
Zones can be helpful in deployments of all sizes. You can use zones to keep applications and desktops closer to end users, which improves performance.
For more information, see the Zones article.
When you configure the databases during Site creation, you can now specify separate locations for the Site, Logging, and Monitoring databases. Later, you can specify different locations for all three databases. In previous releases, all three databases were created at the same address, and you could not specify a different address for the Site database later.
You can now add more Delivery Controllers when you create a Site, as well as later. In previous releases, you could add more Controllers only after you created the Site.
For more information, see the Databases and Controllers articles.
Configure application limits to help manage application use. For example, you can use application limits to manage the number of users accessing an application simultaneously. Similarly, application limits can be used to manage the number of simultaneous instances of resource-intensive applications; this can help maintain server performance and prevent deterioration in service.
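Conceptually, an application limit is an admission check made before another instance is launched. The sketch below is only an illustration of that check; the limit value, data structures, and function name are assumptions and not part of the Citrix implementation.

active_instances = {"CAD-App": 9}      # hypothetical current usage per published application
app_limits = {"CAD-App": 10}           # maximum simultaneous instances configured by the administrator

def can_launch(app):
    """Allow a new instance only while the configured application limit has not been reached."""
    limit = app_limits.get(app)
    return limit is None or active_instances.get(app, 0) < limit

print(can_launch("CAD-App"))           # True: one slot remains
active_instances["CAD-App"] += 1
print(can_launch("CAD-App"))           # False: the limit of 10 has been reached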
For more information, see the Manage applications article.
You can now choose to repeat a notification message that is sent to affected machines before the following types of actions begin:
· Updating machines in a Machine Catalog using a new master image
· Restarting machines in a Delivery Group according to a configured schedule
If you indicate that the first message should be sent to each affected machine 15 minutes before the update or restart begins, you can also specify that the message be repeated every five minutes until the update/restart begins.
For more information, see the Manage Machine Catalogs and Manage machines in Delivery Groups articles.
By default, sessions roam between client devices with the user. When the user launches a session and then moves to another device, the same session is used and applications are available on both devices. The applications follow, regardless of the device or whether current sessions exist. Similarly, printers and other resources assigned to the application follow.
You can now use the PowerShell SDK to tailor session roaming. This was an experimental feature in the previous release.
For more information, see the Sessions article.
When using the PowerShell SDK to create or update a Machine Catalog, you can now select a template from other hypervisor connections. This is in addition to the currently-available choices of VM images and snapshots.
See the System requirements article for full support information. Information about support for third-party product versions is updated periodically.
By default, SQL Server 2012 Express SP2 is installed when you install the Delivery Controller. SP1 is no longer installed.
The component installers now automatically deploy newer Microsoft Visual C++ runtime versions: 32-bit and 64-bit Microsoft Visual C++ 2013, 2010 SP1, and 2008 SP1. Visual C++ 2005 is no longer deployed.
You can install Studio or VDAs for Windows Desktop OS on machines running Windows 10.
You can create connections to Microsoft Azure virtualization resources.
Figure 33 Logical Architecture of Citrix XenDesktop
Most enterprises struggle to keep up with the proliferation and management of computers in their environments. Each computer, whether it is a desktop PC, a server in a data center, or a kiosk-type device, must be managed as an individual entity. The benefits of distributed processing come at the cost of distributed management. It costs time and money to set up, update, support, and ultimately decommission each computer. The initial cost of the machine is often dwarfed by operating costs.
Citrix PVS takes a very different approach from traditional imaging solutions by fundamentally changing the relationship between hardware and the software that runs on it. By streaming a single shared disk image (vDisk) rather than copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiency of centralized management and the benefits of distributed processing.
In addition, because machines are streaming disk data dynamically and in real time from a single shared image, machine image consistency is essentially ensured. At the same time, the configuration, applications, and even the OS of large pools of machines can be completely changed in the time it takes the machines to reboot.
Using PVS, any vDisk can be configured in standard-image mode. A vDisk in standard-image mode allows many computers to boot from it simultaneously, greatly reducing the number of images that must be maintained and the amount of storage that is required. The vDisk is in read-only format, and the image cannot be changed by target devices.
If you manage a pool of servers that work as a farm, such as Citrix XenApp servers or web servers, maintaining a uniform patch level on your servers can be difficult and time consuming. With traditional imaging solutions, you start with a clean golden master image, but as soon as a server is built with the master image, you must patch that individual server along with all the other individual servers. Rolling out patches to individual servers in your farm is not only inefficient, but the results can also be unreliable. Patches often fail on an individual server, and you may not realize you have a problem until users start complaining or the server has an outage. After that happens, getting the server resynchronized with the rest of the farm can be challenging, and sometimes a full reimaging of the machine is required.
With Citrix PVS, patch management for server farms is simple and reliable. You start by managing your golden image, and you continue to manage that single golden image. All patching is performed in one place and then streamed to your servers when they boot. Server build consistency is assured because all your servers use a single shared copy of the disk image. If a server becomes corrupted, simply reboot it, and it is instantly back to the known good state of your master image. Upgrades are extremely fast to implement. After you have your updated image ready for production, you simply assign the new image version to the servers and reboot them. You can deploy the new image to any number of servers in the time it takes them to reboot. Just as important, rollback can be performed in the same way, so problems with new images do not need to take your servers or your users out of commission for an extended period of time.
Because Citrix PVS is part of Citrix XenDesktop, desktop administrators can use PVS’s streaming technology to simplify, consolidate, and reduce the costs of both physical and virtual desktop delivery. Many organizations are beginning to explore desktop virtualization. Although virtualization addresses many of IT’s needs for consolidation and simplified management, deploying it also requires deployment of supporting infrastructure. Without PVS, storage costs can make desktop virtualization too costly for the IT budget. However, with PVS, IT can reduce the amount of storage required for VDI by as much as 90 percent. And with a single image to manage instead of hundreds or thousands of desktops, PVS significantly reduces the cost, effort, and complexity for desktop administration.
Different types of workers across the enterprise need different types of desktops. Some require simplicity and standardization, and others require high performance and personalization. XenDesktop can meet these requirements in a single solution using Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user.
Not all desktop applications can be supported by virtual desktops. For these scenarios, IT can still reap the benefits of consolidation and single-image management. Desktop images are stored and managed centrally in the data center and streamed to physical desktops on demand. This model works particularly well for standardized desktops such as those in lab and training environments, call centers, and thin-client devices used to access virtual desktops.
Citrix PVS streaming technology allows computers to be provisioned and re-provisioned in real time from a single shared disk image. With this approach, administrators can completely eliminate the need to manage and patch individual systems. Instead, all image management is performed on the master image. The local hard drive of each system can be used for runtime data caching or, in some scenarios, removed from the system entirely, which reduces power use, system failure rate, and security risk.
The PVS solution’s infrastructure is based on software-streaming technology. After PVS components are installed and configured, a vDisk is created from a device’s hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. A device used for this process is referred to as a master target device. The devices that use the vDisks are called target devices. vDisks can exist on a Provisioning Server, a file share, or, in larger deployments, on a storage system with which PVS can communicate (iSCSI, SAN, network-attached storage [NAS], and Common Internet File System [CIFS]). vDisks can be assigned to a single target device in private-image mode, or to multiple target devices in standard-image mode.
The Citrix PVS infrastructure design directly relates to administrative roles within a PVS farm. The PVS administrator role determines which components that administrator can manage or view in the console.
A PVS farm contains several components. Figure 34 provides a high-level view of a basic PVS infrastructure and shows how PVS components might appear within that implementation.
Figure 34 Logical Architecture of Citrix Provisioning Services
The following new features are available with Provisioning Services 7.13:
· Linux streaming
· XenServer proxy using PVS-Accelerator
There are many reasons to consider a virtual desktop solution, such as an ever-growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own Computer (BYOC) to work programs. The first step in designing a virtual desktop solution is to understand the user community and the type of tasks that are required to successfully execute their role. The following user classifications are provided:
· Knowledge Workers today do not just work in their offices all day – they attend meetings, visit branch offices, work from home, and even work from coffee shops. These anywhere workers expect access to all of their same applications and data wherever they are.
· External Contractors are increasingly part of your everyday business. They need access to certain portions of your applications and data, yet administrators still have little control over the devices they use and the locations they work from. Consequently, IT is stuck making trade-offs on the cost of providing these workers a device vs. the security risk of allowing them access from their own devices.
· Task Workers perform a set of well-defined tasks. These workers access a small set of applications and have limited requirements from their PCs. However, since these workers are interacting with your customers, partners, and employees, they have access to your most critical data.
· Mobile Workers need access to their virtual desktop from everywhere, regardless of their ability to connect to a network. In addition, these workers expect the ability to personalize their PCs, by installing their own applications and storing their own data, such as photos and music, on these devices.
· Shared Workstation users are often found in state-of-the-art universities and business computer labs, conference rooms or training centers. Shared workstation environments have the constant requirement to re-provision desktops with the latest operating systems and applications as the needs of the organization change.
After the user classifications have been identified and the business requirements for each user classification have been defined, it becomes essential to evaluate the types of virtual desktops that are needed based on user requirements. There are essentially six potential desktop environments for each user:
· Traditional PC: A traditional PC is what has typically constituted a desktop environment: a physical device with a locally installed operating system.
· Hosted Shared Desktop: A hosted, server-based desktop is a desktop where the user interacts through a delivery protocol. With hosted, server-based desktops, a single installed instance of a server operating system, such as Microsoft Windows Server 2012, is shared by multiple users simultaneously. Each user receives a desktop "session" and works in an isolated memory space. Changes made by one user could impact the other users.
· Hosted Virtual Desktop: A hosted virtual desktop is a virtual desktop running either on a virtualization layer (ESX) or on bare-metal hardware. The user does not work with and sit in front of the desktop hardware; instead, the user interacts with it through a delivery protocol.
· Published Applications: Published applications run entirely on the RDS session hosts and the user interacts with them through a delivery protocol. With published applications, a single installed instance of an application, such as Microsoft Office, is shared by multiple users simultaneously. Each user receives an application "session" and works in an isolated memory space.
· Streamed Applications: Streamed desktops and applications run entirely on the user's local client device and are sent from a server on demand. The user interacts with the application or desktop directly, but the resources may only be available while they are connected to the network.
· Local Virtual Desktop: A local virtual desktop is a desktop running entirely on the user's local device and continues to operate when disconnected from the network. In this case, the user's local device is used as a type 1 hypervisor and is synced with the data center when the device is connected to the network.
For the purposes of the validation represented in this document, both XenDesktop Virtual Desktops and XenApp Hosted Shared Desktop server sessions were validated. Each of the sections provides some fundamental design decisions for this environment.
When the desktop user groups and sub-groups have been identified, the next task is to catalog group application and data requirements. This can be one of the most time-consuming processes in the VDI planning exercise, but is essential for the VDI project’s success. If the applications and data are not identified and co-located, performance will be negatively affected.
The process of analyzing the variety of application and data pairs for an organization will likely be complicated by the inclusion of cloud applications, like SalesForce.com. This application and data analysis is beyond the scope of this Cisco Validated Design, but should not be omitted from the planning process. There are a variety of third-party tools available to assist organizations with this crucial exercise.
Now that user groups, their applications, and their data requirements are understood, some key project and solution sizing questions may be considered.
General project questions should be addressed at the outset, including:
· Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications, and data?
· Is there infrastructure and budget in place to run the pilot program?
· Are the required skill sets to execute the VDI project available? Can we hire or contract for them?
· Do we have end user experience performance metrics identified for each desktop sub-group?
· How will we measure success or failure?
· What is the future implication of success or failure?
Below is a short, non-exhaustive list of sizing questions that should be addressed for each user sub-group:
· What is the desktop OS planned? Windows 7, Windows 8, or Windows 10?
· 32-bit or 64-bit desktop OS?
· How many virtual desktops will be deployed in the pilot? In production? All Windows 7/8/10?
· How much memory per target desktop group desktop?
· Are there any rich media, Flash, or graphics-intensive workloads?
· What is the end point graphics processing capability?
· Will Remote Desktop Server Hosted (RDSH) sessions be used?
· What is the hypervisor for the solution?
· What is the storage configuration in the existing environment?
· Are there sufficient IOPS available for the write-intensive VDI workload?
· Will there be storage dedicated and tuned for VDI service?
· Is there a voice component to the desktop?
· Is anti-virus a part of the image?
· Is user profile management (e.g., non-roaming profile based) part of the solution?
· What is the fault tolerance, failover, disaster recovery plan?
· Are there additional desktop sub-group specific questions?
An ever-growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own (BYO) device to work programs are prime reasons for moving to a virtual desktop solution.
Citrix XenDesktop 7.13 integrates Hosted Shared and VDI desktop virtualization technologies into a unified architecture that enables a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as a service.
Users can select applications from an easy-to-use “store” that is accessible from tablets, smartphones, PCs, Macs, and thin clients. XenDesktop delivers a native touch-optimized experience with HDX high-definition performance, even over mobile networks.
Collections of identical Virtual Machines (VMs) or physical computers are managed as a single entity called a Machine Catalog. In this CVD, VM provisioning relies on Citrix Provisioning Services to make sure that the machines in the catalog are consistent. In this CVD, machines in the Machine Catalog are configured to run either a Windows Server OS (for RDS hosted shared desktops) or a Windows Desktop OS (for hosted pooled VDI desktops).
To deliver desktops and applications to users, you create a Machine Catalog and then allocate machines from the catalog to users by creating Delivery Groups. Delivery Groups provide desktops, applications, or a combination of desktops and applications to users. Creating a Delivery Group is a flexible way of allocating machines and applications to users. In a Delivery Group, you can:
· Use machines from multiple catalogs
· Allocate a user to multiple machines
· Allocate multiple users to one machine
As part of the creation process, you specify the following Delivery Group properties:
· Users, groups, and applications allocated to Delivery Groups
· Desktop settings to match users' needs
· Desktop power management options
Figure 35 shows how users access desktops and applications through machine catalogs and delivery groups.
The Server OS and Desktop OS Machines configured in this CVD support the hosted shared desktops and hosted virtual desktops (both non-persistent and persistent).
Figure 35 Access Desktops and Applications through Machine Catalogs and Delivery Groups
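The relationship between Machine Catalogs and Delivery Groups described above can be summarized as a simple data model. The classes and sample names below are illustrative assumptions rather than the Citrix schema; they only show that a Delivery Group can draw machines from multiple catalogs and allocate them to users.

from dataclasses import dataclass, field

@dataclass
class MachineCatalog:
    name: str
    os_type: str                       # "Server OS" (RDS hosted shared) or "Desktop OS" (VDI)
    machines: list = field(default_factory=list)

@dataclass
class DeliveryGroup:
    name: str
    users: list = field(default_factory=list)
    machines: list = field(default_factory=list)   # machines may come from multiple catalogs

vdi_catalog_a = MachineCatalog("VDI-Catalog-A", "Desktop OS", ["VDI-001", "VDI-002"])
vdi_catalog_b = MachineCatalog("VDI-Catalog-B", "Desktop OS", ["VDI-101"])

vdi_group = DeliveryGroup("Pooled-VDI-Desktops",
                          users=["DOMAIN\\KnowledgeWorkers"],
                          machines=vdi_catalog_a.machines + vdi_catalog_b.machines)
print(vdi_group)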
Citrix XenDesktop 7.13 can be deployed with or without Citrix Provisioning Services (PVS). The advantage of using Citrix PVS is that it allows virtual machines to be provisioned and re-provisioned in real time from a single shared-disk image. In this way, administrators can completely eliminate the need to manage and patch individual systems and reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiency of centralized management with the benefits of distributed processing.
The Provisioning Services solution’s infrastructure is based on software-streaming technology. After installing and configuring Provisioning Services components, a single shared disk image (vDisk) is created from a device’s hard drive by taking a snapshot of the OS and application image, and then storing that image as a vDisk file on the network. A device that is used during the vDisk creation process is the Master target device. Devices or virtual machines that use the created vDisks are called target devices.
When a target device is turned on, it is set to boot from the network and to communicate with a Provisioning Server. Unlike thin-client technology, processing takes place on the target device.
Figure 36 Citrix Provisioning Services Functionality
The target device downloads the boot file from a Provisioning Server (Step 2) and boots. Based on the boot configuration settings, the appropriate vDisk is mounted on the Provisioning Server (Step 3). The vDisk software is then streamed to the target device as needed, appearing as a regular hard drive to the system.
Instead of immediately pulling all the vDisk contents down to the target device (as with traditional imaging solutions), the data is brought across the network in real time as needed. This approach allows a target device to get a completely new operating system and set of software in the time it takes to reboot. It also dramatically decreases the amount of network bandwidth required, making it possible to support a larger number of target devices on a network without impacting performance.
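The contrast between on-demand streaming and full-image copying can be expressed in a brief sketch. The block size, vDisk contents, and function names below are illustrative assumptions; the point is simply that only the blocks a target device actually reads ever cross the network.

vdisk = {i: ("block-%d" % i).encode() for i in range(1000)}   # shared read-only vDisk on the PVS server
streamed = {}                                                 # blocks this target device has fetched so far

def read_block(block_no):
    """Fetch a block from the Provisioning Server only the first time the target device needs it."""
    if block_no not in streamed:                              # the network transfer happens here, on demand
        streamed[block_no] = vdisk[block_no]
    return streamed[block_no]

# Booting touches only a fraction of the image, so only that fraction is transferred.
for block_no in (0, 1, 2, 42, 512):
    read_block(block_no)
print("%d of %d blocks transferred" % (len(streamed), len(vdisk)))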
Citrix PVS can create desktops as Pooled or Private:
· Pooled Desktop: A pooled virtual desktop uses Citrix PVS to stream a standard desktop image to multiple desktop instances upon boot.
· Private Desktop: A private desktop is a single desktop assigned to one distinct user.
The alternative to Citrix Provisioning Services for pooled desktop deployments is Citrix Machine Creation Services (MCS), which is integrated with the XenDesktop Studio console.
When considering a PVS deployment, there are some design decisions that need to be made regarding the write cache for the target devices that leverage provisioning services. The write cache is a cache of all data that the target device has written. If data is written to the PVS vDisk in a caching mode, the data is not written back to the base vDisk. Instead it is written to a write cache file in one of the following locations:
· Cache on device hard drive. Write cache exists as a file in NTFS format, located on the target-device’s hard drive. This option frees up the Provisioning Server since it does not have to process write requests and does not have the finite limitation of RAM.
· Cache on device hard drive persisted. (Experimental Phase) This is the same as “Cache on device hard drive”, except that the cache persists. At this time, this method is an experimental feature only, and is only supported for NT6.1 or later (Windows 7/Windows Server 2008 R2 and later). This method also requires a different bootstrap.
· Cache in device RAM. Write cache can exist as a temporary file in the target device’s RAM. This provides the fastest method of disk access since memory access is always faster than disk access.
· Cache in device RAM with overflow on hard disk. This method uses VHDX differencing format and is only available for Windows 10 and Server 2008 R2 and later. When RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first. When RAM is full, the least recently used block of data is written to the local differencing disk to accommodate newer data on RAM. The amount of RAM specified is the non-paged kernel memory that the target device will consume.
· Cache on a server. Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk I/O and network traffic. For additional security, the Provisioning Server can be configured to encrypt write cache files. Since the write-cache file persists on the hard drive between reboots, encrypted data provides data protection in the event a hard drive is stolen.
· Cache on server persisted. This cache option allows saved changes to persist between reboots. Using this option, a rebooted target device is able to retrieve changes made in previous sessions that differ from the read-only vDisk image. If a vDisk is set to this method of caching, each target device that accesses the vDisk automatically has a device-specific, writable disk file created. Any changes made to the vDisk image are written to that file, which is not automatically deleted upon shutdown.
In this CVD, Provisioning Server 7.13 was used to manage Pooled/Non-Persistent VDI Machines and XenApp RDS Machines with “Cache in device RAM with Overflow on Hard Disk” for each virtual machine. This design enables good scalability to many thousands of desktops. Provisioning Server 7.13 was used for Active Directory machine account creation and management as well as for streaming the shared disk to the hypervisor hosts.
Two examples of typical XenDesktop deployments are the following:
· A distributed components configuration
· A multiple site configuration
Because XenApp and XenDesktop 7.13 are based on a unified architecture, they can deliver a combination of Hosted Shared Desktops (HSDs, using a Server OS machine) and Hosted Virtual Desktops (HVDs, using a Desktop OS machine).
You can distribute the components of your deployment among a greater number of servers, or provide greater scalability and failover by increasing the number of controllers in your site. You can install management consoles on separate computers to manage the deployment remotely. A distributed deployment is necessary for an infrastructure based on remote access through NetScaler Gateway (formerly called Access Gateway).
Figure 37 shows an example of a distributed components configuration. A simplified version of this configuration is often deployed for an initial proof-of-concept (POC) deployment. The CVD described in this document deploys Citrix XenDesktop in a configuration that resembles this distributed components configuration shown. Two Cisco C220 rack servers host the required infrastructure services (AD, DNS, DHCP, Profile, SQL, Citrix XenDesktop management, and StoreFront servers).
Figure 37 Example of a Distributed Components Configuration
If you have multiple regional sites, you can use Citrix NetScaler to direct user connections to the most appropriate site and StoreFront to deliver desktops and applications to users.
Figure 38 depicts a multiple-site configuration in which a site was created in each of two data centers. Having two sites globally, rather than just one, minimizes the amount of unnecessary WAN traffic.
You can use StoreFront to aggregate resources from multiple sites to provide users with a single point of access with NetScaler. A separate Studio console is required to manage each site; sites cannot be managed as a single entity. You can use Director to support users across sites.
Citrix NetScaler accelerates application performance, load balances servers, increases security, and optimizes the user experience. In this example, two NetScalers are used to provide a high availability configuration. The NetScalers are configured for Global Server Load Balancing and positioned in the DMZ to provide a multi-site, fault-tolerant solution.
With Citrix XenDesktop 7.13, the method you choose to provide applications or desktops to users depends on the types of applications and desktops you are hosting and available system resources, as well as the types of users and user experience you want to provide.
Server OS machines |
You want: Inexpensive server-based delivery to minimize the cost of delivering applications to a large number of users, while providing a secure, high-definition user experience. Your users: Perform well-defined tasks and do not require personalization or offline access to applications. Users may include task workers such as call center operators and retail workers, or users that share workstations. Application types: Any application. |
Desktop OS machines |
You want: A client-based application delivery solution that is secure, provides centralized management, and supports a large number of users per host server (or hypervisor), while providing users with applications that display seamlessly in high definition. Your users: Are internal users, external contractors, third-party collaborators, and other provisional team members. Users do not require offline access to hosted applications. Application types: Applications that might not work well with other applications or might interact with the operating system, such as applications that depend on the .NET Framework. These types of applications are ideal for hosting on virtual machines. Also applications running on older operating systems, such as Windows XP or Windows Vista, and older architectures, such as 32-bit or 16-bit. By isolating each application on its own virtual machine, if one machine fails, it does not impact other users. |
Remote PC Access |
You want: Employees with secure remote access to a physical computer without using a VPN. For example, the user may be accessing their physical desktop PC from home or through a public Wi-Fi hotspot. Depending upon the location, you may want to restrict the ability to print or to copy and paste outside of the desktop. This method enables BYO device support without migrating desktop images into the datacenter. Your users: Employees or contractors that have the option to work from home, but need access to specific software or data on their corporate desktops to perform their jobs remotely. Host: The same as Desktop OS machines. Application types: Applications that are delivered from an office computer and display seamlessly in high definition on the remote user's device. |
The architecture deployed is highly modular. While each customer’s environment might vary in its exact configuration, the reference architecture contained in this document, once built, can easily be scaled as requirements and demands change. This includes scaling both up (adding additional resources within an existing Cisco HyperFlex system) and out (adding additional Cisco UCS HX-Series nodes, or Cisco UCS B/C-Series servers as compute-only nodes).
The solution includes Cisco networking, Cisco UCS, and Cisco HyperFlex hyper-converged storage, which efficiently fits into a single data center rack, including the access layer network switches.
This validated design document details the deployment of multiple configurations scaling to 4000 users of virtual desktop and hosted shared desktop workloads, featuring the following deployment methods:
· Citrix XenDesktop 7.13 Non-Persistent Hosted Virtual Desktops (HVD) provisioned with Citrix Provisioning Services (PVS) with Write Cache in device RAM with Overflow on Hard Disk on Cisco HyperFlex
· Citrix XenDesktop 7.13 Non-Persistent HVDs provisioned with Citrix Machine Creation Services (MCS) on Cisco HyperFlex
· Citrix XenDesktop 7.13 persistent HVDs provisioned with Citrix Machine Creation Services (MCS) and using full copy on Cisco HyperFlex
· Citrix XenDesktop 7.13 Hosted Shared Virtual Desktops (HSD) provisioned with Citrix Provisioning Services (PVS) with Write Cache in device RAM with Overflow on Hard Disk on Cisco HyperFlex
The solution contains the following hardware as shown in Figure 39:
· Two Cisco Nexus 9372PX Layer 2 Access Switches
· Two Cisco UCS C220 M4 Rack servers with dual socket Intel Xeon E5-2620v4 2.1-GHz 8-core processors, 128GB 2133-MHz RAM and VIC1227 mLOM card for the hosted infrastructure with N+1 server fault tolerance. (Not shown in the diagram).
· Sixteen Cisco UCS HXAF220c-M4S Rack servers with Intel Xeon E5-2690v4 2.6-GHz 14-core processors, 512GB 2400-MHz RAM and VIC1227 mLOM cards running Cisco HyperFlex data platform v2.1.1b for the virtual desktop workloads with N+1 server fault tolerance
· Eight Cisco UCS B200 M4 blade servers with Intel Xeon E5-2690v4 2.6-GHz 14-core processors, 512GB 2400-MHz RAM and VIC1340 mLOM cards running Cisco HyperFlex data platform v2.1.1b for the virtual desktop workloads with N+1 server fault tolerance
· Eight Cisco UCS C220 M4 Rack servers with Intel Xeon E5-2690v4 2.6-GHz 14-core processors, 512GB 2400-MHz RAM and VIC1227 mLOM cards running Cisco HyperFlex data platform v2.1.1b for the virtual desktop workloads with N+1 server fault tolerance
Table 1 lists the software and firmware versions used in the study.
Table 1 Software and Firmware Versions
Vendor |
Product |
Version |
Cisco |
UCS Component Firmware |
3.1(2g) bundle release |
Cisco |
UCS Manager |
3.1(2g) bundle release |
Cisco |
UCS HXAF220c-M4S rack server |
3.1(2g) bundle release |
Cisco |
VIC 1227 |
4.1(2e) |
Cisco |
UCS B200 M4 blade server |
3.1(2g) bundle release |
Cisco |
UCS C220 M4 rack server |
3.1(2g) bundle release |
Cisco |
VIC 1340 |
4.1(2e) |
Cisco |
HyperFlex Data Platform |
2.1.1b-21013 |
Cisco |
Cisco eNIC |
2.3.0.10 |
Cisco |
Cisco fNIC |
1.6.0.33 |
Network |
Cisco Nexus 9000 NX-OS |
7.0(3)I2(2d) |
VMware |
vCenter Server Appliance |
6.0.0-5326177 |
VMware |
vSphere ESXi 6.0 Update 3 |
6.0.0.U3-5050593 |
Citrix |
XenApp VDA |
7.13.0.84 |
Citrix |
XenDesktop VDA |
7.13.0.84 |
Citrix |
XenDesktop Controller |
7.13.0.84 |
Citrix |
Provisioning Services |
7.13.0.13008 |
Citrix |
StoreFront Services |
3.9.0.56 |
Citrix |
User Profile Manager |
5.5.0.10005 |
Citrix |
License Server |
11.14.0.19005 |
Citrix |
NetScaler VPX Appliance |
NS11.1 52.13.nc |
The logical architecture of this solution is designed to support up to 4000 users of Hosted Virtual Desktops (Microsoft Windows 10) and RDSH hosted shared desktops (Windows Server 2016) on a 32-node HyperFlex cluster (sixteen Cisco UCS HXAF220c-M4S, eight Cisco UCS C220 M4, and eight Cisco UCS B200 M4 nodes). This solution architecture provides physical redundancy for each workload type.
Figure 40 Logical Architecture Design
Table 1 lists the software revisions for this solution.
This document is intended to allow you to fully configure your environment. In this process, various steps require you to insert customer-specific naming conventions, IP addresses, and VLAN schemes, as well as to record appropriate MAC addresses. Table 2 through Table 6 list the information you need to configure your environment.
The VLAN configuration recommended for the environment includes a total of seven VLANs as outlined in Table 2.
Table 2 VLANs Configured in this Study
VLAN Name |
VLAN ID |
VLAN Purpose |
Default |
1 |
Native VLAN |
Hx-in-Band-Mgmt |
30 |
VLAN for in-band management interfaces |
Infra-Mgmt |
31 |
VLAN for Virtual Infrastructure |
Hx-storage-data |
32 |
VLAN for HyperFlex Storage |
Hx-vmotion |
33 |
VLAN for VMware vMotion |
Vm-network |
34 |
VLAN for VDI Traffic |
OOB-Mgmt |
132 |
VLAN for out-of-band management interfaces |
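These VLANs must also exist on the upstream switches. The following is a minimal sketch of how they could be defined on the Cisco Nexus 9372 switches using the names and IDs from Table 2; trunking these VLANs on the uplinks to the Cisco UCS Fabric Interconnects is configured separately.
vlan 30
  name Hx-in-Band-Mgmt
vlan 31
  name Infra-Mgmt
vlan 32
  name Hx-storage-data
vlan 33
  name Hx-vmotion
vlan 34
  name Vm-network
vlan 132
  name OOB-Mgmt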
A dedicated network or subnet for physical device management is often used in data centers. In this scenario, the mgmt0 interfaces of the two Fabric Interconnects would be connected to that dedicated network or subnet. This is a valid configuration for HyperFlex installations with the following caveat: wherever the HyperFlex installer is deployed, it must have IP connectivity to the subnet of the mgmt0 interfaces of the Fabric Interconnects, and also to the subnets used by the hx-inband-mgmt VLANs listed above.
All HyperFlex storage traffic traversing the hx-storage-data VLAN and subnet is configured to use jumbo frames; to be precise, all communication is configured to send IP packets with a Maximum Transmission Unit (MTU) size of 9000 bytes. Using a larger MTU value means that each IP packet carries a larger payload, transmitting more data per packet and consequently sending and receiving data faster. This requirement also means that the Cisco UCS uplinks must be configured to pass jumbo frames. Failure to configure the Cisco UCS uplink switches to allow jumbo frames can lead to service interruptions during some failure scenarios, particularly when cable or port failures would cause storage traffic to traverse the northbound Cisco UCS uplink switches.
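The following is a minimal sketch of how jumbo frames could be enabled and then verified end-to-end. The Nexus 9000 commands use the standard network-qos method; the vmkping target shown (10.10.52.102) is one of the hx-storage-data addresses that appear in the post_install sample output later in this document.
On each Cisco Nexus 9372 uplink switch:
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
From the ESXi shell of an HX node, send a do-not-fragment ping with an 8972-byte payload out of the storage vmkernel port:
vmkping -I vmk1 -d -s 8972 10.10.52.102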
Two VMware clusters were configured in their own vCenter datacenter instances to support the solution and testing environment:
· Infrastructure Cluster: Infrastructure VMs (vCenter, Microsoft Active Directory, DNS, DHCP, Microsoft SQL Servers, Citrix XenDesktop controllers, Citrix StoreFront servers, Citrix License Server, Citrix Provisioning servers, Citrix NS VPX, etc.)
· HyperFlex Cluster: Citrix XenDesktop HSD VMs (Windows Server 2016) or Persistent/Non-Persistent VDI VM Pools (Windows 10 64-bit)
HyperFlex release v2.0 supports a maximum of 32 nodes in a single VMware cluster: sixteen HXAF-series converged nodes (HXAF220c or HXAF240c) and sixteen compute-only nodes.
Login VSI infrastructure (share and launchers) was connected using the same set of switches, but was hosted on separate B200 servers and managed by a separate vCenter.
Figure 41 VMware vSphere Clusters on vSphere Web GUI
The following sections detail the design of the elements within the VMware ESXi hypervisors, system requirements, virtual networking and the configuration of ESXi for the Cisco HyperFlex HX Distributed Data Platform.
The Cisco HyperFlex system has a pre-defined virtual network design at the ESXi hypervisor level. Four different virtual switches are created by the HyperFlex installer, each using two uplinks, which are each serviced by a vNIC defined in the UCS service profile. The vSwitches created are:
· vswitch-hx-inband-mgmt: This is the default vSwitch0, which is renamed by the ESXi kickstart file as part of the automated installation. The default vmkernel port, vmk0, is configured in the standard Management Network port group. The switch has two uplinks, active on fabric A and standby on fabric B, without jumbo frames. A second port group is created for the Storage Platform Controller VMs to connect to with their individual management interfaces. The VLAN is not a native VLAN as assigned to the vNIC template, and is therefore tagged in ESXi/vSphere.
· vswitch-hx-storage-data: This vSwitch is created as part of the automated installation. A vmkernel port, vmk1, is configured in the Storage Hypervisor Data Network port group, which is the interface used for connectivity to the HX datastores via NFS. The switch has two uplinks, active on fabric B and standby on fabric A, with jumbo frames required. A second port group is created for the Storage Platform Controller VMs to connect to with their individual storage interfaces. The VLAN is not a native VLAN as assigned to the vNIC template, and is therefore tagged in ESXi/vSphere.
· vswitch-hx-vm-network: This vSwitch is created as part of the automated installation. The switch has two uplinks, active on both fabrics A and B, without jumbo frames. The VLAN is not a native VLAN as assigned to the vNIC template, and is therefore tagged in ESXi/vSphere.
· vmotion: This vSwitch is created as part of the automated installation. The switch has two uplinks, active on fabric A and standby on fabric B, with jumbo frames required. The VLAN is not a native VLAN as assigned to the vNIC template, and is therefore tagged in ESXi/vSphere.
The following table and figures provide more detail about the ESXi virtual networking design as built by the HyperFlex installer; a brief command-line verification sketch follows Figure 42:
Table 3 ESXi Host Virtual Switch Configuration
Virtual Switch |
Port Groups |
Active vmnic(s) |
Passive vmnic(s) |
VLAN IDs |
Jumbo |
vswitch-hx-inband-mgmt |
Management Network Storage Controller Management Network |
vmnic0 |
vmnic1 |
hx-inband-mgmt |
no |
vswitch-hx-storage-data |
Storage Controller Data Network Storage Hypervisor Data Network |
vmnic3 |
vmnic2 |
hx-storage-data |
yes |
vswitch-hx-vm-network |
none |
vmnic4,vmnic5 |
none |
vm-network |
no |
vmotion |
none |
vmnic6 |
vmnic7 |
hx-vmotion |
yes |
Figure 42 ESXi Network Design
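The vSwitch layout in Table 3 can be spot-checked from the ESXi shell of any HX node. The following is a brief verification sketch using standard esxcli commands; the first command lists each standard vSwitch with its MTU and uplinks, the second lists the port groups with their VLAN IDs, and the third lists the physical adapters (vmnic0 through vmnic7) and their link state.
esxcli network vswitch standard list
esxcli network vswitch standard portgroup list
esxcli network nic list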
VMDirectPath I/O allows a guest VM to directly access PCI and PCIe devices in an ESXi host as though they were physical devices belonging to the VM itself, also referred to as PCI pass-through. With the appropriate driver for the hardware device, the guest VM sends all I/O requests directly to the physical device, bypassing the hypervisor. In the Cisco HyperFlex system, the Storage Platform Controller VMs use this feature to gain full control of the Cisco 12Gbps SAS HBA cards in the Cisco HX-series rack-mount servers. This gives the controller VMs direct hardware-level access to the physical disks installed in the servers, which they consume to construct the Cisco HX Distributed Filesystem. Only the disks connected directly to the Cisco SAS HBA, or to a SAS extender that is in turn connected to the SAS HBA, are controlled by the controller VMs. Other disks, connected to different controllers, such as the SD cards, remain under the control of the ESXi hypervisor. The configuration of the VMDirectPath I/O feature is done by the Cisco HyperFlex installer and requires no manual steps.
A key component of the Cisco HyperFlex system is the Storage Platform Controller virtual machine running on each of the nodes in the HyperFlex cluster. The controller VMs cooperate to form and coordinate the Cisco HX Distributed Filesystem and service all the guest VM I/O requests. The controller VMs are deployed as vSphere ESXi agents, which are similar in concept to a Linux or Windows service. ESXi agents are tied to a specific host; they start and stop along with the ESXi hypervisor, and the system is not considered to be online and ready until both the hypervisor and the agents have started. Each ESXi hypervisor host has a single ESXi agent deployed, which is the controller VM for that node, and it cannot be moved or migrated to another host. The collective ESXi agents are managed via an ESXi agency in the vSphere cluster.
The storage controller VM runs custom software and services that manage and maintain the Cisco HX Distributed Filesystem. The services and processes that run within the controller VMs are not exposed as part of the ESXi agents to the agency; therefore, neither the ESXi hypervisors nor the vCenter server has any direct knowledge of the storage services provided by the controller VMs. Management of, and visibility into, the function of the controller VMs and the Cisco HX Distributed Filesystem is provided via a plugin installed to the vCenter server or appliance managing the vSphere cluster. The plugin communicates directly with the controller VMs to display the information requested, or to make the configuration changes directed, all while operating within the same web-based interface of the vSphere Web Client. The deployment of the controller VMs, agents, agency, and vCenter plugin is done by the Cisco HyperFlex installer and requires no manual steps.
The physical storage location of the controller VMs differs between the Cisco HXAF220c-M4S and HXAF240c-M4SX model servers, due to differences in the physical disk locations and connections on the two models of servers. The storage controller VM is operationally no different from any other typical virtual machine in an ESXi environment. The VM must have a virtual disk with the bootable root filesystem available in a location separate from the SAS HBA that the VM is controlling via VMDirectPath I/O. The configuration details of the models are as follows:
· HX220c/HXAF220c: The controller VM’s root filesystem is stored on a 2.2 GB virtual disk, /dev/sda, which is placed on a 3.5 GB VMFS datastore, and that datastore is provisioned from the internal mirrored SD cards. The controller VM has full control of all the front facing hot-swappable disks via PCI pass-through control of the SAS HBA. The controller VM operating system sees the 120 GB SSD, also commonly called the “housekeeping” disk as /dev/sdb, and places HyperFlex binaries, logs, and zookeeper partitions on this disk. The remaining disks seen by the controller VM OS are used by the HX Distributed filesystem for caching and capacity layers.
· HX240c/HXAF240c: The HX240c-M4SX or HXAF240c-M4SX server has a built-in SATA controller provided by the Intel Wellsburg Platform Controller Hub (PCH) chip, and the 120 GB housekeeping disk is connected to it, placed in an internal drive carrier. Since this model does not connect the 120 GB housekeeping disk to the SAS HBA, the ESXi hypervisor remains in control of this disk, and a VMFS datastore is provisioned there, using the entire disk. On this VMFS datastore, a 2.2 GB virtual disk is created and used by the controller VM as /dev/sda for the root filesystem, and an 87 GB virtual disk is created and used by the controller VM as /dev/sdb, placing the HyperFlex binaries, logs, and zookeeper partitions on this disk. The front-facing hot-swappable disks, seen by the controller VM OS through PCI pass-through control of the SAS HBA, are used by the HX Distributed Filesystem for the caching and capacity layers.
The following figures detail the Storage Platform Controller VM placement on the ESXi hypervisor hosts.
Figure 43 HX220c or HXAF220c Controller VM Placement
The Cisco UCS B200-M4 compute-only blades also place a lightweight storage controller VM on a 3.5 GB VMFS datastore, provisioned from the SD cards.
Figure 44 HX240c or HXAF240c Controller VM Placement
The new HyperFlex cluster has no default datastores configured for virtual machine storage; therefore, the datastores must be created using the vCenter Web Client plugin. A minimum of two datastores is recommended to satisfy vSphere High Availability datastore heartbeat requirements, although one of the two datastores can be very small. It is important to recognize that all HyperFlex datastores are thinly provisioned, meaning that their configured size can far exceed the actual space available in the HyperFlex cluster. Alerts are raised by the HyperFlex system in the vCenter plugin when actual space consumption results in low amounts of free space, and alerts are also sent via Auto-Support email. Overall space consumption in the HyperFlex clustered filesystem is optimized by the default deduplication and compression features.
Figure 45 Datastore Example
Since the storage controller VMs provide critical functionality of the Cisco HX Distributed Data Platform, the HyperFlex installer will configure CPU resource reservations for the controller VMs. This reservation guarantees that the controller VMs will have CPU resources at a minimum level, in situations where the physical CPU resources of the ESXi hypervisor host are being heavily consumed by the guest VMs. The following table details the CPU resource reservation of the storage controller VMs:
Table 4 Controller VM CPU Reservations
Number of vCPU |
Shares |
Reservation |
Limit |
8 |
Low |
10800 MHz |
unlimited |
Since the storage controller VMs provide critical functionality of the Cisco HX Distributed Data Platform, the HyperFlex installer will configure memory resource reservations for the controller VMs. This reservation guarantees that the controller VMs will have memory resources at a minimum level, in situations where the physical memory resources of the ESXi hypervisor host are being heavily consumed by the guest VMs.
The following table details the memory resource reservation of the storage controller VMs.
Table 5 Controller VM Memory Reservations
Server Model |
Amount of Guest Memory |
Reserve All Guest Memory |
HX220c-M4S HXAF220c-M4S |
48 GB |
Yes |
HX240c-M4SX HXAF240c-m4SX |
72 GB |
Yes |
The Cisco UCS B200-M4 compute-only blades and Cisco UCS C220-M4 compute-only rack servers have a lightweight storage controller VM; it is configured with only 1 vCPU (1000 MHz reservation) and a 512 MB memory reservation.
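If desired, these reservations can be confirmed from the ESXi shell of a host. The following is a sketch only; the VM ID returned by the first command is host specific, and the exact output formatting of get.config may vary by ESXi release.
vim-cmd vmsvc/getallvms | grep stCtlVM
vim-cmd vmsvc/get.config <vmid> | grep -i reservation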
This section details the configuration and tuning that was performed on the individual components to produce a complete, validated solution. Figure 46 illustrates the configuration topology for this solution.
Figure 46 Configuration Topology for Scalable Citrix XenDesktop Workload with HyperFlex
The following subsections detail the physical connectivity configuration of the Citrix XenDesktop environment.
The information in this section is provided as a reference for cabling the physical equipment in this Cisco Validated Design environment. To simplify cabling requirements, the tables include both local and remote device and port locations.
The tables in this section contain the details for the prescribed and supported configuration.
This document assumes that out-of-band management ports are plugged into an existing management infrastructure at the deployment site. These interfaces will be used in various configuration steps.
Be sure to follow the cabling directions in this section. Failure to do so will result in necessary changes to the deployment procedures that follow because specific port locations are mentioned.
Figure 39 shows a cabling diagram for a Citrix XenDesktop configuration on HyperFlex using the Cisco Nexus 9000 and Cisco UCS Fabric Interconnect.
Table 6 Cisco Nexus 9372-Cabling Information
Local Device |
Local Port |
Connection |
Remote Device |
Remote Port |
Cisco Nexus 9372 A
|
Eth1/1 |
10GbE |
Cisco Nexus 9372 B |
Eth1/1 |
Eth1/2 |
10GbE |
Cisco Nexus 9372 B |
Eth1/2 |
|
Eth1/3 |
10GbE |
Cisco Nexus 9372 B |
Eth1/3 |
|
Eth1/4 |
10GbE |
Cisco Nexus 9372 B |
Eth1/4 |
|
Eth1/5 |
10GbE |
Cisco UCS fabric interconnect A |
Eth1/29 |
|
Eth1/6 |
10GbE |
Cisco UCS fabric interconnect A |
Eth1/30 |
|
Eth1/7 |
10GbE |
Cisco UCS fabric interconnect B |
Eth1/31 |
|
Eth1/8 |
10GbE |
Cisco UCS fabric interconnect B |
Eth1/32 |
|
Eth1/17 |
10GbE |
Infra-host-01 |
Port01 |
|
Eth1/18 |
10GbE |
Infra-host-02 |
Port01 |
|
|
MGMT0 |
GbE |
GbE management switch |
Any |
For devices requiring GbE connectivity, use the GbE copper SFPs (GLC-T=).
Table 7 Cisco Nexus 9372-B Cabling Information
Local Device |
Local Port |
Connection |
Remote Device |
Remote Port |
Cisco Nexus 9372 B
|
Eth1/1 |
10GbE |
Cisco Nexus 9372 A |
Eth1/1 |
Eth1/2 |
10GbE |
Cisco Nexus 9372 A |
Eth1/2 |
|
Eth1/3 |
10GbE |
Cisco Nexus 9372 A |
Eth1/3 |
|
Eth1/4 |
10GbE |
Cisco Nexus 9372 A |
Eth1/4 |
|
Eth1/5 |
10GbE |
Cisco UCS fabric interconnect B |
Eth1/29 |
|
Eth1/6 |
10GbE |
Cisco UCS fabric interconnect B |
Eth1/30 |
|
Eth1/7 |
10GbE |
Cisco UCS fabric interconnect A |
Eth1/31 |
|
Eth1/8 |
10GbE |
Cisco UCS fabric interconnect A |
Eth1/32 |
|
Eth1/17 |
10GbE |
Infra-host-01 |
Port02 |
|
Eth1/18 |
10GbE |
Infra-host-02 |
Port02 |
|
|
MGMT0 |
GbE |
GbE management switch |
Any |
Table 8 Cisco UCS Fabric Interconnect A Cabling Information
Local Device |
Local Port |
Connection |
Remote Device |
Remote Port |
Cisco UCS fabric interconnect A |
Eth1/29 |
10GbE |
Cisco Nexus 9372 A |
Eth1/5 |
Eth1/30 |
10GbE |
Cisco Nexus 9372 A |
Eth1/6 |
|
Eth1/31 |
10GbE |
Cisco Nexus 9372 B |
Eth1/7 |
|
Eth1/32 |
10GbE |
Cisco Nexus 9372 B |
Eth 1/8 |
|
MGMT0 |
GbE |
GbE management switch |
Any |
|
L1 |
GbE |
Cisco UCS fabric interconnect B |
L1 |
|
|
L2 |
GbE |
Cisco UCS fabric interconnect B |
L2 |
Table 9 Cisco UCS Fabric Interconnect B Cabling Information
Local Device |
Local Port |
Connection |
Remote Device |
Remote Port |
Cisco UCS fabric interconnect B
|
Eth1/29 |
10GbE |
Cisco Nexus 9372 B |
Eth1/5 |
Eth1/30 |
10GbE |
Cisco Nexus 9372 B |
Eth1/6 |
|
Eth1/31 |
10GbE |
Cisco Nexus 9372 A |
Eth1/7 |
|
Eth1/32 |
10GbE |
Cisco Nexus 9372 A |
Eth 1/8 |
|
MGMT0 |
GbE |
GbE management switch |
Any |
|
L1 |
GbE |
Cisco UCS fabric interconnect A |
L1 |
|
|
L2 |
GbE |
Cisco UCS fabric interconnect A |
L2 |
Figure 47 Cable Connectivity Between Cisco Nexus 9372 A and B to Cisco UCS 6248 Fabric A and B
This section details the Cisco UCS configuration performed as part of the infrastructure build-out by the Cisco HyperFlex installer. Many of the configuration elements are fixed in nature; however, the HyperFlex installer does allow some items to be specified at the time of creation, for example VLAN names and IDs, IP pools, and more. Where the elements can be manually set during the installation, those items are noted in << >> brackets.
Complete details on racking, power, and installation of the chassis are described in the installation guide (see www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/products-installation-guides-list.html) and are beyond the scope of this document. For more information about each step, refer to the Cisco UCS Manager Configuration Guides – GUI and Command Line Interface (CLI): Cisco UCS Manager - Configuration Guides - Cisco
During the HyperFlex Installation, a Cisco UCS Sub-Organization is created named “hx-cluster”. The sub-organization is created below the root level of the Cisco UCS hierarchy and is used to contain all policies, pools, templates and service profiles used by HyperFlex. This arrangement allows for organizational control using Role-Based Access Control (RBAC) and administrative locales at a later time if desired. In this way, control can be granted to administrators of only the HyperFlex specific elements of the Cisco UCS domain, separate from control of root level elements or elements in other sub-organizations.
Figure 48 Cisco UCSM configuration: HyperFlex Sub-organization
To deploy and configure the HyperFlex Data Platform, you must complete the following prerequisites:
1. Set Time Zone and NTP: From Cisco UCS Manager, on the Admin tab, configure the time zone and add an NTP server. Save changes. (A CLI sketch for the NTP portion follows this procedure.)
2. Configure Server Ports: Under the Equipment tab, select Fabric A, then select the ports to be configured as server ports to manage the HyperFlex rack servers through Cisco UCS Manager.
3. Repeat this step to configure the server ports on Fabric B.
4. Configure Uplink Ports: On Fabric A, select the ports to be configured as uplink ports for network connectivity to the northbound switch.
5. Repeat the same on Fabric B.
6. Create Port Channels: Under the LAN tab, expand LAN > LAN Cloud > Fabric A. Right-click Port Channel.
7. Select Create port-channel to connect with the upstream switch as per UCS best practice. For our reference architecture, we connected a pair of Nexus 9372PX switches.
8. Enter port-channel ID number and name to be created, click Next.
9. Select uplink ports to add as part of the port-channel.
10. Click Finish.
11. Follow the previous steps to create the port-channel on Fabric B, using a different port-channel ID.
12. Configure QoS System Classes: From the LAN tab, below the LAN Cloud node, select QoS System Class and configure the Platinum through Bronze system classes as shown in the following figure.
- Set MTU to 9216 for Platinum (Storage data) and Bronze (vMotion)
- Uncheck Enable Packet drop on the Platinum class
- Set Weight for Platinum and Gold priority class to 4 and everything else as best-effort
- Enable multicast for silver class
13. Verify UCS Manager Software Version: In the Equipment tab, select Firmware Management > Installed Firmware.
14. Check and verify that both Fabric Interconnects and Cisco UCS Manager are configured with Cisco UCS Manager v3.1(2g).
It is recommended to let the HX Installer handle upgrading the server firmware automatically as designed. This will occur once the service profiles are applied to the HX nodes during the automated deployment process.
15. Optional: If you are familiar with Cisco UCS Manager or you wish to break the install into smaller pieces, you can use the server auto firmware download to pre-stage the correct firmware on the nodes. This will speed up the association time in the HyperFlex installer at the cost of running two separate reboot operations. This method is not required or recommended if doing the install in one sitting.
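As an alternative to the GUI procedure in step 1, the NTP server can also be added from the Cisco UCS Manager CLI. The following is a minimal sketch; the NTP server address shown is a placeholder for your environment.
UCS-A# scope system
UCS-A /system # scope services
UCS-A /system/services # create ntp-server 10.10.50.2
UCS-A /system/services* # commit-buffer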
Download latest installer OVA from Cisco.com. Software Download Link: https://software.cisco.com/download/release.html?mdfid=286305544&flowid=79522&softwareid=286305994&release=2.1(1b)&relind=AVAILABLE&rellifecycle=&reltype=latest.
Deploy the OVA to an existing host in the environment. Use either your existing vCenter thick client (C#) or the vSphere Web Client to deploy the OVA on an ESXi host. This document outlines the procedure to deploy the OVA from the Web Client.
To deploy the OVA from the web client, complete the following steps:
1. Log into the vCenter Web Client from a web browser using the vCenter management address: https://<FQDN or IP address for VC>:9443/vcenter-client
2. Under Hosts and Clusters, select the ESXi host where the HyperFlex Data Platform Installer VM is to be deployed.
3. Right-click the ESXi host and select Deploy OVF Template.
4. Follow the deployment steps to configure the HyperFlex Data Platform Installer VM deployment.
5. Select the OVA file to deploy and click Next.
6. Review and verify the details of the OVF template to deploy, then click Next.
7. Enter a name for the OVF template deployment and select the datacenter and folder location. Click Next.
8. Select the virtual disk format, leave the VM storage policy set to datastore default, and select the datastore for the OVF deployment. Click Next.
9. Select the network adapter destination port group.
10. Fill out the parameters requested for hostname, gateway, DNS, IP address, and netmask. Alternatively, leave all blank for a DHCP assigned address.
Provide a single DNS server only. Inputting multiple DNS servers will cause queries to fail. You must connect to vCenter to deploy the OVA file and provide the IP address properties. Deploying directly from an ESXi host will not allow you to set these values correctly.
If you have internal firewall rules between these networks, please contact TAC for assistance.
If required, an additional network adapter can be added to the HyperFlex Platform Installer VM after the OVF deployment has completed successfully; for example, in the case of separate in-band and out-of-band management networks, see the screenshot below:
11. Review the settings selected as part of the OVF deployment, click the check box for Power on after deployment, and click Finish.
The default credentials for the HyperFlex installer VM are: user name: root password: Cisco123
Verify or Set DNS Resolution
SSH to the HX installer VM and verify or set DNS resolution on the HyperFlex Installer VM:
root@Cisco-HX-Data-Platform-Installer: # more /etc/network/eth0.interface
auto eth0
iface eth0 inet static
metric 100
address 10.10.50.19
netmask 255.255.255.0
gateway 10.10.50.1
dns-search vdilab-hc.local
dns-nameservers 10.10.51.21 10.10.51.22
root@Cisco-HX-Data-Platform-Installer:~# more /run/resolvconf/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.10.51.21
nameserver 10.10.51.22
search vdilab-hc.local
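In addition to reviewing the files above, name resolution can be spot-checked directly from the installer VM. The host name used below is a hypothetical example in the vdilab-hc.local domain of this setup; the IP address is the vCenter address used later in this document.
root@Cisco-HX-Data-Platform-Installer:~# nslookup vcsa.vdilab-hc.local
root@Cisco-HX-Data-Platform-Installer:~# nslookup 10.10.50.20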
To configure the Cisco HyperFlex cluster, complete the following steps:
1. Log into the HX Installer VM through a web browser, using the installer VM's IP address.
2. Select the workflow for cluster creation to deploy a new HyperFlex cluster on sixteen Cisco HXAF220c-M4S nodes.
3. On the credentials page, enter the access details for Cisco UCS Manager, vCenter server, and Hypervisor. Click Continue.
4. Select the top-most check box at the top right corner of the HyperFlex installer to select all unassociated servers. (To configure a subset of the available HyperFlex servers, manually click the check box for individual servers.)
5. Click Continue after completing server selection.
The required server ports can be configured from the installer workflow, but doing so extends the time needed to complete server discovery. Therefore, we recommend configuring the server ports and completing HX node discovery in Cisco UCS Manager, as described in the prerequisites section above, prior to starting the HyperFlex installer workflow.
If you choose to allow the installer to configure the server ports, complete the following steps:
1. Click Configure Server Ports at the top right corner of the Server Selection window.
2. Provide the port numbers for each Fabric Interconnect in the form:
A1/x-y,B1/x-y, where A1 and B1 designate Fabric Interconnect A and B, x is the starting port number, and y is the ending port number on each Fabric Interconnect (for example, A1/1-16,B1/1-16).
3. Click Configure.
4. Enter the Details for the Cisco UCS Manager Configuration:
a. Enter VLAN ID for hx-inband-mgmt, hx-storage-data, hx-vmotion, vm-network.
b. MAC Pool Prefix: The prefix to use for each HX MAC address pool. Please select a prefix that does not conflict with any other MAC address pool across all Cisco UCS domains.
c. The blocks in the MAC address pool will have the following format:
${prefix}:${fabric_id}${vnic_id}:${service_profile_id}
The first three bytes should always be “00:25:B5”.
5. Enter the range of IP addresses to create a block of IP addresses for external management and access to CIMC/KVM.
6. The Cisco UCS firmware version is set to 3.1(2g), which is the required Cisco UCS Manager release for the HyperFlex v2.1.1 installation.
7. Enter HyperFlex cluster name.
8. Enter Org name to be created in Cisco UCS Manager.
9. Click Continue.
To configure the Hypervisor settings, complete the following steps:
1. In the Configure common Hypervisor Settings section, enter:
- Subnet Mask
- Gateway
- DNS server(s)
2. In the Hypervisor Settings section:
- Select the check box Make IP Address and Hostnames Sequential if they are in sequence.
- Provide the starting IP address.
- Provide the starting host name, or enter static IP addresses and host names manually for each node.
3. Click Continue.
To add the IP addresses, complete the following steps:
When the IP Addresses page appears, the hypervisor IP address for each node that was configured in the Hypervisor Configuration tab appears under the Management Hypervisor column.
Three additional columns appear on this page:
· Storage Controller/Management
· Hypervisor/Data
· Storage Controller/Data
The Data network IP addresses are for vmkernel addresses for storage access by the hypervisor and storage controller virtual machine.
4. On the IP Addresses page, check the box Make IP Addresses Sequential or enter the IP address manually for each node for the following requested values:
- Storage Controller/Management
- Hypervisor/Data
- Storage Controller/Data
5. Enter subnet and gateway details for the Management and Data subnets configured.
6. Click Continue to proceed.
7. On the Cluster Configuration page, enter the following:
- Cluster Name
- Cluster management IP address
- Cluster data IP Address
- Set Replication Factor: 2 or 3
- Controller VM password
- vCenter configuration
o vCenter Datacenter name
o vCenter Cluster name
- System Services
o DNS Server(s)
o NTP Server(s)
o Time Zone
- Auto Support
o Click the check box for Enable Auto Support
o Mail Server
o Mail Sender
o ASUP Recipient(s)
- Advanced Networking
o Management vSwitch
o Data vSwitch
- Advanced Configuration
o Click the check box to Optimize for VDI only deployment
o Enable jumbo Frames on Data Network
o Clean up disk partitions (optional)
- vCenter Single-Sign-On server
8. The configuration details can be exported to a JSON file by clicking the down arrow icon in the top right corner of the Web browser page as shown in the screenshot below.
9. Configuration details can be reviewed on the Configuration page in the right-hand section. Verify the details entered for the IP addresses on the Credentials page, the server selection for the cluster deployment and creation workflow, the Cisco UCS Manager configuration, the Hypervisor configuration, and the IP addresses.
10. Click Start after verifying details.
When the installation workflow begins, it will go through the Cisco UCS Manager validation.
11. After a successful validation, the workflow continues with the Cisco UCS Manager configuration.
12. After a successful Cisco UCS Manager configuration, the installer proceeds with the Hypervisor configuration.
13. After a successful Hypervisor configuration, a deploy validation task is performed, which checks for the required components and accessibility before the Deploy task is performed on the Storage Controller VM.
14. The installer performs the deployment task after successfully validating the Hypervisor configuration.
15. After the ESXi host configuration and the Controller VM software components are successfully deployed, the HyperFlex installer performs a validation check prior to creating the cluster.
16. After a successful validation, the installer creates and starts the HyperFlex cluster service.
17. After a successful HyperFlex Installer VM workflow completion, the installer GUI provides a summary of the cluster that has been created.
For this exercise, we add compute-only nodes as part of the cluster expansion workflow.
Configure the service profile for the compute-only nodes and install the ESXi hypervisor.
To add the compute node workflow, complete the following steps:
1. Login to Cisco UCS Manager.
2. Under “hx-cluster” sub-organization:
a. In the existing vMedia policy “HyperFlex”, add the vMedia mount details to boot the ESXi image from the Data Platform Installer VM.
b. For Hostname/IP Address, add the IP address of the Data Platform Installer VM, which must also be able to communicate with Cisco UCS Manager.
3. Change the existing service profile template to accommodate the new changes; install ESXi via vMedia policy.
4. In the existing service profile template “compute-nodes” select vMedia Policy tab.
5. Click Modify vMedia Policy.
6. From the drop-down list of vMedia Policy, select HyperFlex.
7. In the existing service profile template “compute-nodes” click Boot Order tab.
8. Click Modify Boot Policy.
9. From the drop-down list of Boot Policies, select HyperFlexInstall.
10. Save changes.
11. Create the service profiles from the “compute-nodes” updating service profile template located in the HyperFlex cluster sub-organization.
12. Add the Naming Prefix and Number of Instances to be created.
13. Click OK.
14. After the ESXi installation completes, assign the VLAN tag on the ESXi host; the static IP address configuration is located in the Configure Management Network section.
15. Log into the HyperFlex data platform installer WebUI. Click “I know what I'm doing, let me customize my workflow”.
16. Select Deploy HX Software, Expand HX Cluster. Click Continue.
17. Enter the credentials for vCenter server and ESXi. Click Continue.
18. Select Cluster to expand and click Continue.
Since you are performing a compute-node only expansion, no servers report into the Cisco UCS Manager configuration tab.
19. Click the Add Compute Server tab to add N compute-only nodes to the existing HyperFlex cluster. Provide the Hypervisor Management IP address and the vmkernel IP address used to access the storage cluster. Click Continue.
20. The cluster expansion workflow starts and performs the deploy validation task first.
21. The workflow then performs the HyperFlex controller VM creation and deployment task.
22. The workflow performs the expansion validation.
23. A summary of the cluster expansion workflow is displayed.
As part of the cluster creation operations, the HyperFlex Installer adds HyperFlex functionality to the vSphere vCenter identified in earlier steps. This functionality allows vCenter administrators to manage the HyperFlex cluster entirely from their vSphere Web Client.
24. Click Launch vSphere Web Client.
The Cisco HyperFlex installer creates and configures a controller VM on each converged or compute-only node. The naming convention used is “stctlvm-<serial number of the Cisco UCS node>”, as shown in Figure 49.
Do not change the name or any resource configuration of the controller VM.
Figure 49 Cisco UCS Node Naming Convention
After a successful installation of HyperFlex cluster, run the post_install script by logging into the Data Platform Installer VM via SSH, using the credentials configured earlier.
A built-in post_install script automates basic final configuration tasks, such as enabling HA/DRS on the HyperFlex cluster, configuring a vmkernel interface for vMotion, and creating a datastore for ESXi logging, as shown in the following figures.
1. To run the script, use your tool of choice to make a secure (SSH) connection to the Cisco HyperFlex Data Platform installer using its IP address and port 22.
2. Authenticate with the credentials provided earlier (user name: root, password: Cisco123 if you did not change the defaults).
3. When authenticated, enter post_install at the command prompt, then press Enter.
4. Provide a valid vCenter administrator user name and password and the vCenter URL or IP address.
5. Type y for yes to each of the prompts that follow, except Add VM network VLANs? (y/n) and Send test email? (y/n), which were answered n in the sample run below.
6. Provide the requested user credentials, the vMotion netmask, VLAN ID and an IP address on the vMotion VLAN for each host when prompted for the vmkernel IP.
7. Sample post install input and output:
root@Cisco-HX-Data-Platform-Installer:~# post_install
Getting ESX hosts from HX cluster...
vCenter URL: 10.10.50.20
Enter vCenter username (user@domain): administrator@vsphere.local
vCenter Password:
Found datacenter VDILAB-HX
Found cluster HX-VDI-CL
Enable HA/DRS on cluster? (y/n) y
Disable SSH warning? (y/n) y
Add vmotion interfaces? (y/n) y
Netmask for vMotion: 255.255.255.0
VLAN ID: (0-4096) 53
vMotion IP for 10.10.50.27: 10.10.53.27
Adding vmotion to 10.10.50.27
Adding vmkernel to 10.10.50.27
vMotion IP for 10.10.50.28: 10.10.53.28
Adding vmotion to 10.10.50.28
Adding vmkernel to 10.10.50.28
vMotion IP for 10.10.50.29: 10.10.53.29
Adding vmotion to 10.10.50.29
Adding vmkernel to 10.10.50.29
vMotion IP for 10.10.50.30: 10.10.53.30
Adding vmotion to 10.10.50.30
Adding vmkernel to 10.10.50.30
vMotion IP for 10.10.50.31: 10.10.53.31
Adding vmotion to 10.10.50.31
Adding vmkernel to 10.10.50.31
vMotion IP for 10.10.50.32: 10.10.53.32
Adding vmotion to 10.10.50.32
Adding vmkernel to 10.10.50.32
vMotion IP for 10.10.50.33: 10.10.53.33
Adding vmotion to 10.10.50.33
Adding vmkernel to 10.10.50.33
vMotion IP for 10.10.50.34: 10.10.53.34
Adding vmotion to 10.10.50.34
Adding vmkernel to 10.10.50.34
Add VM network VLANs? (y/n) n
Enable NTP on ESX hosts? (y/n) y
Starting ntpd service on 10.10.50.27
Starting ntpd service on 10.10.50.28
Starting ntpd service on 10.10.50.29
Starting ntpd service on 10.10.50.30
Starting ntpd service on 10.10.50.31
Starting ntpd service on 10.10.50.32
Starting ntpd service on 10.10.50.33
Starting ntpd service on 10.10.50.34
Send test email? (y/n) n
Validating cluster health and configuration...
Found UCSM 10.29.132.11, logging with username admin. Org is hx-vdi-org
UCSM Password:
Checking MTU settings
Pinging 10.10.52.107 from vmk1
Pinging 10.10.52.101 from vmk1
Pinging 10.10.52.105 from vmk1
Pinging 10.10.52.108 from vmk1
Pinging 10.10.52.102 from vmk1
Pinging 10.10.52.104 from vmk1
Pinging 10.10.52.106 from vmk1
Pinging 10.10.52.103 from vmk1
Setting vnic2 to active and vmnic3 to standby
Pinging 10.10.52.107 from vmk1
Pinging 10.10.52.107 with mtu 8972 from vmk1
Pinging 10.10.52.101 from vmk1
Pinging 10.10.52.101 with mtu 8972 from vmk1
Pinging 10.10.52.105 from vmk1
Pinging 10.10.52.105 with mtu 8972 from vmk1
Pinging 10.10.52.108 from vmk1
Pinging 10.10.52.108 with mtu 8972 from vmk1
Pinging 10.10.52.102 from vmk1
Pinging 10.10.52.102 with mtu 8972 from vmk1
Pinging 10.10.52.104 from vmk1
Pinging 10.10.52.104 with mtu 8972 from vmk1
Pinging 10.10.52.106 from vmk1
Pinging 10.10.52.106 with mtu 8972 from vmk1
Pinging 10.10.52.103 from vmk1
Pinging 10.10.52.103 with mtu 8972 from vmk1
Setting vmnic3 to active and vnic2 to standby
Pinging 10.10.50.33 from vmk0
Pinging 10.10.50.27 from vmk0
Pinging 10.10.50.31 from vmk0
Pinging 10.10.50.34 from vmk0
Pinging 10.10.50.28 from vmk0
Pinging 10.10.50.30 from vmk0
Pinging 10.10.50.32 from vmk0
Pinging 10.10.50.29 from vmk0
Setting vnic1 to active and vmnic0 to standby
Pinging 10.10.50.33 from vmk0
Pinging 10.10.50.27 from vmk0
Pinging 10.10.50.31 from vmk0
Pinging 10.10.50.34 from vmk0
Pinging 10.10.50.28 from vmk0
Pinging 10.10.50.30 from vmk0
Pinging 10.10.50.32 from vmk0
Pinging 10.10.50.29 from vmk0
Setting vmnic0 to active and vnic1 to standby
Pinging 10.10.53.27 from vmk2
Pinging 10.10.53.28 from vmk2
Pinging 10.10.53.29 from vmk2
Pinging 10.10.53.30 from vmk2
Pinging 10.10.53.31 from vmk2
Pinging 10.10.53.32 from vmk2
Pinging 10.10.53.33 from vmk2
Pinging 10.10.53.34 from vmk2
Setting vnic7 to active and vmnic6 to standby
Pinging 10.10.53.27 from vmk2
Pinging 10.10.53.27 with mtu 8972 from vmk2
Pinging 10.10.53.28 from vmk2
Pinging 10.10.53.28 with mtu 8972 from vmk2
Pinging 10.10.53.29 from vmk2
Pinging 10.10.53.29 with mtu 8972 from vmk2
Pinging 10.10.53.30 from vmk2
Pinging 10.10.53.30 with mtu 8972 from vmk2
Pinging 10.10.53.31 from vmk2
Pinging 10.10.53.31 with mtu 8972 from vmk2
Pinging 10.10.53.32 from vmk2
Pinging 10.10.53.32 with mtu 8972 from vmk2
Pinging 10.10.53.33 from vmk2
Pinging 10.10.53.33 with mtu 8972 from vmk2
Pinging 10.10.53.34 from vmk2
Pinging 10.10.53.34 with mtu 8972 from vmk2
Setting vmnic6 to active and vnic7 to standby
Network Summary:
Host: 10.10.50.27
vswitch: vswitch-hx-inband-mgmt - mtu: 1500 - policy: loadbalance_srcid
vmnic0 - 1 - K22-HXVDI-A - active
vmnic1 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
Storage Controller Management Network - 50
Management Network - 50
vswitch: vswitch-hx-vm-network - mtu: 1500 - policy: loadbalance_srcid
vmnic4 - 1 - K22-HXVDI-A - active
vmnic5 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
vm-network-54 - 54
vswitch: vmotion - mtu: 9000 - policy: loadbalance_srcid
vmnic6 - 1 - K22-HXVDI-A - active
vmnic7 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
vmotion - 53
vswitch: vswitch-hx-storage-data - mtu: 9000 - policy: loadbalance_srcid
vmnic2 - 1 - K22-HXVDI-A - standby
vmnic3 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
Storage Controller Data Network - 52
Storage Hypervisor Data Network - 52
Host: 10.10.50.28
vswitch: vswitch-hx-inband-mgmt - mtu: 1500 - policy: loadbalance_srcid
vmnic0 - 1 - K22-HXVDI-A - active
vmnic1 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
Storage Controller Management Network - 50
Management Network - 50
vswitch: vswitch-hx-vm-network - mtu: 1500 - policy: loadbalance_srcid
vmnic4 - 1 - K22-HXVDI-A - active
vmnic5 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
vm-network-54 - 54
vswitch: vmotion - mtu: 9000 - policy: loadbalance_srcid
vmnic6 - 1 - K22-HXVDI-A - active
vmnic7 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
vmotion - 53
vswitch: vswitch-hx-storage-data - mtu: 9000 - policy: loadbalance_srcid
vmnic2 - 1 - K22-HXVDI-A - standby
vmnic3 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
Storage Controller Data Network - 52
Storage Hypervisor Data Network - 52
Host: 10.10.50.29
vswitch: vswitch-hx-inband-mgmt - mtu: 1500 - policy: loadbalance_srcid
vmnic0 - 1 - K22-HXVDI-A - active
vmnic1 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
Storage Controller Management Network - 50
Management Network - 50
vswitch: vswitch-hx-vm-network - mtu: 1500 - policy: loadbalance_srcid
vmnic4 - 1 - K22-HXVDI-A - active
vmnic5 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
vm-network-54 - 54
vswitch: vmotion - mtu: 9000 - policy: loadbalance_srcid
vmnic6 - 1 - K22-HXVDI-A - active
vmnic7 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
vmotion - 53
vswitch: vswitch-hx-storage-data - mtu: 9000 - policy: loadbalance_srcid
vmnic2 - 1 - K22-HXVDI-A - standby
vmnic3 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
Storage Controller Data Network - 52
Storage Hypervisor Data Network - 52
Host: 10.10.50.30
vswitch: vswitch-hx-inband-mgmt - mtu: 1500 - policy: loadbalance_srcid
vmnic0 - 1 - K22-HXVDI-A - active
vmnic1 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
Storage Controller Management Network - 50
Management Network - 50
vswitch: vswitch-hx-vm-network - mtu: 1500 - policy: loadbalance_srcid
vmnic4 - 1 - K22-HXVDI-A - active
vmnic5 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
vm-network-54 - 54
vswitch: vmotion - mtu: 9000 - policy: loadbalance_srcid
vmnic6 - 1 - K22-HXVDI-A - active
vmnic7 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
vmotion - 53
vswitch: vswitch-hx-storage-data - mtu: 9000 - policy: loadbalance_srcid
vmnic2 - 1 - K22-HXVDI-A - standby
vmnic3 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
Storage Controller Data Network - 52
Storage Hypervisor Data Network - 52
Host: 10.10.50.31
vswitch: vswitch-hx-inband-mgmt - mtu: 1500 - policy: loadbalance_srcid
vmnic0 - 1 - K22-HXVDI-A - active
vmnic1 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
Storage Controller Management Network - 50
Management Network - 50
vswitch: vswitch-hx-vm-network - mtu: 1500 - policy: loadbalance_srcid
vmnic4 - 1 - K22-HXVDI-A - active
vmnic5 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
vm-network-54 - 54
vswitch: vmotion - mtu: 9000 - policy: loadbalance_srcid
vmnic6 - 1 - K22-HXVDI-A - active
vmnic7 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
vmotion - 53
vswitch: vswitch-hx-storage-data - mtu: 9000 - policy: loadbalance_srcid
vmnic2 - 1 - K22-HXVDI-A - standby
vmnic3 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
Storage Controller Data Network - 52
Storage Hypervisor Data Network - 52
Host: 10.10.50.32
vswitch: vswitch-hx-inband-mgmt - mtu: 1500 - policy: loadbalance_srcid
vmnic0 - 1 - K22-HXVDI-A - active
vmnic1 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
Storage Controller Management Network - 50
Management Network - 50
vswitch: vswitch-hx-vm-network - mtu: 1500 - policy: loadbalance_srcid
vmnic4 - 1 - K22-HXVDI-A - active
vmnic5 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
vm-network-54 - 54
vswitch: vmotion - mtu: 9000 - policy: loadbalance_srcid
vmnic6 - 1 - K22-HXVDI-A - active
vmnic7 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
vmotion - 53
vswitch: vswitch-hx-storage-data - mtu: 9000 - policy: loadbalance_srcid
vmnic2 - 1 - K22-HXVDI-A - standby
vmnic3 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
Storage Controller Data Network - 52
Storage Hypervisor Data Network - 52
Host: 10.10.50.33
vswitch: vswitch-hx-inband-mgmt - mtu: 1500 - policy: loadbalance_srcid
vmnic0 - 1 - K22-HXVDI-A - active
vmnic1 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
Storage Controller Management Network - 50
Management Network - 50
vswitch: vswitch-hx-vm-network - mtu: 1500 - policy: loadbalance_srcid
vmnic4 - 1 - K22-HXVDI-A - active
vmnic5 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
vm-network-54 - 54
vswitch: vmotion - mtu: 9000 - policy: loadbalance_srcid
vmnic6 - 1 - K22-HXVDI-A - active
vmnic7 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
vmotion - 53
vswitch: vswitch-hx-storage-data - mtu: 9000 - policy: loadbalance_srcid
vmnic2 - 1 - K22-HXVDI-A - standby
vmnic3 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
Storage Controller Data Network - 52
Storage Hypervisor Data Network - 52
Host: 10.10.50.34
vswitch: vswitch-hx-inband-mgmt - mtu: 1500 - policy: loadbalance_srcid
vmnic0 - 1 - K22-HXVDI-A - active
vmnic1 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
Storage Controller Management Network - 50
Management Network - 50
vswitch: vswitch-hx-vm-network - mtu: 1500 - policy: loadbalance_srcid
vmnic4 - 1 - K22-HXVDI-A - active
vmnic5 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
vm-network-54 - 54
vswitch: vmotion - mtu: 9000 - policy: loadbalance_srcid
vmnic6 - 1 - K22-HXVDI-A - active
vmnic7 - 1 - K22-HXVDI-B - standby
Portgroup Name - VLAN
vmotion - 53
vswitch: vswitch-hx-storage-data - mtu: 9000 - policy: loadbalance_srcid
vmnic2 - 1 - K22-HXVDI-A - standby
vmnic3 - 1 - K22-HXVDI-B - active
Portgroup Name - VLAN
Storage Controller Data Network - 52
Storage Hypervisor Data Network - 52
Host: 10.10.50.27
No errors found
Host: 10.10.50.28
No errors found
Host: 10.10.50.29
No errors found
Host: 10.10.50.30
No errors found
Host: 10.10.50.31
No errors found
Host: 10.10.50.32
No errors found
Host: 10.10.50.33
No errors found
Host: 10.10.50.34
No errors found
Controller VM Clocks:
stCtlVM-FCH1937V2JV - 2016-10-07 05:32:09
stCtlVM-FCH1937V2TV - 2016-10-07 05:32:25
stCtlVM-FCH1842V1JG - 2016-10-07 05:32:41
stCtlVM-FCH1936V0GE - 2016-10-07 05:32:57
stCtlVM-FCH1937V2JT - 2016-10-07 05:33:14
stCtlVM-FCH1938V085 - 2016-10-07 05:33:30
stCtlVM-FCH1937V2TS - 2016-10-07 05:33:46
stCtlVM-FCH1937V2JU - 2016-10-07 05:34:02
Cluster:
Version - 1.8.1c-19725
Model - HXAF220C-M4S
Health - HEALTHY
Access Policy - LENIENT
ASUP enabled - False
SMTP Server -
root@Cisco-HX-Data-Platform-Installer:~#
8. Log in to the vSphere Web Client to create an additional shared datastore.
9. Go to the Summary tab on the cluster created via the HyperFlex cluster creation workflow.
10. On Cisco HyperFlex Systems click the cluster name.
The Summary tab shows the details about the cluster status, capacity, and performance.
11. Click Manage, select Datastores. Click the Add datastore icon, select the datastore name and size to provision.
We created a 250TB datastore for this environment.
This section details how to configure the software infrastructure components that comprise this solution.
Install and configure the infrastructure virtual machines by following the process provided in Table 10.
Table 10 Test Infrastructure Virtual Machine Configuration
Configuration | Citrix XenDesktop Controllers Virtual Machines | Citrix Provisioning Servers Virtual Machines
Operating system | Microsoft Windows Server 2016 | Microsoft Windows Server 2016
Virtual CPU amount | 6 | 8
Memory amount | 8 GB | 8 GB
Network | VMXNET3 InBand-Mgmt | VMXNET3 InBand-Mgmt
Disk-1 (OS) size and location | 40 GB Infra-DS volume | 40 GB Infra-DS volume
Disk-2 size and location | – | 200 GB
Configuration | Microsoft Active Directory DCs Virtual Machines | vCenter Server Appliance Virtual Machine
Operating system | Microsoft Windows Server 2012 R2 | VCSA – SUSE Linux
Virtual CPU amount | 4 | 8
Memory amount | 4 GB | 24 GB
Network | VMXNET3 InBand-Mgmt | VMXNET3 InBand-Mgmt
Disk size and location | 40 GB | 460 GB (across 11 VMDKs)
Configuration | Microsoft SQL Server Virtual Machine | Citrix StoreFront Virtual Machines
Operating system | Microsoft Windows Server 2016 with Microsoft SQL Server 2016 | Microsoft Windows Server 2016
Virtual CPU amount | 4 | 4
Memory amount | 16 GB | 8 GB
Network | VMXNET3 InBand-Mgmt | VMXNET3 InBand-Mgmt
Disk-1 (OS) size and location | 40 GB Infra-DS volume | 40 GB Infra-DS volume
Disk-2 size and location | 200 GB Infra-DS volume (SQL logs) | –
Configuration | Citrix License Server Virtual Machines | NetScaler VPX Appliance Virtual Machine
Operating system | Microsoft Windows Server 2012 R2 | NS11.1 52.13.nc
Virtual CPU amount | 4 | 2
Memory amount | 4 GB | 2 GB
Network | VMXNET3 InBand-Mgmt | VMXNET3 InBand-Mgmt
Disk size and location | 40 GB | 20 GB
This section provides guidance around creating the golden (or master) images for the environment. VMs for the master images must first be installed with the software components needed to build the golden images. For this CVD, the images contain the basics needed to run the Login VSI workload.
To prepare the master VMs for the Hosted Virtual Desktops (HVDs) and Hosted Shared Desktops (HSDs), there are three major steps once the base virtual machine has been created:
· Installing OS and application software
· Installing the PVS Target Device x64 software
· Installing the Virtual Delivery Agents (VDAs)
The master image HVD and HSD VMs were configured as shown in Table 11:
Table 11 HVD and HSD Configurations
Configuration | HVD Virtual Machines | HSD Virtual Machines
Operating system | Microsoft Windows 10 64-bit | Microsoft Windows Server 2016
Virtual CPU amount | 2 | 6
Memory amount | 2.0 GB (reserved) | 24 GB (reserved)
Network | VMXNET3 vm-network | VMXNET3 vm-network
Citrix PVS vDisk size and location | 24 GB (thick) Infra-DS volume | 100 GB (thick) Infra-DS volume
Citrix PVS write cache disk size | 6 GB | 24 GB
Additional software used for testing | Microsoft Office 2016; Login VSI 4.1.5 (Knowledge Worker Workload) | Microsoft Office 2016; Login VSI 4.1.5 (Knowledge Worker Workload)
This section details the installation of the core components of the XenDesktop/XenApp 7.13 system. This CVD provides the process to install two XenDesktop Delivery Controllers to support hosted shared desktops (HSD) and both non-persistent and persistent hosted virtual desktops (VDI).
Citrix recommends that you use Secure HTTP (HTTPS) and a digital certificate to protect vSphere communications. Citrix recommends that you use a digital certificate issued by a certificate authority (CA) according to your organization's security policy. Otherwise, if security policy allows, use the VMware-installed self-signed certificate.
To install vCenter Server self-signed Certificate, complete the following steps:
1. Add the FQDN of the computer running vCenter Server to the hosts file on that server, located at %SystemRoot%\System32\drivers\etc\. This step is required only if the FQDN of the computer running vCenter Server is not already present in DNS.
2. Open Internet Explorer and enter the address of the computer running vCenter Server (e.g., https://FQDN as the URL).
3. Accept the security warnings.
4. Click the Certificate Error in the Security Status bar and select View certificates.
5. Click Install certificate, select Local Machine, and then click Next.
6. Select Place all certificates in the following store and then click Browse.
7. Select Show physical stores.
8. Select Trusted People.
9. Click Next and then click Finish.
10. Perform the above steps on all Delivery Controllers and Provisioning Servers.
The process of installing the XenDesktop Delivery Controller also installs other key XenDesktop software components, including Studio, which is used to create and manage infrastructure components, and Director, which is used to monitor performance and troubleshoot problems.
To install the Citrix License Server, complete the following steps:
1. To begin the installation, connect to the first Citrix License server and launch the installer from the Citrix XenDesktop 7.13 ISO.
2. Click Start.
3. Click “Extend Deployment – Citrix License Server.”
4. Read the Citrix License Agreement.
5. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.
6. Click Next.
7. Click Next.
8. Select the default ports and automatically configured firewall rules.
9. Click Next.
10. Click Install.
11. Click Finish to complete the installation.
To install the Citrix Licenses, complete the following steps:
1. Copy the license files to the default location (C:\Program Files (x86)\Citrix\Licensing\ MyFiles) on the license server.
2. Restart the server or Citrix licensing services so that the licenses are activated.
3. Run the application Citrix License Administration Console.
4. Confirm that the license files have been read and enabled correctly.
To install the XenDesktop Delivery Controller, complete the following steps:
1. To begin the installation, connect to the first XenDesktop server and launch the installer from the Citrix XenDesktop 7.13 ISO.
2. Click Start.
The installation wizard presents a menu with three subsections.
3. Click “Get Started - Delivery Controller.”
4. Read the Citrix License Agreement.
5. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.
6. Click Next.
7. Select the components to be installed on the first Delivery Controller Server:
a. Delivery Controller
b. Studio
c. Director
8. Click Next.
Dedicated StoreFront and License servers should be implemented for large scale deployments.
9. Since a separate SQL Server will be used to store the database, leave “Install Microsoft SQL Server 2012 SP1 Express” unchecked.
10. Click Next.
11. Select the default ports and automatically configured firewall rules.
12. Click Next.
13. Click Install to begin the installation.
14. (Optional) Select Call Home participation.
15. Click Next.
16. Click Finish to complete the installation.
17. (Optional) Check Launch Studio to launch Citrix Studio Console.
Citrix Studio is a management console that allows you to create and manage infrastructure and resources to deliver desktops and applications. Replacing Desktop Studio from earlier releases, it provides wizards to set up your environment, create workloads to host applications and desktops, and assign applications and desktops to users.
Citrix Studio launches automatically after the XenDesktop Delivery Controller installation, or if necessary, it can be launched manually. Citrix Studio is used to create a Site, which is the core XenDesktop 7.13 environment consisting of the Delivery Controller and the Database.
To configure XenDesktop, complete the following steps:
1. From Citrix Studio, click the Deliver applications and desktops to your users button.
2. Select the “A fully configured, production-ready Site” radio button.
3. Enter a site name.
4. Click Next.
5. Provide the Database Server Locations for each data type and click Next.
6. For an AlwaysOn Availability Group, use the group’s listener DNS name.
7. Provide the FQDN of the license server.
8. Click Connect to validate and retrieve any licenses from the server.
If no licenses are available, you can use the 30-day free trial or activate a license file.
9. Select the appropriate product edition using the license radio button.
10. Click Next.
11. Select the Connection type of VMware vSphere®.
12. Enter the FQDN of the vCenter server (in Server_FQDN/sdk format).
13. Enter the username (in domain\username format) for the vSphere account.
14. Provide the password for the vSphere account.
15. Provide a connection name.
16. Select the Other tools radio button.
17. Click Next.
18. Select the HyperFlex cluster that will be used by this connection.
19. Check the Studio Tools radio button; this is required to support the desktop provisioning tasks performed through this connection.
20. Click Next.
21. Make the storage selection to be used by this connection.
22. Click Next.
23. Make the network selection to be used by this connection.
24. Click Next.
25. Select Additional features.
26. Click Next.
27. Review Site configuration Summary and click Finish.
1. Connect to the XenDesktop server and open Citrix Studio Management console.
2. From the Configuration menu, right-click Administrator and select Create Administrator from the drop-down list.
3. Select/Create appropriate scope and click Next.
4. Choose an appropriate Role.
5. Review the Summary, check Enable administrator, and click Finish.
After the first controller is completely configured and the Site is operational, you can add additional controllers. In this CVD, we created two Delivery Controllers.
To configure additional XenDesktop controllers, complete the following steps:
1. To begin the installation of the second Delivery Controller, connect to the second XenDesktop server and launch the installer from the Citrix XenDesktop 7.13 ISO.
2. Click Start.
3. Click Delivery Controller.
4. Repeat the same steps used to install the first Delivery Controller, including the step of importing an SSL certificate for HTTPS between the controller and vSphere.
5. Review the Summary configuration.
6. Click Install.
7. (Optional) Click the “I want to participate in Call Home.”
8. Click Next.
9. Verify the components installed successfully.
10. Click Finish.
To add the second Delivery Controller to the XenDesktop Site, complete the following steps:
1. In Desktop Studio click the “Connect this Delivery Controller to an existing Site” button.
2. Enter the FQDN of the first delivery controller.
3. Click OK.
4. Click Yes to allow the database to be updated with this controller’s information automatically.
5. When complete, test the site configuration and verify the Delivery Controller has been added to the list of Controllers.
Citrix StoreFront stores aggregate desktops and applications from XenDesktop sites, making resources readily available to users. In this CVD, we created two StoreFront servers on dedicated virtual machines.
To install and configure StoreFront, complete the following steps:
1. To begin the installation of the StoreFront, connect to the first StoreFront server and launch the installer from the Citrix XenDesktop 7.13 ISO.
2. Click Start.
3. Click “Extend Deployment – Citrix StoreFront.”
4. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.
5. Click Next.
6. Click Next.
7. Select the default ports and automatically configured firewall rules.
8. Click Next.
9. Click Install.
10. (Optional) Click “I want to participate in Call Home.”
11. Click Next.
12. Check “Open the StoreFront Management Console.”
13. Click Finish.
14. Click Create a new deployment.
15. Specify the URL of the StoreFront server and click Next.
For a multiple-server deployment, enter the URL of the load-balancing environment in the Base URL box.
16. Click Next.
17. Specify a name for your store and click Next.
18. Add the required Delivery Controllers to the store and click Next.
19. Specify how connecting users can access the resources. In this environment, only local users on the internal network are able to access the store. Click Next.
20. On the “Authentication Methods” page, select the methods your users will use to authenticate to the store and click Next. You can select from the following methods:
· Username and password: Users enter their credentials and are authenticated when they access their stores.
· Domain pass-through: Users authenticate to their domain-joined Windows computers, and their credentials are used to log them on automatically when they access their stores.
21. Configure the XenApp Services URL for users who use PNAgent to access applications and desktops, and click Create.
22. After creating the store, click Finish.
After the first StoreFront server is completely configured and the Store is operational, you can add additional servers.
To configure an additional StoreFront server, complete the following steps:
1. To begin the installation of the second StoreFront, connect to the second StoreFront server and launch the installer from the Citrix XenDesktop 7.13 ISO.
2. Click Start.
3. Click “Extend Deployment – Citrix StoreFront.”
4. Repeat the same steps used to install the first StoreFront.
5. Review the Summary configuration.
6. Click Install.
7. (Optional) Click “I want to participate in Call Home.”
8. Click Next.
9. Check “Open the StoreFront Management Console.”
10. Click Finish.
To configure the second StoreFront if used, complete the following steps:
1. From the StoreFront Console on the second server select “Join existing server group.”
2. In the Join Server Group dialog, enter the name of the first Storefront server.
3. Before the additional StoreFront server can join the server group, you must connect to the first Storefront server, add the second server, and obtain the required authorization information.
4. Connect to the first StoreFront server.
5. Using the StoreFront menu on the left, you can scroll through the StoreFront management options.
6. Select Server Group from the menu.
7. To add the second server and generate the authorization information that allows the additional StoreFront server to join the server group, select Add Server.
8. Copy the Authorization code from the Add Server dialog.
9. Connect to the second Storefront server and paste the Authorization code into the Join Server Group dialog.
10. Click Join.
11. A message appears when the second server has joined successfully.
12. Click OK.
The second StoreFront is now in the Server Group.
In most implementations, there is a single vDisk providing the standard image for multiple target devices. Thousands of target devices can use a single vDisk shared across multiple Provisioning Services (PVS) servers in the same farm, simplifying virtual desktop management. This section describes the installation and configuration tasks required to create a PVS implementation.
The PVS server can have many stored vDisks, and each vDisk can be several gigabytes in size. Your streaming performance and manageability can be improved using a RAID array, SAN, or NAS. PVS software and hardware requirements are available at: https://docs.citrix.com/en-us/provisioning/7-13/system-requirements.html.
Set the DHCP scope options on the DHCP server hosting the PVS target machines (for example, the VDI and RDS scopes). In this environment, boot options 66 (Boot Server Host Name) and 67 (Bootfile Name) point the target devices at the PVS TFTP service.
The Boot Server IP was configured for Load Balancing by NetScaler VPX to support High Availability of TFTP service.
To Configure TFTP Load Balancing, complete the following steps:
1. Create Virtual IP for TFTP Load Balancing.
2. Configure servers that are running TFTP (your Provisioning Servers).
3. Define TFTP service for the servers (Monitor used: udp-ecv).
4. Configure TFTP for load balancing.
5. As a Citrix best practice cited in this CTX article, apply the following registry setting to both the PVS servers and the target machines:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters\
Key: "DisableTaskOffload" (dword)
Value: "1"
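The same value can also be set programmatically. The following minimal Python sketch (an illustration only, not part of the Citrix tooling) uses the standard winreg module and assumes it is run with administrative rights on each PVS server and target device; restart the machine afterward for the change to take effect.

# Illustrative sketch: create the DisableTaskOffload DWORD value described above.
# Assumes administrative rights on the PVS server or target device.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\TCPIP\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "DisableTaskOffload", 0, winreg.REG_DWORD, 1)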
Only one MS SQL database is associated with a farm. You can choose to install the Provisioning Services database software on an existing SQL database, if that machine can communicate with all Provisioning Servers within the farm, or with a new SQL Express database machine, created using the SQL Express software that is free from Microsoft.
The following databases are supported: Microsoft SQL Server 2008 SP3 through 2016 (x86, x64, and Express editions). Microsoft SQL 2016 was installed separately for this CVD.
High availability will be available for the databases once they are added to the SQL AlwaysOn Availability Group (see Citrix Knowledge Center article CTX201203).
To install and configure Citrix Provisioning Service 7.13, complete the following steps:
1. Insert the Citrix Provisioning Services 7.13 ISO and let AutoRun launch the installer.
2. Click the Console Installation button.
3. Click Install to install the required prerequisites.
4. Read the Citrix License Agreement.
5. If acceptable, select the radio button labeled “I accept the terms in the license agreement.”
6. Click Next.
7. Optionally provide User Name and Organization.
8. Click Next.
9. Accept the default path.
10. Click Next.
11. Click Install to start the console installation.
12. Click Finish.
13. From the main installation screen, select Server Installation.
14. The installation wizard will check to resolve dependencies and then begin the PVS server installation process.
15. Click Install on the prerequisites dialog.
16. Click Yes when prompted to install the SQL Native Client.
17. Click Next when the Installation wizard starts.
18. Review the license agreement terms.
19. If acceptable, select the radio button labeled “I accept the terms in the license agreement.”
20. Click Next.
21. Provide User Name and Organization information. Select who will see the application.
22. Click Next.
23. Accept the default installation location.
24. Click Next.
25. Click Install to begin the installation.
26. Click Finish when the install is complete.
27. The PVS Configuration Wizard starts automatically.
28. Click Next.
29. Since the PVS server is not the DHCP server for the environment, select the radio button labeled, “The service that runs on another computer.”
30. Click Next.
31. Since DHCP boot options 66 and 67 are used for TFTP services, select the radio button labeled, “The service that runs on another computer.”
32. Click Next.
33. Since this is the first server in the farm, select the radio button labeled, “Create farm.”
34. Click Next.
35. Enter the FQDN of the SQL server.
36. Click Next.
37. Provide the Database, Farm, Site, and Collection names.
38. Click Next.
39. Provide the vDisk Store details.
40. Click Next.
For a large-scale PVS environment, it is recommended to create the vDisk store share using CIFS/SMB3 support on an enterprise-ready file server.
41. Provide the FQDN of the license server.
42. Optionally, provide a port number if changed on the license server.
43. Click Next.
If an Active Directory service account is not already set up for the PVS servers, create that account prior to clicking Next on this dialog.
44. Select the Specified user account radio button.
45. Complete the User name, Domain, Password, and Confirm password fields, using the PVS account information created earlier.
46. Click Next.
47. Set the Days between password updates to 7.
The update interval will vary per environment; 7 days was appropriate for testing purposes.
48. Click Next.
49. Keep the defaults for the network cards.
50. Click Next.
51. Select Use the Provisioning Services TFTP service checkbox.
52. Click Next.
53. Make sure that the IP addresses for all PVS servers are listed in the Stream Servers Boot List.
54. Click Next.
55. If Soap Server is used, provide details.
56. Click Next.
57. If desired, fill in the Problem Report Configuration.
58. Click Next.
59. Click Finish to start the installation.
60. When the installation is completed, click Done.
Complete the installation steps on the additional PVS servers, up to the configuration step where it asks to Create or Join a farm. In this CVD, we repeated the procedure to add a total of two PVS servers. To install an additional PVS server, complete the following steps:
1. On the Farm Configuration dialog, select “Join existing farm.”
2. Click Next.
3. Provide the FQDN of the SQL Server.
4. Click Next.
5. Accept the Farm Name.
6. Click Next.
7. Accept the Existing Site.
8. Click Next.
9. Accept the existing vDisk store.
10. Click Next.
11. Provide the PVS service account information.
12. Click Next.
13. Set the Days between password updates to 7.
14. Click Next.
15. Accept the network card settings.
16. Click Next.
17. Select Use the Provisioning Services TFTP service checkbox.
18. Click Next.
19. Make sure that the IP addresses for all PVS servers are listed in the Stream Servers Boot List.
20. Click Next.
21. If Soap Server is used, provide details.
22. Click Next.
23. If desired, fill in the Problem Report Configuration.
24. Click Next.
25. Click Finish to start the installation process.
26. Click Done when the installation finishes.
You can optionally install the Provisioning Services console on the second PVS server following the procedure in the section Installing Provisioning Services.
After completing the steps to install the second PVS server, launch the Provisioning Services Console to verify that the PVS Servers and Stores are configured and that DHCP boot options are defined.
27. Launch Provisioning Services Console and select Connect to Farm.
28. Enter localhost for the PVS1 server.
29. Click Connect.
30. Select Store Properties from the drop-down menu.
31. In the Store Properties dialog, add the Default store path to the list of Default write cache paths.
32. Click Validate. If the validation is successful, click Close and OK to continue.
Virtual Delivery Agents (VDAs) are installed on the server and workstation operating systems, and enable connections for desktops and apps. The following procedure was used to install VDAs for both HVD and HSD environments.
By default, when you install the Virtual Delivery Agent, Citrix User Profile Management is installed silently on master images. (Using profile management as a profile solution is optional but was used for this CVD, and is described in a later section.)
To install XenDesktop Virtual Desktop Agents, complete the following steps:
1. Launch the XenDesktop installer from the XenDesktop 7.13 ISO.
2. Click Start on the Welcome Screen.
3. To install the VDA for the Hosted Virtual Desktops (VDI), select Virtual Delivery Agent for Windows Desktop OS. After the VDA is installed for Hosted Virtual Desktops, repeat the procedure to install the VDA for Hosted Shared Desktops (RDS). In this case, select Virtual Delivery Agent for Windows Server OS and follow the same basic steps.
4. Select “Create a Master Image.”
5. Click Next.
6. Optional: Select Citrix Receiver.
7. Click Next.
8. Click Next.
9. Select “Do it manually” and specify the FQDN of the Delivery Controllers.
10. Click Next.
11. Accept the default features.
12. Click Next.
13. Allow the firewall rules to be configured Automatically.
14. Click Next.
15. Verify the Summary and click Install.
16. (Optional) Select Call Home participation.
17. (Optional) check “Restart Machine.”
18. Click Finish.
19. Repeat the procedure so that VDAs are installed for both HVD (using the Windows 10 OS image) and the HSD desktops (using the Windows Server 2016 image).
20. Select an appropriate workflow for the HSD desktop.
The Master Target Device refers to the target device from which a hard disk image is built and stored on a vDisk. Provisioning Services then streams the contents of the vDisk created to other target devices. This procedure installs the PVS Target Device software that is used to build the RDS and VDI golden images.
To install the Citrix Provisioning Server Target Device software, complete the following steps:
The instructions below outline the installation procedure to configure a vDisk for VDI desktops. When you have completed these installation steps, repeat the procedure to configure a vDisk for RDS.
1. On the Windows 10 Master Target Device, launch the PVS installer from the Provisioning Services 7.13 ISO.
2. Click the Target Device Installation button.
The installation wizard will check to resolve dependencies and then begin the PVS target device installation process.
3. Click Next.
4. Accept License Agreement and click Next.
5. Click Next.
6. Confirm the installation settings and click Next.
7. Click Install.
8. Deselect the checkbox to launch the Imaging Wizard and click Finish.
9. Reboot the machine.
The PVS Imaging Wizard automatically creates a base vDisk image from the master target device. To create the Citrix Provisioning Server vDisks, complete the following steps:
The instructions below describe the process of creating a vDisk for VDI desktops. When you have completed these steps, repeat the procedure to build a vDisk for HSD.
1. The PVS Imaging Wizard's Welcome page appears.
2. Click Next.
3. The Connect to Farm page appears. Enter the name or IP address of a Provisioning Server within the farm to connect to and the port to use to make that connection.
4. Use the Windows credentials (default) or enter different credentials.
5. Click Next.
6. Select Create new vDisk.
7. Click Next.
8. The Add Target Device page appears.
9. Select the Target Device Name, the MAC address associated with one of the NICs that was selected when the target device software was installed on the master target device, and the Collection to which you are adding the device.
10. Click Next.
11. The New vDisk dialog displays. Enter the name of the vDisk.
12. Select the Store where the vDisk will reside. Select the vDisk type, either Fixed or Dynamic, from the drop-down menu. (This CVD used Dynamic rather than Fixed vDisks.)
13. Click Next.
14. On the Microsoft Volume Licensing page, select the volume license option to use for target devices. For this CVD, volume licensing is not used, so the None button is selected.
15. Click Next.
16. Select Image entire boot disk on the Configure Image Volumes page.
17. Click Next.
18. On the Optimize Hard Disk for Provisioning Services page, select “Optimize the hard disk again for Provisioning Services before imaging.”
19. Click Next.
20. Select Create on the Summary page.
21. Review the configuration and click Continue.
22. When prompted, click No to shut down the machine.
23. Edit the VM settings and select Force BIOS Setup under Boot Options.
24. Restart Virtual Machine.
25. Configure the BIOS/VM settings for PXE/network boot, putting Network boot from VMware VMXNET3 at the top of the boot device list.
26. Select Exit Saving Changes.
After restarting the VM, log into the VDI or RDS master target. The PVS imaging process begins, copying the contents of the C: drive to the PVS vDisk located on the server.
27. If prompted to Restart select Restart Later.
28. A message is displayed when the conversion is complete, click Done.
29. Shutdown the VM used as the HVD or HSD master target.
30. Connect to the PVS server and validate that the vDisk image is available in the Store.
31. Right-click the newly created vDisk and select Properties.
32. On the vDisk Properties dialog, change Access mode to “Standard Image (multi-device, read-only access)”.
33. Set the Cache Type to “Cache in device RAM with overflow on hard disk.”
34. Set the Maximum RAM Size (for testing, 2 GB was used for Windows Server 2016 and 128 MB was used for Windows 10 virtual machines).
35. Click OK.
Repeat this procedure to create vDisks for both the Hosted VDI Desktops (using the Windows 10 OS image) and the Hosted Shared Desktops (using the Windows Server 2016 image).
To create HVD and HSD machines, complete the following steps:
1. Select the Master Target Device VM from the vSphere Client.
2. Right-click the VM and select Clone.
3. Name the cloned VM Desktop-Template.
4. Select the cluster and datastore where the first phase of provisioning will occur.
5. Remove Hard disk 1 from the Template VM.
Hard disk 1 is not required to provision desktop machines as the XenDesktop Setup Wizard dynamically creates the write cache disk.
6. Convert the Desktop-Template VM to a template.
7. Start the XenDesktop Setup Wizard from the Provisioning Services Console.
8. Right-click the Site.
9. Choose XenDesktop Setup Wizard… from the context menu.
10. Click Next.
11. Enter the XenDesktop Controller address that will be used for the wizard operations.
12. Click Next.
13. Select the Host Resources on which the virtual machines will be created.
14. Click Next.
15. Provide the Host Resources Credentials (Username and Password) to the XenDesktop controller when prompted.
16. Click OK.
17. Select the Template created earlier.
18. Click Next.
19. Select the vDisk that will be used to stream virtual machines.
20. Click Next.
21. Select “Create a new catalog” and provide catalog name.
The catalog name is also used as the collection name in the PVS site.
22. Click Next.
23. On the Operating System dialog, specify the operating system for the catalog. Specify Desktop OS for VDI and Server OS for RDS.
24. Click Next.
25. If you specified a Windows Desktop OS for VDIs, a User Experience dialog appears. Specify that the user will connect to “A fresh new (random) desktop each time.”
26. Click Next.
27. Chose a Scope for the new Catalog.
28. Click Next.
29. On the Virtual machines dialog, specify:
a. The number of VMs to create. (Note that it is recommended to create 200 or fewer per provisioning run. Create a single VM at first to verify the procedure.)
b. Number of vCPUs for the VM (2 for VDI, 6 for RDS)
c. The amount of memory for the VM (1.7GB for VDI, 24GB for RDS)
d. The write-cache disk size (10GB for VDI, 30GB for RDS)
e. PXE boot as the Boot Mode
30. Click Next.
31. Select the Create new accounts radio button.
32. Click Next.
33. Specify the Active Directory Accounts and Location. This is where the wizard should create the computer accounts.
34. Provide the Account naming scheme. An example name is shown in the text box below the naming scheme selection location.
35. Click Next.
36. Click Finish to create the virtual machine.
37. When the wizard is done provisioning the virtual machines, click Done.
The provisioning process takes approximately 10 seconds per machine.
38. Verify the desktop machines were successfully created in the following locations:
- Provisioning Server > Provisioning Services Console > Farm > Site > Device Collections
- Delivery Controller > Citrix Studio > Machine Catalogs
- Domain Controller > Active Directory Users and Computers
39. Log on to the newly provisioned desktop machine and, using the Virtual Disk Status tool, verify that the image mode is set to Read Only and the cache type is set to Device RAM with overflow on local hard drive.
1. Connect to a XenDesktop server and launch Citrix Studio.
2. Choose Create Machine Catalog from the drop-down menu.
3. Select Desktop OS and click Next.
4. Select appropriate machine management and click Next.
5. Select Random for the Desktop Experience.
6. On the Master Image page, select a VM and click Next.
7. Specify the number of the desktops to create and machine configuration. Click Next.
8. Specify AD account naming scheme and OU where accounts will be created.
9. On Summary page specify Catalog name and click Finish to start deployment.
10. Verify the desktop machines were successfully created in the following locations:
- Provisioning Server > Provisioning Services Console > Farm > Site > Device Collections
- Delivery Controller > Citrix Studio > Machine Catalogs
- Domain Controller > Active Directory Users and Computers
1. Connect to a XenDesktop server and launch Citrix Studio.
2. Choose Create Machine Catalog from the drop-down menu.
3. Click Next.
4. Select Desktop OS.
5. Click Next.
6. Select appropriate machine management.
7. Click Next.
8. Select Static, Dedicated Virtual Machine for Desktop Experience.
9. Click Next.
10. Select a Virtual Machine to be used for Catalog Master image.
11. Click Next.
12. Specify the number of the desktops to create and machine configuration.
13. Set amount of memory (MB) to be used by virtual desktops.
14. Select Full Copy for machine copy mode.
15. Click Next.
16. Specify AD account naming scheme and OU where accounts will be created.
17. Click Next.
18. On Summary page specify Catalog name and click Finish to start deployment.
19. Verify the desktop machines were successfully created in the following locations:
- Provisioning Server > Provisioning Services Console > Farm > Site > Device Collections
- Delivery Controller > Citrix Studio > Machine Catalogs
- Domain Controller > Active Directory Users and Computers
Delivery Groups are collections of machines that control access to desktops and applications. With Delivery Groups, you can specify which users and groups can access which desktops and applications.
To create delivery groups, complete the following steps:
The instructions below outline the procedure to create a Delivery Group for VDI desktops. When you have completed these steps, repeat the procedure to create a Delivery Group for HSD desktops.
1. Connect to a XenDesktop server and launch Citrix Studio.
2. Choose Create Delivery Group from the drop-down menu.
3. Click Next.
4. Select Machine catalog.
5. Provide the number of machines to be added to the delivery Group.
6. Click Next.
7. To make the Delivery Group accessible, you must add users; select Allow any authenticated users to use this Delivery Group.
8. Click Next.
User assignment can be updated any time after Delivery group creation by accessing Delivery group properties in Desktop Studio.
9. (Optional) Specify the applications that the catalog will deliver.
10. Click Next.
11. On the Summary dialog, review the configuration. Enter a Delivery Group name and a Display name (for example, HVD or HSD).
12. Click Finish.
13. Citrix Studio lists the created Delivery Groups and the type, number of machines created, sessions, and applications for each group in the Delivery Groups tab. Select Delivery Group and in Action List, select “Turn on Maintenance Mode.”
Policies and profiles allow the Citrix XenDesktop environment to be easily and efficiently customized.
Citrix XenDesktop policies control user access and session environments, and are the most efficient method of controlling connection, security, and bandwidth settings. You can create policies for specific groups of users, devices, or connection types with each policy. Policies can contain multiple settings and are typically defined through Citrix Studio. (The Windows Group Policy Management Console can also be used if the network environment includes Microsoft Active Directory and permissions are set for managing Group Policy Objects). The screenshot below shows policies for Login VSI testing in this CVD.
Figure 50 XenDesktop Policy
Profile management provides an easy, reliable, and high-performance way to manage user personalization settings in virtualized or physical Windows environments. It requires minimal infrastructure and administration, and provides users with fast logons and logoffs. A Windows user profile is a collection of folders, files, registry settings, and configuration settings that define the environment for a user who logs on with a particular user account. These settings may be customizable by the user, depending on the administrative configuration. Examples of settings that can be customized are:
· Desktop settings such as wallpaper and screen saver
· Shortcuts and Start menu settings
· Internet Explorer Favorites and Home Page
· Microsoft Outlook signature
· Printers
Some user settings and data can be redirected by means of folder redirection. However, if folder redirection is not used, these settings are stored within the user profile.
The first stage in planning a profile management deployment is to decide on a set of policy settings that together form a suitable configuration for your environment and users. The automatic configuration feature simplifies some of this decision-making for XenDesktop deployments. Screenshots of the User Profile Management interfaces that establish policies for this CVD’s RDS and VDI users (for testing purposes) are shown below. Basic profile management policy settings are documented here:
http://docs.citrix.com/en-us/xenapp-and-xendesktop/7-11.html
Figure 51 VDI User Profile Manager Policy
In this project, we tested a single Cisco HyperFlex cluster running sixteen (16) Cisco UCS HXAF220c-M4S Rack Servers, eight (8) Cisco UCS C220 M4 Rack Servers, and eight (8) Cisco UCS B200 M4 Blade Servers in a single Cisco UCS domain. This solution was tested to illustrate linear scalability for each workload studied.
Hardware Components:
· 2 x Cisco UCS 6248UP Fabric Interconnects
· 2 x Cisco Nexus 9372PX Access Switches
· 16 x Cisco UCS HXAF220c-M4SX Rack Servers (2 Intel Xeon processor E5-2690 v4 CPUs at 2.6 GHz, with 512 GB of memory per server [32 GB x 16 DIMMs at 2400 MHz]).
· 8 x Cisco UCS C220 M4 Rack Servers (2 Intel Xeon processor E5-2690 v4 CPUs at 2.6 GHz, with 512 GB of memory per server [32 GB x 16 DIMMs at 2400 MHz]).
· Cisco VIC 1227 mLOM
· 12G modular SAS HBA Controller
· 120GB 2.5” 6G SATA SSD drive
· 800GB 2.5” 6G SAS SSD drive
· 6 x 960GB 2.5” SATA SSD drive
· 8 x Cisco UCS B200 M4 Blade Servers (2 Intel Xeon processor E5-2690 v4 CPUs at 2.6 GHz, with 512 GB of memory per server [32 GB x 16 DIMMs at 2400 MHz]).
· Cisco VIC 1340 mLOM
· 2 x 64GB SD card
Software components:
· Cisco UCS firmware 3.1(2g)
· Cisco HyperFlex data platform 2.1.1b
· VMware vSphere 6.0
· Citrix XenDesktop 7.13
· Citrix Provisioning Server 7.13
· Citrix User Profile Management
· Citrix NetScaler VPX NS11.1 52.13.nc
· Microsoft SQL Server 2016
· Microsoft Windows 10
· Microsoft Windows 2016
· Microsoft Office 2016
· Login VSI 4.1.5
All validation testing was conducted on-site within the Cisco labs in San Jose, California.
The testing results focused on the entire process of the virtual desktop lifecycle by capturing metrics during desktop boot-up, user logon and virtual desktop acquisition (also referred to as ramp-up), user workload execution (also referred to as steady state), and user logoff for the Hosted Shared Desktop Session under test.
Test metrics were gathered from the virtual desktop, storage, and load generation software to assess the overall success of an individual test cycle. Each test cycle was not considered passing unless all of the planned test users completed the ramp-up and steady state phases (described below) and unless all metrics were within the permissible thresholds as noted as success criteria.
Three successfully completed test cycles were conducted for each hardware configuration and results were found to be relatively consistent from one test to the next.
You can obtain additional information and a free test license from http://www.loginvsi.com.
The following protocol was used for each test cycle in this study to ensure consistent results.
All machines were shut down utilizing the Citrix XenDesktop 7.13 Administrator Console.
All Launchers for the test were shut down. They were then restarted in groups of 10 each minute until the required number of launchers was running with the Login VSI Agent at a “waiting for test to start” state.
To simulate severe, real-world environments, Cisco requires the log-on and start-work sequence, known as Ramp Up, to complete within 48 minutes. Additionally, we require all sessions started, whether 60 single-server users or 4000 full-scale test users, to become active within two minutes after the last session is launched.
In addition, Cisco requires that the Login VSI Benchmark method is used for all single server and scale testing. This assures that our tests represent real-world scenarios. For each of the three consecutive runs on single server tests, the same process was followed. Complete the following steps:
1. Time 0:00:00 Start esxtop Logging on the following systems:
— Infrastructure and VDI Host Blades used in test run
— All Infrastructure VMs used in test run (AD, SQL, View Connection brokers, image mgmt., etc.)
2. Time 0:00:10 Start Storage Partner Performance Logging on Storage System.
3. Time 0:05: Boot RDS Machines using Citrix XenDesktop 7.13 Administrator Console.
4. Time 0:06 First machines boot.
5. Time 0:35 Single Server or Scale target number of RDS Servers registered on XD.
No more than 60 Minutes of rest time is allowed after the last desktop is registered and available on Citrix XenDesktop 7.13 Administrator Console dashboard. Typically a 20-30 minute rest period for Windows 10 desktops and 10 minutes for RDS VMs is sufficient.
6. Time 1:35 Start Login VSI 4.1.5 Knowledge Worker Benchmark Mode Test, setting auto-logoff time at 900 seconds, with Single Server or Scale target number of desktop VMs utilizing sufficient number of Launchers (at 20-25 sessions/Launcher).
7. Time 2:23 Single Server or Scale target number of desktop VMs desktops launched (48 minute benchmark launch rate).
8. Time 2:25 All launched sessions must become active.
All sessions launched must become active for a valid test run within this window.
9. Time 2:40 Login VSI Test Ends (based on Auto Logoff 900 Second period designated above).
10. Time 2:55 All active sessions logged off.
11. All sessions launched and active must be logged off for a valid test run. The Citrix XenDesktop 7.13 Administrator Dashboard must show that all desktops have been returned to the registered/available state as evidence of this condition being met.
12. Time 2:57 All logging terminated; Test complete.
13. Time 3:15 Copy all log files off to archive; set virtual desktops to maintenance mode through broker; shut down all Windows 10 machines.
14. Time 3:30 Reboot all hypervisors.
15. Time 3:45 Ready for new test sequence.
Our “pass” criteria for this testing are as follows: Cisco runs tests at session count levels that effectively utilize the server capacity, as measured by CPU, memory, storage, and network utilization. We use Login VSI version 4.1.25 to launch Knowledge Worker workload sessions. The number of launched sessions must equal the number of active sessions within two minutes of the last session launched in a test, as observed on the VSI Management console.
The Citrix XenDesktop Studio will be monitored throughout the steady state to make sure of the following:
· All running sessions report In Use throughout the steady state
· No sessions move to unregistered, unavailable or available state at any time during steady state
Within 20 minutes of the end of the test, all sessions on all launchers must have logged off automatically and the Login VSI Agent must have shut down. Cisco’s tolerance for Stuck Sessions is 0.5 percent (half of one percent). If the Stuck Session count exceeds that value, we identify it as a test failure condition.
Cisco requires three consecutive runs with results within +/-1% variability to pass the Cisco Validated Design performance criteria. For white papers written by partners, two consecutive runs within +/-1% variability are accepted. (All test data from partner run testing must be supplied along with proposed white paper.)
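As an illustration only (not part of the Login VSI or Citrix tooling), the pass criteria above can be expressed as simple checks; the function and parameter names below are hypothetical.

# Illustrative sketch of the pass criteria described above; names are hypothetical.
def run_passes(sessions_launched, sessions_active_within_2min, stuck_sessions):
    all_active = sessions_active_within_2min == sessions_launched   # every launched session became active
    stuck_ok = stuck_sessions <= 0.005 * sessions_launched          # Stuck Session tolerance of 0.5 percent
    return all_active and stuck_ok

def runs_consistent(vsimax_results, tolerance=0.01):
    # Three consecutive runs must fall within +/-1 percent of one another.
    return max(vsimax_results) - min(vsimax_results) <= tolerance * min(vsimax_results)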
We will publish Cisco Validated Designs with our recommended workload following the process above and will note that we did not reach a VSImax dynamic in our testing.
The purpose of this testing is to provide the data needed to validate Citrix XenDesktop 7.13 Hosted Shared Desktops and Hosted Virtual Desktops using Microsoft Windows Server 2016 and Windows 10 sessions on Cisco UCS HXAF220c-M4S, Cisco UCS C220 M4, and Cisco UCS B200 M4 servers.
The information contained in this section provides data points that a customer may reference in designing their own implementations. These validation results are an example of what is possible under the specific environment conditions outlined here and do not represent the full characterization of Citrix and VMware products.
Four test sequences, each containing three consecutive test runs generating the same result, were performed to establish system performance and linear scalability.
The philosophy behind Login VSI is different from that of conventional benchmarks. In general, most system benchmarks are steady state benchmarks. These benchmarks execute one or multiple processes, and the measured execution time is the outcome of the test. Simply put: the faster the execution time or the bigger the throughput, the faster the system is according to the benchmark.
Login VSI is different in its approach. Login VSI is not primarily designed to be a steady state benchmark (however, if needed, Login VSI can act like one). Login VSI was designed to perform benchmarks for SBC or VDI workloads through system saturation. Login VSI loads the system with simulated user workloads using well-known desktop applications like Microsoft Office, Internet Explorer, and Adobe PDF Reader. By gradually increasing the number of simulated users, the system will eventually be saturated. Once the system is saturated, the response time of the applications increases significantly. This latency in application response times gives a clear indication of whether the system is (close to being) overloaded. As a result, by nearly overloading a system it is possible to find out what its true maximum user capacity is.
After a test is performed, the response times can be analyzed to calculate the maximum active session/desktop capacity. Within Login VSI this is calculated as VSImax. When the system is coming closer to its saturation point, response times will rise. When reviewing the average response time it will be clear the response times escalate at saturation point.
This VSImax is the “Virtual Session Index (VSI)”. With Virtual Desktop Infrastructure (VDI) and Terminal Services (RDS) workloads this is valid and useful information. This index simplifies comparisons and makes it possible to understand the true impact of configuration changes on hypervisor host or guest level.
It is important to understand why specific Login VSI design choices have been made. An important design choice is to execute the workload directly on the target system within the session instead of using remote sessions. The scripts simulating the workloads are performed by an engine that executes workload scripts on every target system, and are initiated at logon within the simulated user’s desktop session context.
An alternative to the Login VSI method would be to generate user actions client side through the remoting protocol. These methods are always specific to a product and vendor dependent. More importantly, some protocols simply do not have a method to script user actions client side.
For Login VSI, the choice has been made to execute the scripts completely server side. This is the only practical and platform-independent solution for a benchmark like Login VSI.
The simulated desktop workload is scripted in a 48-minute loop when a simulated Login VSI user is logged on, performing generic Office worker activities. After the loop is finished, it restarts automatically. Within each loop, the response times of specific operations are measured at a regular interval: sixteen times within each loop. The response times of five of these operations are used to determine VSImax.
The five operations from which the response times are measured are:
· Notepad File Open (NFO)
Loading and initiating VSINotepad.exe and opening the openfile dialog. This operation is handled by the OS and by the VSINotepad.exe itself through execution. This operation seems almost instant from an end-user’s point of view.
· Notepad Start Load (NSLD)
Loading and initiating VSINotepad.exe and opening a file. This operation is also handled by the OS and by the VSINotepad.exe itself through execution. This operation seems almost instant from an end-user’s point of view.
· Zip High Compression (ZHC)
This action copies a random file and compresses it (with 7zip) with high compression enabled. The compression briefly spikes CPU and disk I/O.
· Zip Low Compression (ZLC)
This action copies a random file and compresses it (with 7zip) with low compression enabled. The compression briefly spikes disk I/O and creates some load on the CPU.
· CPU
Calculates a large array of random data and spikes the CPU for a short period of time.
These measured operations within Login VSI hit considerably different subsystems such as CPU (user and kernel), memory, disk, the OS in general, the application itself, print, GDI, and so on. These operations are deliberately short by nature. When such operations become consistently long, the system is saturated because of excessive queuing on some resource, and the average response times escalate. This effect is clearly visible to end users. If such operations consistently consume multiple seconds, the user will regard the system as slow and unresponsive.
Figure 52 Sample of a VSI Max Response Time Graph, Representing a Normal Test
Figure 53 Sample of a VSI Test Response Time Graph with a Clear Performance Issue
When the test is finished, VSImax can be calculated. When the system is not saturated, and it could complete the full test without exceeding the average response time latency threshold, VSImax is not reached and a number of sessions ran successfully.
The response times are very different per measurement type; for instance, the Zip action with high compression can be around 2800 ms, while the Zip action with low compression can take only 75 ms. The response times of these actions are therefore weighted before they are added to the total, which ensures that each activity has an equal impact on the total response time.
In comparison to previous VSImax models, this weighting better represents system performance; all actions have a very similar weight in the VSImax total. The following weighting of the response times is applied.
The following actions are part of the VSImax v4.1 calculation and are weighted as follows (US notation):
· Notepad File Open (NFO): 0.75
· Notepad Start Load (NSLD): 0.2
· Zip High Compression (ZHC): 0.125
· Zip Low Compression (ZLC): 0.2
· CPU: 0.75
This weighting is applied on the baseline and normal Login VSI response times.
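As a minimal illustration of this weighting (not Login VSI's actual implementation), a single weighted sample could be computed as follows:

# Illustrative sketch: apply the VSImax v4.1 weighting factors to one set of
# measured operation response times, given in milliseconds.
VSIMAX_WEIGHTS = {
    "NFO": 0.75,   # Notepad File Open
    "NSLD": 0.2,   # Notepad Start Load
    "ZHC": 0.125,  # Zip High Compression
    "ZLC": 0.2,    # Zip Low Compression
    "CPU": 0.75,   # CPU-intensive calculation
}

def weighted_vsi_response(sample_ms):
    # sample_ms maps each operation name to its raw response time in ms.
    return sum(sample_ms[op] * weight for op, weight in VSIMAX_WEIGHTS.items())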
With the introduction of Login VSI 4.1, a new method was also created to calculate the baseline of an environment. With the new workloads (Task Worker, Power Worker, and so on), enabling a 'base phase' for a more reliable baseline has become obsolete. In total, the 15 lowest VSI response time samples are taken from the entire test, the lowest 2 samples are removed, and the remaining 13 samples are averaged. The result is the baseline. The calculation is as follows:
· Take the lowest 15 samples of the complete test
· From those 15 samples, remove the lowest 2
· Average the remaining 13 samples; the result is the baseline
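A minimal sketch of this baseline calculation (illustrative only):

# Illustrative sketch: compute VSIbase from all weighted response time samples of a test.
def vsi_baseline(all_samples_ms):
    lowest_15 = sorted(all_samples_ms)[:15]   # take the 15 lowest samples
    remaining_13 = lowest_15[2:]              # remove the lowest 2
    return sum(remaining_13) / len(remaining_13)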
The VSImax average response time in Login VSI 4.1.x is calculated over a rolling window that depends on the number of active users logged on to the system: the latest 5 Login VSI response time samples plus 40 percent of the number of active sessions. For example, with 60 active sessions, the latest 5 + 24 (40 percent of 60) = 29 response time measurements are used for the average calculation.
To remove noise (accidental spikes) from the calculation, the top 5 percent and the bottom 5 percent of the VSI response time samples are removed from the average calculation, with a minimum of 1 top and 1 bottom sample. As a result, with 60 active users the last 29 VSI response time samples are taken; from those 29 samples, the top 2 and the bottom 2 are removed (5 percent of 29 is 1.45, rounded up to 2), and the average is calculated over the remaining 25 results.
VSImax v4.1.x is reached when the average VSI response time exceeds the VSIbase plus a 1000 ms latency threshold. Depending on the tested system, the VSImax response time can grow to 2 - 3x the baseline average. In end-user computing, a 3x increase in response time in comparison to the baseline is typically regarded as the maximum performance degradation that is still acceptable.
In VSImax v4.1.x this latency threshold is fixed at 1000 ms; this allows better and fairer comparisons between two different systems, especially when they have different baseline results. Ultimately, in VSImax v4.1.x, the performance of the system is decided not by the total average response time, but by the latency it has under load. For all systems, this is now 1000 ms (weighted).
The threshold for the total response time is: average weighted baseline response time + 1000ms.
When the system has a weighted baseline response time average of 1500ms, the maximum average response time may not be greater than 2500ms (1500+1000). If the average baseline is 3000 the maximum average response time may not be greater than 4000ms (3000+1000).
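Putting these rules together, a minimal sketch (illustrative only, not Login VSI's actual code) of the rolling average and the VSImax threshold test is shown below; history_ms is the chronological list of weighted response time samples.

# Illustrative sketch: rolling VSI average and threshold test as described above.
import math

def vsi_average(history_ms, active_sessions):
    window = 5 + int(active_sessions * 0.40)        # latest 5 samples + 40% of active sessions
    samples = sorted(history_ms[-window:])
    trim = max(1, math.ceil(len(samples) * 0.05))   # strip top and bottom 5%, minimum 1 each
    trimmed = samples[trim:-trim]
    return sum(trimmed) / len(trimmed)

def vsimax_reached(history_ms, active_sessions, baseline_ms, threshold_ms=1000):
    # VSImax is reached when the average weighted response time exceeds baseline + 1000 ms.
    return vsi_average(history_ms, active_sessions) > baseline_ms + threshold_ms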
When the threshold is not exceeded by the average VSI response time during the test, VSImax is not hit and a number of sessions ran successfully. This approach is fundamentally different in comparison to previous VSImax methods, as it was always required to saturate the system beyond VSImax threshold.
Lastly, VSImax v4.1.x is now always reported with the average baseline VSI response time result. For example: “The VSImax v4.1 was 125 with a baseline of 1526ms”. This helps considerably in the comparison of systems and gives a more complete understanding of the system. The baseline performance helps to understand the best performance the system can give to an individual user. VSImax indicates what the total user capacity is for the system. These two are not automatically connected and related:
When a server with a very fast dual-core CPU running at 3.6 GHz is compared to a 10-core CPU running at 2.26 GHz, the dual-core machine will give an individual user better performance than the 10-core machine. This is indicated by the baseline VSI response time: the lower this score is, the better the performance an individual user can expect.
However, the server with the slower 10-core CPU will easily have a larger capacity than the faster dual-core system. This is indicated by VSImax v4.1.x: the higher VSImax is, the larger the overall user capacity that can be expected.
With Login VSI 4.1.x a new VSImax method is introduced: VSImax v4.1. This methodology gives much better insight into system performance and scales to extremely large systems.
A key performance metric for desktop virtualization environments is the ability to boot the virtual machines quickly and efficiently to minimize user wait time for their desktop.
As part of Cisco’s virtual desktop test protocol, we shut down each virtual machine at the conclusion of a benchmark test. When we run a new test, we cold boot all 4000 desktops and measure the time it takes for the 4000th virtual machine to register as available in the Citrix XenDesktop Studio.
The Cisco HyperFlex HXAF220cM4SX, Cisco UCS C220 M4, and B200 M4 hybrid cluster (32-node) running Data Platform version 2.1(1b) software can accomplish this task in 30 minutes as shown in the following chart:
Figure 54 3072 XenDesktop 7.13 virtual machines (3000 Windows 10 and 72 Windows Server 2016 running Office 2016) boot and register with XenDesktop Delivery Controllers in 30 minutes.
For the Citrix XenDesktop 7.13 combined Hosted Shared Desktop and Hosted Virtual Desktop use case, the recommended maximum workload was determined based on both Login VSI Knowledge Worker workload end user experience measures, and HXAF220c-M4S, Cisco UCS C220 M4 and B200 M4 server operating parameters.
This recommended maximum workload approach allows you to determine the server N+1 fault tolerance load the server can successfully support in the event of a server outage for maintenance or upgrade.
Our recommendation is that the Login VSI Average Response and VSI Index Average should not exceed the Baseline plus 2000 milliseconds to ensure that end user experience is outstanding. Additionally, during steady state, the processor utilization should average no more than 90-95%.
Memory should never be oversubscribed for Desktop Virtualization workloads.
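As a hedged illustration (not part of any Cisco or Login VSI tooling; all names below are made up for this sketch), these recommended maximum workload criteria can be expressed as a simple check:

def within_recommended_workload(avg_response_ms, baseline_ms,
                                steady_state_cpu_pct, memory_oversubscribed):
    # Criteria used in this CVD: the VSI average response time stays
    # within baseline + 2000 ms, steady-state CPU averages no more
    # than roughly 90-95 percent, and memory is never oversubscribed.
    return (avg_response_ms <= baseline_ms + 2000
            and steady_state_cpu_pct <= 95
            and not memory_oversubscribed)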
Test Phase | Description
Boot | Start all RDS and/or VDI virtual machines at the same time.
Login | The Login VSI phase of the test is where sessions are launched and start executing the workload over a 48-minute duration.
Steady state | The steady state phase is where all users are logged in and performing various workload tasks such as using Microsoft Office, web browsing, PDF printing, playing videos, and compressing files.
Logoff | Sessions finish executing the Login VSI workload and log off.
The recommended maximum workload for a Cisco HyperFlex cluster configured with Cisco HXAF220c-M4S, Cisco UCS C220 M4, and B200 M4 nodes with E5-2690 v4 processors and 512 GB of RAM is 4000 sessions, comprising Windows Server 2016 hosted shared sessions and persistent/non-persistent Hosted Virtual Desktop users, all running Office 2016.
This section shows the key performance metrics captured on the Cisco UCS HyperFlex storage cluster, configured with sixteen HXAF220c-M4S converged nodes and sixteen compute-only nodes (eight Cisco UCS C220 M4 and eight Cisco UCS B200 M4), running HSD VMs and non-persistent/persistent HVDs during full-scale testing. The full-scale test with 4000 users comprised 1000 HSD sessions, 1000 non-persistent HVDs (PVS), 1000 non-persistent HVDs (MCS), and 1000 persistent HVDs (MCS full copy).
Test result highlights include:
· 0.665-second baseline response time (sub-second)
· 0.919-second average response time with 4000 desktop sessions running (sub-second)
· Average CPU utilization of 76 percent during steady state
· Average of 283 GB of RAM used out of 512 GB available
· 9,700 Mbps peak network utilization per host
· Average read latency 0.6 ms / maximum read latency 2.2 ms
· Average write latency 3.6 ms / maximum write latency 11.3 ms
· 130,000 peak I/O operations per second (IOPS) per cluster at steady state
· 2,700 MBps peak throughput per cluster at steady state
· 74 percent deduplication savings
· 45 percent compression savings
· Total of 86 percent storage space savings (see the worked calculation after this list)
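The total savings figure follows from compounding the two stages on the data that remains after each stage, rather than adding them. A minimal sketch of that arithmetic, assuming this common accounting convention (with which the figures above are consistent):

def combined_savings(dedupe_pct, compression_pct):
    # Savings compound multiplicatively: compression applies to the
    # data that remains after deduplication, not to the original total.
    remaining = (1 - dedupe_pct / 100.0) * (1 - compression_pct / 100.0)
    return (1 - remaining) * 100.0

# combined_savings(74, 45) -> 85.7, which rounds to the 86 percent total reported above.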
Figure 55 LoginVSI Analyzer Chart for 4000 Users Test
Figure 57 LoginVSI Analyzer Chart for Three Consecutive 4000 User Tests Running the Knowledge Worker Workload on the 32-Node HyperFlex Cluster
Figure 58 Sample ESXi Host CPU Core Utilization Running 4000 User Test on 32 Nodes
Figure 59 Sample ESXi Host Memory Usage in Mbytes Running 4000 User Test on 32 Nodes
Figure 60 Sample ESXi Host Network Adapter (VMNICs) Mbits Received/Transmitted Per Sec Running 4000 User Test on 32 Nodes
Figure 61 HyperFlex Cluster WebUI Performance Chart for Knowledge Worker Workload Running 4000 User Test on 32 Nodes
Figure 62 vCenter WebUI Reporting HyperFlex Cluster Deduplication and Compression Savings for 4000 User Sessions on Windows Server 2016 Hosted Shared Sessions and Windows 10 Hosted Virtual Desktops Deployed on the 32-Node HyperFlex Cluster
This Cisco HyperFlex solution addresses urgent needs of IT by delivering a platform that is cost effective and simple to deploy and manage. The architecture and approach used provide for a flexible and high-performance system with a familiar and consistent management model from Cisco. In addition, the solution offers numerous enterprise-class data management features to deliver the next-generation hyperconverged system.
Only Cisco offers the flexibility to add compute-only nodes to a hyperconverged cluster for compute-intensive workloads like desktop virtualization. This translates to lower cost for the customer, since no hyperconvergence licensing is required for those nodes.
Delivering responsive, resilient, high-performance Citrix XenDesktop 7.13-provisioned Microsoft Windows 10 virtual machines and Microsoft Windows Server 2016 hosted apps or desktops has many advantages for desktop virtualization administrators.
Virtual desktop end-user experience, as measured by the Login VSI tool in benchmark mode, is outstanding, with sub-second index average response times for all users running the Knowledge Worker workload test on Intel Broadwell E5-2600 v4 processors and Cisco 2400 MHz memory. In fact, we have set a new industry standard in performance for desktop virtualization on a hyperconverged platform.
Vadim is a subject matter expert on Cisco HyperFlex, Cisco Unified Computing System, and Cisco Nexus switching, and is a Citrix Certified Expert - Virtualization. Vadim is a member of Cisco's Computer Systems Product Group team.
We would like to acknowledge the following individuals for their support, contribution, and expertise in the design, validation, and creation of this Cisco Validated Design:
· Mike Brennan, Product Manager, Desktop Virtualization and Graphics Solutions, Cisco Systems, Inc.
· Swapnil Deshmukh, Technical Marketing Engineer, Springpath, Inc.
Cisco Nexus 9372 Switch A Running Configuration
version 7.0(3)I2(2d)
switchname SJC02L151-K21-N9372-A
class-map type network-qos class-fcoe
match qos-group 1
class-map type network-qos class-all-flood
match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2
vdc SJC02L151-K21-N9372-A id 1
limit-resource vlan minimum 16 maximum 4094
limit-resource vrf minimum 2 maximum 4096
limit-resource port-channel minimum 0 maximum 511
limit-resource u4route-mem minimum 248 maximum 248
limit-resource u6route-mem minimum 96 maximum 96
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
feature telnet
feature nxapi
cfs eth distribute
feature interface-vlan
feature hsrp
feature lacp
feature dhcp
feature vpc
feature lldp
clock protocol ntp vdc 1
no password strength-check
username admin password 5 $1$L35qoMhx$uOtdwpZBicXGW/vQleWbp/ role network-admin
ip domain-lookup
no service unsupported-transceiver
class-map type qos match-all class-fcoe
policy-map type qos jumbo
class class-default
set qos-group 0
copp profile strict
snmp-server user admin network-admin auth md5 0x9ccfab158439762740c487854f1b1e76
priv 0x9ccfab158439762740c487854f1b1e76 localizedkey
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
ntp server 10.10.30.2
ntp peer 10.10.30.3
ntp server 10.10.40.2
ntp peer 10.10.40.3
ntp server 72.163.32.44 use-vrf management
ntp logging
ntp master 8
vlan 1,30-34,40-44,132
vlan 30
name InBand-Mgmt-C1-XD
vlan 31
name Infra-Mgmt-C1-XD
vlan 32
name Storage-IP-C1-XD
vlan 33
name vMotion-C1-XD
vlan 34
name VM-Data-C1-XD
vlan 40
name InBand-Mgmt-C3A
vlan 41
name Infra-Mgmt-C3A
vlan 42
name StorageIP-C3A
vlan 43
name vMotion-C3A
vlan 44
name VM-Data-C3A
vlan 132
name OOB
service dhcp
ip dhcp relay
ipv6 dhcp relay
vrf context management
ip route 0.0.0.0/0 10.29.132.1
vpc domain 30
role priority 100
peer-keepalive destination 10.29.132.20
interface Vlan1
no shutdown
ip address 10.29.132.209/24
interface Vlan30
no shutdown
ip address 10.10.30.2/24
hsrp version 2
hsrp 30
preempt
priority 110
ip 10.10.30.1
ip dhcp relay address 10.10.31.21
interface Vlan31
no shutdown
ip address 10.10.31.2/24
hsrp version 2
hsrp 31
preempt
priority 110
ip 10.10.31.1
interface Vlan32
no shutdown
ip address 10.10.32.2/24
hsrp version 2
hsrp 32
preempt
priority 110
ip 10.10.32.1
interface Vlan33
no shutdown
ip address 10.10.33.2/24
hsrp version 2
hsrp 33
preempt
priority 110
ip 10.10.33.1
interface Vlan34
no shutdown
ip address 10.34.0.2/20
hsrp version 2
hsrp 34
preempt
priority 110
ip 10.34.0.1
ip dhcp relay address 10.10.31.21
ip dhcp relay address 10.10.31.22
interface Vlan40
no shutdown
ip address 10.10.40.2/24
hsrp version 2
hsrp 40
preempt
priority 110
ip 10.10.40.1
ip dhcp relay address 10.10.41.21
interface Vlan41
no shutdown
ip address 10.10.41.2/24
hsrp version 2
hsrp 41
preempt
priority 110
ip 10.10.41.1
interface Vlan42
no shutdown
ip address 10.10.42.2/24
hsrp version 2
hsrp 42
preempt
priority 110
ip 10.10.42.1
interface Vlan43
no shutdown
ip address 10.10.43.2/24
hsrp version 2
hsrp 43
preempt
priority 110
ip 10.10.43.1
interface Vlan44
no shutdown
ip address 10.44.0.2/20
hsrp version 2
hsrp 44
preempt
priority 110
ip 10.44.0.1
ip dhcp relay address 10.10.41.21
ip dhcp relay address 10.10.41.22
interface port-channel10
description vPC-PeerLink
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
spanning-tree port type network
service-policy type qos input jumbo
vpc peer-link
interface port-channel11
description FIA-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input jumbo
vpc 11
interface port-channel12
description FIB-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input jumbo
vpc 12
interface port-channel15
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input jumbo
vpc 15
interface port-channel16
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input jumbo
vpc 16
interface port-channel25
description C3a-FI-Uplink
switchport mode trunk
switchport trunk allowed vlan 1,40-44
spanning-tree port type edge trunk
mtu 9216
vpc 25
interface port-channel26
description C3a-FI-Uplink
switchport mode trunk
switchport trunk allowed vlan 1,40-44
spanning-tree port type edge trunk
mtu 9216
vpc 26
interface Ethernet1/1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
channel-group 10 mode active
interface Ethernet1/2
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
channel-group 10 mode active
interface Ethernet1/3
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
channel-group 10 mode active
interface Ethernet1/4
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
channel-group 10 mode active
interface Ethernet1/5
description FIA-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 11 mode active
interface Ethernet1/6
description FIA-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 11 mode active
interface Ethernet1/7
description FIB-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 12 mode active
interface Ethernet1/8
description FIB-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 12 mode active
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
interface Ethernet1/13
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 15 mode active
interface Ethernet1/14
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 15 mode active
interface Ethernet1/15
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 16 mode active
interface Ethernet1/16
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 16 mode active
interface Ethernet1/17
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/18
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/19
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/20
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/21
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/22
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/23
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/24
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/25
switchport mode trunk
switchport trunk allowed vlan 1,40-44
mtu 9216
channel-group 25 mode active
interface Ethernet1/26
switchport mode trunk
switchport trunk allowed vlan 1,40-44
mtu 9216
channel-group 25 mode active
interface Ethernet1/27
switchport mode trunk
switchport trunk allowed vlan 1,40-44
mtu 9216
channel-group 26 mode active
interface Ethernet1/28
switchport mode trunk
switchport trunk allowed vlan 1,40-44
mtu 9216
channel-group 26 mode active
interface Ethernet1/29
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/30
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/31
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/32
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/33
description C3-InfraHost
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/34
description C3-InfraHost
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/35
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/36
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/37
description M10-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/38
interface Ethernet1/39
interface Ethernet1/40
interface Ethernet1/41
interface Ethernet1/42
interface Ethernet1/43
interface Ethernet1/44
interface Ethernet1/45
interface Ethernet1/46
interface Ethernet1/47
description Jumphost Uplink to VLAN 40
switchport access vlan 40
interface Ethernet1/48
interface Ethernet1/49
interface Ethernet1/50
interface Ethernet1/51
interface Ethernet1/52
interface Ethernet1/53
interface Ethernet1/54
interface mgmt0
vrf member management
ip address 10.29.132.19/24
clock timezone PST -7 0
line console
line vty
Cisco Nexus 9372 Switch B Running Configuration
version 7.0(3)I2(2d)
switchname SJC02L151-K21-N9372-B
class-map type network-qos class-fcoe
match qos-group 1
class-map type network-qos class-all-flood
match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2
vdc SJC02L151-K21-N9372-B id 1
limit-resource vlan minimum 16 maximum 4094
limit-resource vrf minimum 2 maximum 4096
limit-resource port-channel minimum 0 maximum 511
limit-resource u4route-mem minimum 248 maximum 248
limit-resource u6route-mem minimum 96 maximum 96
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
feature telnet
feature nxapi
cfs eth distribute
feature interface-vlan
feature hsrp
feature lacp
feature dhcp
feature vpc
feature lldp
clock protocol ntp vdc 1
no password strength-check
username admin password 5 $1$AZCyddv1$dd5M8SYzBCD2SjMlM4z9O1 role network-admin
ip domain-lookup
no service unsupported-transceiver
class-map type qos match-all class-fcoe
policy-map type qos jumbo
class class-default
set qos-group 0
copp profile strict
snmp-server user admin network-admin auth md5 0xe4a98be832e9789cef09399b76d1044f
priv 0xe4a98be832e9789cef09399b76d1044f localizedkey
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
ntp peer 10.10.30.2
ntp server 10.10.30.3
ntp peer 10.10.40.2
ntp server 10.10.40.3
ntp server 72.163.32.44 use-vrf management
ntp logging
ntp master 8
vlan 1,30-34,40-44,132
vlan 30
name InBand-Mgmt-C1-XD
vlan 31
name Infra-Mgmt-C1-XD
vlan 32
name Storage-IP-C1-XD
vlan 33
name vMotion-C1-XD
vlan 34
name VM-Data-C1-XD
vlan 40
name InBand-Mgmt-C3A
vlan 41
name Infra-Mgmt-C3A
vlan 42
name StorageIP-C3A
vlan 43
name vMotion-C3A
vlan 44
name VM-Data-C3A
vlan 132
name OOB
service dhcp
ip dhcp relay
ipv6 dhcp relay
vrf context management
ip route 0.0.0.0/0 10.29.132.1
vpc domain 30
role priority 200
peer-keepalive destination 10.29.132.19
interface Vlan1
no shutdown
ip address 10.29.132.210/24
interface Vlan30
no shutdown
ip address 10.10.30.3/24
hsrp version 2
hsrp 30
preempt
priority 110
ip 10.10.30.1
ip dhcp relay address 10.10.31.21
interface Vlan31
no shutdown
ip address 10.10.31.3/24
hsrp version 2
hsrp 31
preempt
priority 110
ip 10.10.31.1
interface Vlan32
no shutdown
ip address 10.10.32.3/24
hsrp version 2
hsrp 32
preempt
priority 110
ip 10.10.32.1
interface Vlan33
no shutdown
ip address 10.10.33.3/24
hsrp version 2
hsrp 33
preempt
priority 110
ip 10.10.33.1
interface Vlan34
no shutdown
ip address 10.34.0.3/20
hsrp version 2
hsrp 34
preempt
priority 110
ip 10.34.0.1
ip dhcp relay address 10.10.31.21
ip dhcp relay address 10.10.31.22
interface Vlan40
no shutdown
ip address 10.10.40.3/24
hsrp version 2
hsrp 40
preempt
priority 110
ip 10.10.40.1
ip dhcp relay address 10.10.41.21
interface Vlan41
no shutdown
ip address 10.10.41.3/24
hsrp version 2
hsrp 41
preempt
priority 110
ip 10.10.41.1
interface Vlan42
no shutdown
ip address 10.10.42.3/24
hsrp version 2
hsrp 42
preempt
priority 110
ip 10.10.42.1
interface Vlan43
no shutdown
ip address 10.10.43.3/24
hsrp version 2
hsrp 43
preempt
priority 110
ip 10.10.43.1
interface Vlan44
no shutdown
ip address 10.44.0.3/20
hsrp version 2
hsrp 44
preempt
priority 110
ip 10.44.0.1
ip dhcp relay address 10.10.41.21
ip dhcp relay address 10.10.41.22
interface port-channel10
description vPC-PeerLink
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
spanning-tree port type network
service-policy type qos input jumbo
vpc peer-link
interface port-channel11
description FIA-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input jumbo
vpc 11
interface port-channel12
description FIB-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input jumbo
vpc 12
interface port-channel15
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input jumbo
vpc 15
interface port-channel16
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
spanning-tree port type edge trunk
mtu 9216
service-policy type qos input jumbo
vpc 16
interface port-channel25
description C3a-FI-Uplink
switchport mode trunk
switchport trunk allowed vlan 1,40-44
spanning-tree port type edge trunk
mtu 9216
vpc 25
interface port-channel26
description C3a-FI-Uplink
switchport mode trunk
switchport trunk allowed vlan 1,40-44
spanning-tree port type edge trunk
mtu 9216
vpc 26
interface Ethernet1/1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
channel-group 10 mode active
interface Ethernet1/2
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
channel-group 10 mode active
interface Ethernet1/3
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
channel-group 10 mode active
interface Ethernet1/4
switchport mode trunk
switchport trunk allowed vlan 1,30-34,40-44,132
channel-group 10 mode active
interface Ethernet1/5
description FIB-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 12 mode active
interface Ethernet1/6
description FIB-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 12 mode active
interface Ethernet1/7
description FIA-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 11 mode active
interface Ethernet1/8
description FIA-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 11 mode active
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
interface Ethernet1/13
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 16 mode active
interface Ethernet1/14
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 16 mode active
interface Ethernet1/15
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 15 mode active
interface Ethernet1/16
description FI-Uplink-K22
switchport mode trunk
switchport trunk allowed vlan 1,30-34
mtu 9216
channel-group 15 mode active
interface Ethernet1/17
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/18
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/19
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/20
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/21
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/22
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/23
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/24
description Infrastructure-XD-C1
switchport mode trunk
switchport trunk allowed vlan 1,30-34,132
interface Ethernet1/25
switchport mode trunk
switchport trunk allowed vlan 1,40-44
mtu 9216
channel-group 25 mode active
interface Ethernet1/26
switchport mode trunk
switchport trunk allowed vlan 1,40-44
mtu 9216
channel-group 25 mode active
interface Ethernet1/27
switchport mode trunk
switchport trunk allowed vlan 1,40-44
mtu 9216
channel-group 26 mode active
interface Ethernet1/28
switchport mode trunk
switchport trunk allowed vlan 1,40-44
mtu 9216
channel-group 26 mode active
interface Ethernet1/29
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/30
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/31
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/32
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/33
description C3-InfraHost
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/34
description C3-InfraHost
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/35
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/36
description Launcher-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/37
description M10-ESXi
switchport mode trunk
switchport trunk allowed vlan 1,40-44
interface Ethernet1/38
interface Ethernet1/39
interface Ethernet1/40
interface Ethernet1/41
interface Ethernet1/42
interface Ethernet1/43
interface Ethernet1/44
interface Ethernet1/45
interface Ethernet1/46
interface Ethernet1/47
description Jumphost Uplink to VLAN 40
switchport access vlan 40
interface Ethernet1/48
interface Ethernet1/49
interface Ethernet1/50
interface Ethernet1/51
interface Ethernet1/52
interface Ethernet1/53
interface Ethernet1/54
interface mgmt0
vrf member management
ip address 10.29.132.20/24
clock timezone PST -7 0
line console
line vty
The following charts delineate performance parameters for the 32-node cluster during a Login VSI 4.1.25 Knowledge Worker workload benchmark test with 4000 deployed Citrix XenDesktop 7.13 users.
The performance charts indicate that the HyperFlex All-Flash nodes and compute-only nodes in the hybrid configuration, running Data Platform version 2.1(1b), operated consistently from node to node and well within normal operating parameters for hardware in this class. The data also supports the even distribution of the workload across all 32 servers.
Figure 63 HXAF Server 1: Memory Usage in Mbytes
Figure 64 HXAF Server 1: CPU Core Utilization
Figure 65 HXAF Server 1: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 66 HXAF Server 2: Memory Usage in Mbytes
Figure 67 HXAF Server 2: CPU Core Utilization
Figure 68 HXAF Server 2: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 69 HXAF Server 3: Memory Usage in Mbytes
Figure 70 HXAF Server 3: CPU Core Utilization
Figure 71 HXAF Server 3: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 72 HXAF Server 4: Memory Usage in Mbytes
Figure 73 HXAF Server 4: CPU Core Utilization
Figure 74 HXAF Server 4: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 75 HXAF Server 5: Memory Usage in Mbytes
Figure 76 HXAF Server 5: CPU Core Utilization
Figure 77 HXAF Server 5: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 78 HXAF Server 6: Memory Usage in Mbytes
Figure 79 HXAF Server 6: CPU Core Utilization
Figure 80 HXAF Server 6: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 81 HXAF Server 7: Memory Usage in Mbytes
Figure 82 HXAF Server 7: CPU Core Utilization
Figure 83 HXAF Server 7: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 84 HXAF Server 8: Memory Usage in Mbytes
Figure 85 HXAF Server 8: CPU Core Utilization
Figure 86 HXAF Server 8: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 87 HXAF Server 9: Memory Usage in Mbytes
Figure 88 HXAF Server 9: CPU Core Utilization
Figure 89 HXAF Server 9: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 90 HXAF Server 10: Memory Usage in Mbytes
Figure 91 HXAF Server 10: CPU Core Utilization
Figure 92 HXAF Server 10: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 93 HXAF Server 11: Memory Usage in Mbytes
Figure 94 HXAF Server 11: CPU Core Utilization
Figure 95 HXAF Server 11: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 96 HXAF Server 12: Memory Usage in Mbytes
Figure 97 HXAF Server 12: CPU Core Utilization
Figure 98 HXAF Server 12: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 99 HXAF Server 13: Memory Usage in Mbytes
Figure 100 HXAF Server 13: CPU Core Utilization
Figure 101 HXAF Server 13: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 102 HXAF Server 14: Memory Usage in Mbytes
Figure 103 HXAF Server 14: CPU Core Utilization
Figure 104 HXAF Server 14: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 105 HXAF Server 15: Memory Usage in Mbytes
Figure 106 HXAF Server 15: CPU Core Utilization
Figure 107 HXAF Server 15: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 108 HXAF Server 16: Memory Usage in Mbytes
Figure 109 HXAF Server 16: CPU Core Utilization
Figure 110 HXAF Server 16: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 111 Compute Blade 1: Memory Usage in Mbytes
Figure 112 Compute Blade 1: CPU Core Utilization
Figure 113 Compute Blade 1: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 114 Compute Blade 2: Memory Usage in Mbytes
Figure 115 Compute Blade 2: CPU Core Utilization
Figure 116 Compute Blade 2: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 117 Compute Blade 3: Memory Usage in Mbytes
Figure 118 Compute Blade 3: CPU Core Utilization
Figure 119 Compute Blade 3: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 120 Compute Blade 4: Memory Usage in Mbytes
Figure 121 Compute Blade 4: CPU Core Utilization
Figure 122 Compute Blade 4: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 123 Compute Blade 5: Memory Usage in Mbytes
Figure 124 Compute Blade 5: CPU Core Utilization
Figure 125 Compute Blade 5: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 126 Compute Blade 6: Memory Usage in Mbytes
Figure 127 Compute Blade 6: CPU Core Utilization
Figure 128 Compute Blade 6: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 129 Compute Blade 7: Memory Usage in Mbytes
Figure 130 Compute Blade 7: CPU Core Utilization
Figure 131 Compute Blade 7: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 132 Compute Blade 8: Memory Usage in Mbytes
Figure 133 Compute Blade 8: CPU Core Utilization
Figure 134 Compute Blade 8: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 135 Compute Server 1: Memory Usage in Mbytes
Figure 136 Compute Server 1: CPU Core Utilization
Figure 137 Compute Server 1: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 138 Compute Server 2: Memory Usage in Mbytes
Figure 139 Compute Server 2: CPU Core Utilization
Figure 140 Compute Server 2: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 141 Compute Server 3: Memory Usage in Mbytes
Figure 142 Compute Server 3: CPU Core Utilization
Figure 143 Compute Server 3: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 144 Compute Server 4: Memory Usage in Mbytes
Figure 145 Compute Server 4: CPU Core Utilization
Figure 146 Compute Server 4: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 147 Compute Server 5: Memory Usage in Mbytes
Figure 148 Compute Server 5: CPU Core Utilization
Figure 149 Compute Server 5: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 150 Compute Server 6: Memory Usage in Mbytes
Figure 151 Compute Server 6: CPU Core Utilization
Figure 152 Compute Server 6: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 153 Compute Server 7: Memory Usage in Mbytes
Figure 154 Compute Server 7: CPU Core Utilization
Figure 155 Compute Server 7: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec
Figure 156 Compute Server 8: Memory Usage in Mbytes
Figure 157 Compute Server 8: CPU Core Utilization
Figure 158 Compute Server 8: Network Adapter (VMNICs) Mbits Received /Transmitted Per Sec