Cisco Solution for EMC VSPEX Microsoft SQL 2014 Consolidation
Deployment Guide for Microsoft SQL 2014 with Cisco UCS B200 M4 Blade Server, Nexus 9000 Switches (Standalone), Microsoft Windows Server 2012 R2, VMware vSphere 5.5 and EMC VNX5400
Last Updated: October 9, 2015
About Cisco Validated Designs
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2015 Cisco Systems, Inc. All rights reserved.
Table of Contents
Cisco Unified Computing System (UCS)
Cisco UCS 6248UP Fabric Interconnects
Cisco UCS 5108 Blade Server Chassis
Cisco UCS B200 M4 Blade Servers
Cisco UCS 2208 Series Fabric Extenders
EMC Storage Technology and Benefits
VNX Intel MCx Code Path Optimization
EMC Unisphere Management Suite
Microsoft Windows Server 2012 R2
Hardware and Software Specifications
Cisco VSPEX Environment Deployment
Configure Cisco Nexus 9396PX Switches
Configure Cisco Unified Computing System
Configure EMC VNX5400 Storage Array
Install and Configure vSphere ESXi
Set Up VMware ESXi Installation
Set Up Management Networking for ESXi Hosts
Deploy Virtual Machines and Install OS
Install Windows Server 2012 R2
Install SQL Server 2014 Standalone Instance on Virtual Machines
Performance of Consolidated SQL Servers
Cisco UCS B200 M4 BIOS Settings
Install Windows Server Failover Cluster
Verifying the Cluster Configuration
Install SQL Server 2014 Standalone Instance on each Virtual Machine
Configure SQL Server AlwaysOn Availability Group
Enable AlwaysOn Availability Group Feature
Create AlwaysOn Availability Group
SQL Server AlwaysOn Availability Group Failover Validation
The emerging IT infrastructure strategies have given businesses an opportunity to evolve with greater agility while improving efficiency. Private cloud infrastructures allow businesses to consolidate infrastructure, centralize management and realize an improved total cost of ownership in a short time.
Legacy Information Technology architectures were typically silos of infrastructure, dedicated to a single purpose. Core business applications and database servers were frequently sequestered on dedicated machinery, resulting in low asset utilization and challenging asset refresh projects. Modern advancements in virtualization make migration easy, while improved server and network infrastructure ensures that key applications have the resources needed to serve business needs.
Microsoft SQL Server has evolved as the database of choice for a large number of business applications because of its robust and rich set of features, ease of use, and competitive pricing. Given its important role in many businesses, it makes sense to host Microsoft SQL Server workloads on private cloud infrastructures that optimize the management and performance of this critical business engine. Microsoft SQL Server adapts well to a virtualized private cloud platform such as the Cisco-EMC VSPEX solution.
This Cisco Validated Design (CVD) provides a framework for using virtualization to consolidate multi-database, multi-instance SQL Server deployments running OLTP applications, highlighting key decision points based on technical analysis. This consolidation scenario can be integrated with cloud orchestration and provisioning processes to offer Database-as-a-Service (DaaS) with Microsoft SQL Server. This deployment guide shows how to design a deployment strategy for virtualized SQL Server instances while satisfying key business requirements such as scalability, availability, and disaster recovery.
This Cisco-EMC VSPEX solution showcases improved efficiency by guiding the reader through a typical consolidation scenario for Microsoft SQL Server supporting enterprise-level applications. It showcases both HA and DR solutions as value-added offerings along with the virtualized SQL Server instances. Other major highlights of this solution are the VMware ESXi-based deployment, features such as SQL Server 2014 AlwaysOn and VMware DRS, and the use of Cisco Nexus 9000 Series switches for client traffic.
The objective of this reference deployment guide is to help businesses deliver optimized operational efficiency. It describes a Microsoft SQL Server 2014 consolidation architecture using Cisco UCS B-Series Servers, VMware ESXi, and the EMC VNX5400 storage array. The HA/DR features of Microsoft SQL Server 2014 and VMware vSphere are also highlighted.
Developing an infrastructure that allows flexibility, redundancy, high-availability, ease of management, security, and access control while reducing costs, hardware footprints, and computing complexities is one of the biggest challenges faced by IT across all industries.
The tremendous growth in hardware computing capacity, database applications, and physical server sprawl has resulted in high-priced and complex computing environments containing many over-provisioned and under-utilized database servers. Many of these servers run a single SQL Server instance at an average of 10 to 15 percent CPU utilization, which results in poor use of server resources. This sprawl also increases the system administrator's SQL Server management workload; in particular, after a catastrophic physical database server failure, administrators may be required to rebuild and restore environments within a short period of time.
Consolidating database environments through virtualization represents a significant opportunity to address these challenges. By integrating services into a private cloud, the business not only benefits from greater efficiency, but the services themselves also improve. Consolidation through private cloud solutions like VSPEX drives standardization of the database environment, simplifies maintenance and administration tasks, and improves asset utilization.
Microsoft SQL Server is the de facto database for most Windows applications; this database technology is an ideal prospect for consolidation on a virtualized platform. Once consolidated onto a private cloud like the Cisco-EMC VSPEX solution, SQL Server better supports enterprise applications through improved failover and DR capabilities inherent to the private cloud hosting solution.
This CVD is intended for customers, partners, solution architects, storage administrators, and database administrators who are evaluating or planning the consolidation of Microsoft SQL Server by virtualizing it in a VMware vSphere environment, using the HA and DRS features for high availability and load balancing. This document also showcases protection against application failure using the Microsoft SQL Server AlwaysOn Availability Group feature. It provides an overview of various considerations and reference architectures for consolidating SQL Server 2014 deployments in a highly available environment.
This document captures the architecture, configuration information, and deployment procedure for consolidating Microsoft SQL Server 2014 databases by virtualizing them on VMware vSphere 5.5 using Cisco UCS, Cisco Nexus 9000 Series switches, and EMC VNX5400 storage arrays. This document focuses mainly on the consolidation of multiple Microsoft SQL Server 2014 databases on a two-node VMware vSphere cluster with HA/DRS enabled to maximize the availability of VMs and application uptime. The performance of this consolidation study on a single host is also measured to understand throughput, latency, CPU usage, and VM scalability.
This document also showcases building a secondary site as a near-site disaster recovery location for the consolidated SQL Servers in the primary site by configuring SQL Server AlwaysOn availability group pairs with synchronous replication between them.
Many Microsoft SQL Server deployments have a single application database residing on a physical server. Historically, this was a simple way to provision new applications for business requirements. As the business expanded, these simple applications evolved into business-critical applications; because the original deployments lacked high-availability features, any disruption in information availability impeded the functioning of the organization. IT organizations therefore face the challenge of keeping these systems up and running around the clock amid increasing demands and growing complexity.
With this simplistic approach, the problems may not be immediately apparent; however, the challenges become clear when the databases are moved to the data center:
· When the applications and their servers are moved to a data center, the heterogeneous hardware in the data center can lead to increased maintenance and administrative costs.
· The deployments may not be license-compliant for all the software that is in use, and the cost of becoming compliant can be significant.
· Systems and applications that started out small and non-mission-critical may have become central to the business over time, yet those servers are still deployed as they were originally, so the hardware and architecture no longer meet the availability or performance needs of the business.
· Poor server utilization and space usage in the data center. Large numbers of older servers consume space and other resources that may be needed for newer servers. The servers themselves may be either sitting nearly idle or completely maxed out as a result of poor capacity planning or the lack of a hardware refresh during their lifetime.
This section provides an overview of the Cisco solution for EMC VSPEX for virtualized Microsoft SQL Server and the key technologies used in this solution. This solution has been designed and validated by Cisco to provide the server, network and storage resources for hardware consolidation of Microsoft SQL Server deployments using Cisco UCS, VMware virtualization technologies and EMC VNX5400 storage arrays.
This solution enables customers to consolidate and deploy multiple small or medium virtualized SQL Servers in a Cisco solution for EMC VSPEX environment.
This solution was validated using Fibre Channel for EMC VNX5400 storage arrays. This solution requires the presence of Active Directory, DNS and VMware vCenter. The implementation of these services is beyond the scope of this guide, but the services are prerequisites for a successful deployment.
Following are the components used for the design and deployment of this solution:
· Cisco Unified Computing System
· Cisco UCS Server B200 M4 Blades
· Cisco UCS 6248 Fabric Interconnects
· Cisco UCS 5108 Blade Server Chassis
· EMC VNX5400 Storage Arrays
· VMware vSphere 5.5 Update 2
· Microsoft Windows Server 2012 R2
· Microsoft SQL Server 2014
The Cisco Unified Computing System™ (Cisco UCS®) is a next-generation data center platform that unites computing, networking, storage access, and virtualization resources into a cohesive system designed to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain.
Figure 1 Cisco UCS Components
Figure 2 Cisco UCS Overview
The main components of the Cisco UCS are:
· Compute - The system is based on an entirely new class of computing system that incorporates rack-mount and blade servers based on Intel Xeon E5-2600 v3 Series processors.
· Network - The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
· Virtualization - The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
· Storage access - The system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. By unifying the storage access the Cisco Unified Computing System can access storage over Ethernet (SMB 3.0 or iSCSI), Fibre Channel, and Fibre Channel over Ethernet (FCoE). This provides customers with storage choices and investment protection. In addition, the server administrators can pre-assign storage-access policies to storage resources, for simplified storage connectivity and management leading to increased productivity.
· Management - the system uniquely integrates all system components to enable the entire solution to be managed as a single entity by the Cisco UCS Manager. The Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a powerful scripting library module for Microsoft PowerShell built on a robust application programming interface (API) to manage all system configuration and operations.
Cisco Unified Computing System (Cisco UCS) fuses access layer networking and servers. This high-performance, next-generation server system provides a data center with a high degree of workload agility and scalability.
The Cisco Unified Computing System is designed to deliver:
· A reduced Total Cost of Ownership and increased business agility.
· Increased IT staff productivity through just-in-time provisioning and mobility support.
· A cohesive, integrated system which unifies the technology in the data center. The system is managed, serviced, and tested as a whole.
· Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand.
· Industry standards supported by a partner ecosystem of industry leaders.
Cisco UCS Manager provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System through an intuitive GUI, a command line interface (CLI), a Microsoft PowerShell module, or an XML API. The Cisco UCS Manager provides unified management domain with centralized management capabilities and controls multiple chassis and thousands of virtual machines.
The Fabric interconnects provide a single point for connectivity and management for the entire system. Typically deployed as an active-active pair, the system’s fabric interconnects integrate all components into a single, highly-available management domain controlled by Cisco UCS Manager. The fabric interconnects manage all I/O efficiently and securely at a single point, resulting in deterministic I/O latency regardless of a server or virtual machine’s topological location in the system.
Cisco UCS 6200 Series Fabric Interconnects support the system’s 80-Gbps unified fabric with low-latency, lossless, cut-through switching that supports IP, storage, and management traffic using a single set of cables. The fabric interconnects feature virtual interfaces that terminate both physical and virtual connections equivalently, establishing a virtualization-aware environment in which blade servers, rack servers, and virtual machines are interconnected using the same mechanisms. The Cisco UCS 6248UP is a 1-RU Fabric Interconnect that features up to 48 universal ports that can support 10 Gigabit Ethernet, Fibre Channel over Ethernet, or native Fibre Channel connectivity.
Figure 3 Cisco UCS 6248UP Fabric Interconnect – Front View
The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis.
The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.
Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS 2208XP Fabric Extenders.
A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O bandwidth for two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards.
Figure 4 Cisco UCS5100 Series Blade Server Chassis (Front and Rear Views)
The enterprise-class Cisco UCS B200 M4 blade server extends the capabilities of Cisco’s Unified Computing System portfolio in a half-width blade form factor. The Cisco UCS B200 M4 uses the power of the latest Intel® Xeon® E5-2600 v3 Series processor family CPUs with up to 1536 GB of RAM (using 64 GB DIMMs), two solid-state drives (SSDs) or hard disk drives (HDDs), and up to 80 Gbps throughput connectivity. The Cisco UCS B200 M4 Blade Server mounts in a Cisco UCS 5100 Series blade server chassis or Cisco UCS Mini blade server chassis. It has 24 total slots for registered ECC DIMMs (RDIMMs) or load-reduced DIMMs (LR DIMMs) for up to 1536 GB total memory capacity (B200 M4 configured with two CPUs using 64 GB DIMMs). It supports one connector for Cisco’s VIC 1340 or 1240 adapter, which provides Ethernet and Fibre Channel over Ethernet (FCoE).
Figure 5 Cisco UCS B200 M4 Blade Server
The Cisco UCS 2208XP Fabric Extender has eight 10 Gigabit Ethernet, FCoE-capable, Enhanced Small Form-Factor Pluggable (SFP+) ports that connect the blade chassis to the fabric interconnect. Each Cisco UCS 2208XP has thirty-two 10 Gigabit Ethernet ports connected through the midplane to each half-width slot in the chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 160 Gbps of I/O to the chassis.
Figure 6 Cisco UCS 2208XP Fabric Extender
The Cisco UCS Virtual Interface Card (VIC) 1340 is a 2-port 40-Gbps Ethernet or dual 4 x 10-Gbps Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) adapter designed exclusively for the M4 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1340 capabilities are enabled for two ports of 40-Gbps Ethernet.
The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present over 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1340 supports Cisco® Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment and management.
Figure 7 Cisco Virtual Interface Card (VIC) 1340
Cisco's Unified Computing System is revolutionizing the way servers are managed in data centers. Following are the unique differentiators of Cisco UCS and Cisco UCS Manager.
· Embedded management—In the Cisco Unified Computing System, the servers are managed by the embedded firmware in the Fabric Interconnects, eliminating the need for any external physical or virtual devices to manage the servers. Also, a pair of FIs can manage up to 20 chassis, each containing up to 8 blade servers, for a total of 160 servers with fully redundant connectivity. This gives enormous scaling on the management plane.
· Unified fabric—In Cisco Unified Computing System, from blade server chassis or rack server fabric-extender to FI, there is a single Ethernet cable used for LAN, SAN, and management traffic. This converged I/O results in reduced cables, SFPs, and adapters - reducing capital and operational expenses of overall solution.
· Auto Discovery—By simply inserting a blade server in the chassis, discovery and inventory of the compute resource occurs automatically without any management intervention. The combination of unified fabric and auto-discovery enables the wire-once architecture of the Cisco Unified Computing System, where the compute capability of the system can be extended easily while keeping the existing external connectivity to LAN, SAN, and management networks.
· Policy-based resource classification—When a compute resource is discovered by Cisco UCS Manager, it can be automatically classified to a given resource pool based on defined policies. This capability is useful in multi-tenant cloud computing.
· Combined Rack and Blade server management—Cisco UCS Manager can manage Cisco UCS B-Series blade servers and Cisco UCS C-Series rack servers under the same Cisco UCS domain. This feature, along with stateless computing, makes compute resources truly hardware form factor agnostic.
· Model based management architecture—The Cisco UCS Manager architecture and management database is model based and data driven. An open, standards-based XML API is provided to operate on the management model. This enables easy and scalable integration of Cisco UCS Manager with other management systems, such as VMware vCloud Director, Microsoft System Center, and Citrix CloudPlatform.
· Policies, Pools, Templates—The management approach in Cisco UCS Manager is based on defining policies, pools and templates, instead of cluttered configuration, which enables a simple, loosely coupled, data driven approach in managing compute, network and storage resources.
· Loose referential integrity—In Cisco UCS Manager, a service profile, port profile, or policy can refer to other policies or logical resources with loose referential integrity. A referred policy does not have to exist at the time of authoring the referring policy, and a referred policy can be deleted even though other policies refer to it. This allows different subject matter experts from domains such as network, storage, security, server, and virtualization to work independently of each other, providing great flexibility when they collaborate to accomplish a complex task.
· Policy resolution—In Cisco UCS Manager, a tree structure of organizational unit hierarchy can be created that mimics the real life tenants and/or organization relationships. Various policies, pools, and templates can be defined at different levels of organization hierarchy. A policy referring to another policy by name is resolved in the organization hierarchy with closest policy match. If no policy with specific name is found in the hierarchy of the root organization, then special policy named "default" is searched. This policy resolution practice enables automation friendly management APIs and provides great flexibility to owners of different organizations.
· Service profiles and stateless computing—A service profile is a logical representation of a server, carrying its various identities and policies. This logical server can be assigned to any physical compute resource as long as it meets the resource requirements. Stateless computing enables procurement of a server within minutes, which used to take days in legacy server management systems.
· Built-in multi-tenancy support—The combination of policies, pools, templates, loose referential integrity, policy resolution in organization hierarchy, and a service profiles based approach to compute resources makes Cisco UCS Manager inherently friendly to multi-tenant environment typically observed in private and public clouds.
· Extended Memory—The extended memory architecture of Cisco Unified Computing System servers allows up to 760 GB RAM per server, allowing the high VM-to-physical-server ratio required in many deployments, or the large memory operations required by certain architectures such as big data.
· Virtualization aware network—VM-FEX technology makes the access layer of the network aware of host virtualization. This prevents pollution of the compute and network domains, because the virtual network is managed through port profiles defined by the network administration team. VM-FEX also offloads the hypervisor CPU by performing switching in hardware, allowing the hypervisor CPU to do more virtualization-related tasks. VM-FEX technology is well integrated with VMware vCenter, Linux KVM, and Hyper-V SR-IOV to simplify cloud management.
· Simplified QoS—With Fibre Channel and Ethernet converged in the Cisco Unified Computing System fabric, built-in support for QoS and lossless Ethernet makes the convergence seamless. Network Quality of Service (QoS) is simplified in Cisco UCS Manager by representing all system classes in one GUI panel.
The Cisco Nexus 9396PX Switch delivers comprehensive line-rate Layer 2 and Layer 3 features in a two-rack-unit (2RU) form factor. It supports line-rate 1/10/40 GE with 960 Gbps of switching capacity. It is ideal for top-of-rack and middle-of-row deployments in both traditional and Cisco Application Centric Infrastructure (ACI)-enabled enterprise, service provider, and cloud environments.
Figure 8 Cisco Nexus 9396PX Switch
The VNX storage series provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation.
VNX storage includes the following components, sized for the stated reference architecture workload:
· Host Adapter Ports (for block)—Provide host connectivity through fabric to the array
· Storage Processors—The compute components of the storage array, which are used for all aspects of data moving into, out of, and between arrays
· Disk Drives—Disk spindles and solid state drives (SSDs) that contain the host or application data and their enclosures
· Data Movers (for file)—Front-end appliances that provide file services to hosts (optional if CIFS services are provided)
The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and I/O ports. It enables Common Internet File System (CIFS-SMB) and Network File System (NFS) protocols on the VNX.
The VNX5400 array can support a maximum of 250 drives, the VNX5600 can host up to 500 drives, and the VNX5800 can host up to 750 drives.
The VNX series supports a wide range of business-class features that are ideal for the private cloud environment including:
· EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP™)
· EMC FAST Cache
· File-level data deduplication and compression
· Block deduplication
· Thin provisioning
· Replication
· Snapshots or checkpoints
· File-level retention
· Quota management
The EMC VNX flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.
VNX has new features and enhancements that are designed and built upon the first generation's success. These features and enhancements are:
· More capacity with Multicore optimization with Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx)
· Greater efficiency with a flash-optimized hybrid array
· Better protection by increasing application availability with active/active storage processors
· Easier administration and deployment by increasing productivity with a new Unisphere Management Suite
VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. A flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high-performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts the most active data to the cache, ensuring that customers never have to make concessions for cost or performance.
FAST VP dynamically absorbs unpredicted spikes in system workloads. As that data ages and becomes less active over time, FAST VP tiers the data from high-performance to high-capacity drives automatically, based on customer-defined policies. This functionality has been enhanced with four times better granularity and with new FAST VP solid-state disks (SSDs) based on enterprise multi-level cell (eMLC) technology to lower the cost per gigabyte. All VSPEX use cases benefit from the increased efficiency.
VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNX, customers can realize an even greater return on their investment. VNX provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.
The advent of flash technology has been a catalyst in totally changing the requirements of midrange storage systems. EMC redesigned the midrange storage platform to efficiently optimize multicore CPUs to provide the highest performing storage system at the lowest cost in the market. MCx distributes all VNX data services across all cores—up to 32, as shown in the below figure. The VNX series with MCx has dramatically improved the file performance for transactional applications like databases or virtual machines over network-attached storage (NAS).
Figure 9 Next-Generation VNX with Multicore Optimization
The cache is the most valuable asset in the storage subsystem; its efficient use is key to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system.
Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage: hard disk drives (HDDs) and SSDs. Greatly increased performance improvements in VNX come from the modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors.
VNX storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall performance, optimizing for transaction performance (cost per IOPS), bandwidth performance (cost per GB/s) with low latency, and providing optimal capacity efficiency (cost per GB).
VNX provides the following performance improvements:
· Up to four times more file transactions when compared with dual controller arrays
· Increased file performance for transactional applications (for example, Microsoft Exchange on VMware over NFS) by up to three times with a 60 percent better response time
· Up to four times more Oracle and Microsoft SQL Server OLTP transactions
· Up to four times more virtual machines, a greater than three times improvement
Active/Active Array Storage Processors
The new VNX architecture provides active/active array storage processors, as shown in the following figure, which eliminate application timeouts during path failover since both paths are actively serving I/O.
Load balancing is also improved, and applications can achieve up to a two times improvement in performance. Active/active for block is ideal for applications that require the highest levels of availability and performance but do not require tiering or efficiency services such as compression, deduplication, or snapshots.
With this VNX release, VSPEX customers can use virtual Data Movers (VDMs) and VNX Replicator to perform automated and high-speed file system migrations between systems.
Figure 10 Active/Active Processors Increase Performance, Resiliency, and Efficiency
The new EMC Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in the following figure, the suite also includes Unisphere Remote for centrally managing up to thousands of VNX systems, with new support for XtremSW Cache.
Figure 11 Unisphere Management Suite
EMC Storage Integrator (ESI) is aimed at the Windows and application administrator. ESI is easy to use, delivers end-to-end monitoring, and is hypervisor agnostic. Administrators can use ESI to provision storage in both virtual and physical Windows environments, and can troubleshoot by viewing the topology of an application from the underlying hypervisor to the storage.
ESXi 5.5 is a "bare-metal" hypervisor, so it installs directly on top of the physical server and partitions it into multiple virtual machines that can run simultaneously, sharing the physical resources of the underlying server. VMware introduced ESXi in 2007 to deliver industry-leading performance and scalability while setting a new bar for reliability, security and hypervisor management efficiency.
Due to its ultra-thin architecture with less than 100MB of code-base disk footprint, ESXi delivers industry-leading performance and scalability plus:
· Improved Reliability and Security — with fewer lines of code and independence from a general-purpose OS, ESXi drastically reduces the risk of bugs or security vulnerabilities and makes it easier to secure your hypervisor layer.
· Streamlined Deployment and Configuration — ESXi has far fewer configuration items than ESX, greatly simplifying deployment and configuration and making it easier to maintain consistency.
· Higher Management Efficiency — The API-based, partner integration model of ESXi eliminates the need to install and manage third-party management agents. You can automate routine tasks by leveraging remote command line scripting environments such as vCLI or PowerCLI.
· Simplified Hypervisor Patching and Updating — Due to its smaller size and fewer components, ESXi requires far fewer patches than ESX, shortening service windows and reducing security vulnerabilities.
Microsoft Windows Server 2012 R2 offers businesses an enterprise-class infrastructure that simplifies the deployment of IT services. With Windows Server 2012 R2, you can achieve affordable, multi-node business continuity scenarios with high service uptime and at-scale disaster recovery. As an open application and web platform, Windows Server 2012 R2 helps you build, deploy, and scale modern applications and high-density websites for the datacenter and the cloud. Windows Server 2012 R2 also enables IT to empower users by providing them with flexible, policy-based resources while protecting corporate information.
Microsoft SQL Server 2014 builds on the mission-critical capabilities delivered in the prior release by making it easier and more cost effective to develop high-performance applications. Apart from several performance-improving capabilities, Microsoft SQL Server 2014 delivers a robust platform for hosting mission-critical database environments.
The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximize the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases, which fail over together. An availability group supports a set of read-write primary databases and one to eight sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only access and/or some backup operations.
An availability group fails over at the level of an availability replica. Failovers are not caused by database issues such as a database becoming suspect due to a loss of a data file, deletion of a database, or corruption of a transaction log.
Figure 12 illustrates the architecture for the Cisco solution for EMC VSPEX infrastructure for SQL Server consolidation.
Figure 12 Reference Architecture
This reference architecture showcases the possibility of building a balanced and scalable infrastructure enabling a solution-based consolidation of SQL Server within a customer environment. The solution provides lower cost and predictable performance while enabling options such as accelerated backup and comprehensive resiliency features. The design leverages the flexibility of the Cisco UCS Fabric Interconnects to operate in Fibre Channel switching mode, eliminating the need for a separate Fibre Channel switch and thus reducing deployment costs. SAN connectivity policies in Cisco UCS Manager are used to automate SAN zoning, which reduces administrative tasks.
The architecture for this solution is divided into two sections:
1. Primary Site for SQL Server Consolidation
2. Secondary Site for near-site disaster recovery solution
The primary site showcases the consolidation of multiple Windows Server 2012 R2 virtual machines, each hosting a single SQL Server 2014 database instance, on a VMware environment. Each virtual machine is fully isolated from the other virtual machines and communicates with the other servers on the network as if it were a physical machine. Optimal resource governance between the multiple virtual machines is automatically managed by the VMware ESXi hypervisor. The VMware vSphere HA, DRS, and vMotion features are configured to provide high availability of virtual machines within the primary site.
The secondary site is built similarly to the primary site to provide a near-site disaster recovery solution using the SQL Server AlwaysOn Availability Group feature. The hardware requirements for the primary and secondary sites are the same; however, in this solution we have used a pair of Cisco Nexus switches between the primary and secondary sites, as shown in the figure above.
In this solution, single-instance SQL Server virtual machines are deployed on both sites. A SQL Server AlwaysOn Availability Group with synchronous replication is set up between each pair of SQL Server virtual machines, one in the primary site and the other in the secondary site. To enable and configure a SQL Server AlwaysOn Availability Group, the virtual machines hosting the SQL Server databases must be part of a Microsoft Windows Server Failover Cluster (WSFC).
VMware requires virtual machine-to-virtual machine anti-affinity rules for WSFC virtual machines running across physical ESXi hosts in a VMware vSphere cluster with HA/DRS enabled. This solution is designed to keep the WSFC nodes hosting the SQL Server AlwaysOn availability group replicas on separate vSphere clusters and different storage arrays, preventing them from running on the same physical host while still allowing the WSFC virtual machines to vMotion to other ESXi hosts within their own site and vSphere cluster.
This solution leverages the high availability features of both VMware and SQL Server 2014. In case of ESXi host failure, the VMware vSphere HA restarts the virtual machine on any available host in the cluster and vMotion helps in reducing the downtime during the maintenance and upgrade activities. The SQL Server AlwaysOn availability group feature provides database level availability between the primary and secondary sites, and offloads tasks such as backup and reporting services from the primary to the secondary replica.
For managing the various components in the deployment, a separate Cisco UCS server with Windows Active Directory, VMware vCenter and other infrastructure components is configured. It is assumed that these are part of an existing infrastructure management framework that customers have deployed for the entire common infrastructure managed within their data center.
For this solution we use a total of four Cisco UCS B200 M4 servers. Each site has two blade servers with the VMware ESXi 5.5 hypervisor installed on them. As per the reference architecture, the two ESXi hosts in the primary site are part of one VMware vSphere cluster, and similarly the two ESXi hosts in the secondary site are part of another VMware vSphere cluster. On the ESXi hosts, multiple Windows Server 2012 R2 virtual machines, each running a single SQL Server 2014 database instance, are deployed. Each virtual machine is configured with four vCPUs, 8 GB RAM, and three virtual disks (VMDKs) for the OS, SQL data, and SQL log files.
Figure 13 shows the logical representation of network connectivity. The networking configuration is designed to ensure high availability and bandwidth for the network traffic at every layer.
The Cisco UCS provides high availability to the high end applications by providing redundancy to all the critical components in the infrastructure stack. Cisco UCS B200 M4 blades are connected to a pair of Cisco UCS 6248 Fabric Interconnects through Cisco UCS IOM 2208 Fabric Extenders housed in the Cisco UCS 5108 Chassis. The Cisco Nexus 9396 switches deployed in pair are configured for virtual port channel (vPC). Virtual port channels are configured between Cisco UCS 6248 Fabric Interconnects and Cisco Nexus 9396 switches for optimal network connectivity.
The QoS policies created in Cisco UCS Manager are used along with VLANs to isolate the different traffic types and create a network infrastructure with prioritized network traffic. The Cisco UCS service profiles associated with the Cisco UCS B200 M4 blades, which have VIC 1340 adapters, are configured with four virtual network interfaces (vNICs) and two virtual host bus adapters (vHBAs). The vNICs are designed to optimally segregate the different types of management and data traffic for the recommended architecture. Each vNIC is configured with its own VLAN for traffic segregation. vNICs eth0 and eth2 connect to the ports on fabric A, and eth1 and eth3 connect to the ports on fabric B.
On the VMware ESXi hosts, two virtual standard switches (vSS) with virtual port groups (VMkernel and VM port groups) are configured to differentiate the kinds of traffic passing through a virtual switch. The first vSS is configured with two VMkernel port groups, one for vMotion and the other for management. The second vSS is configured with two VM port groups, one used for VM and SQL traffic and the other used for Microsoft Windows Server Failover Cluster traffic. The virtual switches on both VMware ESXi hosts use two uplink adapters (vmnics) teamed in an active-active configuration for redundancy and load balancing, as shown in the figure below. Jumbo frame support is enabled for vMotion traffic for better performance. The virtual machines running on the VMware ESXi hosts are configured with two vNICs, where one vNIC connects to the VM port group configured for VM and SQL traffic and the other vNIC connects to the VM port group configured for Microsoft WSFC traffic on the same vSwitch.
Figure 13 Primary Site Logical Connectivity
The EMC VNX5400 storage arrays are directly connected to the Cisco UCS 6248UP Fabric Interconnects in a highly available configuration, without the need for a separate Fibre Channel switch. The fabric interconnects are configured in Fibre Channel switching mode and perform the zoning operations for storage access. The service profile associated with each Cisco UCS B200 M4 blade is configured with two virtual host bus adapters, each bound to a separate fabric interconnect on a different VSAN. This configuration provides multiple redundant paths to both controllers on the storage array. The VMware ESXi native multipathing plugin with round robin as the path selection policy is used to provide the benefit of multiple active/optimized paths for I/O scalability.
The EMC VNX5400 used for this solution has a disk configuration of 66 x 600 GB SAS drives and 12 x 200 GB Flash drives. The recommended hot spare policy of one hot spare per thirty disks is configured on the storage arrays. The storage pool design for the SQL Server virtual machine consolidation use case is given in the table below.
Table 1 Storage Pool Details
Pool Name | RAID Type | Disk Configuration | FAST VP (Yes/No) | LUN Details | Purpose
SAN_Boot | RAID5 (4+1) | 5 SAS disks | No | SAN Boot LUN 1, SAN Boot LUN 2 | Performance tier for ESXi SAN boot
VM_Datastore | RAID5 (4+1) | 5 SAS disks | No | VM LUN | Performance tier for virtual machine OS
SQL_DATA | RAID5 (4+1), RAID1/0 (4+4) | 25 SAS + 8 Flash | Yes | SQL Data LUN | Extreme Performance tier for SQL Server OLTP database
SQL_LOG | RAID1/0 (4+4) | 16 SAS | No | SQL Log LUN | Performance tier for SQL Server OLTP database logs
Separate storage pools are created for ESXi SAN boot, virtual machine OS, and SQL Server database data and log files, to isolate the spindles for better performance. The VM LUN and the SQL data and log LUNs are presented to both VMware ESXi hosts in the cluster. These LUNs are formatted using VMware VMFS, and the volumes are mounted on both VMware ESXi hosts participating in the cluster. Virtual machines use these VMFS datastore volumes to store their OS, SQL data, and SQL log virtual disks (VMDKs). Figure 14 shows the physical and logical disk layout used for this solution.
Figure 14 Storage Layout Diagram
All the components in this base design can be scaled easily to support specific business requirements. For example, more (or different) servers or even blade chassis can be deployed to increase compute capacity, additional disk shelves can be deployed to improve I/O capability and throughput, and special hardware or software features can be added to introduce new features.
This document guides you through the steps for deploying the base architecture. These procedures cover everything from the network, compute and storage device configuration perspectives.
This section lists the hardware and software models and versions validated as part of this guide.
Table 2 Hardware and Software Details
Description | Vendor | Name | Version
Cisco UCS Fabric Interconnects | Cisco | Cisco 6248UP | Cisco UCSM 2.2(3f)
Blade Chassis | Cisco | Cisco UCS 5108 Chassis |
Network Switches | Cisco | Cisco Nexus 9396PX Switch | NX-OS 6.1(2)I2(2a)
Blade Server | Cisco | Cisco UCS B200 M4 half-width blade server | BIOS B200M4.2.2.3d.0.111420141438
Processors per Server | Intel | Intel Xeon E5-2650 v3, 2.30 GHz | 2 x Intel E5-2650 v3, 2.30 GHz, 10 cores/socket, HT enabled, 40 threads
Memory per Server | Samsung | 256 GB | 16 GB DDR4-2133-MHz RDIMM/PC4-17000/dual rank/x4/1.2 V
Network Adapter | Cisco | Virtual Interface Card (VIC) 1340 | 4.0(1f)
Hypervisor | VMware | vSphere ESXi 5.5 U2 | Cisco Custom Image Build 2068190
Guest Operating System | Microsoft | Windows Server 2012 R2 | Datacenter Edition
Database Server | Microsoft | SQL Server 2014 | Enterprise Edition
Primary Server Storage | EMC | VNX5400 | Block S/W 05.33.000.5.081, File S/W 8.1.3-79
Secondary Server Storage | EMC | VNX5400 | Block S/W 05.33.000.5.081, File S/W 8.1.3-79
Test Tool | Open source | HammerDB | 2.16
This section provides end-to-end guidance on setting up the Cisco VSPEX environment for consolidating SQL Server database workloads as in the proposed reference architecture.
The flow chart below depicts the high-level flow of the deployment procedure for the primary site.
Figure 15 Primary Site Workflow Setup for Microsoft SQL Server 2014 Consolidation
This section explains the switch configuration needed for the Cisco solution for EMC VSPEX VMware architecture. Details about configuring passwords, management connectivity, and device hardening are not covered here; refer to the Cisco Nexus 9000 Series configuration guide for those topics.
For this VSPEX solution we have created VLANs using the below reference table.
Table 3 VLAN Details
VLAN Name | VLAN ID | Description
Mgmt | 604 | Management VLAN for vSphere servers to reach the vCenter management plane
vMotion | 40 | VLAN for virtual machine vMotion
VM-Data | 613 | VLAN for the virtual machine (application) traffic (can be multiple VLANs)
Win_Clus | 50 | VLAN for Windows Server Failover Cluster and SQL AlwaysOn AG traffic
The following figure shows how to configure VLANs on the Cisco Nexus 9000 Series switches. Create the VLANs on both switches as shown in the figure; a CLI sketch follows it.
Figure 16 Create VLANs
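The following is a minimal CLI sketch of the VLAN creation shown above, using the VLAN names and IDs from Table 3; adjust them to your environment and repeat the configuration on both Cisco Nexus 9396PX switches.

    configure terminal
    vlan 604
      name Mgmt
    vlan 40
      name vMotion
    vlan 613
      name VM-Data
    vlan 50
      name Win_Clus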
A virtual port channel (vPC) effectively enables two physical switches to behave like a single virtual switch, so a port-channel can be formed across the two physical switches. Following are the steps to configure vPC:
1. Enable LACP feature on both switches.
2. Enable vPC feature on both switches as shown in the below figure.
Figure 17 Enable Features
3. Configure a unique vPC domain ID, identical on both switches.
4. Configure the vPC peer-keepalive link using the management IP addresses of both switches, and configure peer-gateway as shown in the following figure. Refer to the Cisco Nexus 9000 Series vPC configuration guide for more details.
Figure 18 Create VPC Domain
5. Create and configure port-channel on the inter-switch links. The steps to configure these port-channels are shown in the figure below. Make sure that “vpc peer-link” is configured on this port-channel.
Figure 19 Create Port-Channels for Inter-Switch Link Interfaces
6. Add the ports (inter-switch link ports) with the LACP protocol to the port-channel created in the above step using the “channel-group 10 force mode active” command in interface configuration mode, as shown below.
Figure 20 Add Ports to the Port-Channel
7. Repeat the steps from 1 to 6 to create vPC domain on the peer N9K switch.
8. Verify the vPC status using the “show vpc” command. A successful vPC configuration looks like the output shown in the figure below; a consolidated configuration sketch follows the figure.
Figure 21 Verify vPC Status
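The following is a minimal, consolidated sketch of steps 1 through 8 on switch A. The vPC domain ID, the inter-switch link ports (Ethernet1/47-48), and the <mgmt0 IPv4> placeholders are examples only and must be replaced with the values used in your environment; mirror the same configuration on switch B with its own peer-keepalive addresses.

    feature lacp
    feature vpc
    ! Example vPC domain ID; it must be identical on both switches
    vpc domain 1
      peer-keepalive destination <peer mgmt0 IPv4> source <local mgmt0 IPv4>
      peer-gateway
    ! Peer-link port-channel (port-channel 10, per Figures 19 and 20)
    interface port-channel 10
      description vPC peer-link
      switchport mode trunk
      switchport trunk allowed vlan 40,50,604,613
      vpc peer-link
    ! Example inter-switch link ports; use the ports cabled between the two switches
    interface Ethernet1/47-48
      description vPC peer-link member ports
      switchport mode trunk
      channel-group 10 force mode active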
Interfaces connected to the fabric interconnects need to be in trunk mode, with the vMotion, Mgmt, and application VLANs allowed on these ports. From the switch side, the interfaces connected to FI-A and FI-B are in a vPC, and from the FI side the links connected to the Cisco Nexus 9396 A and B switches are in regular LACP port-channels. It is a good practice to use a meaningful description for each port and port-channel on the switch to aid in diagnosis if any problem arises later. Refer to the following figures for the exact configuration commands for Cisco Nexus 9000 switches A and B:
9. To create and configure the port-channels for the interfaces connected to the fabric interconnects, follow the steps shown in the figure below.
Figure 22 Create Port Channel for Interfaces Connected to Fabric Interconnects
10. Add ports on the port-channel as shown below.
Figure 23 Add Ports to the Port-Channels
11. Repeat steps 9 and 10 on the peer switch. A CLI sketch of these port-channels follows.
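The following is a minimal sketch of the fabric interconnect-facing port-channel on switch A, assuming port-channel/vPC ID 25 toward FI-A (port-channels 25 and 26 appear later in this section's verification output) and an example member port Ethernet1/25; substitute the ports actually cabled to the fabric interconnects. Configure port-channel 26 toward FI-B in the same way, and repeat both on switch B.

    ! Port-channel to Fabric Interconnect A (vPC 25)
    interface port-channel 25
      description Uplink to FI-A
      switchport mode trunk
      switchport trunk allowed vlan 40,50,604,613
      ! Edge trunk is a common recommendation for FI-facing ports
      spanning-tree port type edge trunk
      vpc 25
    ! Example member port connected to FI-A
    interface Ethernet1/25
      description Po25 member to FI-A
      switchport mode trunk
      channel-group 25 force mode active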
At this point, all ports and port-channels are configured with the necessary VLANs, switchport mode, and vPC configuration. Validate this configuration using the “show vlan”, “show port-channel summary” and “show vpc” commands as shown in the following figures. Note that the ports come “up” only after the peer devices are also configured properly, so revisit this subsection after configuring the fabric interconnects in the Cisco UCS configuration section.
Figure 24 Show VLAN
The “show vlan” command can be restricted to a given VLAN or set of VLANs, as shown in the above figure and in the example below. Ensure that all required VLANs are in “active” status on both switches and that the right set of ports and port-channels are part of the necessary VLANs.
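For example, the following commands restrict the output to a single VLAN or to the set of VLANs used in this solution:

    show vlan id 604
    show vlan id 40,50,604,613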
The port-channel configuration can be verified using the “show port-channel summary” command. The following figure shows the expected output of this command.
Figure 25 Show Port-Channel Summary
In this example, port-channel 10 is the vPC peer-link port-channel, and port-channels 25 and 26 are connected to the Cisco UCS Fabric Interconnects. Make sure that the state of the member ports of each port-channel is “P” (up in port-channel). Note that the ports may not come up if the peer ports are not properly configured. Common reasons for a port-channel port being down are:
· Port-channel protocol mismatch across the peers (LACP vs. none)
· Inconsistencies across the two vPC peer switches. Use the “show vpc consistency-parameters {global | interface {port-channel | port} <id>}” command to diagnose such inconsistencies; example invocations are shown below.
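For example, the following invocations (using port-channel 25) check the global vPC parameters and the parameters of a specific vPC member port-channel:

    show vpc consistency-parameters global
    show vpc consistency-parameters interface port-channel 25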
The vPC status can be verified using the “show vpc” command. Example output is shown in the figure below:
Figure 26 Show vPC Brief
Make sure that the vPC peer status is “peer adjacency formed ok” and that all the port-channels, including the peer-link port-channel, have the status “up”.
The Cisco solution for EMC VSPEX VMware architectures requires an MTU of 9216 (jumbo frames) for efficient storage and vMotion traffic. You can configure the system jumbo MTU size, which specifies the MTU size for Layer 2 interfaces. You can specify an even number between 1500 and 9216. If you do not configure the system jumbo MTU size, it defaults to 9216 bytes.
To configure jumbo MTU on the Cisco Nexus 9000 Series switches, perform the following steps on both switches A and B, as shown in the figure below and in the sketch that follows it.
Figure 27 Configure Jumbo Frames
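As a minimal sketch, one common way to apply a 9216-byte MTU to Layer 2 traffic on Cisco Nexus 9000 Series switches is through a network-qos policy, as shown below; depending on the NX-OS release and interface type, a per-interface "mtu 9216" command can also be used. Apply the same configuration on both switches.

    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo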
Depending on the available network infrastructure, several methods and features can be used to uplink this VSPEX environment. If an existing Cisco Nexus environment is present, Cisco recommends using vPCs to uplink the Cisco Nexus 9396 switches included in this solution into the infrastructure. The previously described procedures can be used to create an uplink vPC to the existing environment.
The following section provides a detailed procedure for configuring the Cisco Unified Computing System for use in this VSPEX environment. These steps should be followed precisely because a failure to do so could result in an improper configuration.
These steps provide details for initial setup of the Cisco UCS 6248 fabric Interconnects.
Cisco UCS 6248 A
1. Connect to the console port on the first Cisco UCS 6248 fabric interconnect.
2. At the prompt to enter the configuration method, enter console to continue.
3. If asked to either do a new setup or restore from backup, enter setup to continue.
4. Enter y to continue to set up a new fabric interconnect.
5. Enter y to enforce strong passwords.
6. Enter the password for the admin user.
7. Enter the same password again to confirm the password for the admin user.
8. When asked if this fabric interconnect is part of a cluster, answer y to continue.
9. Enter A for the switch fabric.
10. Enter the <cluster name> for the system name.
11. Enter the <Mgmt0 IPv4> address.
12. Enter the <Mgmt0 IPv4> netmask.
13. Enter the <IPv4 address> of the default gateway.
14. Enter the <cluster IPv4 address>.
15. To configure DNS, answer y.
16. Enter the <DNS IPv4 address>.
17. Answer y to set up the default domain name.
18. Enter the <default domain name>.
19. Review the settings that were printed to the console, and if they are correct, answer yes to save the configuration.
20. Wait for the login prompt to make sure the configuration has been saved.
Cisco UCS 6248 B
1. Connect to the console port on the second Cisco UCS 6248 fabric interconnect.
2. When prompted to enter the configuration method, enter console to continue.
3. The installer detects the presence of the partner fabric interconnect and adds this fabric interconnect to the cluster. Enter y to continue the installation.
4. Enter the admin password for the first fabric interconnect.
5. Enter the <Mgmt0 IPv4 address>.
6. Answer yes to save the configuration.
7. Wait for the login prompt to confirm that the configuration has been saved.
8. Connect to the Cisco UCSM cluster IP address that was configured in the previous steps using SSH and verify the HA status as shown below.
Figure 28 Verify FI Cluster State
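The same verification can be done from the Cisco UCS Manager CLI; the prompt name below is an example.
UCS-A# show cluster extended-state
Both fabric interconnects should show a state of UP, one as PRIMARY and the other as SUBORDINATE, and the cluster should report HA READY.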
These steps provide details for logging into the Cisco UCS environment.
1. Open a web browser and navigate to the Cisco UCS 6248 fabric interconnect cluster address to launch the Cisco UCS Manager.
Figure 29 Connect to Cisco UCSM Manager
2. Click the Launch link to download the Cisco UCS Manager software. If prompted to accept security certificates, accept as necessary.
3. In the next screen click Launch UCS Manager.
Figure 30 Launch Cisco UCSM Manager
4. When prompted, enter the credentials to login to the Cisco UCS Manager software.
Figure 31 Cisco UCSM Manager Login
These steps provide details for creating a block of KVM IP addresses for server access in the Cisco UCS environment:
1. Select the LAN tab at the top of the left window.
2. Select Pools > root > IP Pools > IP Pool ext-mgmt
3. Select the appropriate radio button for the preferred assignment order.
4. Select Create Block of IP Addresses.
5. Enter the starting IP address of the block and number of IPs needed as well as the subnet and gateway information.
6. Click OK to create the IP block.
7. Click OK in the message box.
Figure 32 Add a Block of IP Address
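The same block can also be created from the Cisco UCS Manager CLI; a representative sequence is shown below, where the arguments to create block are the first IP, last IP, default gateway, and subnet mask, and all addresses are placeholders for your management subnet.
UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # create block 10.29.150.101 10.29.150.116 10.29.150.1 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer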
These steps provide details for synchronizing the Cisco UCS environment to the NTP server:
1. Select the Admin tab at the top of the left window.
2. Select All > Timezone Management.
3. In the right pane, select the appropriate timezone in the Timezone drop-down list.
4. Click Add NTP Server.
5. Input the NTP server IP and click OK.
6. Click Next and then click Save Changes. Then click OK.
Figure 33 Adding NTP Server
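For reference, a representative Cisco UCS Manager CLI equivalent is shown below; the NTP server address is a placeholder for your environment.
UCS-A# scope system
UCS-A /system # scope services
UCS-A /system/services # create ntp-server 10.29.150.250
UCS-A /system/services/ntp-server* # commit-buffer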
These steps provide details to modify the chassis discovery policy (as the base architecture includes two uplinks from each IO module installed in the Cisco UCS chassis).
1. Click the Equipment tab in the left pane and select the Equipment top-node object. In the right pane, click the Policies tab.
2. Under Global Policies, change the Chassis Discovery Policy to 2-link or set it to match the number of uplink ports that are cabled between the chassis or IO modules (IOMs) and the fabric interconnects.
3. Select Link Grouping Preference to be Port Channel and click Save Changes in the bottom right corner.
Figure 34 Specify Chassis Discovery Policy Settings
These steps provide details for enabling Fibre Channel, server and uplink ports.
Server Ports
In the current configuration, ports 1 and 2 of the fabric interconnects are connected to the blade chassis. Follow the steps given below to configure the ports as server ports:
1. Click the Equipment tab on the top left of the window.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand the Ethernet Ports object.
4. Select the ports that are connected to the chassis, right-click them, and select Configure as Server Port.
5. Click Yes to confirm the server ports, and then click OK.
Repeat the above procedure to configure the ports on Fabric Interconnect B.
Figure 35 Configure Server Ports
The ports connected to the chassis are now configured as server ports.
Figure 36 Port Status and Role
In this configuration we are connecting the storage array directly to the Cisco UCS fabric interconnects without any upstream SAN switch. The below steps provide details on configuring direct-attached storage in Cisco UCS. Here we are using the Fibre Channel ports on the expansion module of the fabric interconnect.
Follow the steps given below to configure Fibre Channel Ports:
1. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) and right-click.
2. Click Set FC Switching Mode.
Figure 37 FC Switching Mode
3. Reboot the fabric interconnect and repeat steps 1 through 3 on the other fabric interconnect.
4. Select Equipment > Fabric Interconnects > Fabric Interconnect A.
5. On the General tab of the right hand side of the window, click Configure Unified Ports under the Actions pane.
Figure 38 Configure Unified Ports
6. Click Yes to allow the change of port mode from Ethernet to Fibre Channel.
Figure 39 Port Mode Change Message
7. Select one of the following buttons to select the module for which you want to configure the port modes:
— Configure Fixed Module
— Configure Expansion Module
Figure 40 Configure Unified Ports
In the current configuration, we have selected the Expansion Module.
8. Adjust the slider to configure the desired number of ports as FC ports.
Figure 41 Configure Expansion Module Ports
9. Click Yes on the confirmation message box to continue with the port mode configuration.
10. Click Finish to save the port mode configuration.
Depending upon the module for which you configured the port modes, data traffic for the Cisco UCS domain is interrupted as follows:
· Fixed module—the fabric interconnect reboots. All data traffic through that fabric interconnect is interrupted. In a cluster configuration that provides high availability and includes servers with vNICs that are configured for failover, traffic fails over to the other fabric interconnect and no interruption occurs. It takes about 8 minutes for the fixed module to reboot.
· Expansion module—the module reboots. All data traffic through ports in that module is interrupted. It takes about 1 minute for the expansion module to reboot.
Figure 42 Confirm the Configuration Change
11. Repeat steps 4 to 10 on Fabric Interconnect B.
12. Once the FC port mode configuration is complete, make sure the selected ports list the Desired IF Role as FC Uplink.
13. The next task is to configure the FC uplink port to be of the FC storage port type.
14. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) and select Expansion Module 2.
15. Select the ports configured with FC Uplink role (here Ports 15 and 16), right-click and then click Configure as FC Storage Port.
Figure 43 Configure as FC Storage Port
16. Click Yes on the next prompt.
17. Repeat steps 14 to 16 to configure the ports on Fabric Interconnect B.
The desired ports can now be used as FC storage ports connecting directly to the storage array.
Figure 44 FC Storage Ports
The below steps provide details to configure LAN Uplink Ports:
1. Click the Equipment tab on the top left of the window.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand the Ethernet Ports object.
4. Select the ports that are connected to the LAN, right-click them and then select Configure as Uplink Port.
5. Click Yes to confirm and then click OK.
Repeat the above procedure to configure the ports on Fabric Interconnect B.
The uplink ports are now configured.
Figure 45 Configure LAN Uplink Ports
The connected chassis needs to be acknowledged before it can be managed by Cisco UCS Manager. Follow the steps given below to acknowledge the Cisco UCS Chassis:
1. Select Chassis 1 in the left pane.
2. Click Acknowledge Chassis under the Actions pane in the right hand side window.
Figure 46 Acknowledge Cisco UCS Chassis
The uplink port channels need to be configured to enable communication from the Cisco UCS environment to the external environments. In the proposed configuration, the below uplink port channels were configured in Cisco UCS Manager.
Table 4 Uplink Port Channel Details
Cisco Fabric Interconnects | Port Channel Name | Port Channel ID | Member Ports
Fabric A | PO-25-FabA | 25 | Fabric Interconnect A Eth1/19 and Eth1/20
Fabric B | PO-26-FabB | 26 | Fabric Interconnect B Eth1/19 and Eth1/20
To configure the necessary port channels out of the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
In this procedure, two port channels are created: one from fabric A to both Cisco Nexus switches and one from fabric B to both Cisco Nexus switches.
2. Under LAN > LAN Cloud, expand the Fabric A tree.
3. Right-click Port Channels.
4. Click Create Port Channel.
Figure 47 Create Port-Channel
5. Enter 25 as the unique ID of the port channel.
Figure 48 Set Port Channel Name
6. Provide a name to the port channel.
7. Click Next.
8. Select the following ports to be added to the port channel:
— Slot ID 1 and port 19
— Slot ID 1 and port 20
9. Click >> to add the ports to the port channel.
Figure 49 Add Ports to Port Channel
10. Click Finish to create the port channel
11. Click OK.
12. In the navigation pane, under LAN > LAN Cloud, expand the fabric B tree.
13. Right-click Port Channels.
14. Select Create Port Channel.
15. Enter 26 as the unique ID of the port channel.
16. Enter PO-26-FabB as the name of the port channel.
17. Click Next.
18. Select the following ports to be added to the port channel:
— Slot ID 1 and port 19
— Slot ID 1 and port 20
19. Click >> to add the ports to the port channel.
20. Click Finish to create the port channel.
21. Click OK
Shown below is the screenshot from Cisco UCS Manager after creating the Uplink port channels.
Figure 50 Port Channel Status
Follow the steps given below to configure the necessary VSANs for the Cisco UCS environment directly connected to the EMC VNX5400 Fibre Channel storage array.
1. Click the SAN tab at the top left of the window.
2. Expand the Storage Cloud tree.
3. Right-click VSANs
4. Click Create Storage VSAN.
Figure 51 Create Storage VSAN
Using the below table as a reference, complete steps 5 to 10 to create the two VSANs, Fab-A and Fab-B.
Table 5 VSAN Details
VSAN Name | FC Zoning | Fabric | VSAN ID | FCoE VLAN
Fab-A | Enabled | Fabric A | 100 | 100
Fab-B | Enabled | Fabric B | 200 | 200
5. Enter the VSAN name.
6. Select Enabled for the FC Zoning option.
7. Select the Fabric.
8. Enter the VSAN ID.
9. Enter the FCoE VLAN ID. Make sure that FCoE VLAN ID is a VLAN ID that is not currently used in the network.
10. Click OK to create the VSAN.
Figure 52 Create Storage VSAN Wizard
This section describes the procedure to add FC storage ports (in this case they are ports 15 and 16) to the appropriate VSANs created in the previous section.
1. Select Equipment > Fabric Interconnects > Fabric Interconnect A and then select Expansion Module 2.
2. Select the FC Port 15.
3. Click the General tab of FC port 15 and select Fab-A (100) from the VSAN drop-down list.
4. Click Save Changes to commit the configuration.
Figure 53 Add FI-A Interfaces to VSAN
5. Repeat the above steps for FC Storage port 16.
6. Select Equipment > Fabric Interconnects > Fabric Interconnect B and then select Expansion Module 2.
7. Select the FC Port 15.
8. Click the General tab of FC port 15 and select Fab-B (200) from the VSAN drop-down list.
Figure 54 Add FI-B Interfaces to VSAN
9. Click Save Changes to commit the configuration.
10. Repeat the above steps for FC storage port 16. You should see the status as shown in the below figure. If required, disable and re-enable the ports to bring the port status to up and Enabled.
Figure 55 Storage FC Interfaces View
Follow the below procedure to verify that the WWPNs of the storage array are logged in to the fabric.
1. Log in to the Cisco UCS Manager console using a Secure Shell (SSH) or Telnet connection.
2. Enter the connect nxos { a | b } command, where a | b represents FI A or FI B.
3. Enter the show flogi database vsan <vsan ID> command, where <vsan ID> is the identifier for the VSAN.
Figure 56 Show Flogi Database on FI-A
Figure 57 Show Flogi Database on FI-B
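For example, with the storage VSAN IDs used in this document (100 on fabric A and 200 on fabric B), the check on fabric interconnect A would be similar to the following; the WWPNs listed in the output should match the VNX5400 storage processor ports cabled to that fabric.
UCS-A# connect nxos a
UCS-A(nxos)# show flogi database vsan 100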
The below steps describe the procedure to create storage connection policies. We are creating two storage connection policies for the Cisco UCS environment; one for fabric A and the other for fabric B.
1. Select the SAN tab > Policies > Root.
2. Right-click Storage Connection Policies and select Create Storage Connection Policy.
Figure 58 Create Storage Connection Policy
3. Enter SAN-Fabric-A as the name of the Storage Connection Policy.
4. Select the zoning type as Single Initiator Multiple Targets, since we have multiple storage ports connected to the same fabric.
5. Click the plus sign next to the FC Target Endpoints section and the Create FC Target Endpoint window opens.
6. Enter the WWPN of the FC target.
Figure 59 Create FC Target End Point
7. Select Fabric A as the path.
8. For Select VSAN, select the Fab-A (100) from the drop-down list.
9. Click OK.
10. Repeat Steps 5 to 9 to add multiple FC targets on fabric A path.
Figure 60 Storage Connection Policies View Fabric A
11. Click OK to save changes.
12. Repeat steps 2 to 11 to create SAN-Fabric-B for Fabric B.
Figure 61 Storage Connection Policies View for Fabric B
The below figure shows the storage connection policies created in the above steps.
Figure 62 Storage Connection Policies View
The below steps describe the procedure to configure the necessary UUID suffix pools for the Cisco UCS environment.
1. Click the Servers tab on the top left of the window.
2. Select Pools > root.
3. Right-click UUID Suffix Pools and select Create UUID Suffix Pool.
Figure 63 Create UUID Suffix Pool
4. Enter UUID_Pool as the name of the UUID suffix pool.
5. (Optional) Give the UUID suffix pool a description. Leave the prefix at the derived option.
6. Leave the Assignment order as Default.
7. Click Next to continue.
8. Click Add to add a block of UUIDs.
9. Leave the From field at the default setting.
10. Specify a size of the UUID block sufficient to support the available blade resources.
11. Click OK to proceed.
12. Click Finish and then click OK.
Figure 64 Create UUID Suffix Pool – Add UUID Blocks
The below steps describe the procedure to configure the necessary MAC address pool for the Cisco UCS environment.
1. Click the LAN tab on the left of the window.
2. Select Pools > root > MAC Pools.
3. In the right pane click Create MAC Pool.
Figure 65 Create MAC Pool
4. Enter a name for the MAC pool.
5. (Optional) Enter a description of the MAC pool.
6. Select Default assignment order and click Next.
7. Click Add.
8. Specify a starting MAC address and a size of the MAC address pool sufficient to support the available blade resources.
9. Click OK.
10. Click Finish and then OK to create the MAC Address Pool.
Figure 66 Create MAC Pool – Add MAC Addresses
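As a reference, the MAC pool and its block can also be created from the Cisco UCS Manager CLI; the pool name and MAC range below are examples only and should match the values used in the steps above.
UCS-A# scope org /
UCS-A /org # create mac-pool VSPEX-SQL-MAC-Pool
UCS-A /org/mac-pool* # create block 00:25:B5:00:00:00 00:25:B5:00:00:7F
UCS-A /org/mac-pool/block* # commit-buffer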
The below steps describe the procedure to configure the necessary WWxN pools for the Cisco UCS environment.
1. Click the SAN tab at the top left of the window.
2. Select Pools > root.
3. Right-click WWxN Pools and select Create WWxN Pool.
Figure 67 Create WWxN Pool
4. Enter a name for the WWxN pool.
5. (Optional) Add a description for the WWxN pool. Click Next to continue.
Figure 68 Create WWxN Pool –Define Name and Description
6. Select 3 Ports per Node from the drop-down list for Max Ports per Node and click Next.
7. Click Add to Add WWN Blocks.
8. Specify a starting WWN address and a size of the WWxN address pool sufficient to support the available blade resources.
Figure 69 Create WWN Block
9. Click OK and click Finish.
Figure 70 Create WWxN Pool Wizard
The below steps describe the procedure to configure the necessary VLANs for the Cisco UCS environment.
The below table shows the VLANs that were used as part of the configuration detailed in this document:
Table 6 VLAN Details
VLAN Name | VLAN Purpose | VLAN ID
default | VLAN for Cisco UCS KVM IP Pool | 1
vMotion | VLAN for vMotion of virtual machines | 40
Win_Clus | VLAN for cluster connectivity of virtual machines | 50
mgmt | VLAN for management access | 604
vm-data | VLAN for client connectivity | 613
1. Click the LAN tab on the left of the window. Click LAN Cloud.
2. Right-click VLANs.
3. Click Create VLANs.
Figure 71 Create VLANs
4. Enter the name of the VLAN. Keep the Common/Global option selected for the scope of the VLAN.
5. Enter the VLAN ID. Keep the sharing type as none. Click OK.
Figure 72 Create VLANs Wizard
6. Click OK on the message box which pops up.
7. Repeat the above procedure to create the other VLANs shown in the above table, except for the default VLAN ID 1. The default VLAN 1 is pre-configured in the Fabric Interconnect.
Figure 73 List of Created VLANs in Cisco UCSM
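As a cross-check, the same VLANs can be created from the Cisco UCS Manager CLI; a representative sequence for one VLAN from the above table is shown below, and the remaining VLANs follow the same pattern.
UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan vMotion 40
UCS-A /eth-uplink/vlan* # commit-buffer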
These steps provide details for enabling quality of service in the Cisco UCS fabric and setting jumbo frames:
1. Select the LAN tab at the top left of the window.
2. Go to LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. Check the checkbox next to Platinum and type 9216 in the MTU box.
5. Click Save Changes in the bottom right corner.
6. Click OK to continue.
Figure 74 QoS System Class
In this section we create a QoS policy for vMotion traffic with Platinum as the priority, leaving the other network traffic types at their defaults, for which the priority is Best Effort.
The below steps describe the procedure to enable the QoS policy in the Cisco UCS fabric.
1. Click the LAN tab on the left of the window.
2. Select LAN > Policies > root.
3. Right-click QoS Policies.
4. Click Create QoS Policy.
Figure 75 Create QoS Policy
5. Enter the QoS Policy name.
6. Modify the Priority as shown in the below figure. Leave Burst (Bytes) set to 10240. Leave Rate (Kbps) set to line-rate. Leave Host Control set to None.
7. Click OK.
Figure 76 Create QoS Policy Wizard
The below steps describe the procedure to create a local disk configuration policy for the Cisco UCS environment. This is necessary if the servers do not have a local disk.
This policy should not be used on blades that contain local disks.
1. Click the Servers tab on the left of the window.
2. Select Policies > root.
3. Right-click Local Disk Config Policies.
4. Click Create Local Disk Configuration Policy.
5. Enter SAN-Boot as the local disk configuration policy name.
6. Change the Mode to No Local Storage.
7. Click OK to complete creating the local disk configuration policy.
8. Click OK.
Figure 77 Create Local Disk Configuration Policy
The below steps describe the procedure to configure a maintenance policy. The maintenance policy controls the timing of a server reboot after an update has been made that requires the server to reboot before the update takes effect.
1. Click the Servers tab on the left of the window.
2. Select Policies > root.
3. Right-click Maintenance Policy and click Create Maintenance Policy.
4. Enter the name of the policy as User_Acknowledge and add an optional description for the policy.
5. For the Reboot Policy, select the User Ack option.
6. Click OK to create the policy.
Figure 78 Create Maintenance Policy
This section describes the server BIOS policy recommended to deploy a virtualized SQL Server configuration in a VMware vSphere environment, keeping database performance in mind.
The below steps describe the procedure to create a server BIOS policy:
1. From the left side of the Cisco UCSM window, click the Servers tab.
2. Select Policies > root.
3. Right-click BIOS Policies.
4. Select Create BIOS Policy to create a BIOS policy. The Unified Computing System Manager wizard takes you through a sequence of steps to complete the creation process.
5. In the Main page, enter VM-Host-SQL as the name, an optional description, and the BIOS settings.
6. Click Next.
Figure 79 Create BIOS Policy - Specify the BIOS Policy Name
7. In the Processor page, specify the processor settings as shown in the below figure.
Figure 80 Create BIOS Policy - Processor Settings
8. Click Next.
9. In the Intel Directed IO page, specify the Intel Directed Input Output settings.
Figure 81 Create BIOS Policy - Intel Directed IO Settings
10. Click Next.
11. In the RAS Memory page, specify the RAS memory configuration and DRAM settings.
Figure 82 Create BIOS Policy - RAS Memory Settings
12. Click Next.
13. Click Finish to complete the creation process.
A vNIC template is created for each of the vNICs in the configuration. Follow the procedure mentioned below to create the vNIC templates with the details provided in the table. In this configuration we are not using the Cisco UCS fabric failover feature; instead, we are using the NIC teaming feature of ESXi for redundancy and load balancing.
It is not recommended to use both the Cisco UCS fabric failover and OS NIC teaming features at the same time.
Table 7 vNIC Template Details
vNIC Template Name | Fabric ID | Enable Failover (Yes/No) | Target | Template Type | VLAN | Native VLAN | MTU | MAC Pool | QoS Policy
eth0 | Fabric A | No | Adapter | Updating | mgmt & vMotion | mgmt | 9000 | VSPEX-SQL-MAC-Pool | JumboFrame
eth1 | Fabric B | No | Adapter | Updating | mgmt & vMotion | mgmt | 9000 | VSPEX-SQL-MAC-Pool | JumboFrame
eth2 | Fabric A | No | Adapter | Updating | vm-data & Win_Clus | vm-data | 1500 | VSPEX-SQL-MAC-Pool | Not Set
eth3 | Fabric B | No | Adapter | Updating | vm-data & Win_Clus | vm-data | 1500 | VSPEX-SQL-MAC-Pool | Not Set
The below steps describe the procedure to create vNIC templates for the Cisco UCS environment:
1. Click the LAN tab on the left of the window.
2. Select Policies > root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template.
5. Enter the vNIC template name.
6. Select the template options as mentioned for the specific VLAN template in the above table.
Figure 83 Create vNIC Templates
7. Click OK to create the vNIC template.
8. Click OK.
9. Repeat the above steps to create the remaining vNIC templates.
The below steps describe the procedure to create the boot policy using the below reference table. The WWPNs mentioned in the below table are specific to this configuration; this information can be gathered from the storage array.
Table 8 Boot Policy Details
Device Name | Boot Order | vHBA | Name | SAN Boot Target | LUN ID | VNX5400 Port # | WWPN
Local CD/DVD | 1 | - | - | - | - | - | -
SAN | 2 | vHBA-A | SAN Primary | SAN Target Primary | 0 | SP-A Port 2 | 50:06:01:66:36:e0:01:f6
SAN | 2 | vHBA-A | SAN Primary | SAN Target Secondary | 0 | SP-B Port 2 | 50:06:01:6e:36:e0:01:f6
SAN | 2 | vHBA-B | SAN Secondary | SAN Target Primary | 0 | SP-B Port 3 | 50:06:01:6f:36:e0:01:f6
SAN | 2 | vHBA-B | SAN Secondary | SAN Target Secondary | 0 | SP-A Port 3 | 50:06:01:67:36:e0:01:f6
1. From the top left corner of the window, click the Servers tab.
2. Select Policies > root.
3. Right-click Boot Policies.
4. Select Create Boot Policy.
Figure 84 Create Boot Policy
5. Name the boot policy SAN-Boot.
6. (Optional) Give the boot policy a description.
7. Select Reboot on Boot Order Change and Enforce vNIC/vHBA Name.
Figure 85 Create Boot Policy Wizard
8. Expand the Local Devices drop-down list and select Add Remote CD/DVD.
9. Expand the vHBAs drop-down list and select Add SAN Boot.
10. Enter vHBA-A in the vHBA field in the Add SAN Boot window.
11. Make sure that you select Primary type and click OK.
Figure 86 Add SAN Boot -Primary
12. Under the vHBA drop-down list, select Add SAN Boot Target.
Figure 87 Add SAN Boot Target
13. Set Boot Target LUN value as 0.
14. Enter the WWPN of VNX5400 SP-A Port 2 (Refer to the above table in this section)
15. Set the boot target type as Primary.
16. Click OK to add the SAN boot target.
Figure 88 Add SAN Boot Target -Primary
17. Select Add SAN Boot Target again to add the secondary Boot target.
18. Enter the WWPN of VNX5400 SP-B Port 2 (Refer to the above table in this section)
19. Click OK to add the SAN boot target.
Figure 89 Add SAN Boot Target -Secondary
20. Under the vHBAs drop-down list, select Add SAN Boot again to configure the SAN Boot through the second HBA.
21. Enter vHBA-B in the vHBA field in the Add SAN Boot window. The type will be automatically selected as secondary.
22. Click OK.
Figure 90 Add SAN Boot – Secondary
23. Select Add SAN Boot Target. Set Boot Target LUN value as 0.
24. Enter the WWPN of VNX5400 SP-B Port 3 (Refer to the above table in this section)
25. Set the type as Primary.
26. Click OK to add the SAN boot target.
Figure 91 Add SAN Boot Target -Primary
27. Select Add SAN Boot Target again to add the secondary Boot target.
28. Enter the WWPN of VNX5400 SP-A Port 3 (Refer to the above table in this section)
29. Click OK to add the SAN boot target.
Figure 92 Add SAN Boot Target -Secondary
30. Click OK on the main Create Boot Policy window to complete the creation of the boot policy.
Figure 93 Boot Order View
The below steps describe the procedure to create a service profile template:
1. Click the Servers tab at the top left corner of the window.
2. Select Service Profile Templates > root.
3. Right-click root and select Create Service Profile Template.
Figure 94 Create Service Profile Template.
The Create Service Profile Template page has a list of sections displayed in the left pane. The wizard takes you through a sequence of steps to complete the creation process.
4. In the Identify Service Profile Template page, provide the service profile template name.
5. Select the type as Updating Template.
6. Select the created UUID pool from the UUID Assignment drop-down list.
Figure 95 Create Service Profile Template - Specify the Name and UUID
7. In the Networking page, leave the Dynamic vNIC Connection Policy field as default.
8. For the How would you like to configure LAN connectivity radio button, select Expert option.
9. Click Add to add a vNIC to the service profile template.
10. In the Create vNIC window:
a. Provide a Name (eth0) to the vNIC.
b. Select Use vNIC Template.
c. From the vNIC Template drop-down list, select a template (eth0).
d. For the Adapter Policy, select VMware from the drop-down list.
e. Click OK to add the vNIC to the service template.
Figure 96 Service Profile Template: Create vNIC
11. Repeat the above steps to add all the desired vNICs by referring to the below table.
Table 9 Adapter Policy for vNIC Templates
vNIC Name | Use vNIC Template | vNIC Template Name | Adapter Policy
eth0 | Yes | eth0 | VMware
eth1 | Yes | eth1 | VMware
eth2 | Yes | eth2 | VMware
eth3 | Yes | eth3 | VMware
Figure 97 Create Service Profile Template - Networking
12. Click Next to continue.
13. In the Storage section, select the Local Storage policy defined earlier (SAN-Boot).
14. Select the Expert option for How would you like to configure SAN connectivity?
15. Select the pool (VSPEXSQL-WWxn-Pool) for WWNN Assignment.
16. In the WWPN section, click Add to add the WWPNs to be used.
Figure 98 Create Service Profile Template – Storage
17. Enter a value in the Name field.
18. For the WWPN Assignment under World Wide Port Name, select Derived from the drop-down list.
19. Select A as the Fabric ID and select “Fab-A” as the VSAN from the drop-down list.
20. Select VMware from the Adapter Policy drop-down list and leave the rest at defaults.
21. Click OK to deploy the vHBA.
Figure 99 Create Service Profile Template – Create vHBA Fabric-A
22. Repeat steps 16 to 21 to create vHBA-B. Here select “B” as the Fabric ID and “Fab-B” as the VSAN from the drop-down list as shown in the below figure.
23. Click OK and click Next to continue.
Figure 100 Create Service Profile Template – Create vHBA Fabric-B
24. In the Zoning page, click Add to select vHBA Initiator Groups.
Figure 101 Create Service Profile Template – Zoning
25. In the Create vHBA Initiator Group, enter SAN-A as Name and for Storage Connection Policy select “SAN-Fabric-A” from the drop-down list as shown in the below figure.
Figure 102 Create Service Profile Template – Create vHBA Initiator Group Fabric-A
26. Repeat steps 24 and 25 to create the second vHBA initiator group with SAN-B as the name and SAN-Fabric-B as the Storage Connection Policy as shown in the below figure.
Figure 103 Create Service Profile Template – Create vHBA Initiator Group Fabric-B
27. Click OK to return to the Zoning page.
28. Select the vHBA initiators, then select the corresponding vHBA initiator group and click Add To. This adds the selected initiators to the selected initiator groups as shown in the below figure.
Figure 104 Service Profile Template: Zoning – Select vHBA Initiators
29. Click Next on the Zoning page.
30. Click Next on the vNIC/vHBA Placement page with default settings.
31. Click Next on the vMedia Policy page with default settings.
32. In the Server Boot Order page, select the created Boot Policy (SAN-Boot) from the drop-down list and click Next.
Figure 105 Service Profile Template - Server Boot Order
33. Click Next on the Maintenance Policy page with default settings
34. In the Server Assignment page, select Assign Later from the drop-down list for the Pool Assignment and click Next.
Figure 106 Service Profile Template - Server Assignment
35. In the Operational Policies window, expand the BIOS Configuration section, select the created policy (ESXi-BIOS) from the BIOS Policy drop-down list, and click Finish.
Figure 107 Service Profile Template - Operational Policies
That concludes the service profile template creation.
The below steps describe the procedure to create service profiles from the service profile template:
1. In Cisco UCS Manager, navigate to Service Profiles under the Servers tab and right-click it.
Figure 108 Create Service Profiles from Template
2. Select Create Service Profiles from Template.
3. Enter Naming Prefix.
4. Enter 1 as the name suffix starting number.
5. Enter 2 as the number of instances, since we need to have two ESXi servers deployed.
6. Select the Created Service Profile Template (VSPEX-SQL-ST) from the drop-down list.
7. Click OK to create the service profile.
8. Verify the Service Profiles created in the Cisco UCSM as shown in the below figure.
Figure 109 Service Profiles View
This section describes the steps to manually associate the service profiles created in the above section to the blade servers.
1. In Cisco UCS Manager, navigate to Servers > Service Profiles > root, right-click the service profile (ESXi-Host1) created in the above section, and click Change Service Profile Association.
Figure 110 Change Service Profile Association
2. In the Associate Service Profile window, under Available Servers, select a blade/slot and click OK to associate the service profile with it as shown in the below figure.
Figure 111 Associate Service Profile
3. Click Yes to apply the changes.
Figure 112 Associate Service Profile Confirmation
4. The service profile association takes a few minutes to finish. When it is complete, the Overall Status and Status Details should look as shown in the below figure.
Figure 113 View Service Profile Instantiation and Association
5. Once the service profiles are associated successfully, you can see that the zones are automatically created as shown in the below figure.
Figure 114 WWxN and Zoning information in Service Profile
6. Repeat the above steps in this section to associate the second service profile (ESXi-Host2) to the second blade/slot.
For the initial setup of the EMC VNX5400, refer to the EMC web site.
This section describes the steps to create storage pools on EMC VNX5400 storage array using the information provided in the table below.
Table 10 Storage Pool Details
Pool Name | RAID Type | Disk Configuration | FAST VP (Yes/No) | LUN Details | Purpose
SAN_Boot | RAID5 (4+1) | 5 SAS Disks | No | SAN Boot LUN 1, SAN Boot LUN 2 | Performance tier for ESXi SAN Boot
VM_Datastore | RAID5 (4+1) | 5 SAS Disks | No | VM LUN | Performance tier for virtual machine OS
SQL_DATA | RAID5 (4+1) and RAID1/0 (4+4) | 25 SAS + 8 Flash | Yes | SQL DATA LUN | Extreme Performance tier for SQL Server OLTP database
SQL_LOG | RAID1/0 (4+4) | 16 SAS | No | SQL Log LUN | Performance tier for SQL Server OLTP database logs
1. Log in to Unisphere and navigate to Storage > Storage Configuration > Storage Pools.
Figure 115 EMC Unisphere
2. Select Pools tab and click Create.
Figure 116 EMC Unisphere – Storage Pools
3. In the Create Storage Pool window, the following settings were used for the SAN_Boot storage pool:
a. Select the General tab.
b. Select Pool as the Storage Pool Type.
c. Provide a name (SAN_Boot) for the Storage Pool Name.
d. For Extreme Performance, select 0 as the Number of Flash Disks.
e. For Performance, select RAID5 (4+1) from the drop-down list for RAID Configuration and select 5 (Recommended) from the drop-down list for Number of SAS Disks.
f. In the Disks section, select Automatic for automatic selection of the available disks that will be used for this pool.
Figure 117 EMC Unisphere – Create Storage Pool for SAN Boot
g. In the Advanced tab, uncheck the FAST Cache Enabled box and click OK.
Figure 118 EMC Unisphere – Create Storage Pool Advanced Tab
4. Click Yes twice and then OK to start the storage pool creation.
5. Repeat the above steps in this section with the same settings to create the storage pool for VM_Datastore.
6. Create the storage pool for SQL_DATA with the settings as shown in the below figures.
Figure 119 EMC Unisphere – Create Storage Pool for SQL Server Data
Figure 120 EMC Unisphere – Create Storage Pool Advanced Tab for SQL Server Data
7. Similarly, create the storage pool for SQL_LOG with the settings as shown in the below figures.
Figure 121 EMC Unisphere – Create Storage Pool for SQL Server Log
Figure 122 EMC Unisphere – Create Storage Pool Advanced Tab for SQL Server Log
This section describes the steps to create LUNs. The information in the below reference table can be used to create the LUNs in the storage pools created in the earlier section.
Table 11 EMC VNX5400 LUN Details
LUN Name | Storage Pool for LUN | User Capacity (GB) | Thin (Yes/No) | Shared LUN (Yes/No)
ESXi_Boot_LUN_1 | SAN_Boot | 100 | Yes | No
ESXi_Boot_LUN_2 | SAN_Boot | 100 | Yes | No
VM_Datastore | VM_Datastore | 2000 | No | Yes
SQL_Data_Datastore | SQL_DATA | 9000 | No | Yes
SQL_Log_Datastore | SQL_LOG | 4000 | No | Yes
1. Log in to Unisphere and navigate to Storage > LUNs.
Figure 123 EMC Unisphere – Navigate to LUNs
2. Select LUNs tab and click Create.
Figure 124 EMC Unisphere – LUNs Section
3. Create the two boot LUNs with the configuration as shown in the below figure and click Apply.
Figure 125 EMC Unisphere – Create LUN for SAN Boot
4. Click Yes on the Create LUN Confirm dialogue box.
Figure 126 EMC Unisphere – Create LUN Confirmation Window
5. Click OK on the Message dialogue box.
Figure 127 EMC Unisphere – LUN Creation Success Message.
6. Repeat the above steps in this section to create the VM_Datastore, SQL_Data_Datastore and SQL_Log_Datastore LUNs with the information provided in the above table.
To register the host initiators in Unisphere, first gather information about the vHBAs and their associated WWPNs from Cisco UCS Manager as shown in the below figure. In this solution we are using two vHBAs per server.
Figure 128 vHBA WWPN Details in Cisco UCSM
1. Log in to EMC Unisphere and navigate to Hosts > Initiators. If the configurations in the previous sections are correct, you will see the initiators logged in but not registered.
2. Select the vHBA-A WWNN of the ESXi-Host1 service profile from the list as shown in the below figure and click Register.
Figure 129 EMC Unisphere – Host Initiators
3. In the Register Initiator Record window:
a. Select CLARiiON/VNX from the drop-down list for Initiator Type.
b. Select Active-Active mode (ALUA)-failovermode 4 as Failover Mode.
c. Select New Host and provide a name and IP address and click OK.
Figure 130 EMC Unisphere – Initiator Registration
4. Click Yes to confirm the registration and click OK twice.
5. Now select the vHBA-B WWNN of ESXi-Host1 from the initiators list and register by selecting Existing Host Name (ESXi-Host1) created in the above step.
Figure 131 EMC Unisphere – Initiator Registration
6. Repeat the above steps in this section to register the vHBA-A and vHBA-B WWNNs for the ESXi-Host2 service profile.
Once all the WWNNs are registered, their connectivity status in EMC Unisphere should look as shown in the below figure.
Figure 132 EMC Unisphere – Registered Initiators Status
This section describes the steps to create storage groups, add LUNs, and connect hosts to the storage group. After completion of these steps, the LUNs should be visible to all the hosts connected to the storage group. The below reference table is used in this environment to create storage groups, add LUNs and connect hosts.
Table 12 Storage Group, LUNs and Hosts Mapping Details
Storage Group Name | Hosts Connected to Storage Group | LUNs Added to Storage Group
ESXi-Host1 | ESXi-Host1 | ESXi_Boot_LUN_1, VM_Datastore, SQL_Data_Datastore, SQL_Log_Datastore
ESXi-Host2 | ESXi-Host2 | ESXi_Boot_LUN_2, VM_Datastore, SQL_Data_Datastore, SQL_Log_Datastore
1. Log in to Unisphere, navigate to Hosts > Storage Groups, and click Create.
Figure 133 EMC Unisphere – Navigate to Storage Groups
2. In the Create Storage Group window, provide a name for the storage group and click OK.
Figure 134 EMC Unisphere – Create Storage Group
3. Click Yes on the dialogue box to continue adding LUNs to this storage group.
Figure 135 EMC Unisphere – Create Storage Group Confirm Window
4. Select all the LUNs (ESXi_Boot_LUN_1, VM_Datastore, SQL_Data_Datastore and SQL_Log_Datastore) to add to this storage group and click Add.
Figure 136 EMC Unisphere – Storage Group Properties
5. Click Add and click OK.
Figure 137 EMC Unisphere – Add LUNs to Storage Group
For SAN boot, the boot LUN should have a Host LUN ID of 0. Make sure to add this LUN first to the storage group to automatically assign 0 as the host LUN ID.
6. Click the Hosts tab of the ESXi-Host1 Storage Group Properties, select the ESXi-Host1 host under the Available Hosts section, and click the forward arrow button to move it under the Hosts to be connected section.
Figure 138 EMC Unisphere – Add Hosts to Storage Group
7. Repeat the above steps in this section to create a storage group named ESXi-Host2. Add the LUNs (ESXi_Boot_LUN_2, VM_Datastore, SQL_Data_Datastore and SQL_Log_Datastore) and connect the host (ESXi-Host2) to this storage group.
Figure 139 EMC Unisphere – Add Hosts to Second Storage Group
If the configurations in the previous sections are correct, then the boot LUN should be visible in the server BIOS as shown in the below figure. The boot LUN appears twice here because there are two paths to the same LUN.
Figure 140 EMC Boot LUNs Visible During Server Boot
This section provides detailed instructions for installing VMware ESXi 5.5 Update 2 in a VSPEX environment. After the procedures are completed, two SAN-booted ESXi hosts will be provisioned.
Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use the built-in Keyboard, Video, Mouse (KVM) console and virtual media features in Cisco UCS Manager to map remote installation media to individual servers and connect to their boot logical unit numbers (LUNs). In this method we are using the Cisco Custom image file, which is downloaded from the below URL. It is required for this procedure because it contains custom Cisco drivers, which reduces the installation steps.
https://my.vmware.com/web/vmware/details?downloadGroup=OEM-ESXI55U2-CISCO&productId=353
The IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media. It is necessary to log in to the Cisco UCS environment to run the IP KVM.
Complete the following steps:
1. Download the Cisco Custom ISO for ESXi from the VMware website.
2. Open a web browser and enter the IP address for the Cisco UCS cluster address. This step launches the Cisco UCS Manager application.
3. Log in to Cisco UCS Manager by using the admin user name and password.
4. From the main menu, click the Servers tab.
5. Select Servers > Service Profiles > root > ESXi-Host1.
6. Right-click ESXi-Host1 and select KVM Console.
To prepare the server for the OS installation, complete the following steps on each ESXi host:
1. In the KVM window, click the Virtual Media tab.
Figure 141 Cisco UCS Virtual Media Map
2. Click Activate Virtual Devices, select Accept this Session, and then click Apply.
3. Select Virtual Media, Map CD/DVD, then browse to the ESXi installer ISO image file and click Open.
4. Select Map Device to map the newly added image.
Figure 142 Cisco UCS Virtual Media Map ISO
5. Click the KVM tab to monitor the server boot.
6. If the server is powered on, first shut down the server, then boot the server by selecting Boot Server and clicking OK, and then click OK again.
To install VMware ESXi to the SAN-bootable LUN of the hosts, complete the following steps on each host:
1. On boot, the machine detects the presence of the ESXi installation media. Select the ESXi installer from the menu that is displayed.
2. After the installer is finished loading, press Enter to continue with the installation.
Figure 143 VMware ESXi Installation
3. Read and accept the end-user license agreement (EULA). Press F11 to accept and continue.
4. Select the EMC LUN that was previously set up as the installation disk for ESXi and press Enter to continue with the installation.
5. Select the appropriate keyboard layout and press Enter.
6. Enter and confirm the root password and press Enter.
7. The installer issues a warning that existing partitions will be removed from the volume. Press F11 to continue with the installation.
Figure 144 VMware ESXi Installation Confirmation
8. After the installation is complete, click the check icon to clear the Mapped ISO (located in the Virtual Media tab of the KVM console) to unmap the ESXi installation image.
9. The Virtual Media window might issue a warning stating that it is preferable to eject the media from the guest. Because the media cannot be ejected and it is read-only, simply click Yes to unmap the image.
10. From the KVM window, press Enter to reboot the server.
Adding a management network for each VMware host is necessary for managing the host. To configure the hosts with access to the management network, complete the following steps on each ESXi host:
1. After the server has finished rebooting, press F2 to customize the system.
2. Log in as root and enter the corresponding password.
3. Select the Configure the Management Network option and press Enter.
Figure 145 Configure Management Network
4. From the Configure Management Network menu, select Network Adapters and press Enter.
Figure 146 Configure Management Network – Network Adapters
5. Select vmnic0 and vmnic1 by pressing spacebar for fault tolerance and load balancing and press Enter.
Figure 147 Configure Management Network – Select vmnic
6. From the Configure Management Network menu, select IP Configuration and press Enter.
7. Select the Set Static IP Address and Network Configuration option by using the space bar.
8. Enter the IP address for managing the first ESXi host
9. Enter the subnet mask for the first ESXi host.
10. Enter the default gateway for the first ESXi host.
11. Press Enter to accept the changes to the IP configuration.
Figure 148 Configure Management Network – IP Configuration
12. Press Esc to exit the Configure Management Network submenu.
13. Press Y to confirm the changes to restart the management network and return to the main menu.
14. Select Test Management Network to verify that the management network is set up correctly and press Enter.
15. Press Enter to run the test.
16. Press Enter to exit the window.
17. Press Esc to log out of the VMware console.
18. Repeat the above steps in this section to configure the management network of other ESXi hosts.
1. Open a web browser to the VMware vSphere Web Client “https://<<vCenter_ip>>:9443/vsphere-client/”
2. Login as administrator@vsphere.local with the admin password.
3. Click Create Datacenter as shown in the below figure.
Figure 149 VMware vSphere Web Client
4. Provide a name for the Datacenter.
Figure 150 VMware vSphere Web Client – New Datacenter
5. Select Datacenter and click Add a Host as shown in the below figure.
Figure 151 VMware vSphere Web Client – Add Host
6. In the Name and Location page, enter the IP address of the first ESXi host.
Figure 152 VMware vSphere Web Client – Add Host - Name and Location
7. In the Connection Settings page, provide the credentials of the ESXi host for the vSphere Web Client to connect to it.
Figure 153 VMware vSphere Web Client – Add Host - Connection Settings
8. Click Yes on the Security Alert dialogue box.
9. In the Host Summary page, verify the details and click Next.
10. In the Assign License page, click Next after providing a valid license key.
11. Select/Deselect the lockdown mode option as per your security requirements and click Next.
12. Select the Datacenter created for the VM Location page and click Next.
13. Verify the details in the Ready to complete page and click Finish.
Figure 154 VMware vSphere Web Client – Add Host - Ready to Complete
14. Repeat the above steps in this section to add other ESXi hosts to the vCenter.
Figure 155 VMware vSphere Web Client – Hosts View
This section provides steps to configure power policy settings on ESXi hosts for high performance. Repeat the below steps on all the ESXi hosts.
1. In the vSphere Web Client page, navigate to vCenter > Hosts.
2. Select an ESXi host and navigate to Manage > Settings > Power Management and click Edit.
Figure 156 VMware vSphere Web Client – Power Management
3. Select High Performance and click OK.
Figure 157 VMware vSphere Web Client – Edit Power Policy Settings
This section provides details to create a new cluster, enable VMware vSphere HA and DRS, and add hosts to the cluster.
1. In the vSphere Web Client page, navigate to vCenter > Datacenters in the left window pane.
2. Select the Datacenters tab and click the Datacenter created earlier (Primary_DC).
3. Click the drop-down arrow next to Actions and select New Cluster.
Figure 158 VMware vSphere Web Client – New Cluster
4. Provide a name for the Cluster and check the boxes to Turn ON DRS and vSphere HA as shown in the below figure.
Figure 159 VMware vSphere Web Client – New Cluster Wizard
5. In the vSphere Web Client page, navigate to vCenter > Primary DC > Clusters in the left window pane.
6. Click Clusters tab and select the newly created cluster (VSPEX-SQL) as shown in the below figure.
7. Click the drop-down arrow next to Actions and select Add Host.
Figure 160 VMware vSphere Web Client – Add Host to Cluster
8. From the Move Hosts into Cluster list, select the ESXi host and click OK.
Figure 161 VMware vSphere Web Client – Move Hosts into Cluster
9. Select to put all the host’s VMs in the cluster’s root resource pool and click OK.
Figure 162 VMware vSphere Web Client – Move Hosts into Cluster – Resource Pool Settings
VMFS datastores serve as repositories for virtual machines. VMFS is a cluster file system that lets multiple ESXi hosts access the same VMFS datastore concurrently. Sharing VMFS volumes across multiple hosts allows you to take advantage of vSphere features like HA/DRS and vMotion. In the previous sections we have configured the HBAs and zoning and then presented the shared SAN LUNs to the ESXi hosts in the cluster. This section provides the steps to create and share VMFS datastores across ESXi Hosts in the cluster.
1. In the vSphere Web Client page, navigate to vCenter > Datacenter (primary DC) in the left window pane.
2. Select Datastores tab and select a device (EMC SAN LUN) from the list and right-click the selected device to create a new datastore.
Figure 163 VMware vSphere Web Client – Create New Datastore
3. Select the Datacenter (Primary_DC) as the location for the New Datastore and click Next.
Figure 164 VMware vSphere Web Client – Create New Datastore - Location
4. Select VMFS as the type for the New Datastore and click Next.
Figure 165 VMware vSphere Web Client – Create New Datastore - Type
5. In the Name and Device Selection page, provide a name for the Datastore and select a host from the drop-down list and click Next.
Figure 166 VMware vSphere Web Client – Create New Datastore – Name and Device Selection
6. In the Partition Configuration page, select Use all available partitions and click Next.
Figure 167 VMware vSphere Web Client – Create New Datastore – Partition Configuration
7. In the Ready to Complete page, verify the details and click Finish.
8. Select the newly created datastore (VM_Datastore) in the left window pane and navigate to Manage > Settings > Connectivity and Multipathing.
9. In the right window pane, select the ESXi host where the datastore is in an unmounted state and click the Mount button. The VM_Datastore is now mounted on both the ESXi hosts in the cluster as shown in the below figure.
Figure 168 VMware vSphere Web Client – New Datastore Mount
10. Repeat the above steps in this section to create the other two datastores and mount them on both the ESXi hosts as shown in the below two figures.
Figure 169 VMware vSphere Web Client – Create New Datastore for SQL Data
Figure 170 VMware vSphere Web Client – Create New Datastore for SQL Log
This section describes the steps to configure networking using the below reference table. The vSwitch0 virtual standard switch (vSS) is already created at the time of ESXi installation, and the Management VMkernel port group is configured just after the ESXi installation is complete. This section deals with creating and configuring a VMkernel port for vMotion, a new vSwitch with a VM port group for VM data traffic, and active-active NIC teaming for fault tolerance and load balancing. As a best practice, jumbo frames are enabled for vMotion traffic for better performance.
Table 13 VMware vSwitch Details
vSwitch Name | Physical Adapters | VM Network Name / VLAN ID | VMkernel Port Group Name and VLAN ID | Purpose/Services Enabled | Jumbo Enabled (Yes/No)
vSwitch0 | vmnic0 and vmnic3 | None | Management Network / VLAN ID – None (0) | Management Traffic | No
vSwitch0 | vmnic0 and vmnic3 | None | vMotion / VLAN ID - 40 | vMotion Traffic | Yes
vSwitch1 | vmnic1 and vmnic2 | VM_Network_1 / VLAN ID – None (0) | None | For Virtual Machine traffic | No
1. In the vSphere Web Client page, navigate to vCenter > Datacenter (primary DC) > Host (ESXi-Host1) in the left window pane.
2. In the right window pane, select Networking tab and click Virtual Switches.
3. Move the mouse cursor over the first icon under the Virtual Switches and click Add Host Networking.
Figure 171 VMware vSphere Web Client – Add Host Networking
4. In the Add Networking wizard, select Virtual Machine Port Group for a Standard Switch as the connection type and click Next.
Figure 172 VMware vSphere Web Client – Add Networking – VM Port group Connection Type
5. Select New Standard Switch for the target device and click Next.
Figure 173 VMware vSphere Web Client – Add Networking – Select Target Device
6. Select Active Adapters and click the plus sign to add adapters. Click Next.
Figure 174 VMware vSphere Web Client – Add Networking – Create Standard Switch
7. Select two physical adapters from the Network Adapters list to add to the switch and click Next.
Figure 175 VMware vSphere Web Client – Add Physical Adapters to the Switch
8. In the Create a Standard Switch section, verify the newly added adapters and click Next.
Figure 176 VMware vSphere Web Client – Add Networking – Create a Standard Switch with Assigned Adapters
9. Provide a Network Label and VLAN ID in the Connection Settings section of Add Networking wizard.
Figure 177 VMware vSphere Web Client – Add Networking – Connection Settings
10. Verify the details in the Ready to Complete section of Add Networking wizard and click Finish.
The below figure shows the newly created virtual machine port group (VM_Network_1) for VM data traffic in a new virtual standard switch (vSwitch1). It also shows that this vSS is configured with two active-active physical adapters for fault tolerance and load balancing.
Figure 178 VMware vSphere Web Client – Virtual Switches
11. Repeat steps 3 and 4 but select VMkernel Network Adapter for the connection type and click Next.
Figure 179 VMware vSphere Web Client – Add Networking – VMkernel Network Adapter Connection Type
12. Select the option Select an Existing Standard Switch for the target device and click Next.
Figure 180 VMware vSphere Web Client – Add Networking – Select Target Device
13. In the Port Properties section, provide a Network Label, enter appropriate VLAN ID, select vMotion Traffic to use this port for virtual machine migration and click Next.
Figure 181 VMware vSphere Web Client – Add Networking – Port Properties
14. In the IPv4 settings section, select Use Static IPv4 Settings and enter IP address, Subnet Mask and click Next.
Figure 182 VMware vSphere Web Client – Add Networking – IPv4 Settings
15. In the Ready to Complete section, verify the details and click Finish.
The below figure shows the newly created VMkernel port group for vMotion traffic in vSwitch0. It also shows the VMkernel port group created for management traffic in the earlier section. This virtual switch is configured with two active-active physical adapters for fault tolerance and load balancing.
Figure 183 VMware vSphere Web Client – Virtual Switch
16. To enable jumbo frames for vMotion traffic, select vSwitch0 and click the Edit icon (pencil icon). In the Properties section, enter 9000 in the field next to MTU (Bytes) and click OK.
Figure 184 VMware vSphere Web Client – Edit Virtual Switch Settings
17. Select the vMotion VMkernel port group and click the Edit icon. In the NIC settings, enter 9000 in the field next to MTU.
Figure 185 VMware vSphere Web Client – Edit VMkernel Settings
18. Repeat all the above steps in this section to configure the networking for the other ESXi hosts in the cluster.
19. Connect to an ESXi host via SSH and verify the jumbo packet support by issuing the ping command to the IP address assigned to the vmk (vMotion) port group of the other ESXi host as shown in the below figure.
Figure 186 End-to-End Jumbo Frame Verification
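The verification in the figure uses a do-not-fragment ping with a payload size just under the 9000-byte MTU. A representative command from the ESXi shell is shown below; the VMkernel interface name and target IP address are examples for this environment, and on some ESXi releases the -I option may not be available, in which case it can be omitted.
~ # vmkping -d -s 8972 -I vmk1 192.168.40.12
The -d option sets the do-not-fragment bit and -s 8972 leaves room for the 28 bytes of IP and ICMP headers; a successful reply confirms that jumbo frames pass end to end.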
This section describes the steps to create a new virtual machine with virtual hardware version 10 using the vSphere Web Client. Once a virtual machine is created and the OS is installed with the latest updates and patches, you can use the VMware vCenter virtual machine template feature to create a golden image and later use it to instantiate and quickly provision the multiple virtual machines required for this solution. This document does not cover the steps to create virtual machine templates. For creating virtual machine templates using vCenter, refer to the VMware web site.
You cannot use the vSphere Client to create or edit the settings of virtual machines with hardware version 10 or higher.
1. In the vSphere Web Client home page, select Hosts and Clusters under Inventories.
2. Expand the Cluster (VSPEX-SQL), right-click ESXi host and select New Virtual Machine.
Figure 187 VMware vSphere Web Client – New Virtual Machine
3. In the Select a Creation Type, select Create a New Virtual Machine and click Next.
Figure 188 VMware vSphere Web Client – New Virtual Machine - Select a Creation Type
4. In Select a Name and Folder, provide a name for the VM, select datacenter and click Next.
Figure 189 VMware vSphere Web Client – New Virtual Machine - Select a Name and Folder
5. Select a compute resource to host this virtual machine and click Next.
Figure 190 VMware vSphere Web Client – New Virtual Machine - Select a Compute Resource
6. In Select Storage, select VM_Datastore as the storage location for the virtual machine’s boot disk and click Next.
Figure 191 VMware vSphere Web Client – New Virtual Machine - Select Storage
7. In the Select Compatibility page, select ESXi 5.5 and later to create a virtual machine with VM version 10 and click Next.
Figure 192 VMware vSphere Web Client – New Virtual Machine - Select Compatibility
8. In the Select a Guest OS, select Windows from the drop-down list next to Guest OS Family and select Microsoft Windows Server 2012 (64-bit) as the Guest OS version. Click Next.
Figure 193 VMware vSphere Web Client – New Virtual Machine - Select a Guest OS
9. In the Customize hardware page:
a. Assign memory (8 GB) and vCPUs (4 vCPUs).
b. Assign 100GB hard disk size for OS.
c. Select “Thick Provision Eager Zeroed” for Disk provisioning.
d. Select “LSI Logic SAS” as the SCSI controller type.
e. Select VM_Network_1 from the drop-down list for the Network Adapter 1.
f. Map and mount Windows Server 2012 R2 installation iso file.
Figure 194 VMware vSphere Web Client – New Virtual Machine – Customize Hardware
g. Click the drop-down arrow next to New Device, select SCSI Controller and click Add.
Figure 195 VMware vSphere Web Client – New Virtual Machine – Customize Hardware
h. Change the Type of this SCSI Controller to VMware Paravirtual.
Figure 196 VMware vSphere Web Client – New Virtual Machine – Customize Hardware
i. Click the drop-down arrow next to New Device, select New Hard Disk and click Add.
j. Assign 225 GB to this newly added hard disk, select "SQL_Data_Datastore" as the Location, "Thick Provision Eager Zeroed" as the Disk Provisioning, and SCSI(1:0) as the Virtual Device Node, as shown in the below figure.
k. Click the drop-down arrow next to New Device, select New Hard Disk and click Add.
l. Assign 60 GB to this newly added hard disk, select "SQL_Log_Datastore" as the Location, "Thick Provision Eager Zeroed" as the Disk Provisioning, and SCSI(1:1) as the Virtual Device Node.
Figure 197 VMware vSphere Web Client – New Virtual Machine – Customize Hardware
10. Verify the details in the Ready to complete page and click Finish.
Figure 198 VMware vSphere Web Client – New Virtual Machine –Verify in Ready to Complete
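The same virtual hardware layout can also be scripted with VMware PowerCLI. The sketch below assumes PowerCLI is installed and connected to vCenter; the vCenter name, virtual machine name, and datastore names are placeholders and should match the values used above.
# Connect to vCenter and locate the virtual machine (placeholder names)
Connect-VIServer -Server vcenter.vspex.local
$vm = Get-VM -Name SQLVM01
# Add an eager-zeroed 225 GB data disk and attach it to a new Paravirtual SCSI controller
$dataDisk = New-HardDisk -VM $vm -CapacityGB 225 -Datastore SQL_Data_Datastore -StorageFormat EagerZeroedThick
$pvscsi = New-ScsiController -HardDisk $dataDisk -Type ParaVirtual
# Add an eager-zeroed 60 GB log disk on the same Paravirtual controller
New-HardDisk -VM $vm -CapacityGB 60 -Datastore SQL_Log_Datastore -StorageFormat EagerZeroedThick -Controller $pvscsi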
This section provides instructions on how to install Windows Server 2012 R2 on the newly created VMs. Follow the below steps and install the OS on the VM.
1. Open Edit Settings for the virtual machine created in the above section, verify that the Windows Server 2012 R2 ISO image is mounted, and power on the VM to begin the installation.
2. Select appropriate language and other preferences and click Next.
Figure 199 Windows Server 2012 R2 Installation
3. In the next page, click Install now.
4. In the next page, select the operating system to install and click Next.
5. Accept the license terms and click Next.
6. Select Custom: Install Windows only (advanced) and click Next.
7. Select the disk to install Windows and click Next.
Figure 200 Windows Setup – Select Disk
8. The installation begins and reboots the virtual machine upon completion.
9. Provide a password for the virtual machine’s built-in administrator account.
Figure 201 Windows Setup – Password Settings
This section provides instructions on preparing the virtual machines for setting up WSFC later. This section deals with:
1. Assign IP addresses to the network adapters
2. Rename the host and join it to the Windows AD domain
3. Install Windows updates and add the required roles and features
4. Configure the power plan for performance
5. Prepare the disks for SQL Server use
Perform the below steps on the virtual machine to assign IP addresses to the network adapters.
1. Login to the virtual machine using administrator account and open Windows PowerShell.
2. Assign static IP addresses to the network interface. An example of assigning IP addresses using PowerShell is shown in the below figure:
Figure 202 Get-NetAdapter
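As a reference, a static IPv4 address and DNS server can be assigned with the PowerShell cmdlets sketched below; the interface alias, IP addresses, and prefix length are placeholder values and should be replaced with the values used in your environment.
# List the adapters to find the correct interface alias
Get-NetAdapter
# Assign a static IP address and default gateway (placeholder values)
New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress 10.10.40.21 -PrefixLength 24 -DefaultGateway 10.10.40.1
# Point the adapter at the AD/DNS server (placeholder value)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses 10.10.40.10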
Perform the below steps to rename the computer and join it to Active Directory domain.
1. Rename the computer host name and restart the virtual machine.
Figure 203 Rename-Computer
2. Join the computer to Active Directory domain and restart the virtual machine.
Figure 204 Join AD Domain
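For reference, the rename and domain join can be performed with the PowerShell cmdlets sketched below; the computer name and domain name are placeholders.
# Rename the virtual machine and restart
Rename-Computer -NewName "SQLVM01" -Restart
# After the restart, join the Active Directory domain and restart again
Add-Computer -DomainName "vspex.local" -Credential (Get-Credential) -Restart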
Perform the below steps on the virtual machine to install the latest Windows updates and add roles and features required for this VSPEX environment.
1. Install the latest updates and patches from Microsoft, and make sure the latest version of VMware Tools is running.
2. Navigate to Server Manager > Add Roles and Features and install .NET Framework 3.5 and Failover Clustering features as shown in the below example figure.
Figure 205 Add Roles and Features Wizard
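The same features can also be added with PowerShell, as sketched below. The D:\sources\sxs path is a placeholder for the Windows Server 2012 R2 installation media and is needed only if the .NET Framework 3.5 payload is not already present on the system.
# .NET Framework 3.5 (requires the side-by-side source from the installation media)
Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs
# Failover Clustering feature with its management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools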
Perform the below steps to change the power scheme from balanced to high performance for better CPU performance.
1. In the Windows PowerShell window, issue the command as shown in the below figure to gather the Power Scheme GUID for the High Performance plan.
Figure 206 Check Power Configuration
2. Change the power plan scheme from balanced to high performance as shown in the below example figure.
Figure 207 Power Configuration set to High Performance
3. Verify the power plan in use now as shown in the below example figure. The * seen next to power scheme is the active power scheme in use.
Figure 208 Verify Power Configuration Settings
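For reference, the equivalent powercfg commands are sketched below; the GUID shown is the well-known High Performance scheme GUID, but confirm it against the output of powercfg /list on your system.
powercfg /list
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
powercfg /getactivescheme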
1. From Server Manager, navigate to File and Storage Services > Volumes > Disks.
2. Select an Offline disk, right-click on it and select Bring Online.
Figure 209 Server Manager – Disk Online
3. After the disk comes online, right-click and select Initialize.
Figure 210 Server Manager – Disk Initialize
4. Repeat steps 2 and 3 above to bring the other disk online and initialize it.
5. Right-click on the 225 GB disk and create a new simple volume using the settings as shown in the below figure.
Figure 211 New Volume Wizard – SQL Server Data Volume
6. Right-click the 60 GB disk and create a new simple volume using the settings as shown in the below figure.
Figure 212 New Volume Wizard – SQL Server Log Volume
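The disk preparation steps above can also be scripted with the Windows Storage cmdlets. The sketch below assumes the data disk is disk 1; the drive letter, volume label, and 64 KB allocation unit size are assumptions that should be matched to the settings shown in the figures.
# Bring all offline disks online
Get-Disk | Where-Object IsOffline -eq $true | Set-Disk -IsOffline $false
# Initialize the data disk with a GPT partition table (disk number is an assumption)
Initialize-Disk -Number 1 -PartitionStyle GPT
# Create and format a single volume spanning the disk
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E | Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQL_Data"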
This section explains the procedure to install a Standalone SQL Server Instance on each of the virtual machines. Make sure that the SQL Server Installation media is mounted to the virtual machines prior to following the steps below.
Follow the steps given below to install the SQL Server standalone instance:
1. From the mounted SQL Server installation DVD, launch the SQL Server installation wizard.
2. In the Installation page, click the New SQL Server Stand-alone installation or add features to an existing installation link to launch the installation wizard.
Figure 213 SQL Server Installation Center
3. In the Product Key page, enter the product key and click Next.
4. In the License Terms page, read and accept the license terms and click Next.
5. In the Global Rules page, the setup procedure will automatically advance to the next window if there are no rule errors.
6. The Microsoft Update page appears next if the Microsoft Update check box in Control Panel\All Control Panel Items\Windows Update\Change settings is not checked. Selecting the check box on the Microsoft Update page changes the computer settings to include the latest updates when you scan for Windows Update.
7. On the Product Updates page, the latest available SQL Server product updates are displayed. If no product updates are discovered, SQL Server Setup does not display this page and auto advances to the Install Setup Files page.
8. On the Install Setup files page, Setup provides the progress of downloading, extracting, and installing the Setup files. If an update for SQL Server Setup is found, and is specified to be included, that update will also be installed.
9. The Install Rules page runs the rules that are essential for a successful SQL Server installation. Confirm that this step displays no errors and verify the warnings. Click Next.
10. In the Setup Role page, Select SQL Server Feature Installation radio button to install SQL Server engine components and click Next.
11. In the Feature Selection page, select the Database Engine services and the Management Tools and click Next.
Figure 214 SQL Server 2014 Setup – Feature Selection
12. In the Instance Configuration page, leave the default settings and click Next.
13. In the Server Configuration page, specify the service accounts and collation configuration details and click Next.
14. In the Database Engine Configuration page, specify the database engine authentication security mode, administrators and data directory details. In the data directory tab, make sure that the root directory and the temp database directory are set appropriately to match with the drive letters intended for SQL database and log files.
The workload used in this test case does not generate much tempdb activity, so tempdb is placed on the same drive/volume as the user database files. For workloads that have heavy tempdb activity and generate a large number of IOPs, place the tempdb files on a separate drive/volume.
Figure 215 SQL Server 2014 Setup – Database Engine Configuration
15. Click Next.
16. The Feature Configuration Rules automatically runs the Feature configuration rules. Verify the output and click Next.
17. In Ready to Install page, verify the installation options and click Install to start the SQL Server installation.
18. Once the installation is complete, verify the installation summary and click Close to close the wizard.
The SQL Server instance may now be accessed using the SQL Server Management Studio.
Virtualization delivers many benefits and is widely deployed by IT organizations for testing, software development, and other uses. Server consolidation has been one of the biggest drivers of virtualization, enabled by today's powerful multi-core, multi-threaded, multi-socket processors. Database virtualization also benefits from more efficient hardware utilization, easier management, and high availability.
This section deals with the performance of consolidated virtual SQL Servers in a Cisco VSPEX solution. In the previous sections of this document, we created and configured multiple virtual machines, each running a SQL Server database instance on it.
For this case study, we will be using the highly available virtual machines that are already created in the previous sections and are running on two ESXi hosts in a cluster. Each virtual machine is running a SQL Server with a single OLTP database instance for the performance study. The goal of this performance study is to analyze the scalability of consolidating multiple SQL Server virtual machines on a single physical host.
To analyze the performance of SQL server consolidation on a single Cisco UCS B200 M4 server, we turned off the vSphere HA/DRS and ensured all the virtual machines were migrated to and running on only one ESXi host. Storage was configured with three shared VMFS volumes for the guest OS files and SQL database and log files on the EMC VNX5400 storage array. The ESXi hosts have two HBA ports connected separately to the upstream Cisco UCS Fabric Interconnects ‘A’ and ‘B’. The Fabric Interconnects in turn connect directly to the EMC VNX5400 storage array on both the controllers in a highly available configuration. ESXi supports multipathing and lets you use more than one physical path from host to storage to provide redundancy and load balancing of storage traffic. The default VMware native multipathing PSP with round-robin is used for this configuration.
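For reference, the path selection policy in use for a datastore device can be checked, and changed if required, from the ESXi shell; the device identifier below is a placeholder for the actual naa identifier of the LUN.
esxcli storage nmp device list -d naa.xxx
esxcli storage nmp device set --device=naa.xxx --psp=VMW_PSP_RR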
The below table shows the configuration details of the physical server used for this test bed.
Table 14 Physical Server Configuration
Physical Server |
|
Server |
Cisco UCS B200 M4 |
Processor |
E5-2660 v3 (2.3 GHz) |
Processor Sockets |
2 |
Cores/Socket |
10 |
Total Logical Processors (HT enabled) |
40 |
Memory |
256 GB |
Adapter |
Cisco VIC 1340 |
Hypervisor |
VMware ESXi 5.5 U2 |
The below table shows the virtual machine and SQL Server database configuration used for this test bed.
Table 15 Virtual Machine Configuration
Virtual Machine |
|
Guest OS |
Windows Server 2012 R2 Datacenter |
Database |
SQL Server 2014 |
OLTP Database size |
200 GB |
vCPU |
4 |
VM Memory |
8 GB |
Max Memory for SQL Server |
6 GB |
Virtual Hard Disk (for GOS) |
100 GB |
Virtual Hard Disk (for SQL Data) |
225 GB |
Virtual Hard Disk (for SQL Log) |
60 GB |
The SQL Server 2014 on all the virtual machines was configured with the below options.
-- Enable advanced options so that the settings below can be changed
exec sp_configure 'show advanced options', '1'
reconfigure with override
-- Cap SQL Server memory at 6 GB (6144 MB), matching the VM configuration in Table 15
exec sp_configure 'max server memory', 6144
exec sp_configure 'recovery interval','32767'
exec sp_configure 'max degree of parallelism', '1'
exec sp_configure 'lightweight pooling','1'
exec sp_configure 'priority boost','1'
exec sp_configure 'max worker threads', 3000
exec sp_configure 'default trace enabled', 0
go
reconfigure with override
The below figure shows the disk summary of EMC VNX5400 used for this testing environment. Based on that disk configuration the storage was sized to support a total of five virtual database machines running OLTP application workload with each driving close to 4000 IOPs on a 200 GB database size. More information about the storage building block for five SQL Server virtual machines can be found in the “Configuring EMC VNX5400 Storage Array” section.
Figure 216 EMC VNX5400 Disk Configuration
The VMFS datastores for ESXi SAN boot, virtual machines, SQL data and log files are on different storage pools, ensuring isolation of spindles. All the LUNs created in the EMC storage pools are thick LUNs, and all the virtual machine disks (VMDKs) use the eagerzeroedthick format. The VMware Paravirtual SCSI controller type is used for the virtual disks (VMDKs) that store the SQL Server database data and log files, and the LSI Logic SAS SCSI controller type is used for the guest OS virtual disk (VMDK). Refer to Figure 14 in the Solution Architecture section for the logical storage layout used for this performance study.
One Cisco UCS B200 M4 blade server running ESXi 5.5 U2 was used for this test. In this consolidation exercise we created five Windows Server 2012 R2 virtual machines, each with four vCPUs and 8 GB of memory. Microsoft SQL Server 2014 was installed on each virtual machine.
Using the HammerDB tool, we created a 200 GB database initially on one SQL server instance and the same was copied and attached to other virtual database servers running on the same ESXi host. All data was contained in a storage system and was accessed through FC (Fibre Channel). The client machine running Windows Server 2012 R2 with the HammerDB tool installed on it was used to initiate the workload on each database.
For more details about the HammerDB tool, refer to the below link.
http://hammerora.sourceforge.net/hammerdb_mssql_oltp_best_practice.pdf
We started the test workload on one database in a single SQL Server instance on one virtual machine and captured the benchmark score and resource utilization. One at a time, we added the other SQL Server VMs to the workload. We tested up to five virtual machines on the same physical host and captured the benchmark score each time a new virtual machine was added. The throughput scaled linearly up to five virtual machines and after that it started to saturate at the storage end because of the limited number of hard disk spindles available at the time the tests were conducted.
We ran the same workload on all virtual machines simultaneously for 15 minutes with 10 minutes of warm-up time. The test workload was configured with fifteen virtual users per database instance in the HammerDB tool. Performance of the blade server was measured, especially as it relates to an active OLTP environment. The performance metrics of interest for this testing were captured from all the virtual machines using Windows Performance Monitor and VMware ESXi host using esxtop.
For testing the performance of the consolidated virtual database servers, the HammerDB tool was installed on a client machine outside the VSPEX environment with network connectivity to it.
Figure 217 Test Workload for Performance Study
This section provides details of the tunings that were configured for this performance test. The tunings described below produced better results than the default settings.
Refer to the BIOS settings in the Create Server BIOS Policy section that was used for the performance testing.
Power management settings for both the hypervisor and the guest OS are set for high performance.
The fnic_max_qdepth module parameter controls the total number of I/O requests that can be outstanding on a per-LUN basis; its default value is 32. For testing the performance of SQL Server consolidation, fnic_max_qdepth was set to 128 using the command syntax shown below.
esxcli system module parameters set -p fnic_max_qdepth=128 -m fnic
Verify the new settings as shown in the below figures.
Figure 218 Configure HBA LUN Queue Depth
Figure 219 Verify HBA LUN Queue Depth Settings
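The fnic module parameter can also be checked from the ESXi shell with the command below; note that module parameter changes typically take effect only after a host reboot.
esxcli system module parameters list -m fnic | grep fnic_max_qdepth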
This parameter sets the maximum number of outstanding disk requests per device/LUN. When two or more virtual machines share a LUN (logical unit number), this parameter controls the total number of outstanding commands permitted from all virtual machines collectively on the host to that LUN (this setting is not per virtual machine).
The storage device queue depth was set to a value of 128 for both the SQL database data and log datastores. The command used to set the value for a device is given below. The values can be set between 1 and 256.
esxcli storage core device set -d naa.xxx -O 128
Verify the new settings in the esxtop as shown in the below figure.
Figure 220 Verify esxtop DQLEN Settings
The IO Throttle Count for the FC Adapter Policy VMware in Cisco UCS Manager was set to 1024, as shown in the below figure; the remaining adapter policy settings were left at their default values.
Figure 221 FC Adapter Policy VMware IO Throttle Count
The same is reflected in the below esxtop output, which shows the storage adapter queue depth under the AQLEN column. This is the maximum number of ESXi VMkernel active commands that the adapter is configured to support.
Figure 222 Check esxtop AQLEN Settings
The experiments carried out in this performance study demonstrate that multiple medium-sized SQL Server virtual machines can be consolidated to achieve scalable aggregate throughput with minimal performance impact on the individual VMs. The below figure shows the % CPU utilization of the physical host and the total throughput achieved from all five SQL Server virtual machines running on a single physical host as they were added to the OLTP workload. The Windows Performance Monitor tool was used to capture the virtual machine level metrics, and esxtop was used to capture the host % CPU utilization in this test case.
The graph below demonstrates that each SQL Server virtual machine with four vCPUs consumes about 9% of host CPU on average and delivers close to 3300 IOPs. Adding five such virtual machines to the workload utilized about 42% of the host CPU and delivered an aggregate throughput of 16483 IOPs. Throughput scaled linearly up to five virtual machines and thereafter started to saturate at the storage end because of the limited number of hard disk spindles available at the time of testing.
From the above performance analysis, we see that there are compute and memory resources still available on the physical host, which means the host is not saturated with the above test workload running on five SQL Server VMs. Hence there is room to add more such virtualized database workloads, provided there is enough capacity on the storage array.
Figure 223 IOPs and % Host CPU Utilization Graph
The below graph shows the average disk response time (in milliseconds) observed within one virtual machine across all the test runs, starting with a single VM and then adding VMs one at a time to the workload. The graph shows a VM's disk IO response time for log writes, data reads and data writes. It is observed that the disk response time gradually increased as workloads were added, but it remained below 5 milliseconds even when all five virtual machines were running simultaneously.
Figure 224 Disk IO response Time Graph
The below table provides some more information on the performance metrics that were captured during this test.
Table 16 Performance Test Results
This section provides end-to-end guidance on setting up the Cisco solution for the EMC VSPEX environment to build a near-site disaster recovery solution for the virtualized SQL Servers in the primary site, using the SQL Server AlwaysOn Availability Group feature as shown in the proposed reference architecture.
The below workflow chart depicts the high-level deployment tasks for setting up the Secondary site.
Figure 225 Secondary Site Workflow Setup for Disaster Recovery
To build the near-site disaster recovery infrastructure, the hardware and software requirements are the same as for the Primary site, except that the pair of Cisco Nexus 9396 switches is shared between the sites in this solution. The recommended approach is to use a separate pair of Cisco Nexus switches at each site.
A Secondary site is built to provide disaster recovery solution for the virtualized SQL servers in the Primary site as per the workflow shown in the figure above.
In the workflow, the steps from configuring the Cisco Nexus switches through deploying VMs and installing the OS remain the same as for the Primary site. For configuration details, refer to the corresponding sections above.
This section gives the procedure for installing the Windows features required for installing SQL Server and configuring the AlwaysOn feature on the virtual machines that will serve as the AlwaysOn replica nodes.
Follow these steps to enable the Windows features:
1. Open Windows Server Manager. In the Installation Type page of the Add Roles and Features Wizard, select the installation type as ‘Role-based or feature-based installation’ and click Next.
Figure 226 Server Manager – Add Roles and Features
2. In the Server Selection page, select the server from the list and click Next.
3. In the Server Roles page, click Next.
4. In the Features page, select .Net Framework 3.5 and Failover Clustering features and click Next.
5. In the Confirmation page, confirm the list of options (features and servers) you have selected and click Install.
6. Once the installation is complete, click Close button on the wizard.
SQL Server AlwaysOn requires the virtual machines hosting the AlwaysOn replica set to be part of the same Windows Failover Cluster. This section explains the steps to create a Windows Failover Cluster on the virtual machines (primary and secondary) which will be hosting the SQL Server AlwaysOn replica later.
To create a Windows Failover Cluster on the virtual machines, follow these steps on one of the virtual machines:
1. Open the Windows Failover Cluster Manager and Launch the Validate a Configuration Wizard. (It is recommended to validate the cluster nodes to confirm whether the configuration is suitable for clustering or not. This is essential to have the support from Microsoft.)
2. Click Next on the Before you Begin page.
3. In the Select Servers or a Cluster page, browse for the SQL Server VMs to be formed into a cluster and click Next.
4. In the Testing Options page, select "Run all tests" to validate the cluster configuration and click Next.
5. In the Confirmation page, confirm the server settings to start the validation and click Next. The Validating page displays the list of tests that are being executed.
6. Once the validation is complete, the Summary page displays the results. View the validation report to ensure the suitability for clustering. In case of any errors or warnings, review them and correct the configuration as needed. Once the issues are solved, rerun the validation wizard to confirm.
Figure 227 Failover Cluster Validation Report
You may ignore the storage related warnings in the report as the storage is not shared between the virtual machines in the specific configuration.
Figure 228 Failover Cluster Validation Report Storage
7. You also have the option of validating the cluster configuration using the PowerShell cmdlet as shown in the screenshot below.
Figure 229 Failover Cluster Validation using PowerShell
8. Once the Validation wizard reports that the configuration is suitable for clustering, you may opt to create the cluster now by checking the option on the Summary page.
9. In the Create Cluster Wizard, click Next on the Before You Begin screen.
10. In the Select Servers page, browse and select the virtual machines that need to be clustered for hosting the AlwaysOn replica.
Figure 230 Create Cluster Wizard – Select Servers
11. In the Access Point for Administering the Cluster page, enter the cluster name, select the network to be used and provide an IP address in that range and click Next.
Figure 231 Create Cluster Wizard – Access Point for Administering the Cluster
12. In the Confirmation page, verify the cluster details and click Next. We will be adding the disks later, so we leave the “Add all eligible storage to the cluster” option unchecked.
Figure 232 Create Cluster Wizard – Confirmation
13. Creating New Cluster screen displays the progress of cluster installation. Once the cluster is successfully created, the Summary page displays the summary of the cluster configuration details. The summary page displays the warning that “An appropriate disk was not found for configuring disk witness”. We will be configuring the File Share witness as the quorum configuration in the next section. So we can ignore the warning.
Figure 233 Create Cluster Wizard – Summary
14. Click Finish.
The cluster configuration details may be reviewed by connecting to the newly created cluster from the Failover Cluster Manager window on the virtual machines.
Figure 234 Failover Cluster Manager showing Cluster Status
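The validation and cluster creation steps above can also be scripted with the Failover Clustering PowerShell cmdlets. The sketch below uses the two replica virtual machines; the cluster name and static IP address are placeholders and should match your environment.
# Validate the candidate nodes
Test-Cluster -Node SQLVM09, SQLVM10
# Create the cluster without adding any storage (as in the wizard above)
New-Cluster -Name SQLAOCLUSTER -Node SQLVM09, SQLVM10 -StaticAddress 10.10.40.50 -NoStorage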
The Windows failover cluster needs to be configured using the File Share majority quorum model.
Follow the steps given below, on one of the virtual machines, to configure the File Share Witness:
1. In Failover Cluster Manager page, navigate to Failover Cluster Manager > Cluster Name
2. Right-click the cluster and select More Actions > Configure Custom Quorum Settings. The “Configure Cluster Quorum Wizard” opens.
Figure 235 Configure Cluster Quorum Settings
3. Click Next on the Before You Begin screen of the Configure Cluster Quorum wizard.
4. In the Select Quorum Configuration Option page, select the Select the quorum witness option and click Next.
Figure 236 Configure Cluster Quorum Wizard – Select the Quorum Witness
5. In the Select Quorum Witness page, select the Configure a file share witness option and click Next.
Figure 237 Configure Cluster Quorum Wizard – Configure a File Share Witness
6. In the Configure File Share Witness page, specify the file share that will be used by the file share witness resource and Click Next.
Figure 238 Configure Cluster Quorum Wizard – File Share Path
7. In the Confirmation page, verify the configuration details and click Next.
8. The Summary page displays the quorum settings configured for the cluster.
Figure 239 Configure Cluster Quorum Wizard – Summary
9. Click Finish.
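The same file share witness can alternatively be configured with PowerShell as sketched below; the file share path is a placeholder.
Set-ClusterQuorum -NodeAndFileShareMajority \\fileserver\FSW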
The file share witness configuration created in the previous step may be verified from the Failover Cluster Manager window (on any of the member nodes or virtual machines) as shown below.
Figure 240 Verify the Cluster Configuration
Verify that both the member nodes are up and healthy as shown in the screenshots below.
Figure 241 Verify the Cluster Nodes Status
Verify the network configuration for the member nodes as shown below.
Figure 242 Verify the Cluster Node Network Status
In the workflow, the steps to install a SQL Server 2014 standalone instance on the virtual machines remain the same as for the Primary site. For configuration details, refer to the corresponding section above.
This section explains the procedure to set up an AlwaysOn availability group on the SQL Server standalone instances. Synchronous replication will be configured between the primary and secondary copies of the database. This ensures that there is no data loss if the primary replica fails for any reason. To accomplish this, it is highly recommended to have a high-bandwidth, low-latency network connection between the replicas. As an optional configuration, you may dedicate a specific network connection for the AlwaysOn replication traffic. This document does not cover the steps to configure a dedicated network for the replication traffic.
This section explains the procedure to enable the SQL Server AlwaysOn feature on the primary and secondary (Disaster Recovery) virtual machines. Please refer https://msdn.microsoft.com/en-us/library/ff878259.aspx for more information on enabling/disabling AlwaysOn feature on the participating instances.
Follow the below procedure on each virtual machine to enable the SQL Server AlwaysOn availability groups on the standalone SQL Server instances using SQL Server Configuration Manager.
1. From the Windows Server Manager, search for and open SQL Server Configuration Manager.
2. In SQL Server Configuration Manager, click SQL Server Services, right-click SQL Server (instance name), where instance name is the name of a local server instance for which you want to enable AlwaysOn Availability Groups, and click Properties.
3. Select the AlwaysOn High Availability tab.
4. Verify that Windows failover cluster name field contains the name of the Guest Windows failover cluster.
5. Select the Enable AlwaysOn Availability Groups check box, and click OK. SQL Server Configuration Manager saves your change.
Figure 243 SQL Server Configuration Manager – Enable AlwaysOn Availability Group
6. Restart the SQL Server service.
7. Once the settings are done on the primary and Disaster Recovery instance, login to the SQL Server Management Studio.
8. Connect to the SQL Server Instance.
9. Right-click the instance name and open the Server properties.
10. Verify that the “Is HADR enabled” property is set to TRUE, as we have enabled the AlwaysOn Feature on the instance. Given below is the SQL Server Properties screenshot for the Disaster Recovery instance configured in the setup.
Figure 244 Verify SQL Server Properties
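As an alternative sketch, the AlwaysOn feature can be enabled and verified from PowerShell using the SQLPS module that ships with SQL Server 2014; the instance name below is a placeholder, and the -Force switch suppresses the confirmation prompt for the required service restart.
Import-Module SQLPS -DisableNameChecking
Enable-SqlAlwaysOn -ServerInstance "SQLVM09" -Force
# Confirm that the IsHadrEnabled server property is now 1
Invoke-Sqlcmd -ServerInstance "SQLVM09" -Query "SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled"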
This section explains the procedure to create the Availability Group with the SQL Server standalone instances. We have chosen the full database synchronization model, in which the database copies are configured end to end as part of the Availability Group using the same wizard. Please refer to https://msdn.microsoft.com/en-us/library/ff878265.aspx for more options in configuring AlwaysOn availability groups. We also opted to create the AlwaysOn listener as a separate step to add more clarity to the procedure.
Follow the procedure below, on each virtual machine designed for hosting the primary database replica, to create the SQL Server AlwaysOn availability group.
1. Use SQL Server Management Studio to connect to the SQL Server Instance where we want to host the primary replica.
2. From the Object Explorer pane, right-click the AlwaysOn High Availability folder and Select the New Availability Group Wizard option.
Figure 245 New Availability Group Wizard
The New Availability Group wizard displays a list of sections in the left pane. The Wizard takes you through these sections to complete the configuration:
1. Click Next in the Introduction page of the wizard.
2. In the Specify Availability Group Name page, provide a unique name for the availability group and Click Next. (This name must be a valid SQL Server identifier that is unique on the WSFC failover cluster and in your domain as a whole. The maximum length for an availability group name is 128 characters.)
Figure 246 New Availability Group – Specify Name
3. The Select Databases page displays the user databases in the current instances which are eligible to be part of AlwaysOn availability group (If the prerequisites are not met, a brief status description indicates the reason that the database is ineligible). Select one or more of the listed databases to participate in the new availability group. These databases will initially be the primary databases.
Figure 247 New Availability Group – Select Databases
4. In the Specify Replicas page, Click Add Replica button and connect to the Disaster Recovery SQL Server instance to configure it to host the secondary replica.
Figure 248 New Availability Group – Add Replica
Figure 249 Connect to SQL Server
5. Once the SQL Server secondary replica instance is added to the Specify Replicas page, select the Automatic Failover and Synchronous Commit check boxes for both replicas.
6. Verify the replica settings as shown in the below figure.
Figure 250 New Availability Group – Specify Replicas
Please note that we will be configuring the AlwaysOn listener after the AlwaysOn availability group creation is complete.
7. Click Next.
8. In the Select Initial Data Synchronization page, select the data synchronization preference as ‘Full’ to automatically start the initial data synchronization between the database replicas. To verify whether your environment meets the prerequisites for initial automatic data synchronization, please refer https://msdn.microsoft.com/en-us/library/hh403415.aspx#Prerequisites.
Figure 251 New Availability Group – Select Data Synchronization
9. Click Next.
10. The Validation page verifies whether the values you specified in this Wizard meet the requirements of the New Availability Group Wizard and displays the validation result.
Figure 252 New Availability Group – Validation
11. Click Next.
12. The Summary page displays the list of options selected in this wizard. Verify the options listed and click Finish to start the creation of the availability group.
Figure 253 New Availability Group – Summary
13. The Progress page displays the progress of the steps for creating the availability group (configuring endpoints, creating the availability group, and joining the secondary replica to the group).
14. When these steps complete, the Results page displays the result of each step. If all these steps succeed, the new availability group is completely configured.
Figure 254 New Availability Group – Results
You may now verify the configuration and health of the availability group from SQL Server Management Studio. Given below is the screenshot of the SQL Server AlwaysOn dashboard, accessible by right-clicking the newly created availability group in the Management Studio window.
Figure 255 Availability Group Status
This section describes the procedure for configuring the AlwaysOn listener for the availability group. An availability group listener is a server name to which clients can connect in order to access a database in a primary or secondary replica of an AlwaysOn availability group.
Follow the procedure given below to configure the listener for the Availability Group created in the previous section, using SQL Server Management studio.
1. Open the SQL Server Management Studio and connect to the instance which is configured as the Primary Replica
2. From the Object Explorer, right-click the Availability Group that is created and click Add Listener option.
Figure 256 Add Availability Group Listener
3. The New Availability Group Listener window opens.
4. In the New Availability Group Listener dialog box, enter the AlwaysOn listener name and the TCP port to be used by the listener.
5. Select the Network Mode as Static IP and click Add to specify the subnet and IP address to which the clients should connect to access the availability group. Click OK.
6. Verify the settings specified in the New availability group dialog box and click OK to configure the listener.
Figure 257 Availability Group Listener Properties
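For reference, an equivalent listener can be created with T-SQL on the primary replica instance; in the sketch below the availability group name, listener name, IP address, and subnet mask are placeholders.
ALTER AVAILABILITY GROUP [VSPEXAG]
ADD LISTENER N'SQLAGLSNR' (WITH IP ((N'10.10.40.60', N'255.255.255.0')), PORT = 1433);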
Once the AlwaysOn Availability Group listener is configured, the same may be verified by navigating to the Availability Group Listeners Node in the SQL Server Management Studio as shown in the screenshot below.
Figure 258 Availability Group Listener Status
The availability group listener configuration may also be verified from the Windows Failover Cluster Manager as shown below.
Figure 259 Verify Availability Group Status in Failover Cluster Manager
We also ran a simple test workload using the HammerDB tool to verify that both replicas were synchronizing. The below figure shows that both replicas remained in a synchronized state while the test workload was in progress.
Figure 260 Verify Replicas Synchronization State During Test Workload
Windows Server Failover Clustering uses heartbeats for health detection between the nodes, based on two parameter settings: Delay and Threshold. Delay defines the frequency at which Windows cluster heartbeats are sent between nodes; it is the number of seconds before the next heartbeat is sent. Threshold defines the number of heartbeats that can be missed before the cluster takes recovery action. Within the same cluster, different delays and thresholds can be set between nodes on the same subnet and between nodes on different subnets.
Windows Server Failover Cluster nodes in multi-site multi-subnet stretched configuration and vSphere vMotion operations can induce delays in the Windows cluster health monitoring activity and may cause unnecessary failovers. In such situations, it is recommended to allow for a relaxed configuration setting by modifying the SameSubnetThreshold setting from its default value to 10. The command to modify the SameSubnetThreshold setting using PowerShell is shown in the below figure.
Figure 261 Modify Cluster SameSubnetThreshold Using PowerShell
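A sketch of the PowerShell commands to relax and then verify the heartbeat threshold is given below; run them on one of the cluster nodes.
Import-Module FailoverClusters
(Get-Cluster).SameSubnetThreshold = 10
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold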
When increasing the cluster thresholds, it is recommended to increase in moderation. It is important to understand that increasing resiliency to network hiccups comes at the cost of increased downtime when a hard failure occurs.
In this section we performed a planned manual failover without data loss on the AlwaysOn availability group using the Fail Over Availability Group wizard. A planned manual failover is supported only if both the primary replica and the secondary replica are configured for synchronous-commit mode and the secondary replica is in a synchronized state.
In this form of planned manual failover the secondary replica to which you are connected transitions its role from secondary to primary and the former primary replica transitions its role from primary to secondary.
In the below failover test scenario, the primary replica is hosted on the SQL Server instance running on the SQLVM10 virtual machine and the secondary synchronous replica is hosted on the SQL Server instance running on the SQLVM09 virtual machine.
1. Launch SQL Server Management Studio and in the Object Explorer, connect to a server instance that hosts a secondary replica of the availability group that needs to be failed over, and expand the server tree.
2. Expand the AlwaysOn High Availability node and the Availability Groups node.
3. Right-click the Availability Group and select Failover to launch the Fail Over Availability Group wizard as shown in the below figure and click Next.
Figure 262 Fail Over Availability Group Wizard
4. On the Select New Primary Replica page, select a new primary replica (SQLVM09 virtual machine) and click Next. The current primary replica is hosted on SQLVM10 virtual machine. Note that the failover readiness value is “No data loss” for the selected new primary replica as shown in the below figure.
Figure 263 Fail Over Availability Group – Select New Primary Replica
5. On the Connect to Replica page, click Connect to connect to the failover target and click Next.
Figure 264 Fail Over Availability Group – Connect to Replica
6. In the Summary section, verify the choices made and click Finish.
Figure 265 Fail Over Availability Group - Summary
7. The planned failover completed successfully without any data loss as shown in the below figure.
Figure 266 Fail Over Availability Group - Results
In Microsoft SQL Server Management Studio, it is observed that the role transitions happened as expected. The secondary replica hosted on the SQLVM09 virtual machine transitioned its role to primary, and the former primary replica hosted on the SQLVM10 virtual machine transitioned its role to secondary.
Figure 267 SQL Server Management Studio – AG Status
Automatic failover happens on the loss of the primary replica. Automatic failover is supported only when the current primary and one secondary replica are both configured with failover mode set to automatic and the secondary replica is currently in synchronized state. If the failover mode of either the primary or secondary replica is manual, automatic failover cannot occur.
In this section we validated the automatic failover of a SQL Server AlwaysOn Availability Group by bringing down the primary replica. In the below test scenario, SQLVM09 virtual machine is hosting the primary replica and SQLVM10 virtual machine is hosting the secondary replica.
1. SQLVM09 virtual machine hosting the primary replica is shut down.
2. The ping status shows the primary replica is not reachable as shown in the below figure.
Figure 268 SQL Server Management Studio – AG Status with Primary Replica Down
3. The SQLVM09 virtual machine hosting the former primary replica is powered on later.
Figure 269 SQL Server Management Studio – AG Status with both Replicas up and Synchronized
It was observed that when the SQLVM09 virtual machine hosting the primary replica went down, the SQLVM10 virtual machine hosting the secondary replica transitioned its role from secondary to primary. Later, when the SQLVM09 virtual machine hosting the former primary replica came back online, it rejoined the availability group as a secondary replica and resynchronized its database with the current primary replica, as shown in the above figure.
This performance study presents SQL Server database considerations for developing a strategy to efficiently utilize hardware and considerably reduce the cost associated with database usage. The strategies and reference architectures can be used as a starting point to consolidate SQL Server on Cisco UCS Blade Servers using a building block approach to design, configure, and deploy, with best practice recommendations to streamline the associated IT complexities. To simplify the design and deployment of a virtualized infrastructure, Cisco offers Solution Architecture bundles for Cisco Blade Servers and VMware vSphere. These bundles provide configuration and best practices to achieve full redundancy with no single point of failure, scalability, and ease of management. The tests performed showed that significant gains can be achieved by developing a strategy to maximize hardware utilization and to reduce database sprawl and power and cooling costs by consolidating and virtualizing SQL Server on Cisco UCS B200 M4 Blade Servers, while the performance met the demands of customer workloads. Care should be taken to size and configure databases and virtual machines based on actual requirements.
Sanjeev Naldurgkar has over 14 years of experience in information technology; his focus areas include Cisco UCS, Microsoft product technologies, server virtualization, and storage technologies. Prior to joining Cisco, Sanjeev was a Support Engineer at the Microsoft Global Technical Support Center. Sanjeev holds a Bachelor's Degree in Electronics and Communication Engineering and industry certifications from Microsoft and VMware.
Vadiraja Bhatt is a Performance Architect at Cisco, managing the solutions and benchmarking effort on UCS Platform. Vadi has over 19 years of experience in performance and benchmarking the large enterprise systems deploying mission critical applications. Vadi specializes in optimizing and fine tuning complex hardware and software systems and has delivered many benchmark results on TPC and other industry standard benchmarks. Vadi has 6 patents to his credits in the Database (OLTP and DSS) optimization area.
For their support and contribution to the design, validation, and creation of this CVD, we would like to thank:
· Tim Cerling, Cisco Systems, Inc.
· Frank Cicalese, Cisco Systems, Inc.
· Jisha Jyotheendran, Cisco Systems, Inc.
· Rambabu Jayachandiran, Cisco Systems, Inc.
· Sindhu Sudhir, Cisco Systems, Inc.
· James Baldwin, EMC Corporation
· Anthony O'Grady, EMC Corporation