SmartStack Design Guide with Cisco UCS Mini and Nimble CS300
Last Updated: November 6, 2015
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, visit:
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2015 Cisco Systems, Inc. All rights reserved.
Cisco UCS 5108 Blade Server Chassis
Cisco UCS 6324 Fabric Interconnect
The Nimble Storage Adaptive Flash Platform
Nimble Storage CASL Architecture
Thin-Provisioning and Efficient Capacity Utilization
Efficient, Fully Integrated Data Protection
SmartSnap: Thin, Redirect-on Write Snapshots
SmartReplicate: Efficient Replication
Application-Consistent Snapshots
SmartSecure: Flexible Data Encryption
Cisco UCS Server Configuration for vSphere
Validated Hardware and Software
Cisco Validated Designs (CVD) are systems and solutions that have been designed, tested and documented to facilitate and accelerate customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of a customer. CVDs deliver a validated design, documentation and support to guide customers from design to deployment.
Cisco and Nimble have partnered to deliver a series of SmartStack™ solutions that combine Cisco Unified Computing System servers, Cisco Nexus family of switches, and Nimble storage into a single, flexible architecture. SmartStack solutions are pre-designed, integrated and validated architectures for the data center.
Customers looking to solve business problems using shared data center infrastructure face a number of challenges. A constant challenge is achieving the levels of IT agility and efficiency necessary to meet business objectives. Addressing these challenges requires having an optimal solution with the following characteristics:
· Availability: Ensure applications and services are accessible at all times, with no single point of failure
· Flexibility: Support new services without requiring infrastructure modifications
· Efficiency: Enable efficient operation of the infrastructure through re-usable policies and API-based management
· Manageability: Ease of deployment and management to minimize operating costs
· Scalability: Expand and grow with some degree of investment protection
· Low risk: Minimize risk by ensuring optimal design and compatibility of integrated components
SmartStack enables a data center platform with the above characteristics by delivering an integrated architecture that incorporates compute, storage and network design best practices. SmartStack minimizes IT risk by testing the integrated architecture to ensure compatibility between integrated components. SmartStack also addresses IT pain points by providing documented design guidance, deployment guidance and support that can be used in all stages (planning, design and implementation) of a deployment.
In this document, the focus is on a SmartStack platform for mid-sized enterprises or departmental deployments within larger environments. The SmartStack design uses Cisco UCS Mini for compute with VMware vSphere 6.0, Cisco Nexus 9000 switches for networking and Nimble CS300 for storage. Cisco Nexus 9000 switches are an optional component of the SmartStack design. Customers can use a different Cisco Nexus switch model or use their existing switching infrastructure but doing so could limit the flexibility that this design offers and impact their ability to deploy some of the design best practices and features used.
SmartStack™ is a pre-designed, validated, integrated infrastructure architecture for the data center. The SmartStack solution portfolio combines Nimble® storage systems, Cisco® UCS servers, and the Cisco Nexus fabric into a single, flexible architecture. SmartStack solutions are designed and validated to minimize deployment time, project risk, and overall IT costs.
SmartStack is designed for high availability, with no single point of failure, while maintaining cost-effectiveness and flexibility in design to support the growing needs of a mid-size enterprise or a department or remote location of a large enterprise. The SmartStack design can support different hypervisors or bare-metal servers, and can be sized and optimized to support different use cases and requirements.
This document describes the SmartStack® integrated infrastructure solution targeted at mid-sized enterprises or departmental deployments within larger environments. This solution is based on Cisco UCS Mini, Nimble CS300, and VMware vSphere 6.0.
The intended audience of this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and enable IT innovation.
The SmartStack program is the result of a partnership between Cisco and Nimble Storage to deliver a series of infrastructure and application solutions optimized and validated on Cisco UCS, Nimble Storage, and Cisco Nexus switches. To be a valid SmartStack solution, a deployment must use Cisco UCS, Nimble Storage, and one of the approved application stacks, and must be covered by valid support contracts with Cisco and Nimble Storage.
Each SmartStack solution includes, but is not limited to, the following items:
· CVD Design Guide with the architecture, design and best practices validated
· CVD Deployment guide with the validated hardware and software details and implementation details to deploy the solution
Cisco and Nimble Storage have a solid support program focused on the SmartStack solution, from customer account and technical sales representatives to professional services and technical support engineers. The support alliance between Cisco and Nimble Storage gives customers and channel partners direct access to technical experts who collaborate across vendors and have access to shared lab resources to resolve potential issues.
The architectural benefits of the infrastructure components in the SmartStack design are summarized in the table below.
Table 1 SmartStack Architectural Benefits
Cisco UCS Mini | Nimble CS300 Array
Unified Fabric | Adaptive Flash Platform
Virtualized IO | Thin-Provisioning and Efficient Capacity Utilization
Extended Memory | In-Line Compression
Stateless Servers with Policy-Based Management | Write-Optimized Data Layout
Centralized Management | Efficient Replication
Investment Protection | Application-Consistent Snapshots
Scalability | Efficiently Scale Both Performance and Capacity
Automation | Flexible Data Encryption
SmartStack delivers a data center architecture using the following infrastructure components for compute, network, and storage:
· Cisco Unified Computing System (Cisco UCS)
· Cisco Nexus Switches
· Nimble Storage
The validated design discussed in this document is based on the following models of the above infrastructure components.
· Cisco UCS Mini
· Cisco Nexus 9300 Switches
· Nimble CS300
This section provides a technical overview of the above components.
The Cisco Unified Computing System™ (Cisco UCS) is a next-generation data center platform that integrates computing, networking, storage access, and virtualization resources into a cohesive system designed to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform where all resources are managed through a unified management domain.
The main components of the Cisco UCS are:
· Compute - The system is based on an entirely new class of computing system that incorporates rack mount and blade servers based on Intel processors.
· Network - The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
· Virtualization - The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
· Storage access – The Cisco UCS system provides consolidated access to both SAN storage and network-attached storage over the unified fabric, giving customers storage choices and investment protection. Server administrators can also pre-assign storage-access policies to storage resources, simplifying storage connectivity and management and increasing productivity. Only iSCSI and Fibre Channel based access is supported in this SmartStack solution.
The Cisco UCS Mini used in this design delivers all of the above capabilities in an easy-to-deploy, compact form factor. Cisco UCS Mini is optimal for smaller deployments that need fewer servers but require the same enterprise-class features and management as a full Cisco UCS system.
Cisco UCS Mini consists of the following components.
· Cisco UCS 5108 Blade Server Chassis – Cisco UCS chassis can accommodate up to eight half-width Cisco UCS B200 M4 Blade Servers.
· Cisco UCS 6324 Fabric Interconnect - Cisco UCS 6324 is embedded within the Cisco UCS 5108 Blade Server Chassis and provides the same unified management capabilities as the standalone Cisco UCS 6200 Series Fabric Interconnects.
· Cisco UCS Manager – Cisco UCS Manager provides unified, embedded management of all software and hardware components in a Cisco UCS Mini solution.
· Cisco UCS B200 M4 Blade Server – The Cisco UCS B200 M4 Blade Server addresses the broadest set of workloads, delivering performance, versatility, and density without compromise.
· Cisco UCS C220 M4 Rack Server - This one-rack-unit (1RU) server offers superior performance and density over a wide range of business workloads.
· Cisco UCS C240 M4 Rack Server - This 2RU server is designed for both performance and expandability over a wide range of storage-intensive infrastructure workloads.
· Cisco UCS Central - Cisco UCS Central manages multiple Cisco UCS Mini and Cisco UCS domains.
The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis. The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors. The Cisco UCS Mini chassis supports the B22 M3, B200 M3, B420 M3, and B200 M4 blade servers today. Cisco UCS C-Series rack mount servers can also be connected to the Cisco UCS Mini chassis – see the link below for a complete list of supported servers. Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays. On the Cisco UCS Mini chassis, the I/O bays are used to accommodate the Cisco UCS 6324 Fabric Interconnect modules. A passive mid-plane provides up to 80 Gbps of I/O bandwidth per server slot and up to 160 Gbps of I/O bandwidth for two slots. The chassis is capable of supporting future 40 Gigabit Ethernet standards.
For more information, see:
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/ucsmini-specsheet.pdf
Figure 1 Cisco UCS 5108 Blade Server Chassis
Front View | Back View
The key to delivering the power of Cisco Unified Computing System in a smaller form factor on the Cisco UCS Mini is the Cisco UCS 6324 Fabric Interconnect. The Fabric Interconnect modules (up to two) plug into the back of the Cisco UCS Mini-series blade server chassis. A midplane connects the blade servers to the Fabric Interconnects. The Cisco UCS 6324 Fabric Interconnect combines the Fabric Extender and Fabric Interconnect functions into one plug-in module, and allows direct connection to an external switch.
The Cisco UCS 6324 Fabric Interconnect supports the integrated Cisco UCS Management software (Cisco UCS Manager) and allows direct LAN and storage connectivity for the blade servers and directly-connected rack-mount servers in one plug-in module.
From a networking perspective, the Cisco UCS 6324 Fabric Interconnect supports deterministic, low-latency, line-rate traffic with a maximum switching capacity of up to 500 Gbps, independent of packet size and enabled services. Sixteen 10-Gbps links connect to the servers, providing a 20-Gbps link from each Cisco UCS 6324 Fabric Interconnect to each server.
The fabric interconnect supports multiple traffic classes over a lossless Ethernet fabric from the blade through the fabric interconnect. Significant TCO savings come from an optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.
The Cisco UCS 6324 Fabric Interconnect is a 10 Gigabit Ethernet and Fibre Channel switch offering up to 500-Gbps throughput and up to four unified ports and one scalability port.
Figure 2 Cisco UCS 6324 Fabric Interconnect Details
Cisco Unified Computing System (UCS) Manager provides unified, embedded management for all software and hardware components in the Cisco UCS. Using Cisco SingleConnect technology, it manages, controls, and administers multiple chassis for thousands of virtual machines. Administrators use the software to manage the entire Cisco Unified Computing System as a single logical entity through an intuitive GUI, a command-line interface (CLI), or an XML API. Cisco UCS Manager resides on a clustered pair of fabric interconnects (the Cisco UCS 6324 Fabric Interconnects in a Cisco UCS Mini) using an active-standby configuration for high availability.
Cisco UCS Manager offers a unified, embedded management interface that integrates server, network, and storage resources. Cisco UCS Manager performs auto-discovery to detect, inventory, manage, and provision system components that are added or changed. It offers a comprehensive XML API for third-party integration, exposes more than 9,000 points of integration, and facilitates custom development for automation, orchestration, and new levels of system visibility and control.
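The XML API mentioned above is a simple request/response interface. The following minimal Python sketch (not part of the validated design) shows a login with the aaaLogin method followed by a configResolveClass query for the blade inventory; the UCS Manager address and credentials are placeholders.

```python
# Minimal sketch: querying Cisco UCS Manager blade inventory through the XML API.
# The UCS Manager address and credentials are placeholders for illustration only.
import requests
import xml.etree.ElementTree as ET

UCSM_URL = "https://ucsm.example.com/nuova"  # XML API endpoint on UCS Manager

def xml_call(body):
    resp = requests.post(UCSM_URL, data=body, verify=False)
    resp.raise_for_status()
    return ET.fromstring(resp.text)

# aaaLogin returns a session cookie that authorizes subsequent requests.
login = xml_call('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.attrib["outCookie"]

# configResolveClass returns all objects of a class; computeBlade lists the blades.
blades = xml_call('<configResolveClass cookie="%s" classId="computeBlade" '
                  'inHierarchical="false" />' % cookie)
for blade in blades.iter("computeBlade"):
    print(blade.attrib.get("dn"), blade.attrib.get("model"), blade.attrib.get("serial"))

# aaaLogout closes the session.
xml_call('<aaaLogout inCookie="%s" />' % cookie)
```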
Service profiles benefit both virtualized and non-virtualized environments and increase the mobility of non-virtualized servers, such as when moving workloads from server to server or taking a server offline for service or upgrade. Profiles can also be used in conjunction with virtualization clusters to bring new resources online easily, complementing existing virtual machine mobility.
For more Cisco UCS Manager information, go to: http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-manager/index.html
The enterprise-class Cisco UCS B200 M4 Blade Server extends the capabilities of Cisco’s Unified Computing System portfolio in a half-width blade form factor. The Cisco UCS B200 M4 uses the power of the latest Intel® Xeon® E5-2600 v3 Series processor family CPUs with up to 768 GB of RAM (using 32 GB DIMMs), two solid-state drives (SSDs) or hard disk drives (HDDs), and up to 80 Gbps throughput connectivity. The Cisco UCS B200 M4 Blade Server mounts in a Cisco UCS 5100 Series blade server chassis or Cisco UCS Mini blade server chassis. It has 24 total slots for registered ECC DIMMs (RDIMMs) or load-reduced DIMMs (LR DIMMs) for up to 768 GB total memory capacity (B200 M4 configured with two CPUs using 32 GB DIMMs). It supports one connector for Cisco’s VIC 1340 or 1240 adapter, which provides Ethernet and FCoE.
For more information, see: http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-b200-m4-blade-server/index.html
Figure 3 Cisco UCS B200 M4 Blade Server
The Cisco UCS Virtual Interface Card (VIC) 1340 is a 2-port 40-Gbps Ethernet or dual 4 x 10-Gbps Ethernet, FCoE-capable modular LAN on motherboard (mLOM) designed exclusively for the M4 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1340 is enabled for two ports of 40-Gbps Ethernet. The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present over 256 PCIe standards-compliant interfaces to the host, which can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1340 supports Cisco® Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment and management.
For more information, see: http://www.cisco.com/c/en/us/products/interfaces-modules/ucs-virtual-interface-card-1340/index.html
Figure 4 Cisco VIC 1340
Cisco’s Unified Computing System is revolutionizing the way servers are managed in the data center. The following are the unique differentiators of Cisco UCS and Cisco UCS Manager:
· Embedded Management —In Cisco UCS, the servers are managed by the embedded firmware in the Fabric Interconnects, eliminating need for any external physical or virtual devices to manage the servers.
· Unified Fabric —In Cisco UCS, from blade server chassis or rack servers to FI, there is a single Ethernet cable used for LAN, SAN and management traffic. This converged I/O results in reduced cables, SFPs and adapters – reducing capital and operational expenses of overall solution.
· Auto Discovery —By simply inserting the blade server in the chassis or connecting rack server to the fabric interconnect, discovery and inventory of compute resource occurs automatically without any management intervention. The combination of unified fabric and auto-discovery enables the wire-once architecture of Cisco UCS, where compute capability of Cisco UCS can be extended easily while keeping the existing external connectivity to LAN, SAN and management networks.
· Policy Based Resource Classification —Once a compute resource is discovered by Cisco UCS Manager, it can be automatically classified to a given resource pool based on policies defined. This capability is useful in multi-tenant cloud computing. This CVD showcases the policy based resource classification of Cisco UCS Manager.
· Combined Rack and Blade Server Management —Cisco UCS Manager can manage B-series blade servers and C-series rack server under the same Cisco UCS domain. This feature, along with stateless computing makes compute resources truly hardware form factor agnostic.
· Model based Management Architecture —Cisco UCS Manager Architecture and management database is model based and data driven. An open XML API is provided to operate on the management model. This enables easy and scalable integration of Cisco UCS Manager with other management systems.
· Policies, Pools, Templates —The management approach in Cisco UCS Manager is based on defining policies, pools and templates, instead of cluttered configuration, which enables a simple, loosely coupled, data driven approach in managing compute, network and storage resources.
· Loose Referential Integrity —In Cisco UCS Manager, a service profile, port profile, or policy can refer to other policies or logical resources with loose referential integrity. A referred policy need not exist when the referring policy is authored, and a referred policy can be deleted even though other policies refer to it. This allows subject matter experts from different domains, such as network, storage, security, server, and virtualization, to work independently of each other while collaborating to accomplish a complex task.
· Policy Resolution —In Cisco UCS Manager, a tree of organizational units can be created that mimics real-life tenant and organization relationships. Various policies, pools, and templates can be defined at different levels of the organization hierarchy. A policy referring to another policy by name is resolved in the organization hierarchy using the closest policy match. If no policy with the specified name is found up to the root organization, the special policy named “default” is used. This policy resolution practice enables automation-friendly management APIs and provides great flexibility to owners of different organizations.
· Service Profiles and Stateless Computing —A service profile is a logical representation of a server, carrying its various identities and policies. This logical server can be assigned to any physical compute resource as long as it meets the resource requirements. Stateless computing enables procurement of a server within minutes, which used to take days in legacy server management systems. (A scripted example of querying service profiles follows this list.)
· Built-in Multi-Tenancy Support —The combination of policies, pools and templates, loose referential integrity, policy resolution in organization hierarchy and a service profiles based approach to compute resources makes Cisco UCS Manager inherently friendly to multi-tenant environment typically observed in private and public clouds.
· Extended Memory — The enterprise-class Cisco UCS B200 M4 blade server extends the capabilities of Cisco’s Unified Computing System portfolio in a half-width blade form factor. The Cisco UCS B200 M4 harnesses the power of the latest Intel® Xeon® E5-2600 v3 Series processor family CPUs with up to 1536 GB of RAM (using 64 GB DIMMs) – allowing huge VM to physical server ratio required in many deployments, or allowing large memory operations required by certain architectures like big data.
· Virtualization Aware Network —Cisco VM-FEX technology makes the access network layer aware about host virtualization. This prevents domain pollution of compute and network domains with virtualization when virtual network is managed by port-profiles defined by the network administrators’ team. VM-FEX also off-loads hypervisor CPU by performing switching in the hardware, thus allowing hypervisor CPU to do more virtualization related tasks. VM-FEX technology is well integrated with VMware vCenter, Linux KVM and Hyper-V SR-IOV to simplify cloud management.
· Simplified QoS —Even though Fibre Channel and Ethernet are converged in Cisco UCS fabric, built-in support for QoS and lossless Ethernet makes it seamless. Network Quality of Service (QoS) is simplified in Cisco UCS Manager by representing all system classes in one GUI panel.
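To illustrate how service profiles and the model-based management architecture described above can be driven programmatically, the sketch below uses the Cisco UCS Manager Python SDK (ucsmsdk) to list service profiles (LsServer objects) and their association state. This is a hedged sketch, not part of the validated design; the address and credentials are placeholders, and the attribute names should be verified against the SDK documentation.

```python
# Minimal sketch: listing Cisco UCS service profiles with the UCS Manager Python SDK.
# The UCS Manager address and credentials are placeholders for illustration only.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()
try:
    # LsServer is the managed-object class that represents a service profile.
    for sp in handle.query_classid("LsServer"):
        print(sp.dn,
              "| associated to:", sp.pn_dn or "unassociated",
              "| assoc state:", sp.assoc_state)
finally:
    handle.logout()
```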
The Cisco Nexus 9000 Series is an optional component of the SmartStack platform. A converged infrastructure platform requires a switching layer, but SmartStack does not require that layer to be a Cisco Nexus 9000 Series switch.
The Cisco Nexus 9000 family of switches offers both modular (9500 switches) and fixed (9300 switches) 1/10/40/100 Gigabit Ethernet switch configurations designed to operate in one of two modes:
· Application Centric Infrastructure (ACI) mode that uses an application centric policy model with simplified automation and centralized management
· Cisco NX-OS mode for traditional architectures – the SmartStack design in this document uses this mode
Architectural Flexibility
· Delivers high performance and density, and energy-efficient traditional 3-tier or leaf-spine architectures
· Provides a foundation for Cisco ACI, automating application deployment and delivering simplicity, agility, and flexibility
Scalability
· Up to 60-Tbps of non-blocking performance with less than 5-microsecond latency
· Up to 2304 10-Gbps or 576 40-Gbps non-blocking layer 2 and layer 3 Ethernet ports
· Wire-speed virtual extensible LAN (VXLAN) gateway, bridging, and routing
High Availability
· Full Cisco In-Service Software Upgrade (ISSU) and patching without any interruption in operation
· Fully redundant and hot-swappable components
· A mix of third-party and Cisco ASICs provide for improved reliability and performance
Energy Efficiency
· The chassis is designed without a mid-plane to optimize airflow and reduce energy requirements
· The optimized design runs with fewer ASICs, resulting in lower energy use
· Efficient power supplies included in the switches are rated at 80 Plus Platinum
Investment Protection
· Cisco 40-Gb bidirectional transceiver allows for reuse of an existing 10 Gigabit Ethernet cabling plant for 40 Gigabit Ethernet
· Designed to support future ASIC generations
· Easy migration from NX-OS mode to ACI mode
The SmartStack design covered in this document uses the NX-OS mode of operation with a pair of Cisco Nexus 9300 Series (Cisco Nexus 9372PX) switches. Using Cisco Nexus 9300 Series switches lays the foundation for migrating to ACI at a future time.
For more information, refer to: http://www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html
Nimble CS300 is ideal for midsize IT organizations or distributed sites of larger organizations. It offers compelling capacity per dollar for workloads such as Microsoft applications, VDI, or virtual server consolidation.
For more information, refer to: https://www.nimblestorage.com/products-technology/
Figure 5 Nimble CS300 Array
Back View
The Nimble Storage Adaptive Flash platform enables enterprise IT organizations to implement a single architectural approach to dynamically cater to the needs of varying workloads. Adaptive Flash is the only storage platform that optimizes across performance, capacity, data protection, and reliability within a dramatically smaller footprint.
Adaptive Flash is built upon Nimble’s CASL™ architecture, and InfoSight™, the company’s cloud-connected management system. CASL scales performance and capacity seamlessly and independently. InfoSight leverages the power of deep data analytics to provide customers with precise guidance on the optimal approach to scaling flash, CPU, and capacity around changing application needs, while ensuring peak storage health.
Nimble Storage solutions are built on its patented Cache Accelerated Sequential Layout (CASL™) architecture. CASL leverages the unique properties of flash and disk to deliver high performance and capacity – all within a dramatically small footprint. CASL and InfoSight™ form the foundation of the Adaptive Flash platform, which allows for the dynamic and intelligent deployment of storage resources to meet the growing demands of business-critical applications.
Using systems modeling, predictive algorithms, and statistical analysis, InfoSight™ solves storage administrators’ most difficult problems. InfoSight also ensures storage resources are dynamically and intelligently deployed to satisfy the changing needs of business-critical applications, a key facet of Nimble Storage’s Adaptive Flash platform. At the heart of InfoSight is a powerful engine comprised of deep data analytics applied to telemetry data gathered from Nimble arrays deployed across the globe. More than 30 million sensor values are collected per day per Nimble Storage array. The InfoSight Engine transforms the millions of gathered data points into actionable information that allows customers to realize significant operational efficiency through:
· Maintaining optimal storage performance
· Projecting storage capacity needs
· Proactively monitoring storage health and getting granular alerts
· Proactively diagnosing and automatically resolving complex issues, freeing up IT resources for value-creating projects
· Ensuring a reliable data protection strategy with detailed alerts and monitoring
· Expertly guiding storage resource planning and determining the optimal approach to scaling cache and IOPS to meet changing SLAs
· Identifying latency and performance bottlenecks through the entire virtualization stack
For more information, refer to:
https://www.nimblestorage.com/infosight/architecture/
CASL uses fast, in-line compression for variable application block sizes to decrease the footprint of inbound write data by as much as 75 percent. Once there are enough variable-sized blocks to form a full write stripe, CASL writes the data to disk. If the data being written is active data, it is also copied to SSD cache for faster reads. Written data is protected with triple-parity RAID.
Capacity is only consumed as data is written. CASL efficiently reclaims free space on an ongoing basis, preserving write performance with higher levels of capacity utilization. This avoids fragmentation issues that hamper other architectures.
By sequencing random write data, CASL’s writes to disk are orders of magnitude faster than other storage systems’ random writes. The CS700, Nimble’s top-of-the-line array, delivers double the write IOPS of a single MLC flash drive with a 7,200-RPM hard disk.
CASL accelerates read performance by dynamically caching hot data in flash, delivering sub-millisecond read latency and high throughput across a wide variety of demanding enterprise applications.
CASL leverages flash as a true read cache, as opposed to a bolt-on tier. This enables Nimble arrays to easily adapt to changing workloads. As the architectural foundation of Adaptive Flash, CASL allows flash to flexibly scale for higher performance, especially benefitting those applications that work best when their entire working sets reside in flash.
Flash can be allocated to individual workloads on a per-volume basis according to one of three user-assignable service levels (a scripted example follows this list):
· All Flash: The entire workload is pinned in cache for deterministic low latency. Ideal for latency-sensitive workloads or single applications with large working sets or high cache churn.
· Auto Flash: Default service level where workload active data is dynamically cached. Ideal for applications requiring high performance, or a balance of performance and capacity.
· No Flash: No active data is cached in flash. Recommended for capacity-optimized workloads without high performance demands.
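The service level for a volume is normally set through the Nimble management interface, but it can also be scripted. The sketch below is a hedged illustration that assumes the Nimble REST API; the port, endpoint paths, and field names (for example, cache_pinned) are assumptions and should be verified against the Nimble REST API documentation for the NimbleOS release in use.

```python
# Hedged sketch: toggling per-volume cache pinning ("All Flash" behavior) through the
# Nimble REST API. Endpoint paths, port, and field names are assumptions; verify them
# against the API documentation for your NimbleOS release. Credentials are placeholders.
import requests

ARRAY = "https://nimble-mgmt.example.com:5392"  # hypothetical array management address

def get_token(user, password):
    # POST /v1/tokens is assumed to return a session token for the X-Auth-Token header.
    r = requests.post(ARRAY + "/v1/tokens",
                      json={"data": {"username": user, "password": password}},
                      verify=False)
    r.raise_for_status()
    return r.json()["data"]["session_token"]

def set_cache_pinned(token, vol_id, pinned=True):
    # PUT /v1/volumes/<id> with the assumed cache_pinned field pins or unpins the volume.
    r = requests.put(ARRAY + "/v1/volumes/" + vol_id,
                     headers={"X-Auth-Token": token},
                     json={"data": {"cache_pinned": pinned}},
                     verify=False)
    r.raise_for_status()
    return r.json()["data"]

if __name__ == "__main__":
    token = get_token("admin", "password")
    print(set_cache_pinned(token, "<volume-id>"))  # volume id intentionally left generic
```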
All-inclusive snapshot-based data protection is built into the Adaptive Flash platform. Snapshots and production data reside on the same array, eliminating the inefficiencies inherent to running primary and backup storage silos. And, InfoSight ensures that customers’ data protection strategies work as expected through intuitive dashboards and proactive notifications in case of potential issues.
Nimble snapshots are point-in-time copies capturing just changed data, allowing three months of frequent snapshots to be easily stored on a single array. Data can be instantly restored, as snapshots reside on the same array as primary data.
Only compressed, changed data blocks are sent over the network for simple and WAN-efficient disaster recovery.
Nimble’s snapshots allow fully functioning copies, or clones of volumes, to be quickly created. Instant clones deliver the same performance and functionality as the source volume, an advantage for virtualization, VDI, and test/development workloads.
Nimble enables instant application/VM-consistent backups using VSS framework and VMware integration, using application templates with pre-tuned storage parameters.
NimbleOS enables encryption of individual volumes with little to no performance impact. Encrypted volumes can be replicated to another Nimble target, and data can be securely shredded.
For more information, refer to:
https://www.nimblestorage.com/products-technology/casl-architecture/
This SmartStack Solution was designed to address the IT infrastructure requirements of mid-sized enterprises and departmental deployments in larger enterprises. These deployments support a diverse set of applications and typically have access to a very limited IT staff and budget. As such, simplicity and ease of deployment and management are critical factors when selecting an infrastructure platform to address IT needs.
The SmartStack design uses the compact version of Cisco UCS (Cisco UCS Mini) with Cisco UCS B200 M4 half-width blades running VMware ESXi 6.0 to provide the compute resources.
Cisco UCS Mini has built-in redundant fabric interconnects (FI) providing unified network and storage access. Each fabric interconnect has four 10-Gbps ports: two are used as uplink ports connecting to the switching infrastructure, and the remaining two connect to the Nimble array and are configured as Appliance ports. A 40GbE QSFP+ expansion port is also available to expand compute capacity using Cisco UCS C-Series rack mount servers. The QSFP+ port could also be used to connect to the storage array.
Figure 6 Cisco UCS 6324 Fabric Interconnect
The networking infrastructure in the SmartStack design comprises a pair of Cisco Nexus 9372PX switches. The Cisco Nexus switches provide network connectivity to applications hosted on Cisco UCS Mini. The switching infrastructure is an optional component of the SmartStack design. A customer can choose to use a different model of Cisco Nexus switch or their existing switching infrastructure to provide connectivity. Note that a customer may not be able to benefit from SmartStack solution-level support if non-Cisco switching gear is used; however, customers can still benefit from product-specific support.
As stated earlier, the Cisco Nexus 9000 family of switches supports two modes of operation: NX-OS standalone mode and ACI mode. NX-OS standalone mode is used in this SmartStack design. Cisco Nexus 9000 switches have the capabilities and performance necessary for medium-size businesses and enterprises without requiring an upgrade of the networking infrastructure as networking needs grow. Cisco Nexus switches provide 40G connectivity at low latency and high port density. The Cisco Nexus 9300 Series switches used in this design also provide investment protection by laying the foundation for migrating to ACI with centralized, policy-based management.
Figure 7 Network Connectivity from Cisco UCS Mini to Cisco Nexus 9000 Series Switches
Link aggregation using virtual PortChannels (vPC) is used in this design to provide higher aggregate bandwidth and fault tolerance. Cisco Nexus 9000 platforms support link aggregation using the 802.3ad standard Link Aggregation Control Protocol (LACP). Virtual PortChannels allow links that are physically connected to two different Cisco Nexus 9000 Series devices to appear as a single logical link to a third device, Cisco UCS Mini in this case. This provides device-level redundancy and connectivity even if one of the Cisco Nexus switches fails. It also provides a loop-free topology without the blocked ports that typically occur with spanning tree, enabling all available uplink bandwidth to be used and thereby increasing the aggregate bandwidth to and from the Cisco UCS domain. These links provide connectivity to the rest of the customer's network but are not used to carry storage traffic in this SmartStack design.
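The vPCs themselves are configured through the NX-OS CLI, but their state can also be checked programmatically. The sketch below is a hedged illustration using the NX-API JSON-RPC interface available on Cisco Nexus 9000 switches (it assumes the nxapi feature has been enabled); the switch address, credentials, and the exact keys in the structured output are placeholders or assumptions.

```python
# Hedged sketch: checking vPC state on a Cisco Nexus 9000 through NX-API (JSON-RPC).
# Assumes the nxapi feature is enabled; address, credentials, and output keys may vary.
import requests

NXAPI_URL = "https://nexus-a.example.com/ins"  # hypothetical switch management address

def show(cmd, user="admin", password="password"):
    payload = [{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": cmd, "version": 1},
        "id": 1,
    }]
    r = requests.post(NXAPI_URL, json=payload,
                      headers={"content-type": "application/json-rpc"},
                      auth=(user, password), verify=False)
    r.raise_for_status()
    data = r.json()
    if isinstance(data, list):  # responses may be a list when multiple commands are sent
        data = data[0]
    return data["result"]["body"]

if __name__ == "__main__":
    vpc = show("show vpc")
    # Key names below are assumptions; print the full body if they do not match.
    print("vPC peer status    :", vpc.get("vpc-peer-status"))
    print("vPC keepalive state:", vpc.get("vpc-peer-keepalive-status"))
```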
The storage design uses a Nimble CS300 array to provide block storage using iSCSI. Nimble CS300 is directly attached to Cisco UCS 6324 FI ports at the back of the Cisco UCS Mini. The direct attached storage meets the needs of smaller deployments without having to buy additional switching infrastructure. iSCSI is also used to SAN boot the ESXi hosts in the Cisco UCS Mini. The storage array is redundantly connected into the Cisco UCS Mini using four 10 Gbps links. The design uses two distinct IP domains with distinct iSCSI targets and uses iSCSI multipathing to provide storage high availability.
The design is highly available with no single point of failure between the compute and storage subsystems. The redundancy between Cisco UCS Mini and the Nimble array is shown in the figure below. The redundant controllers on the Nimble CS300 are connected to the Cisco UCS Mini's redundant FIs using the cabling configuration shown below. Virtual PortChannels are not used in the connectivity between Nimble and Cisco UCS. As stated earlier, all links are 10-Gbps links. The Nimble controllers operate in an active/passive mode. The links on the active controller are used for forwarding, providing an aggregate storage bandwidth of 20 Gbps. If one of the links on the active controller fails, the standby controller becomes active to ensure that 20 Gbps of bandwidth remains available.
Figure 8 Storage Connectivity from Cisco UCS Mini to Nimble CS300
The aggregate network and storage bandwidth provided in this design are 20Gbps. The Cisco UCS Mini also has 40Gbps QSFP+ ports (one per FI) that can be used to connect up to 4 Cisco C-series rack servers. All aspects of the infrastructure (compute, network, storage) can be expanded to support higher scale – these options will be covered later in this document.
The platform will be managed locally using Cisco UCS Manager, VMware vCenter and Nimble Management software. The storage array will also be remotely monitored from the cloud using Nimble InfoSight™ to provide insights into I/O and capacity usage, trend analysis for capacity planning and expansion, and for pro-active ticketing and notification when issues occur.
A detailed topology of the SmartStack architecture used in validation is shown in the figure below.
Figure 9 Physical Topology
Each UCS blade was deployed using iSCSI SAN boot. Each UCS blade used a single Service Profile level IQN for all connections. Each blade had a boot volume created on the Nimble Storage array. The Nimble Storage array provides an initiator group to only honor connections from this single service profile. During iSCSI SAN boot connectivity, the blade connects to both a primary and secondary target. This provides for normal boot operations even when the primary path is offline. The host software utilized MPIO and the Nimble Connection Manager assisted with iSCSI session and path management. Also the VMware hosts in question were deployed in a cluster to allow for HA failover and to avoid a single point of failure at the hypervisor layer.
Each Cisco UCS server running ESXi 6.0 was deployed using the Cisco UCS VIC 1340 network adapter. At the server level, each Cisco VIC presents multiple vPCIe devices to the ESXi node, which vSphere identifies as vmnics. ESXi is unaware that these NICs are virtual adapters. In the SmartStack design, the following virtual NICs were used:
· One vNIC carries isolated iSCSI-A traffic to FI-A
· One vNIC carries isolated iSCSI-B traffic to FI-B
· One vNIC carries data traffic and in-band management traffic
The SmartStack solution was configured for jumbo frames with an MTU of 9000 bytes on the storage network links between the Cisco UCS Mini and the Nimble CS300 array. Although storage traffic does not traverse the Cisco Nexus switching layer, those links carry VMware vMotion traffic and were therefore also configured for jumbo frames. A jumbo MTU allows larger frames to be sent and received on the wire, which reduces CPU load and makes more efficient use of the available resources. Jumbo frames were enabled at the NIC and virtual switch level.
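Because a mismatched MTU is a common source of problems in designs like this one, it is useful to confirm the setting on every host. The sketch below is a hedged illustration that uses pyVmomi to report the MTU configured on each host's standard vSwitches and VMkernel interfaces; the vCenter address and credentials are placeholders.

```python
# Hedged sketch: reporting vSwitch and VMkernel NIC MTU values on all ESXi hosts via
# pyVmomi. The vCenter address and credentials are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def report_mtu(vcenter, user, password, expected=9000):
    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            net = host.configManager.networkSystem.networkInfo
            for vsw in net.vswitch:
                ok = "OK" if vsw.mtu == expected else "MISMATCH"
                print(host.name, vsw.name, "vSwitch MTU:", vsw.mtu, ok)
            for vnic in net.vnic:
                ok = "OK" if vnic.spec.mtu == expected else "MISMATCH"
                print(host.name, vnic.device, "vmk MTU:", vnic.spec.mtu, ok)
    finally:
        Disconnect(si)

if __name__ == "__main__":
    report_mtu("vcenter.example.com", "administrator@vsphere.local", "password")
```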
A separate out-of-band management network was used for configuring and managing compute, storage and network infrastructure components in the solution. Management ports on each Nimble CS300 controller and Cisco UCS Mini FI were physically connected to a separate dedicated management switch. Management ports on Cisco Nexus 9372PXs were also connected into the same management switch.
Access to vCenter and the ESXi hosts was in-band. If out-of-band access to these components is required, additional ports on the Cisco UCS 6324 Fabric Interconnects are needed. A disjoint Layer 2 configuration can then be used to keep the management and data plane networks completely separate. This would require two additional vNICs (for example, OOB-Mgmt-A, OOB-Mgmt-B) on each server, which are then associated with the management uplink ports.
The SmartStack platform was designed for maximum availability of the complete infrastructure (compute, network, storage, and virtualization) with no single points of failure.
Compute and Virtualization
· Cisco UCS Mini is highly redundant, with redundant power supplies and redundant Cisco UCS 6324 Fabric Interconnects.
· NIC failover between Cisco UCS Fabric Interconnects enabled through Cisco UCS Manager. This is done for all management and virtual machine vNICs.
· VMware vCenter deployed with VMware HA and DRS enabled clusters
· Cisco UCS B200 M4 servers were deployed in an N+1 configuration in all management and application VM clusters to provide backup in the event of an ESXi host failure – up to one host failure per cluster is supported.
· VMware vMotion and VMware HA were enabled to automatically restart VMs in the event of a host/server failure in the cluster
· Host Monitoring was enabled to monitor the heartbeats of all ESXi hosts in the cluster, ensuring quick detection in the event of an ESXi host failure
· Admission Control was enabled to ensure the cluster has enough resources to accommodate a single host failure (a verification sketch follows this list)
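The cluster-level settings listed above can be confirmed from vCenter. As a hedged illustration, the sketch below uses pyVmomi to report the HA, host monitoring, admission control, and DRS settings of each cluster; the vCenter address and credentials are placeholders.

```python
# Hedged sketch: reporting HA/DRS/admission-control settings for each vSphere cluster
# via pyVmomi. The vCenter address and credentials are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def report_clusters(vcenter, user, password):
    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            cfg = cluster.configurationEx
            print(cluster.name,
                  "| HA:", cfg.dasConfig.enabled,
                  "| Host monitoring:", cfg.dasConfig.hostMonitoring,
                  "| Admission control:", cfg.dasConfig.admissionControlEnabled,
                  "| DRS:", cfg.drsConfig.enabled)
    finally:
        Disconnect(si)

if __name__ == "__main__":
    report_clusters("vcenter.example.com", "administrator@vsphere.local", "password")
```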
Network
· Virtual PortChannel (vPC) or link-aggregation capabilities of the Cisco Nexus 9000 family of switches were used for the network connectivity between Cisco UCS Mini and the customer's network. vPC provides Layer 2 multipathing by allowing multiple parallel paths between nodes, with load balancing that results in increased bandwidth and redundancy. A vPC-based architecture is therefore highly resilient and robust, and scales the available Layer 2 bandwidth by using all available links. Technical benefits of vPC include:
— Allows a single device to use a PortChannel across two upstream devices
— Eliminates Spanning Tree Protocol blocked ports
— Provides a loop-free topology
— Uses all available uplink bandwidth
— Provides fast convergence if either the link or a device fails
— Provides link-level resiliency
— Helps ensure high availability
· The following vPC and Nexus best practices were implemented:
— Spanning tree port type ‘edge trunk’ used on ports connected to hosts on Cisco UCS Mini
— Spanning tree Bridge Protocol Data Unit (BPDU) Guard and filter enabled on ‘edge trunk’ ports
— All criteria for vPC consistency checks implemented
— Link Aggregation Control Protocol (LACP) used on all vPC PortChannels
— Unique vPC Domain ID used (same on both peers) with a lower ‘role priority’ on the primary switch
— Same vPC ID and PortChannel ID used for ease of configuration and troubleshooting
— vPC IP Address Resolution Protocol (ARP) synchronization enabled on all vPC peers
— vPC auto-recovery feature enabled
— Bridge Assurance only on vPC Peer links – enabled by default
— Loopguard - disabled by default
— CDP enabled for infrastructure visibility and ease of troubleshooting
Storage
· Nimble CS300 was redundantly deployed with redundant storage controllers, redundant power supplies, redundant cabling and paths to each controller
· Dual Subnet configuration for storage connectivity to Cisco UCS Mini Fabric Interconnects
· Ten Gigabit Ethernet data path connectivity from each controller to each Fabric Interconnect.
· Backup and Recovery capability via snapshots
Compute
The SmartStack design can support a maximum of eight half-width blade servers within the Cisco UCS 5108 chassis. Additionally, the Cisco UCS Mini can support up to four Cisco UCS rack mount servers using the 40 GbE Enhanced Quad SFP (QSFP+) ports on the Cisco UCS 6324 Fabric Interconnects. The figure below shows the maximum compute scale possible in a SmartStack design with direct attached storage. Assuming that two ports on each Cisco UCS 6324 Fabric Interconnect are used for inbound and outbound traffic to the existing network infrastructure, the maximum compute scale possible with this design is twelve servers: eight half-width Cisco UCS B-Series blade servers and four Cisco UCS C-Series rack mount servers.
Figure 10 SmartStack Compute Scale with Cisco UCS Mini and Direct Attached Storage
Storage
The SmartStack design provides flexible storage scaling with the Nimble Storage CS300 array. Storage can be scaled deep by adding additional disk shelves. Additional caching capability can be added either by upgrading the capacity of the four SSDs or by adding an All-Flash Shelf (AFS). Storage compute can be scaled either by upgrading the controllers or by adding storage arrays to scale out. Additional 10GbE network cards can be added for more network throughput. All of these storage options can be accomplished non-disruptively.
A high-level summary of the validation done for the SmartStack design is provided in this section. The solution was validated for basic data forwarding by deploying virtual machines running the IOMeter tool. The system was validated for resiliency by failing various aspects of the system under load. Examples of the types of tests executed are as follows:
· Failure and recovery of ESXi hosts in a cluster
· Rebooting of hosts
· Failure and recovery of redundant links to Nimble controller
· Failure and recovery of SSD and spinning disks
· Managed failure and recovery of backup and active Nimble controllers from Nimble management interface
· Failure and recovery of backup and active Nimble controllers from Nimble management interface – simulating a hard failure
· Upgrading Nimble OS while system is actively processing I/O
Load was generated using the IOMeter tool, and different I/O profiles were used to reflect the profiles seen in customer networks. See the table below for the profiles used.
Table 2 Traffic Profiles
I/O Profile 1 | 8K size, Random, 75% Read, 25% Write, 16 Outstanding IOs
I/O Profile 2 | 4K size, Sequential, 100% Read
I/O Profile 3 | 8K size, Random, 50% Read, 50% Write
The table below is a summary of all the components used for validating the SmartStack design. The following links should be consulted before deploying SmartStack.
· Cisco UCS Hardware and Software Interoperability: http://www.cisco.com/web/techdoc/ucs/interoperability/matrix/matrix.html
· VMware Compatibility Guide:
http://www.vmware.com/resources/compatibility/search.php
· Nimble Support Matrix:
https://infosightweb.nimblestorage.com/InfoSight/cgi-bin/viewPDFFile?ID=array/pubs_support_matrix_2_3_rev_f.pdf
Table 3 Infrastructure Components and Software Revisions
Infrastructure Stack | Component | Software Revision | Details
Compute | Cisco UCS Mini - 5108 | N/A | Blade Server Chassis
Compute | Cisco UCS B200 M4 | 3.0.2(c) | Server/Host on Cisco UCS Mini
Compute | Cisco UCS FI 6324 | 3.0.2(c) | Embedded Fabric Interconnect on Cisco UCS Mini
Compute | Cisco UCS Manager | 3.0.2(c) | Embedded Management
Compute | Cisco eNIC Driver | 2.1.2.69 | Ethernet driver for Cisco VIC
Network | Cisco Nexus 9372PX | 6.1(2)I3(4a) | Optional
Storage | Nimble CS300 Array | 2.3.7 | Build: 2.3.7-280146-opt
Storage | Nimble NCM for ESXi | 2.3.1 | Build: 2.3.1-600006
Virtualization | VMware vSphere | 6.0 | Build: 6.0.0-2494585
Virtualization | VMware vCenter Server Appliance | 6.0.0 U1 | Build: 6.0.0.10000-3018521
Virtualization | Microsoft SQL Server | 2012 SP1 | VMware vCenter Database
A detailed view of the configuration on the Nimble storage array is shown below.
Figure 11 Nimble Storage Array Configuration
This is a reference Bill of Materials for SmartStack; it includes all major components but not everything needed in a SmartStack deployment. Also, though the quantity of servers shown in the list below is one, multiple hosts were deployed during testing for high availability and to support multiple clusters (for example, an infrastructure cluster and a VM cluster).
Table 4 Bill of Materials for SmartStack
Item | SKU | Description | Qty
1.0 | UCSB-B200-M4 | Cisco UCS B200 M4 w/o CPU, mem, drive bays, HDD, mezz | 1
1.1 | UCS-CPU-E52620D | 2.40 GHz E5-2620 v3/85W 6C/15MB Cache/DDR4 1866MHz | 2
1.2 | UCS-MR-1X162RU-A | 16GB DDR4-2133-MHz RDIMM/PC4-17000/dual rank/x4/1.2v | 8
1.3 | UCSB-MLOM-40G-03 | Cisco UCS VIC 1340 modular LOM for blade servers | 1
1.4 | UCSB-HS-EP-M4-F | CPU Heat Sink for Cisco UCS B200 M4/B420 M4 (Front) | 1
1.5 | UCSB-HS-EP-M4-R | CPU Heat Sink for Cisco UCS B200 M4/B420 M4 (Rear) | 1
1.6 | UCSB-LSTOR-BK | FlexStorage blanking panels w/o controller, w/o drive bays | 2
1.7 | C1UCS-OPT-OUT | Cisco ONE Data Center Compute Opt Out Option | 1
2.0 | UCSB-5108-AC2 | Cisco UCS 5108 Blade Server AC2 Chassis, 0 PSU/8 fans/0 FEX | 1
2.1 | N01-UAC1 | Single phase AC power module for Cisco UCS 5108 | 1
2.2 | N20-FAN5 | Fan module for Cisco UCS 5108 | 8
2.3 | N20-CBLKB1 | Blade slot blanking panel for Cisco UCS 5108/single slot | 7
2.4 | N20-CAK | Accessory kit for Cisco UCS 5108 Blade Server Chassis | 1
2.5 | N20-FW013 | Cisco UCS Blade Server Chassis FW Package 3.0 | 1
2.6 | UCSB-5108-PKG-HW | Cisco UCS 5108 Packaging for chassis with half width blades | 1
2.7 | UCSB-PSU-2500ACDV | 2500W Platinum AC Hot Plug Power Supply - DV | 4
2.8 | CAB-C19-CBN | Cabinet Jumper Power Cord, 250 VAC 16A, C20-C19 Connectors | 4
2.9 | UCS-FI-M-6324 | Cisco UCS 6324 In-Chassis FI with 4 UP, 1x40G Exp Port, 16 10Gb | 2
2.10 | N10-MGT013 | Cisco UCS Manager 3.0 for 6324 | 2
2.11 | SFP-H10GB-CU1M | 10GBASE-CU SFP+ Cable 1 Meter | 2
2.12 | QSFP-4SFP10G-CU3M | QSFP to 4xSFP10G Passive Copper Splitter Cable, 3m | 2
3.0 | UCSC-C220-M4S | Cisco UCS C220 M4 SFF w/o CPU, mem, HD, PCIe, PSU, rail kit | 1
3.1 | UCS-CPU-E52620D | 2.40 GHz E5-2620 v3/85W 6C/15MB Cache/DDR4 1866MHz | 2
3.2 | UCS-MR-1X162RU-A | 16GB DDR4-2133-MHz RDIMM/PC4-17000/dual rank/x4/1.2v | 8
3.3 | UCSC-PCIE-CSC-02 | Cisco VIC 1225 Dual Port 10Gb SFP+ CNA | 1
3.4 | UCSC-RAIL-NONE | No rail kit option | 1
3.5 | UCSC-PSU1-770W | 770W AC Hot-Plug Power Supply for 1U C-Series Rack Server | 2
3.6 | CAB-9K12A-NA | Power Cord, 125VAC 13A NEMA 5-15 Plug, North America | 2
3.7 | UCSC-HS-C220M4 | Heat sink for Cisco UCS C220 M4 rack servers | 2
3.8 | UCSC-MLOM-BLK | MLOM Blanking Panel | 1
3.9 | N20-BBLKD | Cisco UCS 2.5 inch HDD blanking panel | 8
3.10 | C1UCS-OPT-OUT | Cisco ONE Data Center Compute Opt Out Option | 1
4.0 | Nimble CS300 | Adaptive Flash Storage Array | 1
SmartStack with Cisco UCS Mini and Nimble CS300 delivers an infrastructure platform in a compelling new form factor to address the business objectives and IT needs of small and medium businesses. Designed and validated with best practices and high availability, SmartStack can reduce deployment time, project risk and IT costs while maintaining scalability and flexibility for addressing a multitude of IT initiatives.
Archana Sharma, Technical Leader, Cisco UCS Data Center Solutions Engineering, Cisco Systems
Archana Sharma has 20 years of experience at Cisco focused on Data Center, Desktop Virtualization, Collaboration and related technologies. Archana has been working on Enterprise and Service Provider systems and solutions and delivering Cisco Validated designs for over 10 years. Archana holds a CCIE (#3080) in Routing and Switching and a Bachelor’s degree in Electrical Engineering from North Carolina State University.
Steve Sexton, Technical Marketing Engineer, Nimble Storage
Steve Sexton has over 15 years of experience in both the Network and Storage industries. For the last five years he has been focused on Cisco UCS technologies and integration efforts. He holds Bachelor’s and Master’s degrees in Industrial Technology from Appalachian State University.
For their support and contribution to the design, validation, and creation of this Cisco Validated Design, the authors would like to thank:
· Chris O'Brien, Manager, UCS Solutions Technical Marketing Team, Cisco Systems
· Ashish Prakash, VP Solutions Engineering, Nimble Storage
· Radhika Krishnan, VP Product Marketing and Alliances, Nimble Storage
· Matt Miller, Senior Product Marketing Manager, Nimble Storage