Cisco UCS Integrated Infrastructure for SAP HANA
Design and Deployment of Cisco UCS Server with MapR
Converged Data Platform
Last Updated: June 9, 2016
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, visit:
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2016 Cisco Systems, Inc. All rights reserved.
Table of Contents
Cisco Unified Computing System
Cisco UCS 6332UP Fabric Interconnect
Cisco UCS 2304XP Fabric Extender
Cisco UCS B460 M4 Blade Server
Cisco I/O Adapters for Blade and Rack-Mount Servers
Cisco VIC 1380 Virtual Interface Card
Cisco VIC 1385 Virtual Interface Card
Cisco Unified Computing System Performance Manager
Hardware Requirements for the SAP HANA Database
Deployment Hardware and Software
Cisco Nexus 3500 Series Switch Network Configuration
Cisco Nexus 3548 Initial Configuration
Enable Appropriate Cisco Nexus 3548 Switch Features and Settings
Create Global Policy to Enable Jumbo Frame and Apply the Policy to System Wide
Create VLANs for SAP HANA Traffic
Configure Network Interfaces Connecting to Cisco UCS Fabric Interconnect
Cisco Nexus 9000 Series Switch Network Configuration
Cisco Nexus 9000 A Initial Configuration
Cisco Nexus 9000 B Initial Configuration
Enable Appropriate Cisco Nexus 9000 Series Switches Features and Settings
Cisco Nexus 9000 A and Cisco Nexus 9000 B
Create VLANs for SAP HANA Traffic
Cisco Nexus 9000 A and Cisco Nexus 9000 B
Configure Virtual Port-Channel Domain
Configure Network Interfaces for the VPC Peer Links
Configure Network Interfaces with Cisco UCS Fabric Interconnect
(Optional) Configure Network Interfaces for SAP HANA Backup/Data Source/Replication
(Optional) Management Plane Access for Cisco UCS Servers
Uplink into Existing Network Infrastructure
Initial Setup of Cisco UCS 6332 Fabric Interconnect
Cisco UCS 6332 Fabric Interconnect A
Cisco UCS 6332 Fabric Interconnect B
Upgrade Cisco UCS Manager Software to Version 3.1(1e)
Add Block of IP Addresses for KVM Access
Cisco UCS Blade Chassis Connection Options
Enable Server and Uplink Ports
Acknowledge Cisco UCS Chassis and Rack-Mount Servers
Create Local Disk Configuration Policy
Update Default Maintenance Policy
Set Jumbo Frames in Cisco UCS Fabric
Update Default Network Control Policy to Enable CDP
Create Network Control Policy for Internal Network
Create IP Pool for Inband Management
Create Service Profile Templates for SAP HANA Scale-Out Servers
Create Service Profile from the Template
Create Service Profile Templates for MapR Servers
MapR Server RAID Configuration
MapR Server Operating System Installation
Operating System Network Configuration
RedHat System Update and OS Customization for MapR Servers
RedHat Operating System Installation
Operating System Network Configuration
RedHat System Update and OS Customization for SAP HANA
High Availability (HA) Configuration for Scale-Out
High Availability Configuration
Enable the SAP HANA Storage Connector API
Organizations in every industry are generating and using more data than ever before, from customer transactions and supplier delivery considerations to real-time user-consumption statistics. Without scalable infrastructure that can store, process, and analyze big data sets in real time, companies are unable to use this information to their advantage. The Cisco UCS Integrated Infrastructure for SAP HANA Scale-Out with the Cisco Unified Computing System™ (Cisco UCS) helps companies more easily harness information and make better business decisions that let them stay ahead of the competition. Our solutions help improve access to all of your data; accelerate business decision making with policy-based, simplified management; lower deployment risk; and reduce total cost of ownership (TCO). Our innovations give you the key to unlock the intelligence in your data and interpret it with a new dimension of context and insight to help you create a sustainable, competitive business advantage.
Cisco Validated Designs include systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of customers. The Cisco UCS Integrated Infrastructure for SAP HANA with MapR Converged Data Platform provides an end-to-end architecture that demonstrates support for multiple SAP HANA workloads with high availability and server redundancy. The solution consists of Cisco UCS B460 M4 B-Series blade servers as compute nodes for SAP HANA and Cisco UCS C240 M4 C-Series rack-mount servers as storage nodes. The next-generation Cisco UCS 6332 Fabric Interconnect provides 40 Gb Ethernet for server management and storage connectivity. Cisco UCS service profiles enable rapid and consistent server configuration, and automation simplifies ongoing system maintenance activities such as deploying firmware updates across the entire cluster as a single operation. Advanced monitoring capabilities raise alarms and send notifications about the health of the entire cluster so that you can proactively address concerns before they affect data analysis. The storage nodes are composed of Cisco UCS servers with the MapR Converged Data Platform, which provides MapR-FS, a modern NFS-mountable distributed file system. MapR-FS is a complete POSIX file system that handles raw disk I/O for big data workloads with direct access to the storage hardware, dramatically improving performance, and it scales to thousands of nodes and trillions of files with extremely high throughput. MapR-FS includes enterprise-grade features such as block-level mirroring for mission-critical disaster recovery, as well as load balancing and consistent snapshots for easy data recovery.
Cisco UCS Integrated Infrastructure provides a pre-validated, ready-to-deploy infrastructure, which reduces the time and complexity involved in configuring and validating a traditional data center deployment. The Cisco UCS platform is flexible, reliable, and cost-effective, facilitating various deployment options for applications while being easily scalable and manageable. The reference architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of a SAP HANA solution. This document describes the infrastructure installation and configuration required to run SAP HANA on a dedicated or shared infrastructure.
SAP HANA is SAP SE’s implementation of in-memory database technology. The SAP HANA database takes advantage of low-cost main memory (RAM), the data-processing capabilities of multicore processors, and faster data access to provide better performance for analytical and transactional applications. SAP HANA offers a multi-engine, query-processing environment that supports relational data (with both row- and column-oriented physical representations in a hybrid engine) as well as graph and text processing for semi-structured and unstructured data management within the same system. As an appliance, SAP HANA combines software components from SAP optimized for certified hardware; this appliance option has a preconfigured hardware setup and preinstalled software package that is dedicated to SAP HANA. In 2013, SAP introduced the SAP HANA Tailored Datacenter Integration (TDI) option; TDI offers a more open and flexible way of integrating SAP HANA into the data center by reusing existing enterprise storage hardware, thereby reducing hardware costs. With the introduction of SAP HANA TDI for shared infrastructure, the Cisco UCS Integrated Infrastructure solution provides the advantage of having the compute, storage, and network stack integrated with the programmability of the Cisco Unified Computing System (Cisco UCS). The SAP HANA TDI option enables organizations to run multiple SAP HANA production systems on a shared infrastructure. It also enables customers to run SAP application servers and the SAP HANA database hosted on the same infrastructure.
For more information about SAP HANA, see the SAP help portal: http://help.sap.com/hana/
The intended audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers deploying the Cisco Integrated Infrastructure for SAP HANA with MapR Converged Data Platform. External references are provided wherever applicable, but readers are expected to be familiar with the technology, infrastructure, and database security policies of the customer installation.
This document describes the steps required to deploy and configure a Cisco Datacenter Solution for SAP HANA. Cisco’s validation provides further confirmation with regard to component compatibility, connectivity and correct operation of the entire integrated stack. This document showcases one of the variants of Cisco Integrated Infrastructure for SAP HANA. While readers of this document are expected to have sufficient knowledge to install and configure the products used, configuration details that are important to the deployment of this solution are provided in this CVD.
The Cisco UCS Integrated Infrastructure for SAP HANA solution is designed with the next-generation Cisco UCS 6332 Fabric Interconnect, which provides 40GbE ports. The solution uses a 40GbE end-to-end network, including the storage network. The persistent storage is configured on Cisco UCS C240 M4 C-Series servers with the MapR Converged Data Platform. MapR-FS provides distributed, reliable, high-performance, scalable, full read/write data storage for SAP HANA.
The Cisco UCS Integrated Infrastructure for SAP HANA with MapR Converged Data Platform provides an end-to-end architecture with Cisco hardware that demonstrates support for multiple SAP HANA workloads with high availability and server redundancy. The solution supports up to 16 x Cisco UCS B460 M4 B-Series blade servers for SAP HANA and up to 8 x Cisco UCS C240 M4 C-Series rack-mount servers for storage. The next-generation Cisco UCS 6332 Fabric Interconnect provides 40 GbE network bandwidth for server management and network connectivity. The Cisco UCS C240 M4 servers provide persistent storage with the MapR Converged Data Platform, which is a modern NFS-mountable distributed file system. The Cisco Nexus 3000 series Ethernet switch is used for failover purposes: in case of a Cisco UCS 2304 Fabric Extender failure or a link failure between a server and a Fabric Interconnect, the data traffic path will use the Nexus 3000 series switch for high availability and redundancy. Figure 1 shows the Cisco UCS Integrated Infrastructure for SAP HANA block diagram.
The reference architecture documented in the CVD consists of 8 x Cisco UCS B460-M4 B-series blade servers for SAP HANA and 4 x Cisco UCS C240-M4 C-Series rack mount servers for storage.
Figure 1 Cisco UCS Integrated Infrastructure for SAP HANA
The solution can alternatively be designed with a pair of Cisco Nexus 9000 series switches instead of a single Cisco Nexus 3000 series switch. The two Cisco Nexus 9000 series switches are configured with a virtual port channel (vPC) between the network switches and the Cisco UCS Fabric Interconnects, as shown in Figure 2.
Figure 2 Cisco UCS Integrated Infrastructure for SAP HANA with pair of Nexus Switches
The Cisco Unified Computing System is a state-of-the-art data center platform that unites computing, network, storage access, and virtualization into a single cohesive system.
The main components of the Cisco Unified Computing System are:
· Computing - The system is based on an entirely new class of computing system that incorporates rack mount and blade servers based on Intel Xeon Processor E5 and E7. The Cisco UCS Servers offer the patented Cisco Extended Memory Technology to support applications with large datasets and allow more virtual machines per server.
· Network - The system is integrated onto a low-latency, lossless, 40-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
· Virtualization - The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
· Storage access - The system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. By unifying the storage access, the Cisco Unified Computing System can access storage over Ethernet (NFS or iSCSI) and Fibre Channel over Ethernet (FCoE). This provides customers with choice for storage access and investment protection. In addition, the server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity, and management for increased productivity.
The Cisco Unified Computing System is designed to deliver:
· A reduced Total Cost of Ownership (TCO) and increased business agility.
· Increased IT staff productivity through just-in-time provisioning and mobility support.
· A cohesive, integrated system, which unifies the technology in the data center.
· Industry standards supported by a partner ecosystem of industry leaders.
Cisco Unified Computing System Manager (UCSM) provides unified, embedded, policy-driven management that programmatically controls server, network, and storage resources, so they can be efficiently managed at scale through software. It is embedded on a pair of Cisco UCS 6300 or 6200 Series Fabric Interconnects, supporting an end-to-end Ethernet or Fibre Channel over Ethernet (FCoE) solution of up to 40 Gb and up to 16 Gb Fibre Channel. The manager participates in server, fabric, and storage provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection. Cisco UCS Manager offers HTML 5 and Java graphical user interfaces as well as a CLI. An open API facilitates integration of Cisco UCS Manager with a wide variety of monitoring, analysis, configuration, deployment, and orchestration tools from other independent software vendors. The API also facilitates customer development through the Cisco UCS PowerTool for PowerShell and a Python SDK.
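As a hedged illustration of the open XML API (not a required deployment step), the following minimal sketch authenticates against Cisco UCS Manager with curl; the management IP address and credentials are placeholders, and the returned cookie would be passed to subsequent query calls.

# Log in to the Cisco UCS Manager XML API (endpoint /nuova); placeholder values shown.
curl -sk https://<ucsm-vip>/nuova -d '<aaaLogin inName="admin" inPassword="password" />'
# The response carries an outCookie value that later queries must present, for example:
curl -sk https://<ucsm-vip>/nuova -d '<configResolveClass cookie="<outCookie>" classId="computeBlade" />'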
The Cisco UCS 6300 Series Fabric Interconnects are a core part of Cisco UCS, providing both network connectivity and management capabilities for the system. The Cisco UCS 6300 Series offers line-rate, low-latency, lossless 10 and 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. The Cisco UCS 6300 Series provides the management and communication backbone for the Cisco UCS B-Series Blade Servers, 5100 Series Blade Server Chassis, and C-Series Rack Servers managed by Cisco UCS. All servers attached to the fabric interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6300 Series provides both LAN and SAN connectivity for all servers within its domain.
From a networking perspective, the Cisco UCS 6300 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 and 40 Gigabit Ethernet ports, switching capacity of 2.56 terabits per second (Tbps), and 320 Gbps of bandwidth per chassis, independent of packet size and enabled services. The product family supports Cisco® low-latency, lossless 10 and 40 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnect supports multiple traffic classes over a lossless Ethernet fabric from the server through the fabric interconnect. Significant TCO savings can be achieved with an FCoE optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.
The Cisco UCS 6332 Fabric Interconnect is the management and communication backbone for Cisco UCS B-Series Blade Servers, C-Series Rack Servers, and 5100 Series Blade Server Chassis. All servers attached to 6332 Fabric Interconnects become part of one highly available management domain. The Cisco UCS 6332UP 32-Port Fabric Interconnect is a 1-rack-unit 40 Gigabit Ethernet, FCoE, and Fibre Channel switch offering up to 2.56 Tbps throughput and up to 32 ports. The switch has 32 fixed 40-Gbps Ethernet and FCoE ports. The Cisco UCS 6332UP 32-Port Fabric Interconnect has ports that can be configured for the breakout feature, which supports connectivity between 40 Gigabit Ethernet ports and 10 Gigabit Ethernet ports. This feature provides backward compatibility to existing hardware that supports 10 Gigabit Ethernet: a 40 Gigabit Ethernet port can be used as four 10 Gigabit Ethernet ports. Using a QSFP-to-SFP+ breakout cable, these ports on a Cisco UCS 6300 Series Fabric Interconnect can connect to equipment that provides four 10 Gigabit Ethernet SFP+ ports.
Figure 3 Cisco UCS 6332 UP Fabric Interconnect
The Cisco UCS 2304 Fabric Extender has four 40 Gigabit Ethernet, FCoE-capable, Quad Small Form-Factor Pluggable (QSFP+) ports that connect the blade chassis to the fabric interconnect. Each Cisco UCS 2304 has four 40 Gigabit Ethernet ports connected through the midplane to each half-width slot in the chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 320 Gbps of I/O to the chassis.
Figure 4 Cisco UCS 2304 XP
The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server.
The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors. Four hot-swappable power supplies are accessible from the front of the chassis, and single-phase AC, –48V DC, and 200 to 380V DC power supplies and chassis are available. These power supplies are up to 94 percent efficient and meet the requirements for the 80 Plus Platinum rating. The power subsystem can be configured to support nonredundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays that can support either Cisco UCS 2000 Series Fabric Extenders or the Cisco UCS 6324 Fabric Interconnect. A passive midplane provides up to 80 Gbps of I/O bandwidth per server slot and up to 160 Gbps of I/O bandwidth for two slots. The Cisco UCS Blade Server Chassis is shown in Figure 5.
Figure 5 Cisco Blade Server Chassis (front and back view)
The Cisco UCS C240 M4 Rack Server is an enterprise-class server designed to deliver exceptional performance, expandability, and efficiency for storage and I/O-intensive infrastructure workloads. This includes big data analytics, virtualization, and graphics-rich and bare-metal applications.
The Cisco UCS C240 M4 Rack Server delivers outstanding levels of expandability and performance for standalone or Cisco UCS-managed environments in a two rack-unit (2RU) form factor. It provides:
· Dual Intel® Xeon® E5-2600 v3 processors for improved performance suitable for nearly all two-socket applications
· Next-generation double-data-rate 4 (DDR4) memory, 12-Gbps SAS throughput, and NVMe PCIe SSD support
· Innovative Cisco UCS virtual interface card (VIC) support in PCIe or modular LAN-on-motherboard (mLOM) form factor
· Graphics-rich experiences to more virtual users with support for the latest NVIDIA graphics processing units (GPUs)
The Cisco UCS C240 M4 server also offers maximum reliability, availability, and serviceability (RAS) features, including:
· Tool-free CPU insertion
· Easy-to-use latching lid
· Hot-swappable and hot-pluggable components
· Redundant Cisco Flexible Flash SD cards
The Cisco UCS C240 M4 server can be deployed standalone or as part of the Cisco Unified Computing System. Cisco UCS unifies computing, networking, management, virtualization, and storage access into a single integrated architecture that can enable end-to-end server visibility, management, and control in both bare-metal and virtualized environments. With Cisco UCS-managed deployment, Cisco UCS C240 M4 takes advantage of our standards-based unified computing innovations to significantly reduce customers’ TCO and increase business agility.
Figure 6 Cisco UCS C240 M4 Rack Server
The Cisco UCS B460 M4 Blade Server provides outstanding performance and enterprise-critical stability for memory-intensive workloads such as large-scale databases, in-memory analytics, and business intelligence. The Cisco UCS B460 M4 harnesses the power of four processors from the Intel® Xeon® processor E7 v3 product family and accelerates access to critical data. This blade server supports up to 72 processor cores, 6.0 TB of memory, 4.8 TB of internal storage, and 320 Gbps of overall Ethernet throughput.
The Cisco UCS B460 M4 provides:
· Four Intel® Xeon® processor E7 v3 product families
· 96 DDR3 memory DIMM slots
· Four hot-pluggable drive bays for Hard Disk Drives (HDDs) or Solid State Disks (SSDs)
· SAS controller on board with RAID 0 and 1 support
· Two modular LAN on motherboard (mLOM) slots for Cisco UCS Virtual Interface Card (VIC)
· Six PCIe mezzanine slots, with two dedicated for optional Cisco UCS VIC 1340, and four slots for Cisco UCS VIC 1380, VIC port expander, or flash cards
The Cisco UCS B460 M4 server is powered by the scalable performance and reliability of the Intel Xeon E7 v3 processor product family. It is designed to meet the needs of computing-intensive and memory-intensive enterprise-critical applications.
Figure 7 Cisco UCS B460 M4 Blade Server
This section discusses the Cisco I/O Adapters used in this solution.
The Cisco UCS blade server has various Converged Network Adapters (CNA) options.
The Cisco UCS Virtual Interface Card (VIC) 1340 is a 2-port 40-Gbps Ethernet or dual 4 x 10-Gbps Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) adapter designed exclusively for the M4 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1340 is enabled for two ports of 40-Gbps Ethernet.
Figure 8 Cisco UCS 1340 VIC Card
The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present over 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1340 supports Cisco® Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment and management.
Figure 9 Cisco UCS VIC 1340 Architecture
Figure 10 Cisco UCS VIC 1340 Architecture with Port Expander
The Cisco UCS Virtual Interface Card (VIC) 1380 is a dual-port 40-Gbps Ethernet or dual 4 x 10-Gbps Ethernet, Fibre Channel over Ethernet (FCoE)-capable mezzanine card designed exclusively for the M4 generation of Cisco UCS B-Series Blade Servers. The card enables a policy-based, stateless, agile server infrastructure that can present over 256 PCIe standards-compliant interfaces to the host, which can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1380 supports Cisco® Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment and management.
Figure 11 Cisco UCS 1380 VIC Card
Figure 12 Cisco UCS VIC 1380 Architecture
The Cisco UCS Virtual Interface Card (VIC) 1385 is a Cisco® innovation. It provides a policy-based, stateless, agile server infrastructure for your data center. This dual-port Enhanced Quad Small Form-Factor Pluggable (QSFP) half-height PCI Express (PCIe) card is designed exclusively for Cisco UCS C-Series Rack Servers. The card supports 40 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE). It incorporates Cisco’s next-generation converged network adapter (CNA) technology and offers a comprehensive feature set, providing investment protection for future feature software releases. The card can present more than 256 PCIe standards-compliant interfaces to the host, and these can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the VIC supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology. This technology extends the Cisco UCS Fabric Interconnect ports to virtual machines, simplifying server virtualization deployment.
Figure 13 Cisco UCS 1385 VIC Card
Cisco UCS Performance Manager is a purpose-built data center operations management solution. It unifies the monitoring of key applications, business services, and integrated infrastructures across dynamic, heterogeneous, physical, and virtual Cisco UCS-powered data centers. Cisco UCS Performance Manager uses Cisco UCS APIs to collect data from Cisco UCS Manager to display comprehensive information about all Cisco UCS infrastructure components. With a customizable view, data center staff can see application services and view performance and component or service availability information for Cisco UCS integrated infrastructures.
Cisco UCS Performance Manager dynamically collects information about Cisco UCS servers, network, storage, and virtual machine hosts using an agentless information-gathering approach. The solution provides the following:
· Unifies performance monitoring and management of Cisco UCS integrated infrastructure solutions
· Delivers real-time views of fabric and data center switch bandwidth usage and capacity thresholds
· Discovers and creates a relationship model of each system, giving staff a single, accurate view of all components
· Allows staff to navigate into individual Cisco UCS infrastructure components when troubleshooting and resolving issues
Cisco UCS Performance Manager provides deep visibility of Cisco UCS integrated infrastructure performance for service profiles, chassis, fabric extenders, adapters, virtual interface cards, ports, and uplinks for granular data center monitoring. Customers can use Cisco UCS Performance Manager to maintain service-level agreements (SLAs) by managing optimal resource allocation to prevent under-provisioning and avoid performance degradation. And by defining component or application-centric views of critical resources, administrators can monitor SLA health and performance from a single console, eliminating the need for multiple tools.
Cisco UCS Performance Manager Installation and Configuration guide is available at UCS Performance Manager Install Guide
Cisco’s Unified Computing System is revolutionizing the way servers are managed in the data center. The following are the unique differentiators of the Cisco Unified Computing System and Cisco UCS Manager:
· Embedded management: In the Cisco Unified Computing System, the servers are managed by the embedded firmware in the Fabric Interconnects, eliminating the need for any external physical or virtual devices to manage the servers. A pair of FIs can manage up to 40 chassis, each containing 8 blade servers, which gives enormous scaling on the management plane.
· Unified fabric: In the Cisco Unified Computing System, from the blade server chassis or rack server fabric extender to the FI, a single Ethernet cable is used for LAN, SAN, and management traffic. This converged I/O results in fewer cables, SFPs, and adapters, reducing the capital and operational expenses of the overall solution.
· Auto discovery: By simply inserting a blade server in the chassis or connecting a rack server to the fabric extender, discovery and inventory of the compute resource occurs automatically without any management intervention. The combination of unified fabric and auto-discovery enables the wire-once architecture of the Cisco Unified Computing System, where the compute capability of the system can be extended easily while keeping the existing external connectivity to LAN, SAN, and management networks.
· Policy based resource classification: When a compute resource is discovered by Cisco UCS Manager, it can be automatically classified to a given resource pool based on policies defined. This capability is useful in multi-tenant cloud computing. This CVD focuses on the policy-based resource classification of Cisco UCS Manager.
· Combined rack and blade server management: Cisco UCS Manager can manage B-Series blade servers and C-Series rack servers under the same Cisco UCS domain. This feature, along with stateless computing, makes compute resources truly hardware form-factor agnostic. This CVD focuses on the combination of B-Series and C-Series servers to demonstrate a stateless and form-factor-independent computing workload.
· Model-based management architecture: The Cisco UCS Manager architecture and management database is model based and data driven. An open, standards-based XML API is provided to operate on the management model. This enables easy and scalable integration of UCSM with other management systems, such as VMware vCloud Director, Microsoft System Center, and Citrix CloudPlatform.
· Policies, pools, templates: The management approach in UCSM is based on defining policies, pools, and templates instead of a cluttered configuration, which enables a simple, loosely coupled, data-driven approach to managing compute, network, and storage resources.
· Loose referential integrity: In Cisco UCS Manager, a service profile, port profile, or policy can refer to other policies or logical resources with loose referential integrity. A referred policy does not need to exist at the time of authoring the referring policy, and a referred policy can be deleted even though other policies are referring to it. This allows different subject matter experts to work independently of each other and provides great flexibility where experts from different domains, such as network, storage, security, server, and virtualization, work together to accomplish a complex task.
· Policy resolution: In Cisco UCS Manager, a tree structure of organizational unit hierarchy can be created that mimics real-life tenant and/or organizational relationships. Various policies, pools, and templates can be defined at different levels of the organization hierarchy. A policy referring to another policy by name is resolved in the organization hierarchy with the closest policy match. If no policy with the specific name is found in the hierarchy up to the root organization, the special policy named “default” is searched for. This policy resolution practice enables automation-friendly management APIs and provides great flexibility to owners of different organizations.
· Service profiles and stateless computing: A service profile is a logical representation of a server, carrying its various identities and policies. This logical server can be assigned to any physical compute resource as long as it meets the resource requirements. Stateless computing enables procurement of a server within minutes, which used to take days in legacy server management systems.
· Built-in multi-tenancy support: The combination of policies, pools, and templates, loose referential integrity, policy resolution in the organization hierarchy, and the service profile-based approach to compute resources makes Cisco UCS Manager inherently friendly to the multi-tenant environments typically observed in private and public clouds.
· Virtualization-aware network: VM-FEX technology makes the access layer of the network aware of host virtualization. This prevents pollution of the compute and network domains with virtualization constructs when the virtual network is managed by port profiles defined by the network administration team. VM-FEX also offloads the hypervisor CPU by performing switching in hardware, thus allowing the hypervisor CPU to do more virtualization-related tasks. VM-FEX technology is well integrated with VMware vCenter, Linux KVM, and Hyper-V SR-IOV to simplify cloud management.
· Simplified QoS: Even though fibre-channel and Ethernet are converged in Cisco UCS fabric, built-in support for QoS and lossless Ethernet makes it seamless. Network Quality of Service (QoS) is simplified in Cisco UCS Manager by representing all system classes in one GUI panel.
The MapR Converged Data Platform integrates Hadoop and Spark, real-time database capabilities, and global event streaming with big data enterprise storage, for developing and running innovative data applications. The MapR Platform is powered by the industry’s fastest, most reliable, secure, and open data infrastructure that dramatically lowers TCO and enables global real-time data applications.
MapR Platform Services is the set of core capabilities in the MapR Converged Data Platform. Core components include:
· MapR Streams - a global publish-subscribe event streaming system for big data
· MapR-DB - a high performance, in-Hadoop NoSQL database management system
· MapR-FS - the underlying POSIX file system that provides distributed, reliable, high performance, scalable, and full read/write data storage
The Cisco UCS C240 Appliance for SAP HANA takes advantage of the enterprise grade MapR-FS in MapR Platform Services. It offers the appliance a robust storage layer that is highly available, resilient and performant.
The MapR Converged Data Platform is backed by the robust MapR-FS, which can be accessed through the MapR NFS gateway. The SAP HANA servers mount MapR-FS using the NFS client, and data is persisted to the storage system managed by MapR-FS. MapR-FS is distributed, has a global namespace, provides real-time read/write access, is volume based and secure, and has many other benefits compared to HDFS. Some important features pertaining to data protection include:
Snapshots are an incredibly efficient and effective approach to protecting data from accidental deletion and corruption due to errors in applications, without the need to actually copy the data within a cluster or to another cluster. The ability to create and manage snapshots is an essential feature expected from enterprise-grade storage systems, and this capability is increasingly seen as critical with big data systems. A snapshot is a capture of the state of the storage system at an exact point in time and is used to provide a full recovery of data when data is lost. For example, with MapR you can snapshot petabytes of data in seconds, because MapR simply maintains pointers to the locations of the blocks that make up a volume of data, as opposed to physically copying those petabytes within the local cluster or to a remote one.
For additional information, please see the MapR Snapshots technical brief: MapR Snapshots.
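As a hedged illustration of the snapshot capability described above (the volume and snapshot names are assumed placeholders, not values required by this design), a point-in-time snapshot of a MapR volume can be taken from the command line:

# Create a named snapshot of an existing MapR volume (illustrative names).
maprcli volume snapshot create -volume hana_data -snapshotname hana_data_snap_01
# List snapshots in the cluster to verify.
maprcli volume snapshot list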
Data in MapR-FS can be mirrored synchronously inside the cluster or asynchronously to a separate cluster located at a remote location. Because the mirroring is block-based, only the changed blocks are mirrored across the WAN after the baseline mirroring happened. It is fast and light on network bandwidth utilization compared to file-based mirroring. Most importantly, the connection for mirroring is secure.
By default, MapR-FS uses a 3-way replication model to prevent data loss due to node failure. However, this approach reduces usable storage and increases the overall storage cost. The storage controller in the Cisco UCS C240 M4 offers excellent hardware-level RAID protection. By combining RAID 5 and lowering the replication factor to 2, we are able to offer an optimally designed appliance that has both high I/O throughput and a high usable-to-raw storage ratio.
For additional information, please see the MapR File System: MapR-FS.
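To make the preceding storage design concrete, the following is a minimal sketch assuming a MapR NFS gateway reachable at a virtual IP and HANA persistence volumes created with a replication factor of 2 on top of the RAID 5 protected disks; the cluster name, volume name, mount path, and IP address are illustrative placeholders rather than values prescribed by this CVD.

# On a MapR node: create a volume for HANA data with replication factor 2
# (the hardware RAID 5 on the Cisco UCS C240 M4 provides additional protection).
maprcli volume create -name hana_data -path /hana_data -replication 2 -minreplication 2

# On an SAP HANA node: mount the volume through the MapR NFS gateway (NFSv3).
mkdir -p /hana/data
mount -t nfs -o vers=3,hard,nolock 192.168.110.10:/mapr/hanacluster/hana_data /hana/data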
This section describes the SAP HANA system requirements defined by SAP and the architecture of the Cisco UCS Integrated Infrastructure for SAP HANA.
An SAP HANA system on a single server (scale-up) is the simplest of the installation types. It is possible to run an SAP HANA system entirely on one host and then scale the system up as needed. All data and processes are located on the same server and can be accessed locally. The network requirements for this option are minimal: one 1 Gigabit Ethernet (access) network and one 10 Gigabit Ethernet storage network are sufficient to run SAP HANA scale-up. The SAP HANA scale-out option is used if the SAP HANA system does not fit into the main memory of a single server based on the rules defined by SAP. In this method, multiple independent servers are combined to form one system and the load is distributed among multiple servers. In a distributed system, each index server is usually assigned to its own host to achieve maximum performance. It is possible to assign different tables to different hosts (partitioning the database), or a single table can be split across hosts (partitioning of tables). SAP HANA scale-out supports failover scenarios and high availability. Individual hosts in a distributed system have different roles (master, worker, slave, standby) depending on the task.
Some use cases are not supported in an SAP HANA scale-out configuration; it is recommended to check with SAP whether a given use case can be deployed as a scale-out solution.
The network requirements for this option are higher than for scale-up systems. In addition to the client and application access network and the storage access network, a node-to-node network is necessary. One 10 Gigabit Ethernet (access) network, one 10 Gigabit Ethernet (node-to-node) network, and one 10 Gigabit Ethernet storage network are required to run an SAP HANA scale-out system. Additional network bandwidth is required to support system replication or backup capability.
For additional information, go to: http://saphana.com.
This document does not cover the updated information published by SAP.
SAP HANA supports servers equipped with Intel Xeon processor E7-8880 v3, E7-8890 v3, E7-8890L v3 or Intel Xeon processor E7-2890v2, E7-4890v2, or E7-8890v2 CPUs. In addition, the Intel Xeon processor E5-26xx v3 is supported for scale-up systems with the SAP HANA TDI option.
The SAP HANA scale-out solution is supported in the following memory configurations:
· Homogenous symmetric assembly of dual in-line memory modules (DIMMs); for example, DIMM size or speed should not be mixed
· Maximum use of all available memory channels
· Memory of 512 GB to 1.5 TB per 4 Socket Server for SAP NetWeaver Business Warehouse (BW) and DataMart with HANA SPS 10 or below
· Memory of 512 GB to 2 TB per 4 Socket Server for SAP NetWeaver Business Warehouse (BW) and DataMart with HANA SPS 11
An SAP HANA data center deployment can range from a database running on a single host to a complex distributed system. Distributed systems can become complex, with multiple hosts at a primary site and one or more secondary sites, supporting a distributed multi-terabyte database with full fault and disaster recovery.
SAP HANA has different types of network communication channels to support the different SAP HANA scenarios and setups:
· Client zone: Channels used for external access to SAP HANA functions by end-user clients, administration clients, and application servers, and for data provisioning through SQL or HTTP
· Internal zone: Channels used for SAP HANA internal communication within the database or, in a distributed scenario, for communication between hosts
· Storage zone: Channels used for storage access (data persistence) and for backup and restore procedures
Table 1 lists all the networks defined by SAP or Cisco or requested by customers.
Table 1 List of Known Networks
| Name | Use Case | Solutions | Bandwidth Requirements | Solution Design |
| Client Zone Networks | | | | |
| Application Server Network | SAP Application Server to DB communication | All | 1 or 10 GbE | 10 or 40 GbE |
| Client Network | User / Client Application to DB communication | All | 1 or 10 GbE | 10 or 40 GbE |
| Data Source Network | Data import and external data integration | Optional for all SAP HANA systems | 1 or 10 GbE | 10 or 40 GbE |
| Internal Zone Networks | | | | |
| Inter-Node Network | Node to node communication within a scale-out configuration | Scale-Out | 10 GbE | 40 GbE |
| System Replication Network | SAP HANA System Replication | For SAP HANA Disaster Tolerance | TBD with Customer | TBD with Customer |
| Storage Zone Networks | | | | |
| Backup Network | Data Backup | Optional for all SAP HANA systems | 10 GbE | 10 or 40 GbE |
| Storage Network | Node to Storage communication | All | 10 GbE | 20 or 40 GbE |
| Infrastructure Related Networks | | | | |
| Administration Network | Infrastructure and SAP HANA administration | Optional for all SAP HANA systems | 1 GbE | 10 or 40 GbE |
| Boot Network | Boot the Operating Systems through PXE/NFS or FCoE | Optional for all SAP HANA systems | 1 GbE | NA |
Details about the network requirements for SAP HANA are available in the white paper from SAP SE at: SAP HANA Network requirement.
The networks need to be properly segmented and must be connected to the same core/backbone switch, as shown in Figure 14, based on the customer's high-availability and redundancy requirements for the different SAP HANA network segments.
Figure 14 High-Level SAP HANA Network Overview
Based on the listed network requirements, every server in a scale-up system must be equipped with two 10 Gigabit Ethernet interfaces to establish communication with the application or user (client zone) and one 10 GbE interface for storage access.
For scale-out solutions, an additional redundant 10 GbE network for SAP HANA node-to-node communication is required (internal zone).
For more information on SAP HANA network security, refer to the SAP HANA Security Guide.
As an in-memory database, SAP HANA uses storage devices to save a copy of the data for the purpose of startup and fault recovery without data loss. The choice of the specific storage technology is driven by various requirements such as size, performance, and high availability. To use a storage system with the Tailored Datacenter Integration option, the storage must be certified for the SAP HANA TDI option; see http://scn.sap.com/docs/DOC-48516.
All relevant information about storage requirements is documented in the white paper “SAP HANA Storage Requirements” and is available at: http://scn.sap.com/docs/DOC-62595.
SAP can only support performance-related SAP HANA topics if the installed solution has passed the validation test successfully.
Refer to the SAP HANA Administration Guide, section 2.8 "Hardware Checks for Tailored Datacenter Integration," for the hardware check test tool and the related documentation.
Figure 15 shows the file system layout and the required storage sizes to install and operate SAP HANA. For the Linux OS installation (root file system), 10 GB of disk space is recommended. Additionally, 50 GB must be provided for /usr/sap, since this volume is used for the SAP software that supports SAP HANA.
While installing SAP HANA on a host, we specify the mount point for the installation binaries (/hana/shared/<sid>), data files (/hana/data/<sid>) and log files (/hana/log/<sid>), where sid is the instance identifier of the SAP HANA installation.
Figure 15 File System Layout for 2 Node Scale-Out System
The storage sizing for the file systems is based on the amount of memory installed in the SAP HANA host.
In a distributed installation of SAP HANA scale-out, each server will have the following:
Root-FS: 10 GB
/usr/sap: 50 GB
The installation binaries, trace, and configuration files are stored on a shared file system, which must be accessible to all hosts in the distributed installation. The size of the shared file system should equal one times the memory of a host for every four worker nodes. For example, in a distributed installation with eight hosts with 2 TB of memory each, the shared file system should be 4 TB.
For each HANA host, there should be a mount point for the data volume and one for the log volume.
The size of the data volume file system for the appliance option is three times the host memory:
/hana/data/<sid>/mntXXXXX: 3 x Memory
The size of the data volume file system for the TDI option is one times the host memory:
/hana/data/<sid>/mntXXXXX: 1 x Memory
For solutions based on the Intel Xeon E7-88x0 v3 CPUs, the size of the log volume must be as follows (a worked sizing example is shown after the list):
· Half of the server memory for systems ≤ 256 GB memory
· Min 512 GB for systems with ≥ 512 GB memory
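The following is a small worked example of these sizing rules for a hypothetical scale-out worker node with 2 TB of memory using the TDI option; the numbers are derived from the rules above and are not additional requirements.

# Hypothetical sizing for one 2 TB scale-out worker node (TDI option).
MEM_GB=2048
echo "/ (root)           : 10 GB"
echo "/usr/sap           : 50 GB"
echo "/hana/data/<sid>   : $(( MEM_GB * 1 )) GB"    # 1 x memory (TDI option)
echo "/hana/log/<sid>    : 512 GB"                  # memory >= 512 GB -> minimum 512 GB
echo "/hana/shared/<sid> : $(( MEM_GB * 1 )) GB per 4 worker nodes"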
The supported operating systems for SAP HANA are as follows:
· SUSE Linux Enterprise Server 11
· SUSE Linux Enterprise Server for SAP Applications
· RedHat Enterprise Linux for SAP HANA
This document provides the installation process for the RedHat Enterprise Linux option only.
The infrastructure for an SAP HANA solution must not have a single point of failure. To support high availability, the hardware and software requirements are:
· Internal storage: A RAID-based configuration is preferred
· External storage: Redundant data paths, dual controllers, and a RAID-based configuration are required
· Ethernet switches: Two or more independent switches should be used
SAP HANA scale-out comes with an integrated high-availability function. If an SAP HANA system is configured with a standby node, a failed part of SAP HANA will start on the standby node automatically. For automatic host failover, the storage connector API must be properly configured for the implementation and operation of SAP HANA.
Please check the latest information from SAP at: http://saphana.com or http://service.sap.com/notes.
The Cisco UCS Integrated Infrastructure for SAP HANA with MapR Converged Data Platform provides an end-to-end architecture with Cisco hardware that demonstrates support for multiple SAP HANA workloads with high availability and server redundancy. The architecture uses Cisco UCS Manager with combined Cisco UCS B-Series and C-Series servers attached to the Cisco UCS Fabric Interconnects. The uplink from the Cisco UCS Fabric Interconnects is connected to the Cisco Nexus 3548 switch for high-availability and failover functionality. The C-Series rack servers are connected directly to the Cisco UCS Fabric Interconnects with the single-wire management feature, so the data traffic between the HANA servers and the storage stays within the Cisco UCS Fabric Interconnects. This infrastructure is deployed to provide IP-based storage access using NFS protocols with file-level access to shared storage.
Figure 16 shows the Cisco UCS Integrated Infrastructure for SAP HANA described in this Cisco Validated Design. It highlights the Cisco UCS Integrated Infrastructure hardware components and the network connections for a configuration with IP-based shared storage.
Figure 16 Cisco UCS Integrated Infrastructure for SAP HANA
The Base Reference architecture includes:
Cisco Unified Computing System
· 2 x Cisco UCS 6332UP 32-Port Fabric Interconnects
· 2 x Cisco UCS 5108 Blade Chassis with 2 x Cisco UCS 2304 Fabric Extenders with 4x 40 Gigabit Ethernet interfaces
· 8 x Cisco UCS B460 M4 High-Performance Blade Servers with 2x Cisco UCS Virtual Interface Card (VIC) 1380 and 2x Cisco UCS Virtual Interface Card (VIC) 1340
· 4 x Cisco UCS C240 M4 High-Performance Blade Servers with Cisco UCS Virtual Interface Card (VIC) 1385
Cisco Network
· For Base design:
— 1 x Cisco Nexus 3548 Switch for 10 Gigabit Ethernet connectivity between the two Cisco UCS Fabric Interconnects for failover scenarios.
The solution can also be designed for the Enterprise Data Center Design option using a pair of Cisco Nexus 9000 series switches. With two Cisco Nexus switches, vPC is configured between the Cisco Nexus 9000 series switches and the Cisco UCS Fabric Interconnects, as shown in Figure 17.
· For Enterprise Data Center Design option:
— 2 x Cisco Nexus 9372 Switch for 10 Gigabit Ethernet connectivity between the two Cisco UCS Fabric Interconnects
Figure 17 Cisco UCS Integrated Infrastructure for SAP HANA with Data Center Switching
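As a preview of the Cisco Nexus 9000 Series Switch Network Configuration section later in this document, the following is a minimal, hedged sketch of the vPC constructs this design option relies on; the domain ID comes from the configuration variables, while the port-channel numbers and peer addresses are illustrative placeholders.

feature vpc
feature lacp

vpc domain <<var_nexus_vpc_domain_id>>
  peer-keepalive destination <peer-switch-mgmt0-ip> source <local-mgmt0-ip>

interface port-channel 10
  description vPC peer link between the two Nexus 9000 switches
  switchport mode trunk
  vpc peer-link

interface port-channel 11
  description Example port channel toward a Cisco UCS Fabric Interconnect
  switchport mode trunk
  vpc 11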
Although this is the base design, each of the components can be scaled easily to support specific business requirements. Additional servers or even blade chassis can be deployed to increase compute capacity without additional network components. Two Cisco UCS 6332UP 32-port Fabric Interconnects can support up to:
· 16 Cisco UCS B460 M4 B-Series servers with 8 blade server chassis
· 8 Cisco UCS C240 M4 C-Series servers
A minimum of 3 x Cisco UCS C240 M4 servers is required to install the MapR Converged Data Platform with high availability. 3 x Cisco UCS C240 M4 servers can support up to 6 active HANA servers. The scaling of storage to compute nodes is linear: for every 2 x Cisco UCS servers for SAP HANA, 1 x Cisco UCS C240 M4 server for storage is required to meet the SAP HANA storage performance defined by SAP SE.
The solution is designed to meet the SAP HANA performance requirements defined by SAP SE. The HANA servers and storage servers connect to the Cisco UCS Fabric Interconnects, and all the data traffic between HANA nodes and storage nodes is contained within the Cisco UCS Fabric Interconnects. Each HANA server and storage server is equipped with 2 x 40GbE-capable Cisco Virtual Interface Cards, and the storage network provides dedicated bandwidth between the HANA servers and the storage servers. Traffic shaping and Quality of Service (QoS) are configured and managed by the Cisco UCS Fabric Interconnects. For the HANA node-to-node network, 40 Gb of dedicated network bandwidth is provided in non-blocking mode. The Cisco UCS Integrated Infrastructure for SAP HANA guarantees the bandwidth and latency for best performance to run SAP HANA.
This document provides details for configuring a fully redundant, highly available configuration for a Cisco UCS Integrated Infrastructure for SAP HANA.
This document is intended to enable you to fully configure the customer environment. In this process, various steps require you to insert customer-specific naming conventions, IP addresses, and VLAN schemes, as well as to record appropriate MAC addresses. Table 2 lists the configuration variables that are used throughout this document. This table can be completed based on the specific site variables and used in implementing the document configuration steps.
Table 2 Configuration Variables
| Variable | Description | Customer Implementation Value |
| <<var_nexus_HA_hostname>> | Cisco Nexus 3548 HA Switch host name | |
| <<var_nexus_HA_mgmt0_ip>> | Out-of-band Cisco Nexus 3548 HA Switch management IP address | |
| <<var_nexus_HA_mgmt0_netmask>> | Out-of-band management network netmask | |
| <<var_nexus_HA_mgmt0_gw>> | Out-of-band management network default gateway | |
| <<var_global_ntp_server_ip>> | NTP server IP address | |
| <<var_oob_vlan_id>> | Out-of-band management network VLAN ID | |
| <<var_mgmt_vlan_id>> | Management network VLAN ID | |
| <<var_nexus_vpc_domain_mgmt_id>> | Unique Cisco Nexus switch VPC domain ID for Management Switch | |
| <<var_nexus_vpc_domain_id>> | Unique Cisco Nexus switch VPC domain ID | |
| <<var_nexus_A_hostname>> | Cisco Nexus 9000 A host name | |
| <<var_nexus_A_mgmt0_ip>> | Out-of-band Cisco Nexus 9000 A management IP address | |
| <<var_nexus_A_mgmt0_netmask>> | Out-of-band management network netmask | |
| <<var_nexus_A_mgmt0_gw>> | Out-of-band management network default gateway | |
| <<var_nexus_B_hostname>> | Cisco Nexus 9000 B host name | |
| <<var_nexus_B_mgmt0_ip>> | Out-of-band Cisco Nexus 9000 B management IP address | |
| <<var_nexus_B_mgmt0_netmask>> | Out-of-band management network netmask | |
| <<var_nexus_B_mgmt0_gw>> | Out-of-band management network default gateway | |
| <<var_mapr-01_vlan_id>> | MapR Internal network 01 VLAN ID | |
| <<var_mapr-02_vlan_id>> | MapR Internal network 02 VLAN ID | |
| <<var_mapr-03_vlan_id>> | MapR Internal network 03 VLAN ID | |
| <<var_storage_vlan_id>> | Storage network for HANA Data/log VLAN ID | |
| <<var_internal_vlan_id>> | Node to Node Network for HANA Data/log VLAN ID | |
| <<var_backup_vlan_id>> | Backup Network for HANA Data/log VLAN ID | |
| <<var_client_vlan_id>> | Client Network for HANA Data/log VLAN ID | |
| <<var_appserver_vlan_id>> | Application Server Network for HANA Data/log VLAN ID | |
| <<var_datasource_vlan_id>> | Data source Network for HANA Data/log VLAN ID | |
| <<var_replication_vlan_id>> | Replication Network for HANA Data/log VLAN ID | |
| <<var_inband_vlan_id>> | In-band management network VLAN ID | |
| <<var_ucs_clustername>> | Cisco UCS Manager cluster host name | |
| <<var_ucsa_mgmt_ip>> | Cisco UCS fabric interconnect (FI) A out-of-band management IP address | |
| <<var_ucsa_mgmt_mask>> | Out-of-band management network netmask | |
| <<var_ucsa_mgmt_gateway>> | Out-of-band management network default gateway | |
| <<var_ucs_cluster_ip>> | Cisco UCS Manager cluster IP address | |
| <<var_ucsb_mgmt_ip>> | Cisco UCS FI B out-of-band management IP address | |
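As a hedged illustration of how the VLAN variables above are later consumed on the Cisco Nexus switches (the authoritative steps are in the Create VLANs for SAP HANA Traffic sections of this document), a minimal NX-OS sketch would look like the following; the VLAN names are illustrative, and the IDs come from the table above.

vlan <<var_storage_vlan_id>>
  name HANA-Storage
vlan <<var_internal_vlan_id>>
  name HANA-Internal
vlan <<var_client_vlan_id>>
  name HANA-Client
vlan <<var_appserver_vlan_id>>
  name HANA-AppServer
vlan <<var_backup_vlan_id>>
  name HANA-Backup
vlan <<var_inband_vlan_id>>
  name HANA-Inband-Mgmt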
The information in this section is provided as a reference for cabling the network and compute components. For connectivity between 40GbE ports and 10GbE ports on the Cisco switches, Cisco QSFP-to-four-SFP+ copper breakout cables are used. These breakout cables connect to a 40G QSFP port of a Cisco UCS Fabric Interconnect on one end and to four 10G SFP+ ports of a Cisco switch on the other end.
Figure 18 shows the cabling topology for Cisco UCS Integrated Infrastructure for SAP HANA configuration using the Cisco Nexus 3000.
Figure 18 Cabling Topology for Cisco UCS Integrated Infrastructure for SAP HANA
Figure 19 shows the cabling topology for Cisco UCS Integrated Infrastructure for SAP HANA configuration using the pair of Cisco Nexus 9000 series switches.
Figure 19 Cabling Topology for Cisco UCS Integrated Infrastructure for SAP HANA
The information in this section is provided as a reference for cabling the network and compute components. To simplify cabling requirements, the tables include both local and remote device and port locations. The tables show the out-of-band management port connectivity into a preexisting management infrastructure; the management port cabling needs to be adjusted accordingly. These management interfaces will be used in various configuration steps.
Table 3 through Table 7 provide the details of all the connections.
Table 3 Cisco UCS Fabric Interconnect A - Cabling Information
Local Device: Cisco UCS fabric interconnect A
Local Port | Connection | Remote Device | Remote Port
Eth1/1 | 40GbE | Uplink to Customer Data Switch A | Any
Eth1/2 | 40GbE | Uplink to Customer Data Switch B | Any
Eth1/3/1 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/3
Eth1/3/2 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/4
Eth1/3/3 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/5
Eth1/3/4 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/6
Eth1/3/1* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 A | Eth 1/3
Eth1/3/2* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 A | Eth 1/4
Eth1/3/3* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 B | Eth 1/3
Eth1/3/4* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 B | Eth 1/4
Eth1/4 | 40GbE | |
Eth1/5 | 40GbE | Cisco UCS Chassis 1 Fabric Extender (FEX) A | IOM 1/1
Eth1/6 | 40GbE | Cisco UCS Chassis 1 Fabric Extender (FEX) A | IOM 1/3
Eth1/7 | 40GbE | Cisco UCS Chassis 2 Fabric Extender (FEX) A | IOM 1/1
Eth1/8 | 40GbE | Cisco UCS Chassis 2 Fabric Extender (FEX) A | IOM 1/3
Eth1/9 | 40GbE | Cisco UCS Chassis 3 Fabric Extender (FEX) A | IOM 1/1
Eth1/10 | 40GbE | Cisco UCS Chassis 3 Fabric Extender (FEX) A | IOM 1/3
Eth1/11 | 40GbE | Cisco UCS Chassis 4 Fabric Extender (FEX) A | IOM 1/1
Eth1/12 | 40GbE | Cisco UCS Chassis 4 Fabric Extender (FEX) A | IOM 1/3
Eth1/13 | 40GbE | Cisco UCS Chassis 5 Fabric Extender (FEX) A | IOM 1/1
Eth1/14 | 40GbE | Cisco UCS Chassis 5 Fabric Extender (FEX) A | IOM 1/3
Eth1/15 | 40GbE | Cisco UCS Chassis 6 Fabric Extender (FEX) A | IOM 1/1
Eth1/16 | 40GbE | Cisco UCS Chassis 6 Fabric Extender (FEX) A | IOM 1/3
Eth1/17 | 40GbE | Cisco UCS Chassis 7 Fabric Extender (FEX) A | IOM 1/1
Eth1/18 | 40GbE | Cisco UCS Chassis 7 Fabric Extender (FEX) A | IOM 1/3
Eth1/19 | 40GbE | Cisco UCS Chassis 8 Fabric Extender (FEX) A | IOM 1/1
Eth1/20 | 40GbE | Cisco UCS Chassis 8 Fabric Extender (FEX) A | IOM 1/3
Eth1/21 | 40GbE | Cisco UCS C240-M4-1 | VIC 1385 Port 1
Eth1/22 | 40GbE | Cisco UCS C240-M4-2 | VIC 1385 Port 1
Eth1/23 | 40GbE | Cisco UCS C240-M4-3 | VIC 1385 Port 1
Eth1/24 | 40GbE | Cisco UCS C240-M4-4 | VIC 1385 Port 1
Eth1/25 | 40GbE | Cisco UCS C240-M4-5 | VIC 1385 Port 1
Eth1/26 | 40GbE | Cisco UCS C240-M4-6 | VIC 1385 Port 1
Eth1/27 | 40GbE | Cisco UCS C240-M4-7 | VIC 1385 Port 1
Eth1/28 | 40GbE | Cisco UCS C240-M4-8 | VIC 1385 Port 1
MGMT0 | GbE | Customer’s Management Switch | Any
L1 | GbE | Cisco UCS fabric interconnect B | L1
L2 | GbE | Cisco UCS fabric interconnect B | L2
* The ports Eth1/3/1-4 are used with the Cisco Nexus 9000 switch pair design option.
Table 4 Cisco UCS Fabric Interconnect B - Cabling Information
Local Device: Cisco UCS fabric interconnect B
Local Port | Connection | Remote Device | Remote Port
Eth1/1 | 40GbE | Uplink to Customer Data Switch A | Any
Eth1/2 | 40GbE | Uplink to Customer Data Switch B | Any
Eth1/3/1 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/7
Eth1/3/2 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/8
Eth1/3/3 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/9
Eth1/3/4 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/10
Eth1/3/1* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 A | Eth 1/5
Eth1/3/2* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 A | Eth 1/6
Eth1/3/3* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 B | Eth 1/5
Eth1/3/4* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 B | Eth 1/6
Eth1/4 | 40GbE | |
Eth1/5 | 40GbE | Cisco UCS Chassis 1 Fabric Extender (FEX) B | IOM 1/1
Eth1/6 | 40GbE | Cisco UCS Chassis 1 Fabric Extender (FEX) B | IOM 1/3
Eth1/7 | 40GbE | Cisco UCS Chassis 2 Fabric Extender (FEX) B | IOM 1/1
Eth1/8 | 40GbE | Cisco UCS Chassis 2 Fabric Extender (FEX) B | IOM 1/3
Eth1/9 | 40GbE | Cisco UCS Chassis 3 Fabric Extender (FEX) B | IOM 1/1
Eth1/10 | 40GbE | Cisco UCS Chassis 3 Fabric Extender (FEX) B | IOM 1/3
Eth1/11 | 40GbE | Cisco UCS Chassis 4 Fabric Extender (FEX) B | IOM 1/1
Eth1/12 | 40GbE | Cisco UCS Chassis 4 Fabric Extender (FEX) B | IOM 1/3
Eth1/13 | 40GbE | Cisco UCS Chassis 5 Fabric Extender (FEX) B | IOM 1/1
Eth1/14 | 40GbE | Cisco UCS Chassis 5 Fabric Extender (FEX) B | IOM 1/3
Eth1/15 | 40GbE | Cisco UCS Chassis 6 Fabric Extender (FEX) B | IOM 1/1
Eth1/16 | 40GbE | Cisco UCS Chassis 6 Fabric Extender (FEX) B | IOM 1/3
Eth1/17 | 40GbE | Cisco UCS Chassis 7 Fabric Extender (FEX) B | IOM 1/1
Eth1/18 | 40GbE | Cisco UCS Chassis 7 Fabric Extender (FEX) B | IOM 1/3
Eth1/19 | 40GbE | Cisco UCS Chassis 8 Fabric Extender (FEX) B | IOM 1/1
Eth1/20 | 40GbE | Cisco UCS Chassis 8 Fabric Extender (FEX) B | IOM 1/3
Eth1/21 | 40GbE | Cisco UCS C240-M4-1 | VIC 1385 Port 2
Eth1/22 | 40GbE | Cisco UCS C240-M4-2 | VIC 1385 Port 2
Eth1/23 | 40GbE | Cisco UCS C240-M4-3 | VIC 1385 Port 2
Eth1/24 | 40GbE | Cisco UCS C240-M4-4 | VIC 1385 Port 2
Eth1/25 | 40GbE | Cisco UCS C240-M4-5 | VIC 1385 Port 2
Eth1/26 | 40GbE | Cisco UCS C240-M4-6 | VIC 1385 Port 2
Eth1/27 | 40GbE | Cisco UCS C240-M4-7 | VIC 1385 Port 2
Eth1/28 | 40GbE | Cisco UCS C240-M4-8 | VIC 1385 Port 2
MGMT0 | GbE | Customer’s Management Switch | Any
L1 | GbE | Cisco UCS fabric interconnect A | L1
L2 | GbE | Cisco UCS fabric interconnect A | L2
* The ports Eth1/3/1-4 are used with the Cisco Nexus 9000 switch pair design option.
Table 5 Cisco Nexus 3000 Cabling Information
Local Device: Cisco Nexus 3000 HA
Local Port | Connection | Remote Device | Remote Port
Eth1/3 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/1
Eth1/4 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/2
Eth1/5 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/3
Eth1/6 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/4
Eth1/7 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/1
Eth1/8 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/2
Eth1/9 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/3
Eth1/10 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/4
MGMT0 | GbE | Customer’s Management Switch | Any
Table 6 Cisco Nexus 9000-A Cabling Information
Local Device: Cisco Nexus 9000 A
Local Port | Connection | Remote Device | Remote Port
Eth1/1 | 10GbE | Uplink to Customer Data Switch A | Any
Eth1/2 | 10GbE | Uplink to Customer Data Switch B | Any
Eth1/3 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/1
Eth1/4 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/2
Eth1/5 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/1
Eth1/6 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/2
Eth1/9* | 10GbE | Cisco Nexus 9000 B | Eth1/9
Eth1/10* | 10GbE | Cisco Nexus 9000 B | Eth1/10
Eth1/11* | 10GbE | Cisco Nexus 9000 B | Eth1/11
Eth1/12* | 10GbE | Cisco Nexus 9000 B | Eth1/12
MGMT0 | GbE | Customer’s Management Switch | Any
* The ports Eth1/9-12 can be replaced with E2/1 and E2/2 for 40G connectivity.
For devices requiring GbE connectivity, use the GbE Copper SFP+s (GLC-T=).
Table 7 Cisco Nexus 9000-B Cabling Information
Local Device: Cisco Nexus 9000 B
Local Port | Connection | Remote Device | Remote Port
Eth1/1 | 10GbE | Uplink to Customer Data Switch A | Any
Eth1/2 | 10GbE | Uplink to Customer Data Switch B | Any
Eth1/3 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/3
Eth1/4 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/4
Eth1/5 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/3
Eth1/6 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/4
Eth1/9* | 10GbE | Cisco Nexus 9000 A | Eth1/9
Eth1/10* | 10GbE | Cisco Nexus 9000 A | Eth1/10
Eth1/11* | 10GbE | Cisco Nexus 9000 A | Eth1/11
Eth1/12* | 10GbE | Cisco Nexus 9000 A | Eth1/12
MGMT0 | GbE | Customer’s Management Switch | Any
* The ports Eth1/9-12 can be replaced with E2/1 and E2/2 for 40G connectivity.
For devices requiring GbE connectivity, use the GbE Copper SFP+s (GLC-T=).
Table 8 details the software revisions used for validating various components of the Cisco UCS Integrated Infrastructure for SAP HANA.
Table 8 Hardware and Software Components of the Cisco UCS Integrated Infrastructure for SAP HANA
Vendor | Product | Version | Description
Cisco | Cisco UCSM | 3.1(1e) | Cisco UCS Manager
Cisco | Cisco UCS 6332 UP FI | 5.0(3)N2(3.11e) | Cisco UCS Fabric Interconnects
Cisco | Cisco UCS 5108 Blade Chassis | NA | Cisco UCS Blade Server Chassis
Cisco | Cisco UCS 2304 XP FEX | 3.1(1e) | Cisco UCS Fabric Extenders for Blade Server chassis
Cisco | Cisco UCS B-Series M4 Servers | 3.1(1e) | Cisco B-Series M4 Blade Servers
Cisco | Cisco UCS C240 M4 Servers | 2.0(9c) - CIMC Controller | Cisco C240 M4 Rack Servers for Management
Cisco | Cisco UCS VIC 1340/1380 | 4.1(1d) | Cisco UCS VIC 1340/1380 Adapters
Cisco | Cisco UCS VIC 1385 | 4.1(1d) | Cisco UCS VIC Adapter
Cisco | Cisco Nexus 9372PX Switches | 7.0(3)I2(2a) | Cisco Nexus 9372PX Switches
Cisco | Cisco Nexus 3548 Switches | 6.0(2)A6(5a) | Cisco Nexus 3548 Switches
RedHat | RedHat Enterprise Linux (RHEL) for SAP HANA | 6.7 (64 bit) | Operating System to host SAP HANA
MapR | MapR Converged Enterprise Edition | 5.0.0.32987.GA | MapR Converged Enterprise Edition
The following section provides a detailed procedure for configuring the Cisco Nexus 3548 switch for high availability in an SAP HANA environment. The switch configuration in this section is based on the cabling plan described in the Device Cabling section. If the systems are connected to different ports, configure the switches accordingly, following the guidelines described in this section.
These steps provide details for the initial Cisco Nexus 3548 Series Switch setup.
To set up the initial configuration for the first Cisco Nexus switch complete the following steps:
On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start and attempt to enter Power on Auto Provisioning.
Abort Power On Auto Provisioning and continue with normal setup ?(yes/no)[n]:yes
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin":
Confirm the password for "admin":
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
Please register Cisco Nexus 3500 Family devices promptly with your
supplier. Failure to register may affect response times for initial
service calls. Nexus devices must be registered to receive entitled
support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_nexus_HA_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_nexus_HA_mgmt0_ip>>
Mgmt0 IPv4 netmask : <<var_nexus_HA_mgmt0_netmask>>
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway : <<var_nexus_HA_mgmt0_gw>>
Enable the telnet service? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) : rsa
Number of key bits <768-2048> : 2048
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : <<var_global_ntp_server_ip>>
Configure default interface layer (L3/L2) [L2]:
Configure default switchport interface state (shut/noshut) [noshut]:
Configure CoPP System Policy Profile ( default / l2 / l3 ) [default]:
The following configuration will be applied:
switchname <<var_nexus_HA_hostname>>
interface mgmt0
ip address <<var_nexus_HA_mgmt0_ip>> <<var_nexus_HA_mgmt0_netmask>>
vrf context management
ip route 0.0.0.0/0 <<var_nexus_A_mgmt0_gw>>
no shutdown
no telnet server enable
ssh key rsa 2048 force
ssh server enable
ntp server <<var_global_ntp_server_ip>>
system default switchport
no system default switchport shutdown
policy-map type control-plane copp-system-policy ( default )
Would you like to edit the configuration? (yes/no) [n]:
Use this configuration and save it? (yes/no) [y]:
[########################################] 100%
Copy complete, now saving to disk (please wait)...
The following commands enable the switch features required for this design:
1. On each Nexus 3548, enter configuration mode:
config terminal
2. Use the following commands to enable the necessary features:
feature lacp
feature interface-vlan
feature lldp
3. Save the running configuration to start-up:
copy run start
To create a global policy that enables jumbo frames and apply it system-wide, run the following commands from configuration mode:
policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
system qos
service-policy type network-qos jumbo
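Optionally, you can verify that the jumbo MTU is active on the system class before continuing; a minimal check sketch, noting that the exact show commands can vary slightly by NX-OS release:
show running-config ipqos
show queuing interface ethernet 1/3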
To create the necessary VLANs, complete the following step on both switches:
1. From the configuration mode, run the following commands:
vlan <<var_storage_vlan_id>>
name HANA-Storage
vlan <<var_internal_vlan_id>>
name HANA-Internal
vlan <<var_inband_vlan_id>>
name NFS-IPMI
vlan <<var_mapr-01_vlan_id>>
name MapR-01
vlan <<var_mapr-02_vlan_id>>
name MapR-02
vlan <<var_mapr-03_vlan_id>>
name MapR-03
1. Define a port description for the interface connecting to <<var_ucs_clustername>>-A.
interface Eth1/3-6
description <<var_ucs_clustername>>-A:1/3
2. Apply it to a port channel and bring up the interface.
interface eth1/3-6
channel-group 3 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_ucs_clustername>>-A.
interface Po3
description <<var_ucs_clustername>>-A
4. Make the port-channel a switchport, and configure a trunk to allow internal HANA VLANs
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_internal_vlan_id>>,<<var_inband_vlan_id>>, <<var_mapr-01_vlan_id>>,<<var_mapr-02_vlan_id>>,<<var_mapr-03_vlan_id>>
5. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
spanning-tree bpduguard enable
6. Bring up the port-channel.
no shutdown
7. Define a port description for the interface connecting to <<var_ucs_clustername>>-B.
interface Eth1/7-10
description <<var_ucs_clustername>>-B:1/3
8. Apply it to a port channel and bring up the interface.
interface Eth1/7-10
channel-group 4 mode active
no shutdown
9. Define a description for the port-channel connecting to <<var_ucs_clustername>>-B.
interface Po4
description <<var_ucs_clustername>>-B
10. Make the port-channel a switchport, and configure a trunk to allow internal HANA VLANs.
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_internal_vlan_id>>,<<var_inband_vlan_id>>, <<var_mapr-01_vlan_id>>,<<var_mapr-02_vlan_id>>,<<var_mapr-03_vlan_id>>
11. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
spanning-tree bpduguard enable
12. Bring up the port-channel.
no shutdown
13. Save the running configuration to start-up configuration.
copy run start
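After saving the configuration, you can optionally verify the port-channel and trunk state on the Cisco Nexus 3548; a minimal verification sketch using standard NX-OS show commands:
show port-channel summary
show interface trunk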
The following section provides a detailed procedure for configuring the Cisco Nexus 9000 switches for an SAP HANA environment. The switch configuration in this section is based on the cabling plan described in the Device Cabling section. If the systems are connected to different ports, configure the switches accordingly, following the guidelines described in this section.
The configuration steps detailed in this section provide guidance for configuring the Cisco Nexus 9000 running release 7.0(3)I2(2a) within a multi-VDC environment.
These steps provide the details for the initial Cisco Nexus 9000 Series Switch setup.
To set up the initial configuration for the first Cisco Nexus switch complete the following steps:
On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start and attempt to enter Power on Auto Provisioning.
---- Basic System Configuration Dialog VDC: 1 ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Do you want to enforce secure password standard (yes/no) [y]:
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_nexus_A_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_nexus_A_mgmt0_ip>>
Mgmt0 IPv4 netmask : <<var_nexus_A_mgmt0_netmask>>
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway : <<var_nexus_A_mgmt0_gw>>
Configure advanced IP options? (yes/no) [n]:
Enable the telnet service? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) [rsa]:
Number of rsa key bits <1024-2048> [2048]:
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : <<var_global_ntp_server_ip>>
Configure CoPP system profile (strict/moderate/lenient/dense/skip) [strict]:
The following configuration will be applied:
password strength-check
switchname <<var_nexus_A_hostname>>
vrf context management
ip route 0.0.0.0/0 <<var_nexus_A_mgmt0_gw>>
exit
no feature telnet
ssh key rsa 2048 force
feature ssh
ntp server <<var_global_ntp_server_ip>>
copp profile strict
interface mgmt0
ip address <<var_nexus_A_mgmt0_ip>> <<var_nexus_A_mgmt0_netmask>>
no shutdown
Would you like to edit the configuration? (yes/no) [n]: Enter
Use this configuration and save it? (yes/no) [y]: Enter
[########################################] 100%
Copy complete.
To set up the initial configuration for the second Cisco Nexus switch complete the following steps:
On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start and attempt to enter Power on Auto Provisioning.
---- Basic System Configuration Dialog VDC: 1 ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_nexus_B_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_nexus_B_mgmt0_ip>>
Mgmt0 IPv4 netmask : <<var_nexus_B_mgmt0_netmask>>
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway : <<var_nexus_B_mgmt0_gw>>
Configure advanced IP options? (yes/no) [n]:
Enable the telnet service? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) [rsa]:
Number of rsa key bits <1024-2048> [2048]:
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : <<var_global_ntp_server_ip>>
Configure default interface layer (L3/L2) [L3]: L2
Configure default switchport interface state (shut/noshut) [shut]: Enter
Configure CoPP system profile (strict/moderate/lenient/dense/skip) [strict]:
The following configuration will be applied:
password strength-check
switchname <<var_nexus_B_hostname>>
vrf context management
ip route 0.0.0.0/0 <<var_nexus_B_mgmt0_gw>>
exit
no feature telnet
ssh key rsa 2048 force
feature ssh
ntp server <<var_global_ntp_server_ip>>
copp profile strict
interface mgmt0
ip address <<var_nexus_B_mgmt0_ip>> <<var_nexus_B_mgmt0_netmask>>
no shutdown
Would you like to edit the configuration? (yes/no) [n]: Enter
Use this configuration and save it? (yes/no) [y]: Enter
[########################################] 100%
Copy complete.
The following commands enable the IP switching feature and set the default spanning tree behaviors:
1. On each Nexus 9000, enter configuration mode:
config terminal
2. Use the following commands to enable the necessary features:
feature udld
feature lacp
feature vpc
feature interface-vlan
feature lldp
3. Configure spanning tree defaults:
spanning-tree port type network default
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
4. Save the running configuration to start-up:
copy run start
To create the necessary VLANs, complete the following step on both switches:
1. From the configuration mode, run the following commands:
vlan <<var_storage_vlan_id>>
name HANA-Storage
vlan <<var_mgmt_vlan_id>>
name Management
vlan <<var_internal_vlan_id>>
name HANA-Internal
vlan <<var_backup_vlan_id>>
name HANA-Backup
vlan <<var_client_vlan_id>>
name HANA-Client
vlan <<var_appserver_vlan_id>>
name HANA-AppServer
vlan <<var_datasource_vlan_id>>
name HANA-DataSource
vlan <<var_replication_vlan_id>>
name HANA-Replication
vlan <<var_inband_vlan_id>>
name NFS-IPMI
vlan <<var_mapr-01_vlan_id>>
name MapR-01
vlan <<var_mapr-02_vlan_id>>
name MapR-02
vlan <<var_mapr-03_vlan_id>>
name MapR-03
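After creating the VLANs on both switches, you can optionally confirm that they are present and active; a minimal verification sketch using a standard NX-OS show command:
show vlan brief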
To configure vPCs for switch A, complete the following steps:
1. From the global configuration mode, create a new vPC domain:
vpc domain <<var_nexus_vpc_domain_id>>
2. Make Nexus 9000A the primary vPC peer by defining a low priority value:
role priority 10
3. Use the management interfaces on the supervisors of the Nexus 9000s to establish a keepalive link:
peer-keepalive destination <<var_nexus_B_mgmt0_ip>> source <<var_nexus_A_mgmt0_ip>>
4. Enable following features for this vPC domain:
peer-switch
delay restore 150
peer-gateway
auto-recovery
To configure vPCs for switch B, complete the following steps:
1. From the global configuration mode, create a new vPC domain:
vpc domain <<var_nexus_vpc_domain_id>>
2. Make Cisco Nexus 9000 B the secondary vPC peer by defining a higher priority value than that of the Nexus 9000 A:
role priority 20
3. Use the management interfaces on the supervisors of the Cisco Nexus 9000s to establish a keepalive link:
peer-keepalive destination <<var_nexus_A_mgmt0_ip>> source <<var_nexus_B_mgmt0_ip>>
4. Enable following features for this vPC domain:
peer-switch
delay restore 150
peer-gateway
auto-recovery
1. Define a port description for the interfaces connecting to VPC Peer <<var_nexus_B_hostname>>.
interface Eth2/1
description VPC Peer <<var_nexus_B_hostname>>:2/1
interface Eth2/2
description VPC Peer <<var_nexus_B_hostname>>:2/2
2. Apply a port channel to both VPC Peer links and bring up the interfaces.
interface Eth2/1-2
channel-group 1 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_nexus_B_hostname>>.
interface Po1
description vPC peer-link
4. Make the port-channel a switchport, and configure a trunk to allow HANA VLANs
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_mgmt_vlan_id>>,<<var_boot_vlan_id>>,<<var_internal_vlan_id>>,<<var_inband_vlan_id>>,<<var_backup_vlan_id>>, <<var_client_vlan_id>>, <<var_appserver_vlan_id>>, <<var_datasource_vlan_id>>,
<<var_replication_vlan_id>>
5. Make this port-channel the VPC peer link and bring it up.
spanning-tree port type network
vpc peer-link
no shutdown
1. Define a port description for the interfaces connecting to VPC peer <<var_nexus_A_hostname>>.
interface Eth2/1
description VPC Peer <<var_nexus_A_hostname>>:2/1
interface Eth2/2
description VPC Peer <<var_nexus_A_hostname>>:2/2
2. Apply a port channel to both VPC peer links and bring up the interfaces.
interface Eth2/1-2
channel-group 1 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_nexus_A_hostname>>.
interface Po1
description vPC peer-link
4. Make the port-channel a switchport, and configure a trunk to allow HANA VLANs.
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_mgmt_vlan_id>>,<<var_boot_vlan_id>>,<<var_internal_vlan_id>>,<<var_inband_vlan_id>>,<<var_backup_vlan_id>>, <<var_client_vlan_id>>, <<var_appserver_vlan_id>>, <<var_datasource_vlan_id>>,
<<var_replication_vlan_id>>
5. Make this port-channel the VPC peer link and bring it up.
spanning-tree port type network
vpc peer-link
no shutdown
1. Define a port description for the interface connecting to <<var_ucs_clustername>>-A.
interface Eth1/3-4
description <<var_ucs_clustername>>-A:1/3
2. Apply it to a port channel and bring up the interface.
interface eth1/3-4
channel-group 11 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_ucs_clustername>>-A.
interface Po11
description <<var_ucs_clustername>>-A
4. Make the port-channel a switchport, and configure a trunk to allow all HANA VLANs
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_mgmt_vlan_id>>,<<var_boot_vlan_id>>,<<var_internal_vlan_id>>,<<var_inband_vlan_id>>,<<var_backup_vlan_id>>, <<var_client_vlan_id>>, <<var_appserver_vlan_id>>, <<var_datasource_vlan_id>>,
<<var_replication_vlan_id>>
5. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
6. Set the MTU to be 9216 to support jumbo frames.
mtu 9216
7. Make this a VPC port-channel and bring it up.
vpc 11
no shutdown
8. Define a port description for the interface connecting to <<var_ucs_clustername>>-B.
interface Eth1/5-6
description <<var_ucs_clustername>>-B:1/3
9. Apply it to a port channel and bring up the interface.
interface Eth1/5-6
channel-group 12 mode active
no shutdown
10. Define a description for the port-channel connecting to <<var_ucs_clustername>>-B.
interface Po12
description <<var_ucs_clustername>>-B
11. Make the port-channel a switchport, and configure a trunk to allow all HANA VLANs.
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_mgmt_vlan_id>>,<<var_boot_vlan_id>>,<<var_internal_vlan_id>>,<<var_inband_vlan_id>>,<<var_backup_vlan_id>>, <<var_client_vlan_id>>, <<var_appserver_vlan_id>>, <<var_datasource_vlan_id>>,
<<var_replication_vlan_id>>
12. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
13. Set the MTU to be 9216 to support jumbo frames.
mtu 9216
14. Make this a VPC port-channel and bring it up.
vpc 12
no shutdown
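Once the vPC peer link and the port-channels to both fabric interconnects are configured on both switches, you can optionally verify the vPC state; a minimal verification sketch using standard NX-OS show commands:
show vpc brief
show vpc consistency-parameters global
show port-channel summary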
You can define a dedicated port-channel for each network type to provide dedicated bandwidth. Below is an example of creating such a port-channel for the backup network; these cables are connected to the storage used for backup. The example assumes that two ports (Ethernet 1/29 and 1/30) are connected to dedicated NFS storage used to back up SAP HANA.
1. Define a port description for the interface connecting to <<var_backup_node01>>.
interface Eth1/29
description <<var_backup_node01>>:<<Port_Number>>
2. Apply it to a port channel and bring up the interface.
interface eth1/29
channel-group 21 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_backup_node01>>.
interface Po21
description <<var_backup_vlan_id>>
4. Make the port-channel a switchport, and configure a trunk to allow NFS VLAN for DATA.
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_backup_vlan_id>>
5. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
6. Set the MTU to be 9216 to support jumbo frames.
mtu 9216
7. Make this a VPC port-channel and bring it up.
vpc 21
no shutdown
8. Define a port description for the interface connecting to <<var_backup_node02>>.
interface Eth1/30
description <<var_backup_node02>>:<<Port_Number>>
9. Apply it to a port channel and bring up the interface.
interface Eth1/30
channel-group 22 mode active
no shutdown
10. Define a description for the port-channel connecting to <<var_backup_node02>>.
interface Po22
description <<var_backup_node02>>
11. Make the port-channel a switchport, and configure a trunk to allow NFS VLAN for DATA
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_backup_vlan_id>>
12. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
13. Set the MTU to be 9216 to support jumbo frames.
mtu 9216
14. Make this a VPC port-channel and bring it up.
vpc 22
no shutdown
This is an optional step that can be used to implement management plane access for the Cisco UCS servers.
To enable management access across the IP switching environment, complete the following steps:
You may want to create a dedicated Switch Virtual Interface (SVI) on the Nexus data plane to test and troubleshoot the management plane; a minimal sketch is shown after this procedure. If an L3 interface is deployed, be sure it is deployed on both Cisco Nexus 9000s to ensure Type-2 vPC consistency.
1. Define a port description for the interface connecting to the management plane.
interface Eth1/<<interface_for_in_band_mgmt>>
description IB-Mgmt:<<mgmt_uplink_port>>
2. Apply it to a port channel and bring up the interface.
channel-group 6 mode active
no shutdown
3. Define a description for the port-channel connecting to management switch.
interface Po6
description IB-Mgmt
4. Configure the port as an access VLAN carrying the InBand management VLAN traffic.
switchport
switchport mode access
switchport access vlan <<var_inband_vlan_id>>
5. Make the port channel and associated interfaces normal spanning tree ports.
spanning-tree port type normal
6. Make this a VPC port-channel and bring it up.
vpc 6
no shutdown
7. Save the running configuration to start-up in both Nexus 9000s.
copy run start
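If you choose to create the optional SVI mentioned above to test and troubleshoot the in-band management plane, the following minimal sketch can be used. The SVI IP address variable <<var_inband_svi_ip>> is illustrative only (it is not part of the variable table), and the SVI must be created on both Cisco Nexus 9000 switches to keep the vPC Type-2 parameters consistent:
interface Vlan<<var_inband_vlan_id>>
description In-band management test SVI
ip address <<var_inband_svi_ip>>/24
no shutdown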
Depending on the available network infrastructure, several methods and features can be used to uplink the SAP HANA environment. If an existing Cisco Nexus environment is present, Cisco recommends using vPCs to uplink the Cisco Nexus 9000 switches in the SAP HANA environment to the existing infrastructure. The previously described procedures can be used to create an uplink vPC to the existing environment. Make sure to run copy run start to save the configuration on each switch after configuration is completed.
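As a reference, a minimal sketch of such an uplink vPC from the Cisco Nexus 9000 switches to an existing aggregation layer is shown below; the uplink interfaces (Eth1/1-2, per the cabling tables), the port-channel and vPC number 31, and the allowed VLAN list are illustrative and should be adapted to the existing infrastructure. Apply the equivalent configuration on both Cisco Nexus 9000 switches:
interface Eth1/1-2
description Uplink to existing customer data switches
channel-group 31 mode active
no shutdown
interface Po31
description Uplink-vPC
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_client_vlan_id>>,<<var_appserver_vlan_id>>,<<var_backup_vlan_id>>,<<var_replication_vlan_id>>
spanning-tree port type normal
mtu 9216
vpc 31
no shutdown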
This section describes the specific configurations on Cisco UCS servers to address SAP HANA requirements.
This section provides detailed procedures for configuring the Cisco Unified Computing System (Cisco UCS) for use in an SAP HANA scale-out solution environment. These steps are necessary to provision the Cisco UCS C-Series and B-Series servers to meet SAP HANA requirements.
To configure the Cisco UCS Fabric Interconnect A, complete the following steps:
1. Connect to the console port on the first Cisco UCS 6300 Fabric Interconnect.
Enter the configuration method: console
Enter the setup mode; setup newly or restore from backup.(setup/restore)? setup
You have chosen to setup a new fabric interconnect? Continue? (y/n): y
Enforce strong passwords? (y/n) [y]: y
Enter the password for "admin": <<var_password>>
Enter the same password for "admin": <<var_password>>
Is this fabric interconnect part of a cluster (select 'no' for standalone)? (yes/no) [n]: y
Which switch fabric (A|B): A
Enter the system name: <<var_ucs_clustername>>
Physical switch Mgmt0 IPv4 address: <<var_ucsa_mgmt_ip>>
Physical switch Mgmt0 IPv4 netmask: <<var_ucsa_mgmt_mask>>
IPv4 address of the default gateway: <<var_ucsa_mgmt_gateway>>
Cluster IPv4 address: <<var_ucs_cluster_ip>>
Configure DNS Server IPv4 address? (yes/no) [no]: y
DNS IPv4 address: <<var_nameserver_ip>>
Configure the default domain name? y
Default domain name: <<var_dns_domain_name>>
Join centralized management environment (UCS Central)? (yes/no) [n]: Enter
2. Review the settings printed to the console. If they are correct, answer yes to apply and save the configuration.
3. Wait for the login prompt to make sure that the configuration has been saved.
To configure the Cisco UCS Fabric Interconnect B, complete the following steps:
1. Connect to the console port on the second Cisco UCS 6332 Fabric Interconnect.
Enter the configuration method: console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Do you want to continue {y|n}? y
Enter the admin password for the peer fabric interconnect: <<var_password>>
Physical switch Mgmt0 IPv4 address: <<var_ucsb_mgmt_ip>>
Apply and save the configuration (select ‘no’ if you want to re-enter)? (yes/no): y
2. Wait for the login prompt to make sure that the configuration has been saved.
To log in to the Cisco Unified Computing System (UCS) environment, complete the following steps:
1. Open a web browser and navigate to the Cisco UCS 6332 Fabric Interconnect cluster address.
2. Click the Launch Cisco UCS Manager link to download the Cisco UCS Manager software.
3. If prompted to accept security certificates, accept as necessary.
4. When prompted, enter admin as the user name and enter the administrative password.
5. Click Login to log in to the Cisco UCS Manager.
Figure 20 Cisco UCS Manager Log In
This document assumes the use of Cisco UCS Manager Software version 3.1(1e). To upgrade the Cisco UCS Manager software and the Cisco UCS 6332 Fabric Interconnect software to version 3.1(1e), refer to Cisco UCS Manager Install and Upgrade Guides.
To create a block of IP addresses for server Keyboard, Video, Mouse (KVM) access in the Cisco UCS environment, complete the following steps:
This block of IP addresses should be in the same subnet as the management IP addresses for the Cisco UCS Manager.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root > IP Pools > IP Pool ext-mgmt.
3. In the Actions pane, select Create Block of IP Addresses.
4. Enter the starting IP address of the block and the number of IP addresses required, and the subnet and gateway information.
5. Click OK to create the IP block.
6. Click OK in the confirmation message.
To synchronize the Cisco UCS environment to the NTP server, complete the following steps:
1. In Cisco UCS Manager, click the Admin tab in the navigation pane.
2. Select All > Timezone Management.
3. In the Properties pane, select the appropriate time zone in the Timezone menu.
4. Click Save Changes, and then click OK.
5. Click Add NTP Server.
6. Enter <<var_global_ntp_server_ip>> and click OK.
7. Click OK.
For the Cisco UCS 6300 Series Fabric Extenders, two configuration options are available: pinning and port-channel. Each SAP HANA node communicates with every other SAP HANA node using multiple I/O streams, which makes the port-channel option a highly suitable configuration.
Setting the discovery policy simplifies the addition of Cisco UCS B-Series chassis and of additional fabric extenders for further C-Series connectivity.
To modify the chassis discovery policy, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane and select Equipment in the list on the left.
2. In the right pane, click the Policies tab.
3. Under Global Policies, set the Chassis/FEX Discovery Policy to match the number of uplink ports that are cabled between the chassis or fabric extenders (FEXes) and the fabric interconnects.
4. Set the Link Grouping Preference to Port Channel.
5. Click Save Changes.
6. Click OK.
Figure 21 Chassis Discovery Policy
To enable server and uplink ports, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand Ethernet Ports.
4. Select the ports that are connected to the chassis and / or to the Cisco C-Series Server (two per FI), right-click them, and select Configure as Server Port.
5. Click Yes to confirm server ports and click OK.
6. Verify that the ports connected to the chassis and / or to the Cisco C-Series Server are now configured as server ports.
Figure 22 Cisco UCS – Port Configuration Example
7. Select ports that are connected to the Cisco Nexus switches, right-click them, and select Configure as Uplink Port.
8. Click Yes to confirm uplink ports and click OK.
9. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate) > Fixed Module.
10. Expand Ethernet Ports.
11. Select the ports that are connected to the chassis or to the Cisco C-Series Server (two per FI), right-click them, and select Configure as Server Port.
12. Click Yes to confirm server ports and click OK.
13. Select ports that are connected to the Cisco Nexus switches, right-click them, and select Configure as Uplink Port.
14. Click Yes to confirm the uplink ports and click OK.
The 40 GB Ethernet ports on Cisco UCS 6300 Fabric Interconnects can be configured as four breakout 10 GB ports using a supported breakout cable.
Configuring breakout ports requires rebooting the Fabric Interconnect. Any existing configuration on a port is erased. It is recommended to break out all required ports in a single transaction.
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand Ethernet Ports.
4. Select ports that are connected to the Cisco Nexus 3548 switches, right-click them, and select Configure Breakout Port.
5. Click Yes to confirm the Breakout ports and click OK
6. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate) > Fixed Module.
7. Expand Ethernet Ports.
8. Select ports that are connected to the Cisco Nexus 3548 switches, right-click them, and select Configure Breakout Port.
9. Click Yes to confirm the Breakout ports and click OK
When you configure a breakout port, you can configure each 10 GB sub-port as a server, uplink, FCoE uplink, FCoE storage, or appliance port as required. Unified Ports cannot be configured as breakout ports.
To acknowledge all Cisco UCS chassis and Rack Mount Servers, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Expand Chassis and select each chassis that is listed.
3. Right-click each chassis and select Acknowledge Chassis.
Figure 23 Cisco UCS – Chassis Overview
4. Click Yes and then click OK to complete acknowledging the chassis.
5. If C-Series servers are part of the configuration, expand Rack Mounts and FEX.
6. Right-click each Server that is listed and select Acknowledge Server.
7. Click Yes and then click OK to complete acknowledging the Rack Mount Servers.
Separate uplink port-channels are defined for each of the network zones specified by SAP. For example, port-channel 11 is created on fabric interconnect A and port-channel 12 on fabric interconnect B for the client zone network.
To configure the necessary port channels out of the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. In this procedure, two port channels are created: one from fabric A to both Cisco Nexus switches and one from fabric B to both Cisco Nexus switches.
3. Under LAN > LAN Cloud, expand the Fabric A tree.
4. Right-click Port Channels.
5. Select Create Port Channel.
6. Enter 11 as the unique ID of the port channel.
7. Enter vPC-11-Nexus as the name of the port channel.
Figure 24 Cisco UCS – Port Channel Wizard
8. Click Next.
9. Select the following ports to be added to the port channel:
— Slot ID 1 and port 1
— Slot ID 1 and port 2
10. If breakout cables are used for the uplink connectivity, select the following ports to be added to the port channel instead:
— Slot ID 1, Aggregated Port ID 1 and port 1
— Slot ID 1, Aggregated Port ID 1 and port 2
— Slot ID 1, Aggregated Port ID 1 and port 3
— Slot ID 1, Aggregated Port ID 1 and port 4
— Slot ID 1, Aggregated Port ID 2 and port 1
— Slot ID 1, Aggregated Port ID 2 and port 2
— Slot ID 1, Aggregated Port ID 2 and port 3
— Slot ID 1, Aggregated Port ID 2 and port 4
11. Click >> to add the ports to the port channel.
12. Click Finish to create the port channel.
13. Click OK.
14. In the navigation pane, under LAN > LAN Cloud, expand the fabric B tree.
15. Right-click Port Channels.
16. Select Create Port Channel.
17. Enter 12 as the unique ID of the port channel.
18. Enter vPC-12-NEXUS as the name of the port channel.
19. Click Next.
20. Select the following ports to be added to the port channel:
— Slot ID 1 and port 1
— Slot ID 1 and port 2
21. If breakout cables are used for the uplink connectivity, select the following ports to be added to the port channel instead:
— Slot ID 1, Aggregated Port ID 1 and port 1
— Slot ID 1, Aggregated Port ID 1 and port 2
— Slot ID 1, Aggregated Port ID 1 and port 3
— Slot ID 1, Aggregated Port ID 1 and port 4
— Slot ID 1, Aggregated Port ID 2 and port 1
— Slot ID 1, Aggregated Port ID 2 and port 2
— Slot ID 1, Aggregated Port ID 2 and port 3
— Slot ID 1, Aggregated Port ID 2 and port 4
22. Click >> to add the ports to the port channel.
23. Click Finish to create the port channel.
24. Click OK.
Repeat steps 1-24 to create an additional port-channel for each network zone based on your data center requirements.
If you are using a single Cisco Nexus 3548 for IOM failover, create port-channel 5 on fabric interconnect A and port-channel 6 on fabric interconnect B for the internal network zone.
To configure the necessary port channels out of the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. In this procedure, two port channels are created: one from fabric A to both Cisco Nexus switches and one from fabric B to both Cisco Nexus switches.
3. Under LAN > LAN Cloud, expand the Fabric A tree.
4. Right-click Port Channels.
5. Select Create Port Channel.
6. Enter 5 as the unique ID of the port channel.
7. Enter N3k-Uplink as the name of the port channel.
8. Click Next.
9. Select the following ports to be added to the port channel:
— Slot ID 1, Aggregated Port ID 3 and port 1
— Slot ID 1, Aggregated Port ID 3 and port 2
— Slot ID 1, Aggregated Port ID 3 and port 3
— Slot ID 1, Aggregated Port ID 3 and port 4
10. Click >> to add the ports to the port channel.
11. Click Finish to create the port channel.
12. Click OK.
13. In the navigation pane, under LAN > LAN Cloud, expand the fabric B tree.
14. Right-click Port Channels.
15. Select Create Port Channel.
16. Enter 6 as the unique ID of the port channel.
17. Enter N3k-Uplink as the name of the port channel.
18. Click Next.
19. Select the following ports to be added to the port channel:
— Slot ID 1, Aggregated Port ID 3 and port 1
— Slot ID 1, Aggregated Port ID 3 and port 2
— Slot ID 1, Aggregated Port ID 3 and port 3
— Slot ID 1, Aggregated Port ID 3 and port 4
20. Click >> to add the ports to the port channel.
21. Click Finish to create the port channel.
22. Click OK.
For secure multi-tenancy within the Cisco UCS domain, a logical entity known as an organization is created.
To create an organization unit, complete the following steps:
1. In Cisco UCS Manager, on the Tool bar click New.
2. From the drop-down menu select Create Organization.
3. Enter the Name as HANA.
4. (Optional) Enter the Description as Org for HANA.
5. Click OK to create the Organization.
To configure the necessary MAC address pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root > Sub-Organization > HANA.
3. In this procedure, two MAC address pools are created, one for each switching fabric.
4. Right-click MAC Pools under the HANA organization.
5. Select Create MAC Pool to create the MAC address pool.
6. Enter FI-A as the name of the MAC pool.
7. (Optional) Enter a description for the MAC pool.
8. Choose Assignment Order Sequential.
9. Click Next.
10. Click Add.
11. Specify a starting MAC address.
12. The recommendation is to place 0A in the next-to-last octet of the starting MAC address to identify all of the MAC addresses as Fabric Interconnect A addresses.
13. Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.
Figure 25 Cisco UCS – Create MAC Pool for Fabric A
14. Click OK.
15. Click Finish.
16. In the confirmation message, click OK.
17. Right-click MAC Pools under the HANA organization.
18. Select Create MAC Pool to create the MAC address pool.
19. Enter FI-B as the name of the MAC pool.
20. (Optional) Enter a description for the MAC pool.
21. Click Next.
22. Click Add.
23. Specify a starting MAC address.
The recommendation is to place 0B in the next to last octet of the starting MAC address to identify all the MAC addresses in this pool as fabric B addresses.
24. Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.
Figure 26 Cisco UCS – Create MAC Pool for Fabric B
25. Click OK.
26. Click Finish.
27. In the confirmation message, click OK.
You can also define a separate MAC address pool for each network zone. Follow steps 1-16 above to create a MAC address pool for each network zone.
To configure the necessary universally unique identifier (UUID) suffix pool for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Pools > root.
3. Right-click UUID Suffix Pools.
4. Select Create UUID Suffix Pool.
5. Enter UUID_Pool as the name of the UUID suffix pool.
6. (Optional) Enter a description for the UUID suffix pool.
7. Keep the Prefix as the Derived option.
8. Select Sequential for Assignment Order
9. Click Next.
10. Click Add to add a block of UUIDs.
11. Keep the From field at the default setting.
12. Specify a size for the UUID block that is sufficient to support the available blade or server resources.
Figure 27 Cisco UCS – Create UUID Block
13. Click OK.
14. Click Finish.
15. Click OK.
To run Cisco UCS with two independent power distribution units, the redundancy must be configured as Grid. Complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane and select Equipment in the list on the left.
2. In the right pane, click the Policies tab.
3. Under Global Policies, set the Redundancy field in Power Policy to Grid.
4. Click Save Changes.
5. Click OK.
Figure 28 Power Policy
The Power Capping feature in Cisco UCS is designed to save power in legacy data center use cases and does not contribute to the high-performance behavior required by SAP HANA. By choosing the "No Cap" option for the power control policy, the SAP HANA server nodes are not restricted in their power draw. This power control policy is recommended to ensure a sufficient power supply for high-performance, critical applications like SAP HANA.
To create a power control policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Power Control Policies.
4. Select Create Power Control Policy.
5. Enter HANA as the Power Control Policy name.
6. Change the Power Capping setting to No Cap.
7. Click OK to create the power control policy.
8. Click OK.
Figure 29 Power Control Policy for SAP HANA Nodes
Firmware management policies allow the administrator to select the corresponding packages for a given server configuration. These policies often include packages for adapter, BIOS, board controller, FC adapters, host bus adapter (HBA) option ROM, and storage controller properties.
To create a firmware management policy for a given server configuration in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Host Firmware Packages.
4. Select Create Host Firmware Package.
5. Enter HANA-FW as the name of the host firmware package.
6. Leave Simple selected.
7. Select the version 3.1(1e) for both the Blade and Rack Packages.
8. Click OK to create the host firmware package.
9. Click OK.
Figure 30 Host Firmware Package
A local disk configuration policy configures the local SAS drives that are installed in a server through the server's onboard RAID controller.
To create a local disk configuration policy for HANA servers for Local OS disks, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Local Disk Config Policies.
4. Select Create Local Disk Configuration Policy.
5. Enter RAID1 as the local disk configuration policy name.
6. Change the mode to RAID 1.
7. Click OK to create the local disk configuration policy.
Figure 31 Cisco UCS – Create Local Disk Policy
8. Click OK.
To create a local disk configuration policy for MapR servers, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Local Disk Config Policies.
4. Select Create Local Disk Configuration Policy.
5. Enter MapR as the local disk configuration policy name.
6. Change the mode to Any Configuration.
Figure 32 Local Disk Configuration Policy for MapR Server
7. Click OK to create the local disk configuration policy.
To get the best performance for SAP HANA, the server BIOS must be configured accurately. To create a server BIOS policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click BIOS Policies.
4. Select Create BIOS Policy.
5. Enter HANA-BIOS as the BIOS policy name.
6. Change the Quiet Boot setting to Disabled.
Figure 33 Create Server BIOS Policy
7. Click Next.
8. SAP recommends disabling all Processor C-States for SAP HANA. This forces the CPU to stay at maximum frequency and allows SAP HANA to run with the best performance.
Figure 34 Processor Settings in BIOS Policy
9. Click Next.
10. No changes required at the Intel Direct IO.
Figure 35 BIOS Policy – Advanced Intel Direct IO
11. Click Next.
12. In the RAS Memory window, select maximum-performance and enable NUMA.
Figure 36 BIOS Policy – Advanced RAS Memory
13. Click Next.
14. In the Serial Port window, the Serial Port A must be set to enabled.
Figure 37 BIOS Policy – Advanced Serial Port
15. Click Next.
16. No changes required at the USB settings.
Figure 38 BIOS Policy – Advanced USB
17. Click Next.
18. No changes required at the PCI Configuration.
Figure 39 BIOS Policy – Advanced PCI Configuration
19. Click Next.
20. No changes required at the QPI.
Figure 40 BIOS Policy – QPI Configuration
21. Click Next.
22. No changes required at the LOM and PCIe Slots.
Figure 41 BIOS Policy – Advanced LOM and PCIe Slots
23. Click Next.
24. No changes required at the Boot Options.
Figure 42 BIOS Policy – Boot Options
25. Click Next.
26. Configure the Console Redirection to serial-port-a with the BAUD Rate 115200 and enable the feature Legacy OS redirect. This is used for Serial Console Access over LAN to all SAP HANA servers.
Figure 43 BIOS Policy – Server Management
27. Click Finish to Create BIOS Policy.
28. Click OK.
To create a server BIOS policy for the Cisco UCS environment for MapR server, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click BIOS Policies.
4. Select Create BIOS Policy.
5. Enter MapR-BIOS as the BIOS policy name.
6. Change the Quiet Boot setting to Disabled.
7. Click Next.
8. Enable Turbo Boost, Enhanced Intel Speedstep, Hyper Threading.
9. The recommendation is to disable all Processor C-States. This forces the CPU to stay at maximum frequency and allows the MapR server to run with the best performance.
10. Set CPU Performance to high throughput, and set Power Technology, Energy Performance, and DRAM Clock Throttling for maximum performance.
Figure 44 Processor Settings in BIOS Policy
11. Click Next.
12. Keep default values at the Intel Direct IO
13. Click Next.
14. In the RAS Memory window, select maximum-performance and enable NUMA.
Figure 45 BIOS Policy – Advanced RAS Memory
15. Click Next.
16. Keep default values for rest of the options.
17. Click Next.
18. Click Finish to Create BIOS Policy.
The Serial over LAN policy is required to get console access to all the SAP HANA servers through SSH from the management network. It is used in the case of a server hang or a Linux kernel crash, where a dump is required. Configure the speed in the Server Management tab of the BIOS policy.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click the Serial over LAN Policies.
4. Select Create Serial over LAN Policy.
5. Enter SoL-Console as the Policy name.
6. Set the Serial over LAN State to Enable.
7. Change the Speed to 115200.
8. Click OK.
Figure 46 Serial Over LAN Policy
It is recommended to update the default Maintenance Policy with the Reboot Policy “User Ack” for the SAP HANA server. This policy will wait for the administrator to acknowledge the server reboot for the configuration changes to take effect.
To update the default Maintenance Policy, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Select Maintenance Policies > default.
4. Change the Reboot Policy to User Ack.
5. Click Save Changes.
6. Click OK to accept the change.
Figure 47 Maintenance Policy
Serial over LAN access requires IPMI access control to the board controller. It is also used for the STONITH function of the SAP HANA mount API to kill a hanging server. The default user is "sapadm" with the password "cisco".
To create an IPMI Access Profile, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click IPMI Access Profiles.
4. Select Create IPMI Access Profile.
5. Enter HANA-IPMI as the Profile name.
Figure 48 IPMI Profile
6. Click + button.
7. Enter the username in the Name field and enter the password.
8. Select Admin as Role.
Figure 49 Create IPMI User
9. Click OK to create user.
10. Click OK to Create IPMI Access Profile.
11. Click OK.
Figure 50 IPMI Access Profile
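Once the IPMI access profile is applied through a service profile and the server has a management IP address assigned, the IPMI user can be exercised from any Linux host in the management network, for example to check the power state or to open the Serial over LAN console configured earlier. A minimal sketch using the open-source ipmitool utility; the management IP address shown is a placeholder:
ipmitool -I lanplus -H <server_mgmt_ip> -U sapadm -P cisco chassis power status
ipmitool -I lanplus -H <server_mgmt_ip> -U sapadm -P cisco sol activate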
The core network requirements for SAP HANA are covered by Cisco UCS defaults. Cisco UCS is based on 40GbE and provides redundancy through the dual fabric concept. The service profile is configured to distribute the traffic across Fabric Interconnect A and B. During normal operation, the inter-node traffic flows on FI A and the storage traffic flows on FI B. The inter-node traffic flows from a blade server to Fabric Interconnect A and back to the other blade server. The storage traffic flows from a HANA server to Fabric Interconnect B and back to the MapR server.
To configure jumbo frames and enable quality of service in the Cisco UCS fabric, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. On the MTU Column, enter 9216 in the box.
5. Check Enabled Under Priority for Silver.
6. Click Save Changes in the bottom of the window.
7. Click OK.
Figure 51 Cisco UCS – Setting Jumbo Frames
QoS policies assign a system class to the network traffic for a vNIC. To create QoS Policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click QoS Policies.
4. Select Create QoS Policy.
5. Enter Storage-Silver as the QoS Policy name.
6. For Priority Select Silver from the drop-down list.
7. Enter 20000000 for Rate(Kbps)
8. Click OK to create QoS Policy.
Figure 52 Create QoS Policy
CDP needs to be enabled to learn the MAC address of the End Point. To update default Network Control Policy, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > Policies > root > Network Control Policies > default
3. In the right pane, click the General tab.
4. For CDP: select Enabled radio button
5. Click Save Changes in the bottom of the window.
6. Click OK
Figure 53 Network Control Policy to Enable CDP
In order to keep the vNIC links up in case of Nexus 3548 failure, create the Network Control Policy for Internal Network. To create Network Control Policy for Internal Network, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > Policies > root > Network Control Policies > Right Click and Select Create Network Control Policy
3. Enter Internal as the Name of the Policy
4. For CDP: select Enabled radio button
5. For Action on Uplink Fail: select Warning radio button
6. Click OK
Figure 54 Network Control Policy
Within Cisco UCS, all the network types for an SAP HANA system are reflected by defined VLANs. The SAP network design defines seven SAP HANA related networks and two infrastructure related networks. The VLAN IDs can be changed if required to match the VLAN IDs in the data center network; for example, ID 221 for backup should match the VLAN ID configured on the data center network switches. Although nine VLANs are defined, you do not need to create a VLAN for a network that the solution will not use. For example, if the Replication Network is not used in the solution, then VLAN ID 225 need not be created.
To configure the necessary VLANs for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
In this procedure, nine VLANs are created.
2. Select LAN > LAN Cloud.
3. Right-click VLANs.
4. Select Create VLANs.
5. Enter HANA-Internal as the name of the VLAN to be used for HANA Node to Node network
6. Keep the Common/Global option selected for the scope of the VLAN.
7. Enter <<var_boot_vlan_id>> as the ID of the HANA Node to Node network.
8. Keep the Sharing Type as None.
9. Click OK, and then click OK again.
Figure 55 Create VLAN for Internode
10. Repeat the Steps 1-9 for each VLAN.
11. Create VLAN for HANA-AppServer.
Figure 56 Create VLAN for AppServer
12. Create VLAN for HANA-Backup.
Figure 57 Create VLAN for Backup
13. Create VLAN for HANA-Client.
Figure 58 Create VLAN for Client Network
14. Create VLAN for HANA-DataSource.
Figure 59 Create VLAN for Data Source
15. Create VLAN for HANA-Replication.
Figure 60 Create VLAN for Replication
16. Create VLAN for HANA-Storage.
Figure 61 Create VLAN for Storage Access
17. Create VLAN for NFS-IPMI for Inband Access.
Figure 62 Create VLAN for NFS-IPMI for Inband Access
18. Create VLAN for Management.
Figure 63 Create VLAN for Management
19. Create VLAN for MapR-01
Figure 64 Create VLAN for MapR-01 VLAN
20. Create VLAN for MapR-02
Figure 65 Create VLAN for MapR-02 Internal VLAN
21. Create VLAN for MapR-03
Figure 66 Create VLAN for MapR-03 Internal VLAN
Figure 67 shows the list of created VLANs.
Figure 67 VLAN Definition in Cisco UCS
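For reference, named VLANs can also be created from the Cisco UCS Manager CLI. The following is an illustrative sketch only, not taken from this guide; the VLAN name and ID are placeholders and must match the values used in the steps above:
UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan HANA-Internal <<vlan_id>>
UCS-A /eth-uplink/vlan* # exit
UCS-A /eth-uplink* # commit-buffer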
For easier management and bandwidth allocation to a dedicated uplink on the Fabric Interconnect, VLAN Groups are created within the Cisco UCS. SAP HANA uses the following VLAN groups:
· Admin Zone
· Client Zone
· Internal Zone
· Backup Network
· Replication Network
To configure the necessary VLAN Groups for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
In this procedure, five VLAN Groups are created. Create VLAN Groups based on the solution requirements; it is not required to create all five VLAN Groups.
2. Select LAN > LAN Cloud.
3. Right-click VLAN Groups.
4. Select Create VLAN Groups.
5. Enter Admin-Zone as the name of the VLAN Group used for Infrastructure network.
6. Select Management
Figure 68 Create VLAN Group for Admin Zone
7. Click Next
8. Click Next on Add Uplink Ports, since you will use port-channel.
9. Choose port-channels created for Admin Network.
Figure 69 Add Port-Channel for VLAN Group Admin Zone
10. Click Finish.
11. Follow the steps 1-10 for each VLAN Group.
12. Create VLAN Groups for Internal Zone.
Figure 70 Create VLAN Group for Internal Zone
13. Click Next.
14. Click Next on Add Uplink Ports, since you will use port-channel.
15. Choose port-channels created for Internal Zone.
Figure 71 Add Port-Channel for VLAN Group Internal Zone
16. Click Finish.
17. Create VLAN Groups for Client Zone.
Figure 72 Create VLAN Group for Client Zone
18. Click Next.
19. Click Next on Add Uplink Ports, since you will use port-channel.
20. Choose port-channels created for Client Zone.
Figure 73 Add Port-Channel for VLAN Group Client Zone
21. Click Finish.
22. Create VLAN Groups for Backup Network.
Figure 74 Create VLAN Group for Backup Network
23. Click Next.
24. Click Next on Add Uplink Ports, since we will use Port-Channel.
25. Choose port-channels created for Backup Network.
26. Click Finish.
27. Create VLAN Groups for Replication Network.
Figure 75 Create VLAN Group for Replication Network
28. Click Next.
29. Click Next on Add Uplink Ports, since we will use Port-Channel.
30. Choose port-channels created for Replication Network.
31. Click Finish.
The VLAN Groups created in Cisco UCS are shown in Figure 76.
Figure 76 VLAN Groups in Cisco UCS
For each VLAN Group a dedicated or shared Ethernet Uplink Port or Port Channel can be selected.
Each VLAN is mapped to a vNIC template to specify the characteristic of a specific network. The vNIC template configuration settings include MTU size, Failover capabilities and MAC-Address pools.
To create vNIC templates for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click vNIC Templates.
4. Select Create vNIC Template.
5. Enter HANA-Internal as the vNIC template name.
6. Keep Fabric A selected.
7. Check the Enable Failover checkbox.
8. Under Target, make sure that the VM checkbox is unchecked.
9. Select Updating Template as the Template Type.
10. Under VLANs, check the checkboxes for HANA-Internal.
11. Set HANA-Internal as the native VLAN.
12. For MTU, enter 9000.
13. In the MAC Pool list, select FI-A.
14. For Network Control Policy Select Internal from dropdown menu.
Figure 77 Create vNIC Template for HANA-Internal
15. Click OK to create the vNIC template.
16. Click OK.
For most SAP HANA use cases, the network traffic is well distributed across the two fabrics (Fabric A and Fabric B) with the default setup. In special cases, it can be necessary to rebalance this distribution for better overall performance; this can be done in the vNIC template with the Fabric ID setting. Note that the MTU setting must match the configuration in the customer data center. An MTU of 9000 is recommended for best performance.
17. Follow the steps 1-15 above to create a vNIC template for each network interface.
18. Create a vNIC template for Storage Network.
19. Select QoS Policy Storage-Silver from the dropdown menu.
Figure 78 Create vNIC Template for Storage Access
20. Create a vNIC template for AppServer Network.
Figure 79 Create vNIC Template for AppServer Network
21. Create a vNIC template for Backup Network.
Figure 80 Create vNIC Template for Backup Network
22. Create a vNIC template for Client Network.
Figure 81 Create vNIC Template for Client Network
23. Create a vNIC template for DataSource Network.
Figure 82 Create vNIC Template for DataSource Network
24. Create a vNIC template for Replication Network.
Figure 83 Create vNIC Template for Replication Network
25. Create a vNIC template for Management Network.
Figure 84 Create vNIC Template for Management Network
26. Create a vNIC template for IPMI Inband Access Network.
Figure 85 Create vNIC Template for IPMI Inband Access Network
27. Create a vNIC template for MapR-01 Internal Network.
Figure 86 Create a vNIC Template for MapR-01 Internal Network
28. Create a vNIC template for MapR-02 Internal Network.
Figure 87 Create a vNIC Template for MapR-02 Internal Network
29. Create a vNIC template for MapR-03 Internal Network.
Figure 88 Create a vNIC Template for MapR-03 Internal Network
Figure 89 provides the list of created vNIC Templates for SAP HANA.
Figure 89 vNIC Templates Overview
To create Local boot policies, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click Boot Policies.
4. Select Create Boot Policy.
5. Enter Local-Boot as the name of the boot policy.
6. (Optional) Enter a description for the boot policy.
7. Expand the Local Devices drop-down menu and select Add CD/DVD.
8. Expand the Local Devices drop-down menu and select Add Local Disk.
9. Click Ok to save the boot policy. Click OK to close the Boot Policy Window.
Figure 90 Create Local Boot Policy
The SAP HANA High Availability configuration uses the IPMI tool, which requires a block of IP addresses for the SAP HANA servers. To create the IP block in the Cisco UCS environment, complete the following steps:
This block of IP addresses should be in the same subnet as the HANA-Admin IP addresses of the HANA servers.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root > Sub-Organization > HANA > IP Pools. Right-click IP Pools and select Create IP Pool.
3. For the Name, enter HANA-IPMI.
4. For Assignment order select Sequential and Click Next.
5. Click Add.
6. Enter the starting IP address of the block and the number of IP addresses required, and the subnet and gateway information.
7. Click OK to create the IP block.
8. Click OK in the confirmation message.
To allocate the previously configured IPv4 Address Pool, VLAN, and VLAN Group to the global Inband Profile, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN Cloud, on the right pane click the Global Policies tab.
3. On the Global Policies page, under the Inband Profile section:
4. For Inband VLAN Group select Internal-Zone from the drop-down list.
5. For Network select NFS-IPMI from the drop-down list.
6. For IP Pool Name select HANA-IPMI from the drop-down list.
7. Click Save Changes.
Figure 91 Inband Profile
The LAN configurations and relevant SAP HANA policies must be defined prior to creating a Service Profile Template. For the Scale-Out solution, two Service Profile Templates are created; the only difference between the two is the vCon placement, which is used to achieve maximum network throughput per blade.
The guidelines for assigning the service profiles in a scale-out system, to achieve maximum network throughput between the servers of an SAP HANA system, are outlined below.
Service Profile HANA-A is used for:
· Server 3 in the Chassis for Cisco UCS B460 M4 Server
Service Profile HANA-B is used for:
· Server 7 in the Chassis for Cisco UCS B460 M4 Server
To create the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA.
3. Right-click HANA.
4. Select Create Service Profile Template to open the Create Service Profile Template wizard.
5. Identify the service profile template:
a. Enter HANA-A as the name of the service profile template.
b. Select the Updating Template option.
c. Under UUID, select HANA-UUID as the UUID pool.
d. Click Next.
Figure 92 Service Profile Template UUID
6. Configure the Storage Provisioning:
a. Click on Local Disk Configuration Policy tab.
b. Select RAID1 for Local Storage field from the dropdown menu.
c. Click Next
Figure 93 Service Profile Storage Provisioning
7. Configure the Networking options:
a. Keep the default settings for Dynamic vNIC Connection Policy.
b. Select the Expert option for How would you like to configure LAN connectivity.
c. Click the Add button to add a vNIC to the template.
d. In the Create vNIC dialog box, enter HANA-Internal as the name of the vNIC.
e. Check the Use vNIC Template checkbox.
f. In the vNIC Template list, select HANA-Internal.
g. In the Adapter Policy list, select Linux.
h. Click OK to add this vNIC to the template.
Figure 94 Service Profile Template vNIC Internal
i. Repeat the above steps c-h for each vNIC.
8. Add vNIC for HANA-Storage.
9. Add vNIC for HANA-Client.
10. Add vNIC for HANA-AppServer.
11. Add vNIC for HANA-DataSource.
12. Add vNIC for HANA-Replication.
13. Add vNIC for HANA-Backup.
14. Add vNIC for Mgmt.
15. Add vNIC for NFS-IPMI.
16. Review the table in the Networking page to make sure that all vNICs were created.
Figure 95 Service Profile Networking
17. Click Next.
18. Configure the SAN Connectivity, Select the No vHBAs option for the “How would you like to configure SAN connectivity?” field.
19. Click Next.
20. Set no Zoning options and click Next.
21. Set the vNIC/vHBA placement options.
a. In the Select Placement list, select the Specify Manually.
b. Select vCon2 and assign the vNICs to the virtual network interfaces policy in the following order:
i. HANA-Storage
ii. HANA-AppServer
iii. HANA-Backup
iv. Mgmt
Figure 96 Service Profile vNIC Placement for vCon 2
c. Select vCon4 and assign the vNICs to the virtual network interfaces policy in the following order:
i. HANA-Internal
ii. HANA-Client
iii. HANA-DataSource
iv. HANA-Replication
v. NFS-IPMI
Figure 97 Service Profile vNIC Placement for vCon 4
d. Review the table to verify that all vNICs are assigned to the policy in the appropriate order.
e. Click Next.
22. No Change required on the vMedia Policy, click Next.
23. Set the server boot order:
a. Select Local-Boot for Boot Policy.
Figure 98 Service Profile Template Boot Order Configuration
b. Click Next.
24. For Maintenance policy:
a. Select the default Maintenance Policy.
b. Click Next.
25. Specify the server assignment:
a. Select Down as the power state to be applied when the profile is associated with the server.
b. Expand Firmware Management at the bottom of the page and select HANA-FW from the Host Firmware list.
c. Click Next.
Figure 99 Service Profile Template Server Assignment
26. For Operational Policies:
a. In the BIOS Policy list, select HANA-BIOS.
b. Expand the External IPMI Management Configuration and select HANA-IPMI in the IPMI Access Profile.
c. Select SoL-Console in the SoL Configuration Profile.
d. Expand Management IP Address,
e. In the Outband IPv4 tab choose ext-mgmt in the Management IP Address Policy.
f. In the Inband IPv4 tab choose NFS-IPMI for Network and choose HANA-IPMI for Management IP Address Policy
Figure 100 Service Profile Template Management IP Address Inband
g. Expand Power Control Policy Configuration and select No-Power-Cap in the Power Control Policy list.
Figure 101 Service Profile Template Operational Policies
27. Click Finish to create the service profile template.
28. Click OK in the confirmation message.
To meet the network bandwidth requirement defined by SAP, create a second service profile template with a modified vNIC placement policy.
To clone the service profile template created above, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA.
3. Right-click Service Template HANA-A.
4. Select Create a Clone.
5. Enter HANA-B for Clone name and Choose root as Org.
Figure 102 Service Profile Create Clone
6. Click OK to Create the clone of HANA-A.
7. Click OK to confirm.
8. On the cloned Service Profile Template Select HANA-B in the Navigation pane and click Network tab.
9. Under Actions Click Modify vNIC/vHBA Placement.
10. Select vCon2 and assign the vNICs to the virtual network interfaces policy in the following order:
a. HANA-Internal
b. HANA-Client
c. HANA-DataSource
d. HANA-Replication
e. NFS-IPMI
11. Select vCon4 and assign the vNICs to the virtual network interfaces policy in the following order:
a. HANA-Storage
b. HANA-AppServer
c. HANA-Backup
d. Mgmt
12. Click OK to complete the vNIC/vHBA Placement policy.
13. Click OK to confirm
To create service profiles from the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA > Service Template HANA-A.
3. Right-click Service Template HANA-A and select Create Service Profiles from Template
4. Enter HANA-Server0 as the service profile prefix.
5. Enter 1 as Name Suffix Starting Number.
6. Enter 1 as the Number of Instances.
7. Click OK to create the service profile.
Figure 103 Service Profile Template
To create service profiles from the service profile template HANA-B, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA > Service Template HANA-B.
3. Right-click Service Template HANA-B and select Create Service Profiles from Template.
4. Enter HANA-Server0 as the service profile prefix.
5. Enter 2 as Name Suffix Starting Number.
6. Enter 1 as the Number of Instances.
7. Click OK to create the service profile.
As mentioned previously, the Service Profiles are created in order to distribute network bandwidth across Fabric A and Fabric B. For Cisco UCS B460 M4 Servers use the service profile template HANA-A for servers in Slot 3 and service profile template HANA-B for servers in Slot 7.
To associate service profile created for a specific slot, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile > root > Sub-Organization > HANA > HANA-Server01.
3. Right-click HANA-Server01 and select Change Service Profile Association.
4. For Server Assignment, choose Select existing Server from the drop-down list.
5. Click All Servers.
6. Select the Server as recommended.
7. Repeat the above steps 1-6 for each HANA server.
To create the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA.
3. Right-click HANA.
4. Select Create Service Profile Template to open the Create Service Profile Template wizard.
5. Identify the service profile template:
a. Enter MapR as the name of the service profile template.
b. Select the Updating Template option.
c. Under UUID, select HANA-UUID as the UUID pool.
d. Click Next.
6. Configure the Storage Provisioning:
a. Click on Local Disk Configuration Policy tab
b. Select MapR for Local Storage field from the dropdown menu.
c. Click Next
Figure 104 Service Profile Template for MapR Storage Provisioning
7. Configure the Networking options:
a. Keep the default settings for Dynamic vNIC Connection Policy.
b. Select the Expert option for How would you like to configure LAN connectivity.
c. Click the Add button to add a vNIC to the template.
d. In the Create vNIC dialog box, enter MapR-01 as the name of the vNIC.
e. Check the Use vNIC Template checkbox.
f. In the vNIC Template list, select MapR-01.
g. In the Adapter Policy list, select Linux.
h. Click OK to add this vNIC to the template.
Figure 105 Service Profile Template for MapR vNIC MapR-01
i. Repeat the above steps c-h for each vNIC.
8. Add vNIC for MapR-02.
9. Add vNIC for MapR-03.
10. Add vNIC for HANA-Storage.
11. Add vNIC for Mgmt.
12. Review the table in the Networking page to make sure that all vNICs were created.
Figure 106 Service Profile Template for MapR Networking
13. Click Next.
14. Configure the SAN Connectivity, Select the No vHBAs option for the “How would you like to configure SAN connectivity?” field.
15. Click Next.
16. Set no Zoning options and click Next.
17. Set the vNIC/vHBA placement options, Keep the default values
Figure 107 Service Profile Template for MapR vNIC Placement
18. Click Next.
19. Keep the default values on the vMedia Policy, Click Next
20. Set the server boot order: Select Local-Boot for Boot Policy.
Figure 108 Service Profile Template for MapR vNIC Service Boot Order
21. Click Next.
22. For Maintenance policy: Select the default Maintenance Policy.
23. Click Next.
24. Specify the server assignment: Select Down as the power state to be applied when the profile is associated with the server.
25. Expand Firmware Management at the bottom of the page and select HANA-FW from the Host Firmware list.
26. Click Next.
27. For Operational policies, In the BIOS Policy list, select MapR-BIOS.
28. Expand Management IP Address,
29. In the Outband IPv4 tab choose ext-mgmt in the Management IP Address Policy.
30. In the Inband IPv4 tab choose NFS-IPMI for Network and choose HANA-IPMI for Management IP Address Policy
Figure 109 Service Profile Template for MapR Management IP Address Inband
31. Expand Power Control Policy Configuration and select No-Power-Cap in the Power Control Policy list.
Figure 110 Service Profile Template for MapR Operational Policies
32. Click Finish to create the service profile template.
33. Click OK in the confirmation message.
To create service profiles from the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA > Service Template MapR.
3. Right-click MapR and select Create Service Profiles from Template.
4. Enter appropriate name for the service profile prefix. For example, MapR-0.
5. Enter 1 as Name Suffix Starting Number.
6. Enter appropriate number of service profile to be created in the Number of Instances. For example, 4.
7. Click OK to create the service profile.
Figure 111 Service Profile Template for MapR
To associate service profile created for a specific slot, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile > root > Sub-Organization > HANA > MapR-01.
3. Right-click MapR-01 and select Change Service Profile Association.
4. For Server Assignment, choose Select existing Server from the drop-down.
5. Click All Servers.
6. Select the C240-M4 Rack Mount Server Rack ID 1
7. Repeat the above steps 1-6 for each MapR Server
This section describes the procedure for installing the MapR Data Platform on Cisco UCS C240 M4 servers. Each server has 2 x internal SSD boot drives where the operating system is installed with software RAID 1. For MapR storage, 24 x 1.8 TB 10k rpm 4K disks are configured as 4 x RAID 5 groups with 6 disks each.
The following procedure shows the RAID configuration on C240 Servers used for MapR Data Platform.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile > root > Sub-Organization > HANA > MapR-01.
3. Click KVM Console.
4. When the KVM Console is launched, click Boot Server.
Figure 112 Cisco UCS KVM Console
5. When prompted Press <Ctrl><R> to Run MegaRAID Configuration Utility.
Figure 113 Enter MegaRAID Configuration Utility
6. In the VD Mgmt press F2 for Operations.
Figure 114 MegaRAID Configuration Utility VD Mgmt
7. Select Create Virtual Drive.
Figure 115 MegaRAID Configuration Utility Create VD
8. Choose the following option to create RAID 5 virtual drive with 6 disks:
a. For RAID Level Choose RAID-5
b. Choose the first 6 disks
c. Give a Name for virtual drive 1
Figure 116 MegaRAID Configuration Utility Create RAID 5
9. Choose Advanced option:
a. For Strip Size choose 1MB.
b. For Read Policy Choose Ahead.
c. For Write Policy choose Write Back with BBU.
d. For I/O Policy choose Direct.
e. Keep the Default Disk cache Policy and Emulation.
f. Choose Initialize.
g. Click OK.
Figure 117 MegaRAID Configuration Utility Create RAID 5 Advanced Options
10. To create 2nd Virtual Drive press F2 for Operations and Select Create Virtual Drive.
11. Create a RAID 5 virtual drive with 6 disks:
a. For RAID Level Choose RAID-5.
b. Choose the next 6 disks.
c. Give a Name for virtual drive 2.
d. Choose Advanced option.
e. For Strip Size choose 1MB.
f. For Read Policy Choose Ahead.
g. For Write Policy choose Write Back with BBU.
h. For I/O Policy choose Direct.
i. Keep the Default Disk cache Policy and Emulation.
j. Choose Initialize.
k. Click OK.
12. To create the 3rd Virtual Drive press F2 for Operations and select Create Virtual Drive.
13. Create a RAID 5 virtual drive with 6 disks:
a. For RAID Level Choose RAID-5.
b. Choose the next 6 disks.
c. Give a Name for virtual drive 3.
d. Choose Advanced option.
e. For Strip Size choose 1MB.
f. For Read Policy Choose Ahead.
g. For Write Policy choose Write Back with BBU.
h. For I/O Policy choose Direct.
i. Keep the Default Disk cache Policy and Emulation.
j. Choose Initialize.
k. Click OK.
14. To create the 4th Virtual Drive press F2 for Operations and select Create Virtual Drive.
15. Create a RAID 5 virtual drive with 6 disks:
a. For RAID Level Choose RAID-5.
b. Choose the last 6 disks.
c. Give a Name for virtual drive 4.
d. Choose Advanced option.
e. For Strip Size choose 1MB.
f. For Read Policy Choose Ahead.
g. For Write Policy choose Write Back with BBU.
h. For I/O Policy choose Direct.
i. Keep the Default Disk cache Policy and Emulation.
j. Choose Initialize
k. Click OK
16. Press Esc to exit the MegaRAID Configuration Utility.
Figure 118 MegaRAID Configuration Utility with Virtual Drives
17. Press Ctrl+Alt+Del to reboot the server.
18. Repeat the steps 1-17 for each MapR server.
The following procedure shows the RedHat Enterprise Linux 6.7 operating system installation on the internal boot drives.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile > root > Sub-Organization > HANA > MapR-01.
3. Click KVM Console.
4. When the KVM Console is launched, click Boot Server.
5. Click Virtual Media > Activate Virtual Devices.
a. Select the option Accept this Session for Unencrypted Virtual Media Session and then click Apply.
b. Click Virtual Media and Choose Map CD/DVD.
c. Click Browse to navigate ISO media location.
d. Click Map Device.
6. On the Initial screen click on Next to begin the installation process.
7. Choose Language and click OK.
8. Choose Keyboard layout and click OK.
9. Select Specialized Storage Devices and click Next.
10. Under Basic Devices, select the two "ATA INTEL" hard drives and click Next.
Figure 119 Boot Drive Selection
11. Enter the Hostname, example: mapr01.ciscolab.local
12. Click on Configure Network.
13. To configure the network interface on the OS, it is required to identify the mapping of the Ethernet device on the OS to vNIC interface on the Cisco UCS.
a. In Cisco UCS Manager, click the Servers tab in the navigation pane.
b. Expand Servers > Service Profile > root > Sub-Organization > HANA > MapR-01.
c. Click + to Expand.
d. Click vNICs.
e. On the right pane list of the vNICs with MAC Address are listed.
Figure 120 Cisco UCS vNIC MAC Address
f. Note that the MAC address of the Mgmt vNIC is “00:25:B5:2A:00:22”.
g. With the vNIC order in the Service Profile, vNIC for Management should be eth4
14. Under Wired, select System eth4 and click Edit.
15. By comparing MAC Address on the OS and Cisco UCS, eth4 on OS will carry the VLAN for Management.
16. Select Connect automatically check box.
17. Select Available to all users check box.
18. Click on IPv4 Settings:
a. Choose Manual for Method from the dropdown box.
b. In the Address field enter <<Management IP address>>.
c. In the Netmask field enter <<subnet mask for Management Interface>>.
d. In the Gateway field enter <<gateway IP address>>.
e. In the DNS servers field enter <<DNS server1, DNS server2>>.
f. In the Search domains field enter <<domain1.com,domain2.com>>.
g. Check Require IPv4 addressing for this connection to complete.
h. Click Apply.
Figure 121 RedHat OS Network Configuration
19. Click Close and click Next.
20. Select the appropriate Time Zone and click Next.
21. Enter the root password and confirm. Click Next.
22. Select Create Custom Layout. Click Next.
23. In the Please Select A Device window click Create.
24. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
25. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the first drive, example: sdd
c. For Size (MB), enter 1024.
d. For Additional Size Options select Fixed Size radio button.
e. Select Force to be a primary partition.
f. Click OK.
Figure 122 RedHat OS Add Partition for Boot
26. In the Please Select A Device window click Create.
27. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
28. In the Add Partition Window:
a. Select software RAID from dropdown menu for File System Type.
b. For Allowable Drives select the first drive, example: sdd
c. For Size (MB), enter 2048.
d. For Additional Size Options select Fixed Size radio button.
e. Select Force to be a primary partition.
f. Click OK.
Figure 123 RedHat OS Add Partition for Swap Volume
29. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
30. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the first drive, example: sdd
c. For Additional Size Options select Fill to maximum allowable size radio button.
d. Select Force to be a primary partition.
e. Click OK.
Figure 124 RedHat OS Add Partition for Root Volume
31. Create three more RAID partitions for the second drive.
32. In the Please Select A Device window click Create.
33. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
34. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the second drive, example: sde
c. For Size (MB), enter 1024.
d. For Additional Size Options select Fixed Size radio button.
e. Select Force to be a primary partition.
f. Click OK.
35. In the Please Select A Device window click Create.
36. In the Create Storage Window, Under Create Software RAID, select RAID Partition and click Create.
37. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the second drive, example: sde
c. For Size (MB), enter 2048.
d. For Additional Size Options select Fixed Size radio button.
e. Select Force to be a primary partition.
f. Click OK.
38. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
39. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the second drive, example: sde
c. For Additional Size Options select Fill to maximum allowable size radio button.
d. Select Force to be a primary partition.
e. Click OK.
40. There should be six RAID partitions, three on each drive. Continue to create the RAID devices.
41. In the Please Select A Device window click Create.
42. In the Create Storage Window, under Create Software RAID, select RAID Device and click Create.
43. In the Make RAID Device:
a. Enter /boot for Mount Point.
b. Select ext3 from drop-down menu for File System Type.
c. For RAID Device select md0 from dropdown menu.
d. For RAID Level select RAID1 from dropdown menu.
e. For RAID Members select the two partitions which are 1024 MB in size.
f. Click OK.
Figure 125 RedHat OS Boot RAID Device
44. In the Create Storage Window, under Create Software RAID, select RAID Device and click Create.
45. In the Make RAID Device:
a. Select swap from drop-down menu for File System Type.
b. For RAID Device select md1 from drop-down menu.
c. For RAID Level select RAID1 from drop-down menu.
d. For RAID Members select the two partitions which are 2048 MB in size.
e. Click OK.
Figure 126 RedHat OS Swap RAID Device
46. In the Create Storage Window, under Create Software RAID, select RAID Device and click Create.
47. In the Make RAID Device:
a. Enter / for Mount Point.
b. Select ext3 from drop-down menu for File System Type.
c. For RAID Device select md2 from drop-down menu.
d. For RAID Level select RAID1 from drop-down menu.
e. For RAID Members, select the two remaining partitions with the biggest size.
f. Click OK.
Figure 127 RedHat OS Root RAID Device
48. Click Next.
49. Confirm Format Warnings and click Format.
50. Confirm the Writing storage configuration to disk Warnings and click Write changes to disk.
51. Select Install boot loader on /dev/sdd.
52. Click Next to proceed with the next step of the installation.
53. Select the installation mode as Minimal.
54. Keep Default Values and click Next.
Figure 128 RedHat OS Software Selection
55. The installer starts the installation process.
56. When the installation is completed the server requires a reboot. Click Reboot.
Follow the steps above to install RedHat Enterprise Linux on all the MapR servers.
To customize the server in preparation for the MapR installation, complete the following steps:
1. The operating system must be configured in such a way that the short name of the server is displayed by the command 'hostname' and the domain name is displayed by the command 'hostname -d'.
2. ssh to the Server using Management IP address assigned to the server during installation.
3. Log in as root with the root password.
4. Edit the Hostname.
vi /etc/sysconfig/network
HOSTNAME=<<hostname>>.<<Domain Name>>
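After the file is saved and the node is rebooted (or the hostname is set for the running session), the result can be checked with the standard hostname utilities; these are generic Linux commands, not specific to this guide:
hostname        # should show the short host name, per the requirement in step 1
hostname -d     # should show the DNS domain name
hostname -f     # should show the fully qualified host name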
Each MapR server is configured with 5 vNIC devices. Table 9 lists the IP address information required to configure the IP addresses in the operating system. The IP addresses and subnet masks provided below are examples only; configure the IP addresses as appropriate for your environment.
Table 9 IP Addresses for the MapR Servers
vNIC Name | VLAN ID | IP Address Range | Subnet Mask
MapR-01 | <<var_mapr-01_vlan_id>> | 10.21.21.101 to 10.21.21.104 | 255.255.255.0
MapR-02 | <<var_mapr-02_vlan_id>> | 10.22.22.101 to 10.22.22.104 | 255.255.255.0
MapR-03 | <<var_mapr-03_vlan_id>> | 10.23.23.101 to 10.23.23.104 | 255.255.255.0
HANA-Storage | <<var_storage_vlan_id>> | 192.168.110.101 to 192.168.110.104 | 255.255.255.0
Management | <<var_mgmt_vlan_id>> | 192.168.196.101 to 192.168.196.104 | 255.255.0.0
1. To configure the network interface on the OS, it is required to identify the mapping of the ethernet device on the OS to vNIC interface on the Cisco UCS.
2. From the OS execute the below command to get list of Ethernet device with MAC Address.
ifconfig -a |grep HWaddr
eth0 Link encap:Ethernet HWaddr 00:25:B5:2A:00:20
eth1 Link encap:Ethernet HWaddr 00:25:B5:2B:00:20
eth2 Link encap:Ethernet HWaddr 00:25:B5:2A:00:21
eth3 Link encap:Ethernet HWaddr 00:25:B5:2B:00:21
eth4 Link encap:Ethernet HWaddr 00:25:B5:2A:00:22
3. In Cisco UCS Manager, click the Servers tab in the navigation pane.
4. Expand Servers > Service Profile > root > Sub-Organization > HANA > MapR-01.
5. Click + to Expand. Click vNICs.
6. On the right pane list of the vNICs with MAC Address are listed.
Figure 129 Cisco UCS vNIC MAC Address
7. Note the MAC Address of the MapR-01 vNIC is “00:25:B5:2A:00:20”.
8. By comparing the MAC Address on the OS and Cisco UCS, eth0 on OS will carry the VLAN for MapR-01.
9. Go to the network configuration directory and create a configuration for eth0.
/etc/sysconfig/network-scripts/
vi ifcfg-eth0
##
# Mapr-01 Network
##
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPV6INIT=no
USERCTL=no
TYPE=Ethernet
NM_CONTROLLED=no
NETWORK=<<IP subnet for MapR-01, example: 10.21.21.0>>
NETMASK=<<subnet mask for MapR-01, example: 255.255.255.0>>
IPADDR=<<IP address for MapR-01, example: 10.21.21.102>>
10. Repeat step 9 for each of the remaining vNIC interfaces, using the IP addresses listed in Table 9.
11. Add default gateway.
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=<<HOSTNAME>>
GATEWAY=<<IP Address of default gateway>>
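After the interface files and the default gateway are in place, the network can be restarted and checked with standard RHEL 6 tools; the interface name eth0 is the example mapped above and should be adjusted as needed:
service network restart
ip addr show eth0                              # IP address and MTU of the interface
ip route                                       # should show the default gateway
ping -c 3 <<IP Address of default gateway>>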
Domain Name Service configuration must be done based on the local requirements.
Configuration Example
Add the DNS server IP addresses if name resolution or internet access is required:
vi /etc/resolv.conf
nameserver <<IP of DNS Server1>>
nameserver <<IP of DNS Server2>>
search <<Domain_name>>
All nodes should be able to resolve the internal network IP addresses. Below is an example of a 4-node host file with the entire network defined in /etc/hosts.
Example:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
#
## MapR 01
#
10.21.21.101 mapr21.ciscolab.local mapr21
10.21.21.102 mapr22.ciscolab.local mapr22
10.21.21.103 mapr23.ciscolab.local mapr23
10.21.21.104 mapr24.ciscolab.local mapr24
#
## MapR 02
#
10.22.22.101 mapr21b.ciscolab.local mapr21b
10.22.22.102 mapr22b.ciscolab.local mapr22b
10.22.22.103 mapr23b.ciscolab.local mapr23b
10.22.22.104 mapr24b.ciscolab.local mapr24b
#
## MapR 03
#
10.23.23.101 mapr21c.ciscolab.local mapr21c
10.23.23.102 mapr22c.ciscolab.local mapr22c
10.23.23.103 mapr23c.ciscolab.local mapr23c
10.23.23.104 mapr24c.ciscolab.local mapr24c
#
## HANA Storage Network
#
192.168.110.201 cishana21s.ciscolab.local cishana21s
192.168.110.202 cishana22s.ciscolab.local cishana22s
192.168.110.203 cishana23s.ciscolab.local cishana23s
192.168.110.204 cishana24s.ciscolab.local cishana24s
192.168.110.205 cishana25s.ciscolab.local cishana25s
192.168.110.206 cishana26s.ciscolab.local cishana26s
192.168.110.207 cishana27s.ciscolab.local cishana27s
192.168.110.208 cishana28s.ciscolab.local cishana28s
#
## MapR Storage
#
192.168.110.101 mapr21s.ciscolab.local mapr21s
192.168.110.102 mapr22s.ciscolab.local mapr22s
192.168.110.103 mapr23s.ciscolab.local mapr23s
192.168.110.104 mapr24s.ciscolab.local mapr24s
#
## MapR Storage Virtual IP
#
192.168.110.81 maprvip01
192.168.110.82 maprvip02
192.168.110.83 maprvip03
192.168.110.84 maprvip04
192.168.110.85 maprvip05
192.168.110.86 maprvip06
192.168.110.87 maprvip07
192.168.110.88 maprvip08
#
## Management
192.168.196.101 mapr21m.ciscolab.local mapr21m
192.168.196.102 mapr22m.ciscolab.local mapr22m
192.168.196.103 mapr23m.ciscolab.local mapr23m
192.168.196.104 mapr24m.ciscolab.local mapr24m
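As a quick sanity check, every node should be able to reach the storage host names defined above. The loop below is a small sketch that uses the example host names from this /etc/hosts file; adapt the list to your environment:
for h in mapr21s mapr22s mapr23s mapr24s; do
  ping -c 1 -W 1 $h >/dev/null && echo "$h reachable" || echo "$h NOT reachable"
done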
To update and customize the RedHat system on the MapR servers, complete the following steps (a short verification sketch follows these steps):
1. Subscribe all nodes to get access to the standard RHEL channels:
subscription-manager list --available --all
subscription-manager subscribe --pool=Pool_Id
2. Update only the OS kernel and firmware packages to the latest release that appeared in RHEL 6.7. Set the release version to 6.7:
subscription-manager release --set=6.7
3. Update the OS:
yum -y update
4. Install additional groups and packages:
yum -y groupinstall base
yum -y install nfs-utils bash tuned rpcbind dmidecode glibc glibc-common glibc-headers glibc-devel hdparm initscripts iputils irqbalance libgcc libstdc++ redhat-lsb-core rpm-libs sdparm shadow-utils syslinux unzip zip nc mtools syslinux-nonlinux nss python-pycurl openssh-clients openssh-server openssl sshpass sudo wget which java-1.8.0-openjdk java-1.8.0-openjdk-headless java-1.8.0-openjdk-devel ipmitool
5. Disable SELinux
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's/^SELINUX=permissive/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
sed -i 's/^SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config
6. Modify /boot/grub/grub.conf and append this parameter to the kernel command line:
transparent_hugepage=never
7. Add mapr user and group. Make sure that <uid> and <gid> are greater than 1000 and the same on all MapR cluster nodes. The mapr defaults of 2000 for both uid and gid will ensure consistency with other default MapR clusters for inter-cluster mirroring:
groupadd mapr -g <gid>
useradd mapr -u <uid> -g <gid>
8. Activate rpcbind:
service rpcbind start
chkconfig rpcbind on
9. Append following parameters in /etc/sysctl.conf:
sunrpc.tcp_slot_table_entries = 128
sunrpc.tcp_max_slot_table_entries = 128
net.ipv4.tcp_slow_start_after_idle = 0
10. Add the following lines to /etc/modprobe.d/sunrpc.conf. Create the file if it does not exist.
options sunrpc tcp_slot_table_entries=128
options sunrpc tcp_max_slot_table_entries=128
11. Enable tuned profile:
tuned-adm profile enterprise-storage
12. Disable Firewall
service iptables stop
chkconfig iptables off
13. Reboot the OS by issuing reboot command
14. Optional: old kernels can be removed after OS update:
package-cleanup --oldkernels --count=1 -y
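As mentioned above, the customization can be verified after the reboot with the following standard RHEL 6 commands; the expected results assume the settings from the steps above were applied:
getenforce                                        # should return Disabled
cat /sys/kernel/mm/transparent_hugepage/enabled   # should show [never]
sysctl net.ipv4.tcp_slow_start_after_idle         # should return 0
tuned-adm active                                  # should show the enterprise-storage profile
service iptables status                           # should report that the firewall is not running
id mapr                                           # should show the uid/gid chosen above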
This procedure describes how to download the Cisco UCS Drivers ISO bundle, which contains most of the Cisco UCS Virtual Interface Card drivers.
1. In a web browser, navigate to http://www.cisco.com.
2. Under Support, click All Downloads.
3. In the product selector, click Products, then click Server - Unified Computing.
4. If prompted, enter your Cisco.com username and password to log in.
You must be signed in to download Cisco Unified Computing System (UCS) drivers.
5. Cisco UCS drivers are available for both Cisco UCS B-Series Blade Server Software and Cisco UCS C-Series Rack-Mount UCS-Managed Server Software.
6. Click UCS B-Series Blade Server Software.
7. Click Cisco Unified Computing System (UCS) Drivers.
The latest release version is selected by default. This document is built on Version 3.1(1).
8. Click 3.1(1) Version.
9. Download ISO image of Cisco UCS-related drivers.
10. Choose your download method and follow the prompts to complete your driver download.
11. After the download completes, browse to UCS-B-Series/UCS_3.1.1/ucs-bxxx-drivers-combo.3.1.1/Drivers/Linux/Network/Cisco/VIC/RHEL/RHEL6.7 and copy kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm to each server.
12. ssh to the Server on the Management IP as root.
13. Update the enic driver.
[root@mapr21 ~]# rpm -Uvh /tmp/kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm
Preparing... ########################################### [100%]
1:kmod-enic ########################################### [100%]
14. Update the enic driver on all the MapR Servers.
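To confirm that the updated enic driver is in place after a reboot, the module version can be checked; modinfo is a standard tool and the version string is the one from the rpm above:
modinfo enic | grep -i version     # should report version 2.3.0.18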
The configuration of NTP is important and must be performed on all systems. To configure network time, complete the following steps:
1. Install NTP-server with utilities.
yum -y install ntp ntpdate
2. Configure NTP by adding at least one NTP server to the NTP config file /etc/ntp.conf.
vi /etc/ntp.conf
server <NTP-SERVER1 IP>
server <NTP-SERVER2 IP>
3. Stop the NTP services and update the NTP Servers.
service ntpd stop
ntpdate ntp.example.com
4. Start NTP service and configure it to be started automatically.
service ntpd start
chkconfig ntpd on
chkconfig ntpdate on
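Time synchronization can then be verified with the standard NTP query tool:
ntpq -p     # the configured servers should be listed; an asterisk marks the selected peer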
The SSH Keys must be exchanged between all MapR servers for user ‘root’. To exchange the SSH keys, complete the following steps:
1. Generate the rsa public key by executing the command ssh-keygen -b 2048
[root@mapr21 ~]# ssh-keygen -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
62:8a:0b:c4:4c:d7:de:8d:dc:e7:b7:17:ca:c7:02:94 root@mapr21.ciscolab.local
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . |
| . . . . |
|+ . . o + E |
| + .o+So.. |
|. . o . o. . |
|. . . .o.o .|
| . . .+.+ |
| . .+ |
+-----------------+
2. Exchange the rsa public key by executing the command below from the first server to the rest of the servers. This ensures that every host can establish a password-less ssh connection to every other host:
ssh-copy-id -i /root/.ssh/id_rsa.pub mapr21
[root@mapr21 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub mapr22
Repeat the steps above for all the servers in the MapR cluster.
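Since every node needs password-less access to every other node, the key generation and copy have to be done on each server. A small loop can distribute the key from the current node to all others in one pass; the host names below are the example names used in this guide and should be adapted to your environment:
for host in mapr21 mapr22 mapr23 mapr24; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub $host
done
ssh mapr22 hostname     # should return the remote host name without a password prompt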
The installation of the MapR software needs a repository with the MapR software packages. If the MapR cluster nodes have internet access, it is recommended to prepare an online repository. If the MapR cluster nodes do not have internet access, you can download the packages from a computer which has internet access, unzip and copy them to a server which is reachable by the MapR cluster nodes or to a MapR cluster node itself and prepare an offline repository.
The following information is required to setup the MapR Cluster.
Cluster Name: example: mapr500
For high availability and network bandwidth distribution, virtual IPs are created on the MapR cluster. These virtual IPs are bound to the HANA-Storage Ethernet device. The HANA servers use these virtual IPs to mount the NFS volumes. If a MapR node fails, its virtual IPs are moved to other active nodes and the HANA servers continue to access the NFS volumes without any disruption. It is recommended to create two virtual IP addresses per MapR server; for example, if there are 4 MapR nodes in the cluster, create 8 virtual IPs.
Virtual IP Addresses: example: 192.168.110.81-88
Subnet Mask: example: 255.255.255.0 or /24
On every MapR cluster node, create the file /etc/yum.repos.d/mapr.repo with the following content. If a proxy server is not necessary, delete the two lines beginning with "proxy=":
[maprtech]
name=MapR Technologies
baseurl=http://package.mapr.com/releases/v5.0.0/redhat/
enabled=1
gpgcheck=0
protect=1
proxy=http://proxy:port
[maprecosystem]
name=MapR Technologies
baseurl=http://package.mapr.com/releases/ecosystem-5.x/redhat
enabled=1
gpgcheck=0
protect=1
proxy=http://proxy:port
Verify the repository:
yum repolist
If the MapR cluster nodes do not have internet access, you need to create an internal repository. This can either be done on one administrative Linux server that can be reached by all the MapR cluster nodes, or at one MapR cluster node itself.
1. The following packages have to be installed on the server which is providing the repository:
yum -y install createrepo deltarpm python-deltarpm apr apr-util apr-util-ldap httpd httpd-tools
2. Enable the web server:
service httpd start
chkconfig httpd on
3. Create directory:
mkdir -p /var/www/html/yum/base
4. Download the archive from the internet and copy it to the directory /var/www/html/yum/base:
http://package.mapr.com/releases/v5.0.0/redhat/mapr-v5.0.0GA.rpm.tgz
5. Extract the file:
tar -xvzf mapr-v5.0.0GA.rpm.tgz
6. Create the base repository headers:
createrepo /var/www/html/yum/base
7. On every MapR cluster node, create the file /etc/yum.repos.d/mapr.repo with following content:
[maprtech]
name=MapR Technologies
baseurl=http://<host>/yum/base
enabled=1
gpgcheck=0
8. Verify the repository:
yum repolist
Packages will be installed based on the role of the node. To install the MapR packages, complete the following steps:
1. Install MapR NFS and MapR Fileserver on all nodes in the MapR Cluster:
yum -y install mapr-nfs mapr-fileserver
2. Install zookeeper service on last three nodes of the MapR Cluster:
yum -y install mapr-zookeeper
3. Install cldb service on first two nodes of the MapR Cluster:
yum -y install mapr-cldb
4. Install the webserver service on all the nodes, or at least on two nodes, in the MapR Cluster:
yum -y install mapr-webserver
After all packages are installed, follow the steps below to configure MapR cluster.
WARNING: Any changes after these steps without re-running the configuration script might result in an unstable cluster state!
1. Increase memory usage of MapR instance on all nodes:
sed -i 's/service.command.mfs.heapsize.percent=.*/service.command.mfs.heapsize.percent=60/g' /opt/mapr/conf/warden.conf
2. Define the MapR subnets to use on all nodes:
sed -i 's/#export MAPR_SUBNETS=/export MAPR_SUBNETS=<<internal_network_01>>\/<<subnet_short>>,<<internal_network_02>>\/<<subnet_short>>,<<internal_network_03>>\/<<subnet_short>>/g' /opt/mapr/conf/env.sh
Example:
sed -i 's/#export MAPR_SUBNETS=/export MAPR_SUBNETS=10.21.21.0\/24,10.22.22.0\/24,10.23.23.0\/24/g' /opt/mapr/conf/env.sh
3. Modify the DrCache size of the NFS server on all nodes:
sed -i 's/#DrCacheSize =.*/DrCacheSize = 614400/g' /opt/mapr/conf/nfsserver.conf
4. Create the /mapr directory and the /opt/mapr/conf/mapr_fstab file on every MapR node:
mkdir -p /mapr
echo "localhost:/mapr /mapr hard,nolock" > /opt/mapr/conf/mapr_fstab
5. Run the configuration script. -C is followed by the cldb nodes, -Z is followed by the zookeeper nodes, -N gives the cluster name, and -no-autostart prevents the MapR software from starting automatically. If you use hostnames instead of IP addresses for -C and -Z, be sure to use hostnames which resolve to the internal cluster network IP addresses.
/opt/mapr/server/configure.sh -C <<cldbnode1>>,<<cldbnode2>> -Z <<zookeepernode1>>,<<zookeepernode2>>,<<zookeepernode3>> -N <<clustername>> -no-autostart
Example:
/opt/mapr/server/configure.sh -C mapr21,mapr22 -Z mapr22,mapr23,mapr24 -N mapr500 -no-autostart
6. Create a file with a disk list, containing all Virtual Drives in RAID 5 created on all nodes:
lsblk | grep 8.2T | awk '{print "/dev/"$1}' > /tmp/disk.list
7. Set up the disks in the disk.list file with a storage pool of 1 disk each on all nodes:
WARNING: The disks will be wiped immediately!
/opt/mapr/server/disksetup -W 1 -F /tmp/disk.list
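A quick check around this step, shown here as a sketch only (device names vary per system):
cat /tmp/disk.list                   # review before running disksetup; only the four ~8.2 TB RAID 5 virtual drives should be listed, never the OS boot devices
/opt/mapr/server/mrconfig sp list    # after the cluster services are started (next section), lists the storage pools created by disksetup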
To start the MapR cluster, complete the following steps:
1. On zookeeper nodes:
service mapr-zookeeper start
2. On all nodes, including zookeeper nodes:
service mapr-warden start
3. Wait for about 5 minutes for the cluster to come up.
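A basic health check after start-up can be done with MapR's own status commands, run from any cluster node; the column selection shown is an example:
service mapr-zookeeper qstatus              # zookeeper quorum status, on the zookeeper nodes
maprcli node list -columns hostname,svc     # services running on each node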
To add a license to the MapR Cluster, complete the following steps:
1. When MapR Cluster is installed and running, log in to any MapR cluster node and retrieve your unique MapR cluster ID.
maprcli license showid
2. Send the cluster ID and the number of MapR nodes, along with <customer info> to <MapR> to get your MapR License.
3. If your cluster has internet access, install your license directly from the MapR License Server. Select "Manage Licenses" in the upper right corner of the MapR Control System GUI.
4. Select "Add licenses via Web", then "Apply Licenses".
5. Alternatively, if your cluster does not have internet access, you can copy your license file to any MapR cluster node, and install it via the command line interface.
maprcli license add -is_file true -license <license_file>
The MapR license file is a text file starting with the line "-----BEGIN SIGNED MESSAGE-----" and ending with the line "-----END MESSAGE HASH-----". You can also use "Add licenses via upload" or "Add licenses via copy/paste" in the "Manage Licenses" dialog in the MapR Control System GUI to install the license from a file.
In order to create virtual IP addresses for the MapR NFS server, the MAC addresses of the HANA-Storage vNIC on all the MapR nodes, the IP address range for the virtual IPs, and the network subnet are required.
From each MapR server, execute the ifconfig -a | grep HWaddr command to get the list of Ethernet devices with their MAC addresses. By comparing the MAC addresses on the OS with those in the Cisco UCS Service Profile, eth3 on the OS is identified as the interface that carries the VLAN for HANA-Storage.
ifconfig -a |grep HWaddr
eth0 Link encap:Ethernet HWaddr 00:25:B5:2A:00:20
eth1 Link encap:Ethernet HWaddr 00:25:B5:2B:00:20
eth2 Link encap:Ethernet HWaddr 00:25:B5:2A:00:21
eth3 Link encap:Ethernet HWaddr 00:25:B5:2B:00:21
eth4 Link encap:Ethernet HWaddr 00:25:B5:2A:00:22
Below is the command to create the virtual IPs on the MapR cluster:
maprcli virtualip add -macs <mac_addr_node1> <mac_addr_node2> <mac_addr_nodeN> -virtualipend <<last_virtual_ip>> -netmask <<subnet_mask>> -virtualip <<first_virtual_ip>>
Example:
maprcli virtualip add -macs 00:25:b5:fb:00:20 00:25:b5:fb:00:22 00:25:b5:fb:00:24 00:25:b5:fb:00:26 -virtualipend 192.168.110.88 -netmask 255.255.255.0 -virtualip 192.168.110.81
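The assignment can be verified afterwards; this standard maprcli command reports the configured virtual IP ranges and the node currently hosting each of them:
maprcli virtualip list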
The MapR cluster needs an internal directory structure for its volumes. It is recommended to store the volumes in the /apps folder of the MapR cluster directory structure. The MapR file system is mounted on the MapR cluster nodes under /mapr, so the following directory structure should be created on one of the MapR cluster nodes.
1. Confirm MapR file system is mounted at /mapr:
mount | grep mapr
localhost:/mapr on /mapr type nfs (rw,nolock,addr=127.0.0.1)
2. Create HANA directory:
mkdir -p /mapr/<clustername>/apps/hana/<SID>
Example:
mkdir -p /mapr/mapr500/apps/hana/ANA
When creating a volume, the volume name can differ from the directory name of MapR internal file system.
3. Create volume for /hana/shared:
maprcli volume create -name <volumename> -path /apps/hana/<SID>/<volumename> -replicationtype high_throughput -replication 2 -minreplication 2
4. Create volumes for /hana/data for every single storage partition:
maprcli volume create -name <volumename> -path /apps/hana/<SID>/<datamountpoint_storage_partition> -replicationtype high_throughput -replication 2 -minreplication 2
5. Create volumes for /hana/log for every single storage partition:
maprcli volume create -name <volumename> -path /apps/hana/<SID>/<logmountpoint_storage_partition> -replicationtype low_latency -replication 2 -minreplication 2
6. Turn off the compression for log volumes
hadoop mfs -setcompression off /mapr/<clustername>/apps/hana/<SID>/<volumename>
It is recommended to reflect the SID in the path for the three different volume types: shared, data, and log.
Example for creating a volume for /hana/shared for SID "ANA":
maprcli volume create -name shared -path /apps/hana/ANA/shared -replicationtype high_throughput -replication 2 -minreplication 2
Example for creating eight volumes for /hana/data for SID "ANA":
maprcli volume create -name data00001 -path /apps/hana/ANA/data00001 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00002 -path /apps/hana/ANA/data00002 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00003 -path /apps/hana/ANA/data00003 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00004 -path /apps/hana/ANA/data00004 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00005 -path /apps/hana/ANA/data00005 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00006 -path /apps/hana/ANA/data00006 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00007 -path /apps/hana/ANA/data00007 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00008 -path /apps/hana/ANA/data00008 -replicationtype high_throughput -replication 2 -minreplication 2
Example for creating eight volumes for /hana/log for SID "ANA":
maprcli volume create -name log00001 -path /apps/hana/ANA/log00001 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00002 -path /apps/hana/ANA/log00002 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00003 -path /apps/hana/ANA/log00003 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00004 -path /apps/hana/ANA/log00004 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00005 -path /apps/hana/ANA/log00005 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00006 -path /apps/hana/ANA/log00006 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00007 -path /apps/hana/ANA/log00007 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00008 -path /apps/hana/ANA/log00008 -replicationtype low_latency -replication 2 -minreplication 2
Example to turn off compression on eight log volumes for SID "ANA":
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00001
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00002
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00003
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00004
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00005
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00006
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00007
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00008
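After the volumes are created, they can be verified from any cluster node; the paths below use the example cluster name and SID from this guide:
maprcli volume list -columns volumename,mountdir | grep /apps/hana/ANA
ls /mapr/mapr500/apps/hana/ANA     # the shared, data000xx and log000xx directories should be visible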
This section provides the procedure for installing the RedHat Enterprise Linux operating system and customizing it for the SAP HANA requirements.
For the latest information on SAP HANA installation and OS customization requirements, see the SAP HANA installation guide at: http://www.saphana.com/
To install the RedHat Enterprise Linux 6.7 operating system on the local drives, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile > root > Sub-Organization > HANA > HANA-Server01.
3. Click KVM Console.
4. When the KVM Console is launched, click Boot Server.
5. Click Virtual Media > Activate Virtual Devices.
a. Select the option Accept this Session for Unencrypted Virtual Media Session and then click Apply.
b. Click Virtual Media and Choose Map CD/DVD.
c. Click Browse to navigate ISO media location.
d. Click Map Device.
6. On the Initial screen click on Next to begin the installation process.
7. Choose Language and click OK.
8. Choose Keyboard layout and click OK.
9. Select the Basic Storage Devices from the menu and click Next.
10. In the Storage Device Warning window, click Yes, discard any data.
11. Enter the Hostname as <<hostname>>.<<domain_name>>. Click Next.
12. Click on Configure Network
13. To configure the network interface on the OS, it is required to identify the mapping of the Ethernet device on the OS to vNIC interface on the Cisco UCS.
a. In Cisco UCS Manager, click the Servers tab in the navigation pane.
b. Expand Servers > Service Profile > root > Sub-Organization > HANA > HANA-Server01.
c. Click + to Expand.
d. Click vNICs.
e. On the right pane list of the vNICs with MAC Address are listed.
Figure 130 Cisco UCS vNIC MAC Address
f. Note that the MAC Address of the Mgmt vNIC is “00:25:B5:2A:00:02”.
g. With the vNIC order in the Service Profile, vNIC for Management should be eth4.
14. Under Wired, select System eth4 and click Edit.
15. By comparing MAC Address on the OS and Cisco UCS, eth4 on OS will carry the VLAN for Management.
16. Select Connect automatically check box.
17. Select Available to all users check box.
18. Click the IPv4 Settings:
a. Choose Manual for Method from the drop-down box.
b. In the Address field enter <<Management IP address>>.
c. In the Netmask field enter <<subnet mask for Management Interface>>.
d. In the Gateway field enter <<gateway IP address>>.
e. In the DNS servers field enter <<DNS server1, DNS server2>>.
f. In the Search domains field enter <<domain1.com,domain2.com>>.
g. Check Require IPv4 addressing for this connection to complete.
h. Click Apply.
Figure 131 RedHat OS Network Configuration
19. Click Close and click Next.
20. Select the appropriate Time Zone and click Next.
21. Select Create Custom Layout and click Next.
22. In the Please Select A Device window click Create.
23. In the Create Storage Window, Under Create Partition, select Standard Partition and click Create.
24. In the Add Partition Window
a. Enter /boot for Mount Point.
b. Select ext3 from dropdown menu for File System Type.
c. For Size (MB): enter 200.
d. For Additional Size Options select Fixed Size radio button.
e. Click OK.
Figure 132 RedHat OS Boot Partition
25. In the Please Select A Device window click Create.
26. In the Create Storage Window, Under Create LVM, select LVM Physical Volume and click Create.
27. In the Add Partition Window.
a. For File System Type select physical volume (LVM) from drop-down menu.
b. For Additional Size Options select Fill to maximum allowable size radio button.
c. Click OK.
Figure 133 RedHat OS LVM
28. In the Please Select A Device window click Create.
29. In the Create Storage Window, Under Create LVM, select LVM Volume Group and click Create.
30. In the Add Partition Window:
a. For Volume Group Name enter root_vg
b. For Physical Extent keep the default value of 4 MB.
Figure 134 RedHat OS LVM Volume Group
c. Under Logical Volumes click Add
d. In the Make Logical Volume window
31. For File System Type select swap from drop-down menu.
32. For Logical Volume Name enter swapvol.
33. For Size (MB): enter 2048.
34. Click OK.
Figure 135 RedHat OS Logical Volume
e. Under Logical Volumes click Add
f. In the Make Logical Volume window
35. For Mount Point: enter /
36. For File System Type select ext3 from drop-down menu.
37. For Logical Volume Name enter rootvol.
38. For Size (MB): Keep the Default Max size.
39. Click OK.
40. Click OK.
Figure 136 RedHat OS LVM Volume Group
41. Confirm the Format Warnings and click Format.
42. Confirm the Writing storage configuration to disk Warnings and click Write changes to disk.
43. Select Install boot loader on /dev/sdd.
44. Click Next to proceed with the next step of the installation.
45. Select the installation mode as Minimal.
46. Keep Default Values and click Next.
47. The installer starts the installation process.
48. When the installation is completed the server requires a reboot. Click Reboot.
Figure 137 RHEL Installation Complete
To customize the server in preparation for the HANA installation, complete the following steps:
1. The operating system must be configured in such a way that the short name of the server is displayed for the command 'hostname' and the fully qualified host name is displayed with the command 'hostname -f'.
2. ssh to the server using the Management IP address assigned to the server during installation.
3. Log in as root with the root password.
4. Edit the Hostname.
vi /etc/sysconfig/network
HOSTNAME=<<hostname>>.<<Domain Name>>
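To verify the result, the hostname can be checked as shown below; cishana21 and ciscolab.local are example values taken from the host file later in this document and must be replaced with your own names:
hostname
cishana21
hostname -f
cishana21.ciscolab.local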
Each SAP HANA server is configured with nine vNIC devices. Table 10 lists the IP address information required to configure the IP addresses on the operating system.
The IP addresses and subnet masks provided below are examples only; configure the IP addresses for your environment.
Table 10 IP Addresses for the SAP HANA Servers
vNIC Name | VLAN ID | IP Address Range | Subnet Mask
HANA-AppServer | <<var_appserver_vlan_id>> | 192.168.223.211 to 192.168.223.218 | 255.255.255.0
HANA-Backup | <<var_backup_vlan_id>> | 192.168.221.211 to 192.168.221.218 | 255.255.255.0
HANA-Client | <<var_client_vlan_id>> | 192.168.222.211 to 192.168.222.218 | 255.255.0.0
HANA-DataSource | <<var_datasource_vlan_id>> | 192.168.224.211 to 192.168.224.218 | 255.255.255.0
HANA-Internal | <<var_internal_vlan_id>> | 192.168.220.211 to 192.168.220.218 | 255.255.255.0
HANA-Replication | <<var_replication_vlan_id>> | 192.168.225.211 to 192.168.225.218 | 255.255.255.0
HANA-Storage | <<var_storage_vlan_id>> | 192.168.110.211 to 192.168.110.218 | 255.255.255.0
Management | <<var_mgmt_vlan_id>> | 192.168.196.211 to 192.168.196.218 | 255.255.0.0
NFS-IPMI | <<var_inband_vlan_id>> | 192.168.197.211 to 192.168.197.218 | 255.255.255.0
1. To configure the network interfaces on the OS, you must identify the mapping between the Ethernet devices on the OS and the vNIC interfaces in Cisco UCS.
2. From the OS, execute the following command to get the list of Ethernet devices with their MAC addresses.
ifconfig -a |grep HWaddr
eth0 Link encap:Ethernet HWaddr 00:25:B5:2A:00:00
eth1 Link encap:Ethernet HWaddr 00:25:B5:2B:00:00
eth2 Link encap:Ethernet HWaddr 00:25:B5:2A:00:01
eth3 Link encap:Ethernet HWaddr 00:25:B5:2B:00:01
eth4 Link encap:Ethernet HWaddr 00:25:B5:2A:00:02
eth5 Link encap:Ethernet HWaddr 00:25:B5:2B:00:02
eth6 Link encap:Ethernet HWaddr 00:25:B5:2A:00:03
eth7 Link encap:Ethernet HWaddr 00:25:B5:2B:00:03
eth8 Link encap:Ethernet HWaddr 00:25:B5:2A:00:3B
3. In Cisco UCS Manager, click the Servers tab in the navigation pane.
4. Expand Servers > Service Profile > root > Sub-Organization > HANA > HANA-Server01.
5. Click + to Expand.
6. Click vNICs.
7. The right pane lists the vNICs with their MAC addresses.
Figure 138 Cisco UCS vNIC MAC Address
8. Note that the MAC address of the HANA-Internal vNIC is “00:25:B5:2A:00:00”.
9. By comparing the MAC addresses on the OS and in Cisco UCS, eth0 on the OS will carry the VLAN for HANA-Internal.
10. Go to the network configuration directory and create a configuration file for eth0.
/etc/sysconfig/network-scripts/
vi ifcfg-eth0
##
# Mapr-01 Network
##
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPV6INIT=no
USERCTL=no
TYPE=Ethernet
NM_CONTROLLED=no
NETWORK=<<IP subnet for HANA-Internal example:192.168.220.0>>
NETMASK=<<subnet mask for HANA-Internal 255.255.255.0>>
IPADDR=<<IP address for HANA-Internal 192.168.220.101>>
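As a further illustration, the configuration file for the Management interface (eth4, identified by its MAC address earlier in this section) could look like the following; the addresses are placeholders and must be replaced with the values for your environment:
vi ifcfg-eth4
##
# Management Network
##
DEVICE=eth4
ONBOOT=yes
BOOTPROTO=static
IPV6INIT=no
USERCTL=no
TYPE=Ethernet
NM_CONTROLLED=no
NETWORK=<<IP subnet for Management example:192.168.196.0>>
NETMASK=<<subnet mask for Management example:255.255.0.0>>
IPADDR=<<IP address for Management example:192.168.196.211>>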
11. Repeat the preceding steps to create a configuration file for each remaining vNIC interface (a hedged example for the Management interface eth4 is shown above).
12. Add default gateway.
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=<<HOSTNAME>>
GATEWAY=<<IP Address of default gateway>>
Domain Name Service configuration must be done based on the local requirements.
Configuration Example
Add DNS IP if it is required to access internet:
vi /etc/resolv.conf
nameserver <<IP of DNS Server1>>
nameserver <<IP of DNS Server2>>
search <<Domain_name>>
For a scale-out system, all nodes should be able to resolve the internal network IP addresses. Below is an example of the /etc/hosts file for an eight-node system with all networks defined.
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
## Internal Network
#
192.168.220.211 cishana21.ciscolab.local cishana21
192.168.220.212 cishana22.ciscolab.local cishana22
192.168.220.213 cishana23.ciscolab.local cishana23
192.168.220.214 cishana24.ciscolab.local cishana24
192.168.220.215 cishana25.ciscolab.local cishana25
192.168.220.216 cishana26.ciscolab.local cishana26
192.168.220.217 cishana27.ciscolab.local cishana27
192.168.220.218 cishana28.ciscolab.local cishana28
#
## Storage Network
#
192.168.110.211 cishana21s.ciscolab.local cishana21s
192.168.110.212 cishana22s.ciscolab.local cishana22s
192.168.110.213 cishana23s.ciscolab.local cishana23s
192.168.110.214 cishana24s.ciscolab.local cishana24s
192.168.110.215 cishana25s.ciscolab.local cishana25s
192.168.110.216 cishana26s.ciscolab.local cishana26s
192.168.110.217 cishana27s.ciscolab.local cishana27s
192.168.110.218 cishana28s.ciscolab.local cishana28s
#
## Client Network
#
192.168.222.211 cishana21c.ciscolab.local cishana21c
192.168.222.212 cishana22c.ciscolab.local cishana22c
192.168.222.213 cishana23c.ciscolab.local cishana23c
192.168.222.214 cishana24c.ciscolab.local cishana24c
192.168.222.215 cishana25c.ciscolab.local cishana25c
192.168.222.216 cishana26c.ciscolab.local cishana26c
192.168.222.217 cishana27c.ciscolab.local cishana27c
192.168.222.218 cishana28c.ciscolab.local cishana28c
#
## AppServer Network
#
192.168.223.211 cishana21a.ciscolab.local cishana21a
192.168.223.212 cishana22a.ciscolab.local cishana22a
192.168.223.213 cishana23a.ciscolab.local cishana23a
192.168.223.214 cishana24a.ciscolab.local cishana24a
192.168.223.215 cishana25a.ciscolab.local cishana25a
192.168.223.216 cishana26a.ciscolab.local cishana26a
192.168.223.217 cishana27a.ciscolab.local cishana27a
192.168.223.218 cishana28a.ciscolab.local cishana28a
#
## Management Network
#
192.168.196.211 cishana21m.ciscolab.local cishana21m
192.168.196.212 cishana22m.ciscolab.local cishana22m
192.168.196.213 cishana23m.ciscolab.local cishana23m
192.168.196.214 cishana24m.ciscolab.local cishana24m
192.168.196.215 cishana25m.ciscolab.local cishana25m
192.168.196.216 cishana26m.ciscolab.local cishana26m
192.168.196.217 cishana27m.ciscolab.local cishana27m
192.168.196.218 cishana28m.ciscolab.local cishana28m
#
## Backup Network
#
192.168.221.211 cishana21b.ciscolab.local cishana21b
192.168.221.212 cishana22b.ciscolab.local cishana22b
192.168.221.213 cishana23b.ciscolab.local cishana23b
192.168.221.214 cishana24b.ciscolab.local cishana24b
192.168.221.215 cishana25b.ciscolab.local cishana25b
192.168.221.216 cishana26b.ciscolab.local cishana26b
192.168.221.217 cishana27b.ciscolab.local cishana27b
192.168.221.218 cishana28b.ciscolab.local cishana28b
#
## DataSource Network
#
192.168.224.211 cishana21d.ciscolab.local cishana21d
192.168.224.212 cishana22d.ciscolab.local cishana22d
192.168.224.213 cishana23d.ciscolab.local cishana23d
192.168.224.214 cishana24d.ciscolab.local cishana24d
192.168.224.215 cishana25d.ciscolab.local cishana25d
192.168.224.216 cishana26d.ciscolab.local cishana26d
192.168.224.217 cishana27d.ciscolab.local cishana27d
192.168.224.218 cishana28d.ciscolab.local cishana28d
#
## Replication Network
#
192.168.225.211 cishana21r.ciscolab.local cishana21r
192.168.225.212 cishana22r.ciscolab.local cishana22r
192.168.225.213 cishana23r.ciscolab.local cishana23r
192.168.225.214 cishana24r.ciscolab.local cishana24r
192.168.225.215 cishana25r.ciscolab.local cishana25r
192.168.225.216 cishana26r.ciscolab.local cishana26r
192.168.225.217 cishana27r.ciscolab.local cishana27r
192.168.225.218 cishana28r.ciscolab.local cishana28r
#
## IPMI External Address
#
192.168.196.181 cishana21e-ipmi
192.168.196.182 cishana22e-ipmi
192.168.196.183 cishana23e-ipmi
192.168.196.184 cishana24e-ipmi
192.168.196.185 cishana25e-ipmi
192.168.196.186 cishana26e-ipmi
192.168.196.187 cishana27e-ipmi
192.168.196.188 cishana28e-ipmi
#
## NFS-IPMI Inband Address
#
192.168.197.154 cishana21-ipmi
192.168.197.158 cishana22-ipmi
192.168.197.153 cishana23-ipmi
192.168.197.157 cishana24-ipmi
192.168.197.152 cishana25-ipmi
192.168.197.156 cishana26-ipmi
192.168.197.151 cishana27-ipmi
192.168.197.155 cishana28-ipmi
#
##MapR Storage
#
192.168.110.101 mapr01s.ciscolab.local mapr01s
192.168.110.102 mapr02s.ciscolab.local mapr02s
192.168.110.103 mapr03s.ciscolab.local mapr03s
192.168.110.104 mapr04s.ciscolab.local mapr04s
#
## MapR Storage Virtual IP
#
192.168.110.81 maprvip01
192.168.110.82 maprvip02
192.168.110.83 maprvip03
192.168.110.84 maprvip04
192.168.110.85 maprvip05
192.168.110.86 maprvip06
192.168.110.87 maprvip07
192.168.110.88 maprvip08
To get the IP address assigned to the Cisco UCS service profile for Inband Management, go to Cisco UCS Manager, click the LAN tab in the navigation pane, select Pools > root > Sub-Organization > HANA > IP Pools > IP Pools HANA-Servers. In the right pane click the IP Address tab that shows the IP address and assigned Service Profiles.
To update and customize the operating system for SAP HANA, complete the following steps:
1. Update the Red Hat system.
To patch the system, the repository must be updated; note that the installed system does not include any update information. The system must first be registered and attached to a valid subscription. The following command registers the installation and updates the repository information.
subscription-manager register --auto-attach
Username: <<username>>
Password: <<password>>
2. Update only the OS kernel and firmware packages to the latest release that appeared in RHEL 6.7. Set the release version to 6.7:
subscription-manager release --set=6.7
3. Add the repos required for SAP HANA
subscription-manager repos --disable "*"
subscription-manager repos --enable rhel-6-server-rpms --enable rhel-sap-hana-for-rhel-6-server-rpms --enable rhel-scalefs-for-rhel-6-server-rpms
4. Apply the latest updates for RHEL 6.7. Typically, the kernel is updated as well:
yum -y update
5. Reboot the machine and use the new kernel.
6. Install the base package group
yum -y groupinstall base
7. Install the dependencies in accordance with the SAP HANA Server Installation and Update Guide. The numactl package is required if the HWCCT benchmark tool is to be used.
yum -y install gtk2 libicu xulrunner ntp ntpdate sudo tcsh libssh2 expect cairo graphviz iptraf krb5-workstation krb5-libs.i686 nfs-utils lm_sensors rsyslog compat-sap-c++ openssl098e openssl PackageKit-gtk-module libcanberra-gtk2 libtool-ltdl xauth compat-libstdc++-33 numactl libuuid uuidd e2fsprogs icedtea-web xfsprogs glibc-devel libgomp tuned-profiles-sap-hana tuned ipmitool
8. Disable SELinux.
vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
9. sysctl.conf: The required kernel parameters must be set in /etc/sysctl.conf (see SAP Note 2247020 for the values recommended for RHEL 6.7).
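The exact parameter names and values are maintained in SAP Note 2247020 and are not repeated here; the following sketch only illustrates the mechanism, with a placeholder parameter that must be replaced by the values from the SAP Note:
vi /etc/sysctl.conf
# Placeholder only - take the required parameters and values from SAP Note 2247020
net.ipv4.tcp_slow_start_after_idle = 0
Activate the settings without a reboot:
sysctl -p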
10. Add the following lines to /etc/modprobe.d/sunrpc.conf. Create the file if it does not exist.
options sunrpc tcp_slot_table_entries=128
options sunrpc tcp_max_slot_table_entries=128
11. For compatibility reasons, four symbolic links are required:
ln -s /usr/lib64/libssl.so.0.9.8e /usr/lib64/libssl.so.0.9.8
ln -s /usr/lib64/libssl.so.1.0.1e /usr/lib64/libssl.so.1.0.1
ln -s /usr/lib64/libcrypto.so.0.9.8e /usr/lib64/libcrypto.so.0.9.8
ln -s /usr/lib64/libcrypto.so.1.0.1e /usr/lib64/libcrypto.so.1.0.1
12. Transparent Hugepages: In the file /boot/grub/grub.conf add the kernel command line argument.
transparent_hugepage=never
13. Disable C-states as per the SAP Note.
a. In the file /boot/grub/grub.conf add the kernel command line argument.
intel_idle.max_cstate=0 processor.max_cstate=0
b. Edit /boot/grub/menu.lst and append the following parameter to the kernel line per SAP’s recommendation:
intel_idle.max_cstate=0
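To illustrate steps 12 and 13, the kernel line in /boot/grub/grub.conf could look like the following after the arguments are appended; the kernel version and root device shown are examples based on the LVM layout created earlier and will differ on your system:
kernel /vmlinuz-2.6.32-573.el6.x86_64 ro root=/dev/mapper/root_vg-rootvol rhgb quiet transparent_hugepage=never intel_idle.max_cstate=0 processor.max_cstate=0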
14. Use a tuned profile to minimize latencies:
# yum -y install tuned
# tuned-adm profile latency-performance
# chkconfig tuned on
# service tuned start
15. Disable Crash Dump:
# chkconfig abrtd off
# chkconfig abrt-ccpp off
# service abrtd stop
# service abrt-ccpp stop
16. Disable core file creation. To disable core dumps for all users, open /etc/security/limits.conf and add the following lines:
* soft core 0
* hard core 0
17. Enable group "sapsys" to create an unlimited number of processes:
echo "@sapsys soft nproc unlimited" > /etc/security/limits.d/99-sapsys.conf
18. To avoid sapstartsrv lock files being removed after 10 days, update the tmpwatch package to at least version tmpwatch-2.9.16-5.el6_7.x86_64 and verify that the file /etc/cron.daily/tmpwatch contains the following entries:
-X '/tmp/.hdb*lock' -X '/tmp/.sapstartsrv*.log'
19. Disable Firewall.
service iptables stop
chkconfig iptables off
20. Reboot the OS by issuing reboot command.
21. Optional: old kernels can be removed after OS update:
package-cleanup --oldkernels --count=1 -y
To download the Cisco UCS Drivers ISO bundle, which contains most Cisco UCS Virtual Interface Card drivers, complete the following steps:
1. In a web browser, navigate to http://www.cisco.com.
2. Under Support, click All Downloads.
3. In the product selector, click Products, then click Server - Unified Computing.
4. If prompted, enter your Cisco.com username and password to log in.
You must be signed in to download Cisco Unified Computing System (UCS) drivers.
5. Cisco UCS drivers are available for both Cisco UCS B-Series Blade Server Software and Cisco UCS C-Series Rack-Mount UCS-Managed Server Software.
6. Click UCS B-Series Blade Server Software.
7. Click Cisco Unified Computing System (UCS) Drivers.
The latest release version is selected by default. This document is built on Version 3.1(1).
8. Click 3.1(1) Version.
9. Download ISO image of Cisco UCS-related drivers.
10. Choose your download method and follow the prompts to complete your driver download.
11. After the download completes, browse to UCS-B-Series/UCS_3.1.1/ucs-bxxx-drivers-combo.3.1.1/Drivers/Linux/Network/Cisco/VIC/RHEL/RHEL6.7 and copy kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm to each server.
12. ssh to the Server on the Management IP as root.
13. Update the enic driver.
[root@cishana21 ~]# rpm -Uvh /tmp/kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm
Preparing... ########################################### [100%]
1:kmod-enic ########################################### [100%]
14. Update the enic driver on all the HANA Servers.
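Assuming the driver RPM has been copied to /tmp on the administration host and SSH access as root is available to all HANA servers, a small loop such as the following could be used to distribute and install it in one pass; the hostnames cishana21 to cishana28 are the example names from the host file in this document:
for host in cishana2{1..8}; do
  scp /tmp/kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm ${host}:/tmp/
  ssh ${host} "rpm -Uvh /tmp/kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm"
done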
It is important that the time on all components used for SAP HANA is in sync. Therefore, NTP must be configured on all systems.
1. Install the NTP server and utilities.
yum -y install ntp ntpdate
2. Configure NTP by adding at least one NTP server to the NTP config file /etc/ntp.conf.
vi /etc/ntp.conf
server <NTP-SERVER1 IP>
server <NTP-SERVER2 IP>
3. Stop the NTP service and synchronize the time from an NTP server:
service ntpd stop
ntpdate ntp.example.com
4. Start NTP service and configure it to be started automatically.
service ntpd start
chkconfig ntpd on
chkconfig ntpdate on
The SSH Keys must be exchanged between all nodes in a SAP HANA Scale-Out system for user ‘root’ and user <SID>adm.
1. Generate the rsa public key by executing the command ssh-keygen -b 2048
[root@cishanar01 ~]# ssh-keygen -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
14:5a:e6:d6:00:f3:81:86:38:47:e7:fb:de:78:f5:26 root@cishanar01.ciscolab.local
The key's randomart image is:
+--[ RSA 2048]----+
| o..+o* |
| o oooB = |
| o .o = . |
| + |
| . S |
| . . |
| . . . |
| . o. E o |
| o.. o |
+-----------------+
2. The SSH keys must be exchanged between all nodes in an SAP HANA Scale-Out system for the user ‘root’ and the user <SID>adm.
3. Exchange the RSA public key by executing the following command from the first server to the rest of the servers in the scale-out system.
“ssh-copy-id -i /root/.ssh/id_rsa.pub cishanar02”
[root@cishanar01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub cishanar02
The authenticity of host 'cishanar02 (172.29.220.202)' can't be established.
RSA key fingerprint is 28:5c:1e:aa:04:59:da:99:70:bc:f1:d1:2d:a4:e9:ee.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cishanar02,172.29.220.202' (RSA) to the list of known hosts.
root@cishanar02's password:
Now try logging into the machine, with "ssh 'cishanar02'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
4. Repeat the preceding steps on all the servers in the SAP HANA system.
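Because every node needs the keys of all other nodes, the exchange can also be scripted; the following is a sketch only, to be run on each node after its key pair has been generated, using the example hostnames from this section (enter the root password of each target node when prompted):
for host in cishanar0{1..8}; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub ${host}
done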
The mount options for the SAP HANA data and log volumes differ from the default Linux NFS settings. The following is an example of the /etc/fstab entries for an eight-node SAP HANA Scale-Out system with SID ANA.
maprvip01:/mapr/mapr500/apps/hana/ANA/shared /hana/shared nfs nolock,hard,timeo=600 0 0
maprvip01:/mapr/mapr500/apps/hana/ANA/data001 /hana/data/ANA/mnt00001 nfs nolock,hard,timeo=600 0 0
maprvip02:/mapr/mapr500/apps/hana/ANA/data002 /hana/data/ANA/mnt00002 nfs nolock,hard,timeo=600 0 0
maprvip03:/mapr/mapr500/apps/hana/ANA/data003 /hana/data/ANA/mnt00003 nfs nolock,hard,timeo=600 0 0
maprvip04:/mapr/mapr500/apps/hana/ANA/data004 /hana/data/ANA/mnt00004 nfs nolock,hard,timeo=600 0 0
maprvip05:/mapr/mapr500/apps/hana/ANA/data005 /hana/data/ANA/mnt00005 nfs nolock,hard,timeo=600 0 0
maprvip06:/mapr/mapr500/apps/hana/ANA/data006 /hana/data/ANA/mnt00006 nfs nolock,hard,timeo=600 0 0
maprvip07:/mapr/mapr500/apps/hana/ANA/data007 /hana/data/ANA/mnt00007 nfs nolock,hard,timeo=600 0 0
maprvip08:/mapr/mapr500/apps/hana/ANA/data008 /hana/data/ANA/mnt00008 nfs nolock,hard,timeo=600 0 0
maprvip08:/mapr/mapr500/apps/hana/ANA/log001 /hana/log/ANA/mnt00001 nfs nolock,hard,timeo=600 0 0
maprvip07:/mapr/mapr500/apps/hana/ANA/log002 /hana/log/ANA/mnt00002 nfs nolock,hard,timeo=600 0 0
maprvip06:/mapr/mapr500/apps/hana/ANA/log003 /hana/log/ANA/mnt00003 nfs nolock,hard,timeo=600 0 0
maprvip05:/mapr/mapr500/apps/hana/ANA/log004 /hana/log/ANA/mnt00004 nfs nolock,hard,timeo=600 0 0
maprvip04:/mapr/mapr500/apps/hana/ANA/log005 /hana/log/ANA/mnt00005 nfs nolock,hard,timeo=600 0 0
maprvip03:/mapr/mapr500/apps/hana/ANA/log006 /hana/log/ANA/mnt00006 nfs nolock,hard,timeo=600 0 0
maprvip02:/mapr/mapr500/apps/hana/ANA/log007 /hana/log/ANA/mnt00007 nfs nolock,hard,timeo=600 0 0
maprvip01:/mapr/mapr500/apps/hana/ANA/log008 /hana/log/ANA/mnt00008 nfs nolock,hard,timeo=600 0 0
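The mount-point directories referenced in these /etc/fstab entries must exist on every HANA node before mounting; a minimal sketch for creating them, assuming SID ANA and eight data and log mounts as above:
mkdir -p /hana/shared
for i in $(seq 1 8); do
  mkdir -p /hana/data/ANA/mnt0000${i}
  mkdir -p /hana/log/ANA/mnt0000${i}
done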
Create the required directories to mount the /hana/shared, /hana/data, and /hana/log volumes (as sketched above), then mount all the volumes from /etc/fstab using “mount -a”. Check the status of all mounted volumes using the “df -h” command.
maprvip01:/mapr/mapr500/apps/hana/ANA/shared
125T 194G 125T 1% /hana/shared
maprvip01:/mapr/mapr500/apps/hana/ANA/data001
125T 194G 125T 1% /hana/data/ANA/mnt00001
maprvip02:/mapr/mapr500/apps/hana/ANA/data002
125T 194G 125T 1% /hana/data/ANA/mnt00002
maprvip03:/mapr/mapr500/apps/hana/ANA/data003
125T 194G 125T 1% /hana/data/ANA/mnt00003
maprvip04:/mapr/mapr500/apps/hana/ANA/data004
125T 194G 125T 1% /hana/data/ANA/mnt00004
maprvip05:/mapr/mapr500/apps/hana/ANA/data005
125T 194G 125T 1% /hana/data/ANA/mnt00005
maprvip06:/mapr/mapr500/apps/hana/ANA/data006
125T 194G 125T 1% /hana/data/ANA/mnt00006
maprvip07:/mapr/mapr500/apps/hana/ANA/data007
125T 194G 125T 1% /hana/data/ANA/mnt00007
maprvip08:/mapr/mapr500/apps/hana/ANA/data008
125T 194G 125T 1% /hana/data/ANA/mnt00008
maprvip08:/mapr/mapr500/apps/hana/ANA/log001
125T 194G 125T 1% /hana/log/ANA/mnt00001
maprvip07:/mapr/mapr500/apps/hana/ANA/log002
125T 194G 125T 1% /hana/log/ANA/mnt00002
maprvip06:/mapr/mapr500/apps/hana/ANA/log003
125T 194G 125T 1% /hana/log/ANA/mnt00003
maprvip05:/mapr/mapr500/apps/hana/ANA/log004
125T 194G 125T 1% /hana/log/ANA/mnt00004
maprvip04:/mapr/mapr500/apps/hana/ANA/log005
125T 194G 125T 1% /hana/log/ANA/mnt00005
maprvip03:/mapr/mapr500/apps/hana/ANA/log006
125T 194G 125T 1% /hana/log/ANA/mnt00006
maprvip02:/mapr/mapr500/apps/hana/ANA/log007
125T 194G 125T 1% /hana/log/ANA/mnt00007
maprvip01:/mapr/mapr500/apps/hana/ANA/log008
125T 194G 125T 1% /hana/log/ANA/mnt00008
Change the directory permissions BEFORE installing HANA. Use the chmod command after the file systems are mounted on each HANA node:
chmod -R 777 /hana/data
chmod -R 777 /hana/log
Please use the official SAP documentation, which describes the installation process with and without the SAP unified installer.
SAP HANA installation documentation:
SAP HANA Server Installation Guide
All other SAP installation and administration documentation is available under:
http://service.sap.com/instguides
Read the following SAP Notes before you start the installation. These SAP Notes contain the latest information about the installation, as well as corrections to the installation documentation.
The latest SAP Notes can be found at: https://service.sap.com/notes.
SAP Note 1514967 - SAP HANA: Central Note
SAP Note 2004651 - SAP HANA Platform SPS 08 Release Note
SAP Note 2075266 - SAP HANA Platform SPS 09 Release Note
SAP Note 1523337 - SAP HANA Database: Central Note
SAP Note 2000003 - FAQ: SAP HANA
SAP Note 1730999 - Configuration changes in SAP HANA appliance
SAP Note 1514966 - SAP HANA 1.0: Sizing SAP In-Memory Database
SAP Note 1780950 - Connection problems due to host name resolution
SAP Note 1780950 - SAP HANA SPS06: Network setup for external communication
SAP Note 1743225 - SAP HANA: Potential failure of connections with scale out nodes
SAP Note 1755396 - Released DT solutions for SAP HANA with disk replication
SAP Note 1890444 - HANA system slow due to CPU power save mode
SAP Note 1681092 - Support for multiple SAP HANA databases on a single SAP HANA appliance
SAP Note 1514966 - SAP HANA: Sizing SAP HANA Database
SAP Note 1637145 - SAP BW on HANA: Sizing SAP HANA Database
SAP Note 1793345 - Sizing for Suite on HANA
SAP Note 2235581 - SAP HANA: Supported Operating Systems
SAP Note 2009879 - SAP HANA Guidelines for RedHat Enterprise Linux (RHEL)
SAP Note 2247020 - SAP HANA DB: Recommended OS settings for RHEL 6.7
SAP Note 2228351 - SAP HANA Database SPS 11 revision 110 (or higher) on RHEL 6 or SLES 11
SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System
SAP Note 1824819 - SAP HANA DB: Recommended OS settings for SLES11/SLES4SAP SP2
SAP Note 1731000 - Non-recommended configuration changes
SAP Note 1557506 - Linux paging improvements
SAP Note 1310037 - SUSE Linux Enterprise Server 11 - installation notes
SAP Note 1726839 - SAP HANA DB: potential crash when using xfs filesystem
SAP Note 1740136 - SAP HANA: wrong mount option may lead to corrupt persistency
SAP Note 1829651 - Time zone settings in SAP HANA scale out landscapes
SAP Note 1658845 - SAP HANA DB hardware check
SAP Note 1637145 - SAP BW on SAP HANA: Sizing SAP In-Memory Database
SAP Note 1661202 - Support for multiple applications on SAP HANA
SAP Note 1681092 - Support for multiple SAP HANA databases one HANA aka Multi SID
SAP Note 1577128 - Supported clients for SAP HANA 1.0
SAP Note 1808450 - Homogenous system landscape for on BW-HANA
SAP Note 1976729 - Application Component Hierarchy for SAP HANA
SAP Note 1927949 - Standard Behavior for SAP Logon Tickets
SAP Note 1577128 - Supported clients for SAP HANA
SAP Note 1730928 - Using external software in a SAP HANA appliance
SAP Note 1730929 - Using external tools in an SAP HANA appliance
SAP Note 1730930 - Using antivirus software in an SAP HANA appliance
SAP Note 1730932 - Using backup tools with Backint for SAP HANA
SAP Note 1788665 - SAP HANA running on VMware vSphere VMs
Since HANA revision 35, the ha_provider python class supports the STONITH functionality.
STONITH stands for Shoot The Other Node In The Head. With this Python class, the failing node can be rebooted to prevent a split brain and thus an inconsistency of the database. Because NFSv3 is used, the STONITH functionality must be implemented to protect the database from corruption caused by multiple nodes accessing the same mounted file systems. If a HANA node fails over to another node, the failed node is rebooted by the master nameserver, which eliminates the risk of multiple accesses to the same file systems.
The version of ucs_ha_class.py used must be at least 1.1.
vi ucs_ha_class.py
"""
Function Class to call the reset program to kill the failed host and remove NFS locks for the SAP HANA HA
Class Name ucs_ha_class
Class Path /usr/sap/<SID>/HDB<ID>/exe/python_support/hdb_ha
Provider Cisco Systems Inc.
Version 1.1 (apiVersion=2 and hdb_ha.client import sudowers)
"""
from hdb_ha.client import StorageConnectorClient
import os
class ucs_ha_class(StorageConnectorClient):
apiVersion = 2
def __init__(self, *args, **kwargs):
super(ucs_ha_class, self).__init__(*args, **kwargs)
def stonith(self, hostname):
os.system ("/bin/logger STONITH HANA Node:" + hostname)
os.system ("/hana/shared/HA/ucs_ipmi_reset.sh " + hostname)
return 0
def about(self):
ver={"provider_company":"Cisco",
"provider_name" :"ucs_ha_class",
"provider_version":"1.0",
"api_version" :2}
self.tracer.debug('about: %s'+str(ver))
print '>> ha about',ver
return ver
@staticmethod
def sudoers():
return """ALL=NOPASSWD: /bin/mount, /bin/umount, /bin/logger"""
def attach(self,storages):
return 0
def detach(self, storages):
return 0
def info(self, paths):
pass
Prepare the script to match the IPMI username and password configured in Cisco UCS Manager. The defaults used in this document are the IPMI user sapadm and the IPMI password cisco.
vi ucs_ipmi_reset.sh
#!/bin/bash
# Cisco Systems Inc.
# SAP HANA High Availability
# Version NFS_UCS v01
# changelog: 02/18/2016
if [ -z $1 ]
then
echo "please add the hostname to reset to the command line"
exit 1
fi
# Trim the domain name off of the hostname
host=`echo "$1" | awk -F'.' '{print $1}'`
PASSWD=cisco
USER=sapadm
echo $host-ipmi
system_down='Chassis Power is off'
system_up='Chassis Power is on'
power_down='power off'
power_up='power on'
power_status='power status'
#
# Shut down the server via ipmitool power off
#
/bin/logger `whoami`" Resetting the HANA Node $host because of an Nameserver reset command"
#Power Off
rc2=`/usr/bin/ipmitool -I lanplus -H $host-ipmi -U $USER -P $PASSWD $power_down`
sleep 20
#Status
rc3=`/usr/bin/ipmitool -I lanplus -H $host-ipmi -U $USER -P $PASSWD $power_status`
#Chassis Power is still on
if [ "$rc3" = "$system_down" ]
then
/bin/logger `whoami`" HANA Node $host switched from ON to OFF "
else
#Power Off again
/bin/logger `whoami`" HANA Node $host still online second try to shutdown... "
rc2=`/usr/bin/ipmitool -I lanplus -H $host-ipmi -U $USER -P $PASSWD $power_down`
sleep 20
#Status
rc3=`/usr/bin/ipmitool -I lanplus -H $host-ipmi -U $USER -P $PASSWD $power_status`
#Chassis Power is still on
if [ "$rc3" = "$system_down" ]
then
/bin/logger `whoami`" HANA Node $host switched from ON to OFF 2nd try"
else
/bin/logger `whoami`" Resetting the HANA Node $host failed "
exit 1
fi
fi
#Chassis Power is down and the server can be swiched back on
#
#The NFS locks are released
#We will start the server now to bring it back as standby node
#Chassis Power is off
power="off"
/bin/logger `whoami`" HANA Node $host will stay offline for 10 seconds..... "
sleep 10
/bin/logger `whoami`" Switching HANA Node $host back ON "
rc2=`/usr/bin/ipmitool -I lanplus -H $host-ipmi -U $USER -P $PASSWD $power_up`
sleep 20
#Status
rc3=`/usr/bin/ipmitool -I lanplus -H $host-ipmi -U $USER -P $PASSWD $power_status`
#Chassis Power is off
if [ "$rc3" = "$system_up" ]
then
/bin/logger `whoami`" HANA Node $host reset done, system is booting"
power="on"
exit 0
else
/bin/logger `whoami`" Switching HANA Node $host back ON failed first time..."
/bin/logger `whoami`" Switching HANA Node $host back ON second time..."
rc2=`/usr/bin/ipmitool -I lanplus -H $host-ipmi -U $USER -P $PASSWD $power_up`
sleep 20
rc3=`/usr/bin/ipmitool -I lanplus -H $host-ipmi -U $USER -P $PASSWD $power_status`
if [ "$rc3" = "$system_up" ]
then
/bin/logger `whoami`" HANA Node $host reset done, system is booting"
power="on"
exit 0
else
/bin/logger `whoami`" Resetting the HANA Node $host failed "
exit 1
fi
fi
#
# Server is power on and should boot - our work is done
#
Copy the HA scripts to the shared HA directory /hana/shared/HA
(the HANA nameserver is responsible for resetting the failed node):
ssh cishana01
mkdir /hana/shared/HA
chown t01adm:sapsys /hana/shared/HA
scp ucs_ipmi_reset.sh /hana/shared/HA/
scp ucs_ha_class.py /hana/shared/HA/
chown t01adm:sapsys /hana/shared/HA/*
The SAP Storage Connector API provides a way to call a user procedure whenever the SAP HANA Nameserver triggers a node failover. The API requires the files mentioned above.
The procedure is executed on the master nameserver.
To activate the procedure in case of a node failover, the global.ini file in
<HANA installdirectory>/<SID>/global/hdb/custom/config/
must be edited and the following entry must be added:
[Storage]
ha_provider = ucs_ha_class
ha_provider_path = /hana/shared/HA
cd /hana/shared/<SID>/global/hdb/custom/config
vi global.ini
[communication]
internal_network = 192.168.220/24
listeninterface = .internal
[internal_hostname_resolution]
192.168.220.211 = cishana21
192.168.220.212 = cishana22
192.168.220.213 = cishana23
192.168.220.215 = cishana25
192.168.220.218 = cishana28
192.168.220.216 = cishana26
192.168.220.214 = cishana24
192.168.220.217 = cishana27
[persistence]
basepath_datavolumes = /hana/data/ANA
basepath_logvolumes = /hana/log/ANA
[storage]
ha_provider = ucs_ha_class
ha_provider_path = /hana/shared/HA
Modify the /etc/sudoers file and append the following line on all the nodes. By adding this line, the <sid>adm account can execute the listed commands without a password.
cishana01:/ # vi /etc/sudoers
<sid>adm ALL=NOPASSWD: /bin/mount, /bin/umount, /bin/logger, /sbin/multipath, /sbin/multipathd, /usr/bin/sg_persist, /etc/init.d/multipathd, /bin/kill, /usr/bin/lsof, /sbin/vgchange, /sbin/vgscan
To activate the change, please restart the SAP HANA DB.
Test the IPMI connectivity on ALL nodes:
cishana01:~ # ipmitool -I lanplus -H cishana01-ipmi -U sapadm -P cisco power status
Chassis Power is on
Make sure that all the nodes respond to the ipmitool command and that the IP addresses for the IPMI network match the entries in the /etc/hosts file on all the servers.
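To run this check from a single host, the test can be wrapped in a loop over the IPMI hostnames; the names cishana21-ipmi to cishana28-ipmi and the sapadm/cisco credentials are the example values used in this document:
for i in $(seq 21 28); do
  echo -n "cishana${i}-ipmi: "
  ipmitool -I lanplus -H cishana${i}-ipmi -U sapadm -P cisco power status
done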
After the OS for the MapR and HANA nodes is installed, you might consider using clustershell for easier administration. With this tool, you can execute commands simultaneously on all nodes or on a specified subset of nodes. Clustershell can be installed on an independent Linux desktop or administration server that has access to the MapR and HANA cluster nodes, or it can run on one or several nodes within the HANA and/or MapR cluster.
Clustershell is part of the Extra Packages for Enterprise Linux (EPEL) and is therefore used at your own risk:
https://access.redhat.com/solutions/3358
For latest information about clustershell, see the EPEL repository for RHEL 6:
https://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/clustershell.html
There is some content provided by MapR:
https://www.mapr.com/developercentral/code/clush-and-cluster-auditing
Before installing clustershell, create and exchange SSH keys on every host, that is, the host where clustershell runs and all MapR and HANA cluster nodes. Refer to the "SSH Keys" chapters for the MapR and HANA cluster nodes.
Install the EPEL repository:
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
rpm -ivh epel-release-latest-6.noarch.rpm
Verify the EPEL repository:
yum repolist
Install clustershell. The dependencies are libyaml and PyYAML. If these dependencies are not fulfilled, you need to register with RHN first.
yum install clustershell
After installing clustershell, configure the host groups in /etc/clustershell/groups.
In this example, eight HANA nodes and four MapR nodes are part of the clush environment. The group "mapr" contains four nodes named mapr01 - mapr04, and the group "hana" contains eight nodes named cishana01 - cishana08. You can set up more or different groups as you see fit.
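For this example, the /etc/clustershell/groups file could look like the following; the group names and node names must match your environment:
mapr: mapr[01-04]
hana: cishana[01-08]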
The usage of clustershell is quite simple. This example executes "date" on all nodes:
clush -a date
This example shows the version of the compat-sap-c++ package on nodes cishana02 - cishana08:
clush -w cishana[02-08] "rpm -qa|grep compat-sap"
This example executes sed on the mapr node group (nodes mapr01 - mapr04), which changes the line beginning with "#DrCacheSize =" into "DrCacheSize = 614400". Pay attention to how double and single quotation marks are used:
clush -gmapr sed -i "'s/#DrCacheSize =.*/DrCacheSize = 614400/g' /opt/mapr/conf/nfsserver.conf"
Shailendra Mruthunjaya is a Technical Marketing Engineer with the Cisco UCS Solutions and Performance Group. Shailendra has over four years of experience with SAP HANA on the Cisco UCS platform and has designed several SAP landscapes in public and private cloud environments. Currently, his focus is on developing and validating infrastructure best practices for SAP applications on Cisco UCS servers, Cisco Nexus products, and storage technologies.
Matthias Schlarb is part of the Cisco SAP Competence Center in Walldorf, where he conducts workshops with customers and partners. As a Technical Marketing Engineer, he develops best practices for SAP Cloud Infrastructure on the Cisco Unified Computing System.
For their support and contribution to the design, validation, and creation of this CVD, we would like to thank:
· Ralf Klahr, Cisco Systems, Inc.
· Ulrich Kleidon, Cisco Systems, Inc.
· Erik Lillestolen, Cisco Systems, Inc.
· Pramod Ramamurthy, Cisco Systems, Inc.
· Andy Lerner, MapR Technologies, Inc.
· James Sun, MapR Technologies, Inc.