Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI
Design and Deployment of Cisco UCS Server with MapR
Converged Data Platform
Last Updated: September 7, 2016
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, visit:
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2016 Cisco Systems, Inc. All rights reserved.
Table of Contents
Cisco Unified Computing System
Cisco UCS 6332UP Fabric Interconnect
Cisco VIC 1385 Virtual Interface Card
Cisco Unified Computing System Performance Manager
Hardware Requirements for the SAP HANA Database
Deployment Hardware and Software
Cisco Nexus 3000 Series Switch Network Configuration
Cisco Nexus 3548 Initial Configuration
Enable Appropriate Cisco Nexus 3548 Switch Features and Settings
Create Global Policy to Enable Jumbo Frames and Apply the Policy System-Wide
Create VLANs for SAP HANA Traffic
Configure Network Interfaces Connecting to Cisco UCS Fabric Interconnect
Cisco Nexus 9000 Series Switch Network Configuration
Cisco Nexus 9000 A Initial Configuration
Cisco Nexus 9000 B Initial Configuration
Enable Appropriate Cisco Nexus 9000 Series Switch Features and Settings
Cisco Nexus 9000 A and Cisco Nexus 9000 B
Create VLANs for SAP HANA Traffic
Cisco Nexus 9000 A and Cisco Nexus 9000 B
Configure Virtual Port-Channel Domain
Configure Network Interfaces for the VPC Peer Links
Configure Network Interfaces with Cisco UCS Fabric Interconnect
(Optional) Configure Network Interfaces for SAP HANA Backup/Data Source/Replication
(Optional) Management Plane Access for Cisco UCS Servers
Uplink into Existing Network Infrastructure
Initial Setup of Cisco UCS 6332 Fabric Interconnect
Cisco UCS 6332 Fabric Interconnect A
Cisco UCS 6332 Fabric Interconnect B
Upgrade Cisco UCS Manager Software to Version 3.1(1e)
Add Block of IP Addresses for KVM Access
Cisco UCS Blade Chassis Connection Options
Enable Server and Uplink Ports
Acknowledge Rack-Mount Servers
Create Local Disk Configuration Policy
Update Default Maintenance Policy
Set Jumbo Frames in Cisco UCS Fabric
Update Default Network Control Policy to Enable CDP
Create Network Control Policy for Internal Network
Create Service Profile Templates for MapR Servers
MapR Server RAID Configuration
MapR Server Operating System Installation
Operating System Network Configuration
RedHat System Update and OS Customization for MapR Servers
Organizations in every industry are generating and using more data than ever before; from customer transactions and supplier delivery considerations to real-time user-consumption statistics. Without scalable infrastructure that can store, process, and analyze big data sets in real time, companies are unable to use this information to their advantage. The Cisco UCS Integrated Infrastructure for SAP HANA Scale-Out with the Cisco Unified Computing System™ (Cisco UCS) helps companies more easily harness information and make better business decisions that let them stay ahead of the competition. Our solutions help improve access to all of your data; accelerate business decision making with policy-based, simplified management; lower deployment risk; and reduce total cost of ownership (TCO). Our innovations give you the key to unlock the intelligence in your data and interpret it with a new dimension of context and insight to help you create a sustainable, competitive business advantage.
Cisco Validated Designs include systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of customers. The Cisco UCS Integrated Infrastructure for SAP HANA with the MapR Converged Data Platform provides an end-to-end architecture that demonstrates support for multiple SAP HANA workloads with high availability and server redundancy. The SAP HANA Storage TDI solution consists of Cisco UCS C240 M4 C-Series rack mount servers and the next-generation Cisco UCS 6332 Fabric Interconnect with 40 Gigabit Ethernet (GbE) for server management and storage connectivity. Cisco UCS service profiles enable rapid and consistent server configuration, and automation simplifies ongoing system maintenance activities such as deploying firmware updates across the entire cluster as a single operation. Advanced monitoring capabilities raise alarms and send notifications about the health of the entire cluster so that you can proactively address concerns before they affect data analysis. The storage nodes are composed of Cisco UCS servers with the MapR Converged Data Platform, which includes MapR-FS, a modern NFS-mountable distributed file system. MapR-FS is a complete POSIX file system that handles raw disk I/O for big data workloads with direct access to storage hardware, dramatically improving performance, and it scales to thousands of nodes and trillions of files with extremely high throughput. MapR-FS includes enterprise-grade features such as block-level mirroring for mission-critical disaster recovery, as well as load balancing and consistent snapshots for easy data recovery.
Cisco UCS Integrated Infrastructure provides a pre-validated, ready-to-deploy infrastructure, which reduces the time and complexity involved in configuring and validating a traditional data center deployment. The Cisco UCS platform is flexible, reliable, and cost effective, facilitating various application deployment options while remaining easily scalable and manageable. The reference architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of an SAP HANA Storage TDI solution. This document describes the infrastructure installation and configuration required to run SAP HANA Storage TDI on a shared infrastructure.
SAP HANA is SAP SE’s implementation of in-memory database technology. The SAP HANA database takes advantage of low-cost main memory (RAM), the data-processing capabilities of multicore processors, and faster data access to provide better performance for analytical and transactional applications. SAP HANA offers a multi-engine, query-processing environment that supports relational data (with both row- and column-oriented physical representations in a hybrid engine) as well as graph and text processing for semi-structured and unstructured data management within the same system. As an appliance, SAP HANA combines software components from SAP optimized for certified hardware; however, the appliance model requires a preconfigured hardware setup and a preinstalled software package dedicated to SAP HANA. In 2013, SAP introduced the SAP HANA Tailored Datacenter Integration (TDI) option. The TDI option offers a more open and flexible way of integrating SAP HANA into the data center by reusing existing enterprise storage hardware, thereby reducing hardware costs. With the introduction of SAP HANA TDI for shared infrastructure, the Cisco UCS Integrated Infrastructure solution provides the advantage of having the compute, storage, and network stack integrated with the programmability of the Cisco Unified Computing System (Cisco UCS). The SAP HANA TDI option enables organizations to run multiple SAP HANA production systems on a shared infrastructure. It also enables customers to run the SAP application servers and the SAP HANA database on the same infrastructure.
For more information about SAP HANA, see the SAP help portal: http://help.sap.com/hana/
The intended audience for this document includes, but is not limited to: sales engineers, field consultants, professional services, IT managers, partner engineering, and customers deploying the Cisco Integrated Infrastructure for SAP HANA with MapR Converged Data Platform. External references are provided wherever applicable, but readers are expected to be familiar with the technology, infrastructure, and database security policies of the customer installation.
This document describes the steps required to deploy and configure Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI. Cisco’s validation provides further confirmation with regard to component compatibility, connectivity and correct operation of the entire integrated stack. This document showcases one of the variants of Cisco Integrated Infrastructure for SAP HANA. While readers of this document are expected to have sufficient knowledge to install and configure the products used, configuration details that are important to the deployment of this solution are provided in this CVD.
The Cisco UCS Integrated Infrastructure for SAP HANA solution is designed with the next-generation Fabric Interconnect, which provides 40 GbE ports. The solution uses a 40 GbE end-to-end network, including the storage network. The persistent storage is configured on Cisco UCS C240 M4 C-Series servers with the MapR Converged Data Platform. MapR-FS provides distributed, reliable, high-performance, scalable, full read/write data storage for SAP HANA.
The Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI with MapR Converged Data Platform provides a storage solution on Cisco hardware to support multiple SAP HANA workloads with high availability and server redundancy. The solution supports up to 8 x Cisco UCS C240 M4 C-series rack mount servers for storage. Server management and network connectivity is provided by the next generation Cisco UCS Fabric Interconnect 6332 with 40 GbE network bandwidth. The Cisco UCS C240 M4 servers provide persistent storage with MapR Converged Data Platform, which is a modern NFS-mountable distributed file-system. Figure 1 shows the Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI block diagram.
The reference architecture documented in this CVD consists of 4 x Cisco UCS C240 M4 C-Series rack mount servers for storage.
Figure 1 Cisco UCS Integrated Infrastructure for SAP HANA
The Cisco Unified Computing System is a state-of-the-art data center platform that unites computing, network, storage access, and virtualization into a single cohesive system.
The main components of the Cisco Unified Computing System are:
· Computing - The system is based on an entirely new class of computing system that incorporates rack mount and blade servers based on Intel Xeon Processor E5 and E7. The Cisco UCS Servers offer the patented Cisco Extended Memory Technology to support applications with large datasets and allow more virtual machines per server.
· Network - The system is integrated onto a low-latency, lossless, 40-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
· Virtualization - The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
· Storage access - The system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. By unifying the storage access, the Cisco Unified Computing System can access storage over Ethernet (NFS or iSCSI) and Fibre Channel over Ethernet (FCoE). This provides customers with choice for storage access and investment protection. In addition, the server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity, and management for increased productivity.
The Cisco Unified Computing System is designed to deliver:
· A reduced Total Cost of Ownership (TCO) and increased business agility.
· Increased IT staff productivity through just-in-time provisioning and mobility support.
· A cohesive, integrated system, which unifies the technology in the data center.
· Industry standards supported by a partner ecosystem of industry leaders.
Cisco UCS Manager (UCSM) provides unified, embedded, policy-driven management that programmatically controls server, network, and storage resources, so they can be efficiently managed at scale through software. It is embedded on a pair of Cisco UCS 6300 or 6200 Series Fabric Interconnects, supporting an end-to-end Ethernet or Fibre Channel over Ethernet (FCoE) solution of up to 40 Gb and up to 16 Gb Fibre Channel. The manager participates in server, fabric, and storage provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection. Cisco UCS Manager can be accessed through an HTML 5 or Java graphical user interface or a CLI. An open API facilitates integration of Cisco UCS Manager with a wide variety of monitoring, analysis, configuration, deployment, and orchestration tools from other independent software vendors. The API also facilitates customer development through the Cisco UCS PowerTool for PowerShell and a Python SDK.
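As a brief illustration of the open API, the sketch below logs in to the UCS Manager XML API and queries the inventory of managed rack servers. This is a minimal sketch rather than a complete integration; the management IP address and credentials are placeholders.
# Minimal sketch of the UCS Manager XML API (placeholder address and credentials)
# 1. Log in and obtain a session cookie.
curl -k -s -d '<aaaLogin inName="admin" inPassword="password" />' https://<ucsm-vip>/nuova
# The response contains outCookie="..."; use that value in subsequent requests.
# 2. Query all managed C-Series rack units.
curl -k -s -d '<configResolveClass cookie="<cookie>" classId="computeRackUnit" inHierarchical="false" />' https://<ucsm-vip>/nuova
The same queries can be issued through the Cisco UCS PowerTool or the Python SDK instead of raw XML.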
The Cisco UCS 6300 Series Fabric Interconnects are a core part of Cisco UCS, providing both network connectivity and management capabilities for the system. The Cisco UCS 6300 Series offers line-rate, low-latency, lossless 10 and 40 GbE, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. The Cisco UCS 6300 Series provides the management and communication backbone for the Cisco UCS B-Series Blade Servers, 5100 Series Blade Server Chassis, and C-Series Rack Servers managed by Cisco UCS. All servers attached to the fabric interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6300 Series provides both LAN and SAN connectivity for all servers within its domain.
From a networking perspective, the Cisco UCS 6300 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 and 40 GbE ports, switching capacity of 2.56 terabits per second (Tbps), and 320 Gbps of bandwidth per chassis, independent of packet size and enabled services. The product family supports Cisco® low-latency, lossless 10 and 40 GbE unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnect supports multiple traffic classes over a lossless Ethernet fabric from the server through the fabric interconnect. Significant TCO savings can be achieved with an FCoE optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.
The Cisco UCS 6332 Fabric Interconnect is the management and communication backbone for Cisco UCS B-Series Blade Servers, C-Series Rack Servers, and 5100 Series Blade Server Chassis. All servers attached to 6332 Fabric Interconnects become part of one highly available management domain. The Cisco UCS 6332UP 32-Port Fabric Interconnect is a 1-rack-unit 40 GbE, FCoE and Fibre Channel switch offering up to 2.56 Tbps throughput and up to 32 ports. The switch has 32 fixed 40 GbE and FCoE ports. The Cisco UCS 6332UP 32-Port Fabric Interconnect has ports that can be configured for the breakout feature, which supports connectivity between 40 GbE ports and 10 GbE ports. This feature provides backward compatibility to existing hardware that supports 10 GbE. A 40 GbE port can be used as four 10 GbE ports. Using a 40 GbE QSFP, these ports on a Cisco UCS 6300 Series Fabric Interconnect can connect to another fabric interconnect that has four 10 GbE SFPs.
Figure 2 Cisco UCS 6332 UP Fabric Interconnect
The Cisco UCS C240 M4 Rack Server is an enterprise-class server designed to deliver exceptional performance, expandability, and efficiency for storage and I/O-intensive infrastructure workloads. This includes big data analytics, virtualization, and graphics-rich and bare-metal applications.
The Cisco UCS C240 M4 Rack Server delivers outstanding levels of expandability and performance for standalone or Cisco UCS-managed environments in a two rack-unit (2RU) form factor. It provides:
· Dual Intel® Xeon® E5-2600 v3 processors for improved performance suitable for nearly all two-socket applications
· Next-generation double-data-rate 4 (DDR4) memory, 12-Gbps SAS throughput, and NVMe PCIe SSD support
· Innovative Cisco UCS virtual interface card (VIC) support in PCIe or modular LAN-on-motherboard (mLOM) form factor
· Graphics-rich experiences to more virtual users with support for the latest NVIDIA graphics processing units (GPUs)
The Cisco UCS C240 M4 server also offers maximum reliability, availability, and serviceability (RAS) features, including:
· Tool-free CPU insertion
· Easy-to-use latching lid
· Hot-swappable and hot-pluggable components
· Redundant Cisco Flexible Flash SD cards
The Cisco UCS C240 M4 server can be deployed standalone or as part of the Cisco Unified Computing System. Cisco UCS unifies computing, networking, management, virtualization, and storage access into a single integrated architecture that can enable end-to-end server visibility, management, and control in both bare-metal and virtualized environments. With Cisco UCS-managed deployment, Cisco UCS C240 M4 takes advantage of our standards-based unified computing innovations to significantly reduce customers’ TCO and increase business agility.
Figure 3 Cisco UCS C240 M4 Rack Server
The Cisco UCS Virtual Interface Card (VIC) 1385 is a Cisco® innovation. It provides a policy-based, stateless, agile server infrastructure for your data center. This dual-port Enhanced Quad Small Form-Factor Pluggable (QSFP) half-height PCI Express (PCIe) card is designed exclusively for Cisco UCS C-Series Rack Servers. The card supports 40 GbE and Fibre Channel over Ethernet (FCoE). It incorporates Cisco’s next-generation converged network adapter (CNA) technology and offers a comprehensive feature set, providing investment protection for future feature software releases. The card can present more than 256 PCIe standards-compliant interfaces to the host, and these can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the VIC supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology. This technology extends the Cisco UCS Fabric Interconnect ports to virtual machines, simplifying server virtualization deployment.
Figure 4 Cisco UCS 1385 VIC Card
Cisco UCS Performance Manager is a purpose-built data center operations management solution. It unifies the monitoring of key applications, business services, and integrated infrastructures across dynamic, heterogeneous, physical, and virtual Cisco UCS-powered data centers. Cisco UCS Performance Manager uses Cisco UCS APIs to collect data from Cisco UCS Manager to display comprehensive information about all Cisco UCS infrastructure components. With a customizable view, data center staff can see application services and view performance and component or service availability information for Cisco UCS integrated infrastructures.
Cisco UCS Performance Manager dynamically collects information about Cisco UCS servers, network, storage, and virtual machine hosts using an agentless information gathering approach. The solution provides the following:
· Unifies performance monitoring and management of Cisco UCS integrated infrastructure solutions
· Delivers real-time views of fabric and data center switch bandwidth usage and capacity thresholds
· Discovers and creates a relationship model of each system, giving staff a single, accurate view of all components
· Allows staff to navigate into individual Cisco UCS infrastructure components when troubleshooting and resolving issues
Cisco UCS Performance Manager provides deep visibility of Cisco UCS integrated infrastructure performance for service profiles, chassis, fabric extenders, adapters, virtual interface cards, ports, and uplinks for granular data center monitoring. Customers can use Cisco UCS Performance Manager to maintain service-level agreements (SLAs) by managing optimal resource allocation to prevent under-provisioning and avoid performance degradation. And by defining component or application-centric views of critical resources, administrators can monitor SLA health and performance from a single console, eliminating the need for multiple tools.
Cisco UCS Performance Manager Installation and Configuration guide is available at Cisco UCS Performance Manager Install Guide
Cisco’s Unified Computing System is revolutionizing the way servers are managed in the data center. The following are the unique differentiators of the Cisco Unified Computing System and Cisco UCS Manager:
· Embedded management: In the Cisco Unified Computing System, the servers are managed by the embedded firmware in the Fabric Interconnects, eliminating the need for any external physical or virtual devices to manage the servers. A pair of FIs can manage up to 40 chassis, each containing 8 blade servers, which gives enormous scaling of the management plane.
· Unified fabric: In the Cisco Unified Computing System, from the blade server chassis or rack server fabric extender to the FI, a single Ethernet cable is used for LAN, SAN, and management traffic. This converged I/O results in fewer cables, SFPs, and adapters, reducing the capital and operational expenses of the overall solution.
· Auto discovery: By simply inserting a blade server in the chassis or connecting a rack server to the fabric extender, discovery and inventory of the compute resource occur automatically without any management intervention. The combination of unified fabric and auto discovery enables the wire-once architecture of the Cisco Unified Computing System, where compute capability can be extended easily while keeping the existing external connectivity to LAN, SAN, and management networks.
· Policy based resource classification: When a compute resource is discovered by Cisco UCS Manager, it can be automatically classified to a given resource pool based on policies defined. This capability is useful in multi-tenant cloud computing. This CVD focuses on the policy-based resource classification of Cisco UCS Manager.
· Combined rack and blade server management: Cisco UCS Manager can manage B-Series blade servers and C-Series rack servers under the same Cisco UCS domain. This feature, along with stateless computing, makes compute resources truly hardware form-factor agnostic. This CVD focuses on the combination of B-Series and C-Series servers to demonstrate a stateless and form-factor-independent computing workload.
· Model-based management architecture: The Cisco UCS Manager architecture and management database is model-based and data-driven. An open, standards-based XML API is provided to operate on the management model. This enables easy and scalable integration of UCSM with other management systems, such as VMware vCloud Director, Microsoft System Center, and Citrix CloudPlatform.
· Policies, pools, templates: The management approach in Cisco UCS Manager is based on defining policies, pools, and templates instead of cluttered configuration, which enables a simple, loosely coupled, data-driven approach to managing compute, network, and storage resources.
· Loose referential integrity: In Cisco UCS Manager, a service profile, port profile, or policy can refer to other policies or logical resources with loose referential integrity. A referred policy need not exist at the time of authoring the referring policy, and a referred policy can be deleted even though other policies are referring to it. This allows different subject matter experts to work independently of each other and provides great flexibility when experts from different domains, such as network, storage, security, server, and virtualization, work together to accomplish a complex task.
· Policy resolution: In Cisco UCS Manager, a tree structure of organizational unit hierarchy can be created that mimics real-life tenant and/or organizational relationships. Various policies, pools, and templates can be defined at different levels of the organization hierarchy. A policy referring to another policy by name is resolved in the organization hierarchy with the closest policy match. If no policy with the specific name is found in the hierarchy up to the root organization, the special policy named "default" is searched for. This policy resolution practice enables automation-friendly management APIs and provides great flexibility to the owners of different organizations.
· Service profiles and stateless computing: A service profile is a logical representation of a server, carrying its various identities and policies. This logical server can be assigned to any physical compute resource as long as it meets the resource requirements. Stateless computing enables procurement of a server within minutes, which used to take days in legacy server management systems.
· Built-in multi-tenancy support: The combination of policies, pools, and templates, loose referential integrity, policy resolution in the organization hierarchy, and the service-profile-based approach to compute resources makes Cisco UCS Manager inherently friendly to the multi-tenant environments typically observed in private and public clouds.
· Virtualization-aware network: VM-FEX technology makes the access layer of the network aware of host virtualization. This prevents pollution of the compute and network domains with virtualization when the virtual network is managed by port profiles defined by the network administration team. VM-FEX also offloads the hypervisor CPU by performing switching in the hardware, allowing the hypervisor CPU to do more virtualization-related tasks. VM-FEX technology is well integrated with VMware vCenter, Linux KVM, and Hyper-V SR-IOV to simplify cloud management.
· Simplified QoS: Even though fibre-channel and Ethernet are converged in Cisco UCS fabric, built-in support for QoS and lossless Ethernet makes it seamless. Network Quality of Service (QoS) is simplified in Cisco UCS Manager by representing all system classes in one GUI panel.
The MapR Converged Data Platform integrates Hadoop and Spark, real-time database capabilities, and global event streaming with big data enterprise storage, for developing and running innovative data applications. The MapR Platform is powered by the industry’s fastest, most reliable, secure, and open data infrastructure that dramatically lowers TCO and enables global real-time data applications.
MapR Platform Services is the set of core capabilities in the MapR Converged Data Platform. Core components include:
· MapR Streams - a global publish-subscribe event streaming system for big data
· MapR-DB - a high performance, in-Hadoop NoSQL database management system
· MapR-FS - the underlying POSIX file system that provides distributed, reliable, high performance, scalable, and full read/write data storage
The Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI takes advantage of the enterprise grade MapR-FS in MapR Platform Services. It offers the appliance a robust storage layer that is highly available, resilient and performant.
The MapR Converged Data Platform is backed by the robust MapR-FS, which can be accessed through the NFS gateway. The SAP HANA servers mount MapR-FS using an NFS client, and data is persisted to the storage system managed by MapR-FS. MapR-FS is distributed, has a global namespace, provides real-time read/write access, is volume-based and secure, and has many other benefits compared to HDFS. Some important features pertaining to data protection include:
Snapshots are an incredibly efficient and effective approach in protecting data from accidental deletion and corruption due to errors in applications, without the need to actually copy the data within a cluster or to another cluster. The ability to create and manage Snapshots is an essential feature expected from enterprise-grade storage systems, and this capability is increasingly seen as critical with big data systems. A Snapshot is a capture of the state of the storage system at an exact point in time, and is used to provide a full recovery of data when lost. For example, with MapR, you can snapshot petabytes of data in seconds, as we simply maintain pointers to the locations of blocks that make up a Volume of data as opposed to the need to physically copy those petabytes into the local cluster or a remote one.
For additional information, please see the MapR Snapshots technical brief: MapR Snapshots.
Data in MapR-FS can be mirrored synchronously inside the cluster or asynchronously to a separate cluster located at a remote location. Because the mirroring is block-based, only the changed blocks are mirrored across the WAN after the baseline mirroring happened. It is fast and light on network bandwidth utilization compared to file-based mirroring. Most importantly, the connection for mirroring is secure.
By default, MapR-FS uses a 3-way replication model to prevent data loss due to node failure. However, this approach reduces usable storage and increases the overall storage cost. Cisco UCS’s storage controller offers excellent hardware level RAID protection. By combining RAID-5 and lowering the replication factor to 2, we are able to offer an optimally designed appliance that has both high I/O throughput and great usable/raw storage ratio.
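As an illustration of how the lower replication factor is applied, the following maprcli sketch creates a volume with a replication factor of 2; the volume name and path are placeholders, and the options should be verified against the MapR documentation for your release.
maprcli volume create -name hanadata -path /hanadata -replication 2
With plain 3-way replication the usable-to-raw ratio is roughly one third; combining, for example, a 6-disk RAID-5 group (about 5/6 of raw capacity) with 2-way replication raises the ratio to roughly (5/6)/2, or about 42 percent, while the RAID layer still protects against individual disk failures.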
For additional information, please see the MapR File System: MapR-FS.
This section describes the SAP HANA TDI system requirements defined by SAP and the architecture of the Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI.
SAP HANA System on a Single Server - scale-up - is the simplest of the installation types. It is possible to run an SAP HANA system entirely on one host and then scale the system up as needed. All data and processes are located on the same server and can be accessed locally. The network requirements for this option are minimal: one 1 GbE (access) network and one 10 GbE (storage) network are sufficient to run SAP HANA scale-up. The SAP HANA scale-out option is used if the SAP HANA system does not fit into the main memory of a single server based on the rules defined by SAP. In this method, multiple independent servers are combined to form one system and the load is distributed among multiple servers. In a distributed system, each index server is usually assigned to its own host to achieve maximum performance. It is possible to assign different tables to different hosts (partitioning the database), or a single table can be split across hosts (partitioning of tables). SAP HANA scale-out supports failover scenarios and high availability. Individual hosts in a distributed system have different roles (master, worker, slave, standby) depending on the task.
Some use cases are not supported in an SAP HANA scale-out configuration, and it is recommended to check with SAP whether a given use case can be deployed as a scale-out solution.
The network requirements for this option are higher than for scale-up systems. In addition to the client and application access network and the storage access network, a node-to-node network is necessary. One 1 GbE (access), one 10 GbE (node-to-node), and one 10 GbE (storage) network are required to run an SAP HANA scale-out system. Additional network bandwidth is required to support system replication or backup capability.
For additional information, go to: http://saphana.com.
This document does not cover the updated information published by SAP.
SAP HANA supports servers equipped with Intel Xeon processor E7-8880 v4, E7-8890 v4, Intel Xeon processor E7-8880 v3, E7-8890 v3, E7-8890L v3 and Intel Xeon processor E7-2890v2, E7-4890v2, E7-8890v2 CPUs. In addition, the Intel Xeon processor E5-26xx v4 and E5-26xx v3 is supported for scale-up systems with the SAP HANA TDI option.
SAP HANA scale-out solution is supported in the following memory configurations:
· Homogenous symmetric assembly of dual in-line memory modules (DIMMs) for example, DIMM size or speed should not be mixed
· Maximum use of all available memory channels
· Memory of 512 GB to 1.5 TB per 4 Socket Server for SAP NetWeaver Business Warehouse (BW) and DataMart with HANA SPS 10 or below
· Memory of 512 GB to 2 TB per 4 Socket Server for SAP NetWeaver Business Warehouse (BW) and DataMart with HANA SPS 11 or above
A SAP HANA data center deployment can range from a database running on a single host to a complex distributed system. Distributed systems can get complex with multiple hosts located at a primary site having one or more secondary sites; supporting a distributed multi-terabyte database with full fault and disaster recovery.
SAP HANA has different types of network communication channels to support the different SAP HANA scenarios and setups:
· Client zone: Channels used for external access to SAP HANA functions by end-user clients, administration clients, and application servers, and for data provisioning through SQL or HTTP
· Internal zone: Channels used for SAP HANA internal communication within the database or, in a distributed scenario, for communication between hosts
· Storage zone: Channels used for storage access (data persistence) and for backup and restore procedures
Table 1 lists all the networks defined by SAP or Cisco or requested by customers.
Table 1 List of Known Networks
Name | Use Case | Solutions | Bandwidth Requirements | Solution Design
Client Zone Networks
Application Server Network | SAP Application Server to DB communication | All | 1 or 10 GbE | 10 or 40 GbE
Client Network | User / Client Application to DB communication | All | 1 or 10 GbE | 10 or 40 GbE
Data Source Network | Data import and external data integration | Optional for all SAP HANA systems | 1 or 10 GbE | 10 or 40 GbE
Internal Zone Networks
Inter-Node Network | Node to node communication within a scale-out configuration | Scale-Out | 10 GbE | 40 GbE
System Replication Network | SAP HANA System Replication | For SAP HANA Disaster Tolerance | TBD with Customer | TBD with Customer
Storage Zone Networks
Backup Network | Data Backup | Optional for all SAP HANA systems | 10 GbE | 10 or 40 GbE
Storage Network | Node to Storage communication | All | 10 GbE | 20 or 40 GbE
Infrastructure Related Networks
Administration Network | Infrastructure and SAP HANA administration | Optional for all SAP HANA systems | 1 GbE | 10 or 40 GbE
Boot Network | Boot the Operating Systems through PXE/NFS or FCoE | Optional for all SAP HANA systems | 1 GbE | NA
Details about the network requirements for SAP HANA are available in the white paper from SAP SE at: SAP HANA Network requirement.
The networks need to be properly segmented and must be connected to the same core/backbone switch, as shown in Figure 5, based on the customer's high-availability and redundancy requirements for the different SAP HANA network segments.
Figure 5 High-Level SAP HANA Network Overview
Based on the listed network requirements, every HANA server node in a scale-up system must be equipped with two 1 GbE interfaces to establish communication with the application or user (client zone) and one 10 GbE interface for storage access (storage zone).
For scale-out solutions, an additional redundant network for SAP HANA node to node communication with 10 GbE is required (Internal Zone).
For more information on SAP HANA Network security, please refer to SAP HANA Security Guide.
As an in-memory database, SAP HANA uses storage devices to save a copy of the data for the purpose of startup and fault recovery without data loss. The choice of the specific storage technology is driven by various requirements like size, performance, and high availability. To use a storage system with the Tailored Datacenter Integration option, the storage must be certified for the SAP HANA TDI option; see http://global.sap.com/community/ebook/2014-09-02-hana-hardware/enEN/enterprise-storage.html.
All relevant information about storage requirements is documented in the white paper “SAP HANA Storage Requirements” and is available at: http://scn.sap.com/docs/DOC-62595.
SAP can only support performance related SAP HANA topics if the installed solution has passed the validation test successfully.
Refer to SAP HANA Administration Guide section 2.8 Hardware Checks for Tailored Datacenter Integration for Hardware check test tool and the related documentation.
Figure 6 shows the file system layout and the required storage sizes to install and operate SAP HANA. For the Linux OS installation (/(root)), 10 GB of disk size is recommended. Additionally, 50 GB must be provided for /usr/sap as the volume used for SAP software.
While installing SAP HANA on a host, we specify the mount point for the installation binaries (/hana/shared/<SID>), data files (/hana/data/<SID>) and log files (/hana/log/<SID>), where SID is the instance identifier of the SAP HANA installation.
Figure 6 File System Layout for 2 Node Scale-Out System
The filesystem storage sizing is based on the amount of memory installed in the SAP HANA host.
In case of distributed installation of SAP HANA Scale-Out, each server will have the following:
Root-FS: 10 GB
/usr/sap: 50 GB
The installation binaries, trace, and configuration files are stored on a shared filesystem, which should be accessible to all hosts in the distributed installation. The size of the shared filesystem should be equal to one times the memory of a host for every four worker nodes. For example, in a distributed installation with eight hosts with 2 TB of memory each, the shared file system should be 4 TB.
For each HANA host, there should be a mount point for the data volume and one for the log volume.
The size of the data volume file system with the appliance option is three times the host memory:
/hana/data/<sid>/mntXXXXX: 3 x Memory
The size of the data volume file system with the TDI option is one times the host memory:
/hana/data/<sid>/mntXXXXX: 1 x Memory
For solutions based on Intel E7-8800 v3 and E7-8800 v4 CPUs, the size of the log volume must be as follows (a combined sizing example follows this list):
· Half of the server memory for systems ≤ 256 GB memory
· Min. 512 GB for systems with ≥ 512 GB memory
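The sizing rules above can be combined into a single worked example. The following shell sketch uses assumed values (1.5 TB of memory per host, four worker hosts, TDI option) purely for illustration; substitute your own host memory and node count.
# Assumed example: 4 worker hosts with 1536 GB (1.5 TB) of memory each, TDI option
MEM_GB=1536
HOSTS=4
ROOT_GB=10                                       # Linux OS (/)
USRSAP_GB=50                                     # /usr/sap
SHARED_GB=$(( MEM_GB * ( (HOSTS + 3) / 4 ) ))    # /hana/shared: 1 x memory per group of up to 4 worker hosts
DATA_GB=$(( MEM_GB * 1 ))                        # /hana/data per host: 1 x memory (TDI); 3 x memory for the appliance option
LOG_GB=512                                       # /hana/log per host: min. 512 GB for hosts with >= 512 GB memory
echo "shared=${SHARED_GB} GB  data=${DATA_GB} GB per host  log=${LOG_GB} GB per host"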
The supported operating systems for SAP HANA are as follows:
· SUSE Linux Enterprise Server for SAP Applications 11 and 12
· Red Hat Enterprise Linux for SAP HANA 6 and 7
This document provides the installation process for the Red Hat Enterprise Linux for SAP HANA option only.
The infrastructure for an SAP HANA solution must not have a single point of failure. To support high availability, the hardware and software requirements are:
· Internal storage: A RAID-based configuration is preferred
· External storage: Redundant data paths, dual controllers, and a RAID-based configuration are required
· Ethernet switches: Two or more independent switches should be used
SAP HANA scale-out comes with an integrated high-availability function. If an SAP HANA system is configured with a stand-by node, a failed part of SAP HANA will be started on the stand-by node automatically. For automatic host failover, the storage connector API must be properly configured for the implementation and operation of SAP HANA.
Please check the latest information from SAP at: http://saphana.com or http://service.sap.com/notes.
The Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI provides an end-to-end architecture with Cisco hardware that demonstrates support for multiple SAP HANA workloads with high availability and server redundancy. The architecture uses Cisco UCS Manager with Cisco UCS C-Series Rack Servers on the Cisco UCS Fabric Interconnects. The uplinks from the Cisco UCS Fabric Interconnects are connected to the Cisco Nexus 3548 switch for high-availability and failover functionality. The Cisco UCS C-Series Rack Servers are connected directly to the Cisco UCS Fabric Interconnects using the single-wire management feature. This infrastructure is deployed to provide IP-based storage access using the NFS protocol with file-level access to shared storage.
Figure 7 shows the Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI, described in this Cisco Validated Design. It highlights the Cisco UCS Integrated Infrastructure hardware components and the network connections for a configuration with IP-based shared storage.
Figure 7 Physical Topology with Nexus 3000 (Base Design)
The base reference architecture includes:
Cisco Unified Computing System
· 2 x Cisco UCS 6332 32-Port Fabric Interconnects
· 4 x Cisco UCS C240 M4 Rack Servers with Cisco UCS Virtual Interface Card (VIC) 1385
Cisco Network
· For Base design:
— 1 x Cisco Nexus 3548 Switch for 10 Gigabit Ethernet connectivity between the two Cisco UCS Fabric Interconnects for failover scenarios.
The solution can also be designed for the Enterprise Data Center Design option using a pair of Cisco Nexus 9000 Series switches. With two Cisco Nexus switches, vPC is configured between the Cisco Nexus 9000 Series switches and the Cisco UCS Fabric Interconnects, as shown in Figure 8.
· For Enterprise Data Center Design option:
— 2 x Cisco Nexus 9372 Switches for 10 Gigabit Ethernet connectivity between the two Cisco UCS Fabric Interconnects
Figure 8 Physical Topology with Nexus 9000 (Enterprise Data Center Design)
A minimum of 3 x Cisco UCS C240 M4 rack servers is required to install the MapR Converged Data Platform. 3 x Cisco UCS C240 M4 servers can support up to 12 active HANA servers. The scaling of storage to compute nodes is linear: for every four HANA nodes, one Cisco UCS C240 M4 storage server is required to meet the SAP HANA storage performance requirements for TDI defined by SAP SE.
Table 2 Compute Node to Storage Node Ratio
Active HANA Nodes | Appliance Storage Nodes | TDI Storage Nodes
2 | 3 | 3
3 | 3 | 3
4 | 3 | 3
5 | 3 | 3
6 | 3 | 3
7 | 4 | 3
8 | 4 | 3
9 | 5 | 3
10 | 5 | 3
11 | 6 | 3
12 | 6 | 3
13 | 7 | 4
14 | 7 | 4
15 | 8 | 4
16 | 8 | 4
The components can be scaled easily to support specific business requirements. Additional servers can be deployed to increase storage capacity without additional network components. For each storage node, an uplink bandwidth of 40 Gb/s should be budgeted for the HANA-Storage network; therefore, a single Cisco UCS domain (a pair of Cisco UCS 6332 32-port Fabric Interconnects) can support up to 16 Cisco UCS C240 M4 servers while maintaining full uplink bandwidth to 64 SAP HANA compute nodes. The uplink configuration consists of 8 x 40 GbE per Fabric Interconnect, which accumulates to the 640 Gb/s across both fabrics necessary for 16 storage nodes (see the worked example following the notes below).
This CVD does not describe uplink configuration to satisfy the HANA-Storage uplink bandwidth. The uplink configuration in this CVD shows 1 x 40 GbE per Fabric Interconnect only and has to be adapted to the actual number of storage nodes in the cluster.
Overcommitting uplink bandwidth can be done, but SAP HANA storage performance requirements for TDI might not be fulfilled. With such a setting, up to 24 storage nodes can be configured, but there is no performance guarantee.
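The uplink bandwidth arithmetic above can be summarized as a simple worked example; adjust the node count to your deployment.
STORAGE_NODES=16
REQUIRED_GBPS=$(( STORAGE_NODES * 40 ))    # 40 Gb/s budgeted per storage node = 640 Gb/s
PER_FI_GBPS=$(( 8 * 40 ))                  # 8 x 40 GbE uplinks per Fabric Interconnect = 320 Gb/s
TOTAL_UPLINK_GBPS=$(( 2 * PER_FI_GBPS ))   # two Fabric Interconnects = 640 Gb/s
echo "required=${REQUIRED_GBPS} Gb/s  available=${TOTAL_UPLINK_GBPS} Gb/s"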
Figure 9 Overall Uplink Bandwidth
The solution is designed to meet the SAP HANA storage performance requirements for TDI defined by SAP SE. The storage servers connect to the Cisco UCS Fabric Interconnects, which can be connected as an uplink to a data center switch such as the Cisco Nexus 9000. Each storage server is equipped with 2 x 40 GbE ports. The main traffic consists of storage-node-to-storage-node traffic, which takes place within the Fabric Interconnect, and storage-node-to-HANA-node traffic, which travels through the Fabric Interconnect uplinks (either to the Cisco Nexus 9000 or to the existing data center switch).
This document provides details for configuring a fully redundant, highly available configuration for a Cisco UCS C240 M4 Data Platform for SAP HANA Storage TDI.
This document is intended to enable you to fully configure the customer environment. In this process, various steps require you to insert customer-specific naming conventions, IP addresses, and VLAN schemes, as well as to record appropriate MAC addresses. Table 3 lists the configuration variables that are used throughout this document. This table can be completed based on the specific site variables and used in implementing the document configuration steps.
Not all VLANs may be necessary, since this depends on the scenario in which the HANA networks are used. The following is the list of HANA networks with their corresponding variables:
· Client zone
— <<var_client_vlan_id>> Client Network for HANA Data/log VLAN ID
— <<var_appserver_vlan_id>> Application Server Network for HANA Data/log VLAN ID
— <<var_datasource_vlan_id>> Data source Network for HANA Data/log VLAN ID
· Internal zone
— <<var_internal_vlan_id>> Node to Node Network for HANA Data/log VLAN ID
— <<var_replication_vlan_id>> Replication Network for HANA Data/log VLAN ID
· Storage zone
— <<var_storage_vlan_id>> Storage network for HANA Data/log VLAN ID
— <<var_backup_vlan_id>> Backup Network for HANA Data/log VLAN ID
Table 3 Configuration Variables
Variable | Description | Customer Implementation Value
<<var_nexus_HA_hostname>> | Cisco Nexus 3548 HA Switch host name |
<<var_nexus_HA_mgmt0_ip>> | Out-of-band Cisco Nexus 3548 HA Switch management IP address |
<<var_nexus_HA_mgmt0_netmask>> | Out-of-band management network netmask |
<<var_nexus_HA_mgmt0_gw>> | Out-of-band management network default gateway |
<<var_global_ntp_server_ip>> | NTP server IP address |
<<var_oob_vlan_id>> | Out-of-band management network VLAN ID |
<<var_mgmt_vlan_id>> | Management network VLAN ID |
<<var_nexus_vpc_domain_mgmt_id>> | Unique Cisco Nexus switch VPC domain ID for Management Switch |
<<var_nexus_vpc_domain_id>> | Unique Cisco Nexus switch VPC domain ID |
<<var_nexus_A_hostname>> | Cisco Nexus 9000 A host name |
<<var_nexus_A_mgmt0_ip>> | Out-of-band Cisco Nexus 9000 A management IP address |
<<var_nexus_A_mgmt0_netmask>> | Out-of-band management network netmask |
<<var_nexus_A_mgmt0_gw>> | Out-of-band management network default gateway |
<<var_nexus_B_hostname>> | Cisco Nexus 9000 B host name |
<<var_nexus_B_mgmt0_ip>> | Out-of-band Cisco Nexus 9000 B management IP address |
<<var_nexus_B_mgmt0_netmask>> | Out-of-band management network netmask |
<<var_nexus_B_mgmt0_gw>> | Out-of-band management network default gateway |
<<var_mapr-01_vlan_id>> | MapR Internal network 01 VLAN ID |
<<var_mapr-02_vlan_id>> | MapR Internal network 02 VLAN ID |
<<var_mapr-03_vlan_id>> | MapR Internal network 03 VLAN ID |
<<var_storage_vlan_id>> | Storage network for HANA Data/log VLAN ID |
<<var_internal_vlan_id>> | Node to Node Network for HANA Data/log VLAN ID |
<<var_backup_vlan_id>> | Backup Network for HANA Data/log VLAN ID |
<<var_client_vlan_id>> | Client Network for HANA Data/log VLAN ID |
<<var_appserver_vlan_id>> | Application Server Network for HANA Data/log VLAN ID |
<<var_datasource_vlan_id>> | Data source Network for HANA Data/log VLAN ID |
<<var_replication_vlan_id>> | Replication Network for HANA Data/log VLAN ID |
<<var_ucs_clustername>> | Cisco UCS Manager cluster host name |
<<var_ucsa_mgmt_ip>> | Cisco UCS fabric interconnect (FI) A out-of-band management IP address |
<<var_ucsa_mgmt_mask>> | Out-of-band management network netmask |
<<var_ucsa_mgmt_gateway>> | Out-of-band management network default gateway |
<<var_ucs_cluster_ip>> | Cisco UCS Manager cluster IP address |
<<var_ucsb_mgmt_ip>> | Cisco UCS FI B out-of-band management IP address |
The information in this section is provided as a reference for cabling the network and compute components. For connectivity between 40 GbE ports and 10 GbE ports on Cisco switches, Cisco QSFP to four SFP+ copper breakout cables are used. These breakout cables connect a 40 GbE QSFP port of a Cisco UCS Fabric Interconnect on one end to four 10 GbE SFP+ ports of a Cisco switch on the other end.
Figure 10 shows the cabling topology for Cisco UCS Integrated Infrastructure for SAP HANA configuration using the Cisco Nexus 3000.
Figure 10 Cabling Topology with Cisco Nexus 3000
Figure 11 shows the cabling topology for Cisco UCS Integrated Infrastructure for SAP HANA configuration using the pair of Cisco Nexus 9000 series switches.
Figure 11 Cabling Topology with Cisco Nexus 9000
The information in this section is provided as a reference for cabling the network and compute components. To simplify cabling requirements, the tables include both local and remote device and port locations. The tables show the out-of-band management port connectivity into a preexisting management infrastructure; the management port cabling needs to be adjusted accordingly. These management interfaces are used in various configuration steps.
Table 4 through Table 8 provide the details of all the connections.
Table 4 Cisco UCS Fabric Interconnect A - Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS fabric interconnect A | Eth1/1 | 40GbE | Uplink to Customer Data Switch A | Any
| Eth1/2 | 40GbE | Uplink to Customer Data Switch B | Any
| Eth1/3/1 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/3
| Eth1/3/2 | | Cisco Nexus 3000 HA | Eth 1/4
| Eth1/3/3 | | Cisco Nexus 3000 HA | Eth 1/5
| Eth1/3/4 | | Cisco Nexus 3000 HA | Eth 1/6
| Eth1/3/1* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 A | Eth 1/3
| Eth1/3/2* | | Cisco Nexus 9000 A | Eth 1/4
| Eth1/3/3* | | Cisco Nexus 9000 B | Eth 1/3
| Eth1/3/4* | | Cisco Nexus 9000 B | Eth 1/4
| Eth1/21 | 40GbE | Cisco UCS C240-M4-1 | VIC 1385 Port 1
| Eth1/22 | 40GbE | Cisco UCS C240-M4-2 | VIC 1385 Port 1
| Eth1/23 | 40GbE | Cisco UCS C240-M4-3 | VIC 1385 Port 1
| Eth1/24 | 40GbE | Cisco UCS C240-M4-4 | VIC 1385 Port 1
| Eth1/25 | 40GbE | Cisco UCS C240-M4-5 | VIC 1385 Port 1
| Eth1/26 | 40GbE | Cisco UCS C240-M4-6 | VIC 1385 Port 1
| Eth1/27 | 40GbE | Cisco UCS C240-M4-7 | VIC 1385 Port 1
| Eth1/28 | 40GbE | Cisco UCS C240-M4-8 | VIC 1385 Port 1
| MGMT0 | GbE | Customer’s Management Switch | Any
| L1 | GbE | Cisco UCS fabric interconnect B | L1
| L2 | GbE | Cisco UCS fabric interconnect B | L2
* The ports Eth1/3/1-4 are used with the paired Cisco Nexus 9000 switches design option.
Table 5 Cisco UCS Fabric Interconnect B - Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS fabric interconnect B | Eth1/1 | 40GbE | Uplink to Customer Data Switch A | Any
| Eth1/2 | 40GbE | Uplink to Customer Data Switch B | Any
| Eth1/3/1 | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 3000 HA | Eth 1/7
| Eth1/3/2 | | Cisco Nexus 3000 HA | Eth 1/8
| Eth1/3/3 | | Cisco Nexus 3000 HA | Eth 1/9
| Eth1/3/4 | | Cisco Nexus 3000 HA | Eth 1/10
| Eth1/3/1* | 40GbE QSFP to 4 SFP+ break-out cables | Cisco Nexus 9000 A | Eth 1/5
| Eth1/3/2* | | Cisco Nexus 9000 A | Eth 1/6
| Eth1/3/3* | | Cisco Nexus 9000 B | Eth 1/5
| Eth1/3/4* | | Cisco Nexus 9000 B | Eth 1/6
| Eth1/21 | 40GbE | Cisco UCS C240-M4-1 | VIC 1385 Port 2
| Eth1/22 | 40GbE | Cisco UCS C240-M4-2 | VIC 1385 Port 2
| Eth1/23 | 40GbE | Cisco UCS C240-M4-3 | VIC 1385 Port 2
| Eth1/24 | 40GbE | Cisco UCS C240-M4-4 | VIC 1385 Port 2
| Eth1/25 | 40GbE | Cisco UCS C240-M4-5 | VIC 1385 Port 2
| Eth1/26 | 40GbE | Cisco UCS C240-M4-6 | VIC 1385 Port 2
| Eth1/27 | 40GbE | Cisco UCS C240-M4-7 | VIC 1385 Port 2
| Eth1/28 | 40GbE | Cisco UCS C240-M4-8 | VIC 1385 Port 2
| MGMT0 | GbE | Customer’s Management Switch | Any
| L1 | GbE | Cisco UCS fabric interconnect A | L1
| L2 | GbE | Cisco UCS fabric interconnect A | L2
* The ports Eth1/3/1-4 are used with the paired Cisco Nexus 9000 switches design option.
Table 6 Cisco Nexus 3000 - Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco Nexus 3000 HA | Eth1/3 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/1
| Eth1/4 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/2
| Eth1/5 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/3
| Eth1/6 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/4
| Eth1/7 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/1
| Eth1/8 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/2
| Eth1/9 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/3
| Eth1/10 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/4
| MGMT0 | GbE | Customer’s Management Switch | Any
Table 7 Cisco Nexus 9000-A - Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco Nexus 9000 A | Eth1/1 | 10GbE | Uplink to Customer Data Switch A | Any
| Eth1/2 | 10GbE | Uplink to Customer Data Switch B | Any
| Eth1/3 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/1
| Eth1/4 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/2
| Eth1/5 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/1
| Eth1/6 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/2
| Eth1/9* | 10GbE | Cisco Nexus 9000 B | Eth1/9
| Eth1/10* | 10GbE | Cisco Nexus 9000 B | Eth1/10
| Eth1/11* | 10GbE | Cisco Nexus 9000 B | Eth1/11
| Eth1/12* | 10GbE | Cisco Nexus 9000 B | Eth1/12
| MGMT0 | GbE | Customer’s Management Switch | Any
* The ports ETH1/9-12 can be replaced with E2/1 and E2/2 for 40G connectivity.
For devices requiring GbE connectivity, use the GbE Copper SFP+s (GLC–T=).
Table 8 Cisco Nexus 9000-B - Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco Nexus 9000 B | Eth1/1 | 10GbE | Uplink to Customer Data Switch A | Any
| Eth1/2 | 10GbE | Uplink to Customer Data Switch B | Any
| Eth1/3 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/3
| Eth1/4 | 10GbE | Cisco UCS fabric interconnect A | Eth1/3/4
| Eth1/5 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/3
| Eth1/6 | 10GbE | Cisco UCS fabric interconnect B | Eth1/3/4
| Eth1/9* | 10GbE | Cisco Nexus 9000 A | Eth1/9
| Eth1/10* | 10GbE | Cisco Nexus 9000 A | Eth1/10
| Eth1/11* | 10GbE | Cisco Nexus 9000 A | Eth1/11
| Eth1/12* | 10GbE | Cisco Nexus 9000 A | Eth1/12
| MGMT0 | GbE | Customer’s Management Switch | Any
* The ports ETH1/9-12 can be replaced with E2/1 and E2/2 for 40G connectivity.
For devices requiring GbE connectivity, use the GbE Copper SFP+s (GLC–T=).
Table 9 details the software revisions used for validating various components of the Cisco UCS Integrated Infrastructure for SAP HANA.
Table 9 Hardware and Software Components
Vendor | Product | Version
Cisco | Cisco UCSM | 3.1(1e)
Cisco | Cisco UCS 6332 Fabric Interconnect | 5.0(3)N2(3.11e)
Cisco | Cisco UCS C240 M4 Servers | 2.0(9c) – CIMC Controller
Cisco | Cisco UCS VIC 1385 | 4.1(1d)
Cisco | Cisco Nexus 9372PX | 7.0(3)I2(2a)
Cisco | Cisco Nexus 3548 | 6.0(2)A6(5a)
Red Hat | Red Hat Enterprise Linux (RHEL) | 6.7 (64 bit)
MapR | MapR Converged Data Platform | 5.1
This section provides the details to configure the Cisco Nexus 3548 switch for high availability in an SAP HANA Storage TDI environment. The switch configuration in this section is based on the cabling plan described in the device cabling section. If the systems are connected on different ports, configure the switches accordingly, following the guidelines described in this section.
These steps describe the initial Cisco Nexus 3548 Series Switch setup.
To set up the initial configuration for the first Cisco Nexus switch complete the following steps:
On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start and attempt to enter Power on Auto Provisioning.
Abort Power On Auto Provisioning and continue with normal setup ?(yes/no)[n]:yes
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin":
Confirm the password for "admin":
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
Please register Cisco Nexus 3500 Family devices promptly with your
supplier. Failure to register may affect response times for initial
service calls. Nexus devices must be registered to receive entitled
support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_nexus_HA_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_nexus_HA_mgmt0_ip>>
Mgmt0 IPv4 netmask : <<var_nexus_HA_mgmt0_netmask>>
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway : <<var_nexus_HA_mgmt0_gw>>
Enable the telnet service? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) : rsa
Number of key bits <768-2048> : 2048
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : <<var_global_ntp_server_ip>>
Configure default interface layer (L3/L2) [L2]:
Configure default switchport interface state (shut/noshut) [noshut]:
Configure CoPP System Policy Profile ( default / l2 / l3 ) [default]:
The following configuration will be applied:
switchname <<var_nexus_HA_hostname>>
interface mgmt0
ip address <<var_nexus_HA_mgmt0_ip>> <<var_nexus_HA_mgmt0_netmask>>
vrf context management
ip route 0.0.0.0/0 <<var_nexus_A_mgmt0_gw>>
no shutdown
no telnet server enable
ssh key rsa 2048 force
ssh server enable
ntp server <<var_global_ntp_server_ip>>
system default switchport
no system default switchport shutdown
policy-map type control-plane copp-system-policy ( default )
Would you like to edit the configuration? (yes/no) [n]:
Use this configuration and save it? (yes/no) [y]:
[########################################] 100%
Copy complete, now saving to disk (please wait)...
The following commands enable the switch features required for this design:
1. On each Nexus 3548, enter configuration mode:
config terminal
2. Use the following commands to enable the necessary features:
feature lacp
feature interface-vlan
feature lldp
3. Save the running configuration to start-up:
copy run start
To create a global policy to enable jumbo frames and apply it system wide, run the following commands from configuration mode:
policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
system qos
service-policy type network-qos jumbo
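To verify that the jumbo frame policy is active, the following show commands can be used as a quick check (exact output and command availability vary slightly by NX-OS release; the interface number is taken from the cabling examples in this guide):
show policy-map type network-qos
show queuing interface ethernet 1/3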
To create the necessary VLANs, complete the following step on both switches:
1. From the configuration mode, run the following commands:
vlan <<var_storage_vlan_id>>
name HANA-Storage
vlan <<var_internal_vlan_id>>
name HANA-Internal
vlan <<var_mapr-01_vlan_id>>
name MapR-01
vlan <<var_mapr-02_vlan_id>>
name MapR-02
vlan <<var_mapr-03_vlan_id>>
name MapR-03
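A quick way to confirm that the VLANs were created with the intended names (assuming the <<var_...>> placeholders above have been replaced with real VLAN IDs) is:
show vlan brief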
To configure the network interfaces connecting to the Cisco UCS fabric interconnects, complete the following steps:
1. Define a port description for the interface connecting to <<var_ucs_clustername>>-A.
interface Eth1/3-6
description <<var_ucs_clustername>>-A:1/3
2. Apply it to a port channel and bring up the interface.
interface eth1/3-6
channel-group 3 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_ucs_clustername>>-A.
interface Po3
description <<var_ucs_clustername>>-A
4. Make the port-channel a switchport, and configure a trunk to allow internal HANA VLANs
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_internal_vlan_id>>,<<var_mapr-01_vlan_id>>,<<var_mapr-02_vlan_id>>,<<var_mapr-03_vlan_id>>
5. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
spanning-tree bpduguard enable
6. Bring up the port-channel.
no shutdown
7. Define a port description for the interface connecting to <<var_ucs_clustername>>-B.
interface Eth1/7-10
description <<var_ucs_clustername>>-B:1/3
8. Apply it to a port channel and bring up the interface.
interface Eth1/7-10
channel-group 4 mode active
no shutdown
9. Define a description for the port-channel connecting to <<var_ucs_clustername>>-B.
interface Po4
description <<var_ucs_clustername>>-B
10. Make the port-channel a switchport, and configure a trunk to allow internal HANA VLANs.
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_internal_vlan_id>>,<<var_mapr-01_vlan_id>>,<<var_mapr-02_vlan_id>>,<<var_mapr-03_vlan_id>>
11. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
spanning-tree bpduguard enable
12. Bring up the port-channel.
no shutdown
13. Save the running configuration to start-up configuration.
copy run start
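After both port channels are configured, their LACP state and allowed VLANs can be spot-checked on the Cisco Nexus 3548 with the following commands; Po3 and Po4 will only show their member interfaces in the (P) state once the corresponding uplink port channels are also configured on the Cisco UCS fabric interconnects later in this document:
show port-channel summary
show interface trunk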
This section details the procedure to configure the Cisco Nexus 9000 switches for the SAP HANA environment. The switch configuration in this section is based on the cabling plan described in the Device Cabling section. If the systems are connected on different ports, configure the switches accordingly, following the guidelines described in this section.
The configuration steps detailed in this section provide guidance for configuring the Cisco Nexus 9000 running release 7.0(3)I2(2a) within a multi-VDC environment. These steps provide the details for the initial Cisco Nexus 9000 Series Switch setup.
To set up the initial configuration for the first Cisco Nexus switch complete the following steps:
On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start and attempt to enter Power on Auto Provisioning.
---- Basic System Configuration Dialog VDC: 1 ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Do you want to enforce secure password standard (yes/no) [y]:
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_nexus_A_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_nexus_A_mgmt0_ip>>
Mgmt0 IPv4 netmask : <<var_nexus_A_mgmt0_netmask>>
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway : <<var_nexus_A_mgmt0_gw>>
Configure advanced IP options? (yes/no) [n]:
Enable the telnet service? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) [rsa]:
Number of rsa key bits <1024-2048> [2048]:
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : <<var_global_ntp_server_ip>>
Configure CoPP system profile (strict/moderate/lenient/dense/skip) [strict]:
The following configuration will be applied:
password strength-check
switchname <<var_nexus_A_hostname>>
vrf context management
ip route 0.0.0.0/0 <<var_nexus_A_mgmt0_gw>>
exit
no feature telnet
ssh key rsa 2048 force
feature ssh
ntp server <<var_global_ntp_server_ip>>
copp profile strict
interface mgmt0
ip address <<var_nexus_A_mgmt0_ip>> <<var_nexus_A_mgmt0_netmask>>
no shutdown
Would you like to edit the configuration? (yes/no) [n]: Enter
Use this configuration and save it? (yes/no) [y]: Enter
[########################################] 100%
Copy complete.
To set up the initial configuration for the second Cisco Nexus switch complete the following steps:
On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start and attempt to enter Power on Auto Provisioning.
---- Basic System Configuration Dialog VDC: 1 ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_nexus_B_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_nexus_B_mgmt0_ip>>
Mgmt0 IPv4 netmask : <<var_nexus_B_mgmt0_netmask>>
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway : <<var_nexus_B_mgmt0_gw>>
Configure advanced IP options? (yes/no) [n]:
Enable the telnet service? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) [rsa]:
Number of rsa key bits <1024-2048> [2048]:
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : <<var_global_ntp_server_ip>>
Configure default interface layer (L3/L2) [L3]: L2
Configure default switchport interface state (shut/noshut) [shut]: Enter
Configure CoPP system profile (strict/moderate/lenient/dense/skip) [strict]:
The following configuration will be applied:
password strength-check
switchname <<var_nexus_B_hostname>>
vrf context management
ip route 0.0.0.0/0 <<var_nexus_B_mgmt0_gw>>
exit
no feature telnet
ssh key rsa 2048 force
feature ssh
ntp server <<var_global_ntp_server_ip>>
copp profile strict
interface mgmt0
ip address <<var_nexus_B_mgmt0_ip>> <<var_nexus_B_mgmt0_netmask>>
no shutdown
Would you like to edit the configuration? (yes/no) [n]: Enter
Use this configuration and save it? (yes/no) [y]: Enter
[########################################] 100%
Copy complete.
The following commands enable the IP switching feature and set the default spanning tree behaviors:
1. On each Nexus 9000, enter configuration mode:
config terminal
2. Use the following commands to enable the necessary features:
feature udld
feature lacp
feature vpc
feature interface-vlan
feature lldp
3. Configure spanning tree defaults:
spanning-tree port type network default
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
4. Save the running configuration to start-up:
copy run start
To create the necessary VLANs, complete the following step on both switches:
1. From the configuration mode, run the following commands:
vlan <<var_storage_vlan_id>>
name HANA-Storage
vlan <<var_mgmt_vlan_id>>
name Management
vlan <<var_internal_vlan_id>>
name HANA-Internal
vlan <<var_backup_vlan_id>>
name HANA-Backup
vlan <<var_client_vlan_id>>
name HANA-Client
vlan <<var_appserver_vlan_id>>
name HANA-AppServer
vlan <<var_datasource_vlan_id>>
name HANA-DataSource
vlan <<var_replication_vlan_id>>
name HANA-Replication
vlan <<var_mapr-01_vlan_id>>
name MapR-01
vlan <<var_mapr-02_vlan_id>>
name MapR-02
vlan <<var_mapr-03_vlan_id>>
name MapR-03
To configure vPCs for switch A, complete the following steps:
1. From the global configuration mode, create a new vPC domain:
vpc domain <<var_nexus_vpc_domain_id>>
2. Make Nexus 9000A the primary vPC peer by defining a low priority value:
role priority 10
3. Use the management interfaces on the supervisors of the Nexus 9000s to establish a keepalive link:
peer-keepalive destination <<var_nexus_B_mgmt0_ip>> source <<var_nexus_A_mgmt0_ip>>
4. Enable following features for this vPC domain:
peer-switch
delay restore 150
peer-gateway
auto-recovery
To configure vPCs for switch B, complete the following steps:
1. From the global configuration mode, create a new vPC domain:
vpc domain <<var_nexus_vpc_domain_id>>
2. Make Cisco Nexus 9000 B the secondary vPC peer by defining a higher priority value than that of the Nexus 9000 A:
role priority 20
3. Use the management interfaces on the supervisors of the Cisco Nexus 9000s to establish a keepalive link:
peer-keepalive destination <<var_nexus_A_mgmt0_ip>> source <<var_nexus_B_mgmt0_ip>>
4. Enable following features for this vPC domain:
peer-switch
delay restore 150
peer-gateway
auto-recovery
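Once the vPC domain is configured on both switches and the mgmt0 interfaces can reach each other, the keepalive status can be verified on either switch; the peer link itself is configured in the following steps, so the full vPC peer status is not expected to be complete yet:
show vpc peer-keepalive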
To configure the vPC peer link on Cisco Nexus 9000 A, complete the following steps:
1. Define a port description for the interfaces connecting to the vPC peer <<var_nexus_B_hostname>>.
interface Eth2/1
description VPC Peer <<var_nexus_B_hostname>>:2/1
interface Eth2/2
description VPC Peer <<var_nexus_B_hostname>>:2/2
2. Apply a port channel to both VPC Peer links and bring up the interfaces.
interface Eth2/1-2
channel-group 1 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_nexus_B_hostname>>.
interface Po1
description vPC peer-link
4. Make the port-channel a switchport, and configure a trunk to allow HANA VLANs
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_mgmt_vlan_id>>,<<var_internal_vlan_id>>,<<var_backup_vlan_id>>, <<var_client_vlan_id>>, <<var_appserver_vlan_id>>, <<var_datasource_vlan_id>>,
<<var_replication_vlan_id>>
5. Make this port-channel the VPC peer link and bring it up.
spanning-tree port type network
vpc peer-link
no shutdown
To configure the vPC peer link on Cisco Nexus 9000 B, complete the following steps:
1. Define a port description for the interfaces connecting to the vPC peer <<var_nexus_A_hostname>>.
interface Eth2/1
description VPC Peer <<var_nexus_A_hostname>>:2/1
interface Eth2/2
description VPC Peer <<var_nexus_A_hostname>>:2/2
2. Apply a port channel to both VPC peer links and bring up the interfaces.
interface Eth2/1-2
channel-group 1 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_nexus_A_hostname>>.
interface Po1
description vPC peer-link
4. Make the port-channel a switchport, and configure a trunk to allow HANA VLANs.
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_mgmt_vlan_id>>,<<var_internal_vlan_id>>,<<var_backup_vlan_id>>, <<var_client_vlan_id>>, <<var_appserver_vlan_id>>, <<var_datasource_vlan_id>>,
<<var_replication_vlan_id>>
5. Make this port-channel the VPC peer link and bring it up.
spanning-tree port type network
vpc peer-link
no shutdown
To configure the port channels connecting to the Cisco UCS fabric interconnects, complete the following steps on both Cisco Nexus 9000 switches:
1. Define a port description for the interface connecting to <<var_ucs_clustername>>-A.
interface Eth1/3-4
description <<var_ucs_clustername>>-A:1/3
2. Apply it to a port channel and bring up the interface.
interface eth1/3-4
channel-group 11 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_ucs_clustername>>-A.
interface Po11
description <<var_ucs_clustername>>-A
4. Make the port-channel a switchport, and configure a trunk to allow all HANA VLANs
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_mgmt_vlan_id>>,<<var_internal_vlan_id>>,<<var_backup_vlan_id>>, <<var_client_vlan_id>>, <<var_appserver_vlan_id>>, <<var_datasource_vlan_id>>,
<<var_replication_vlan_id>>
5. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
6. Set the MTU to be 9216 to support jumbo frames.
mtu 9216
7. Make this a VPC port-channel and bring it up.
vpc 11
no shutdown
8. Define a port description for the interface connecting to <<var_ucs_clustername>>-B.
interface Eth1/5-6
description <<var_ucs_clustername>>-B:1/3
9. Apply it to a port channel and bring up the interface.
interface Eth1/5-6
channel-group 12 mode active
no shutdown
10. Define a description for the port-channel connecting to <<var_ucs_clustername>>-B.
interface Po12
description <<var_ucs_clustername>>-B
11. Make the port-channel a switchport, and configure a trunk to allow all HANA VLANs.
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_storage_vlan_id>>,<<var_mgmt_vlan_id>>,<<var_internal_vlan_id>>,<<var_backup_vlan_id>>, <<var_client_vlan_id>>, <<var_appserver_vlan_id>>, <<var_datasource_vlan_id>>,
<<var_replication_vlan_id>>
12. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
13. Set the MTU to be 9216 to support jumbo frames.
mtu 9216
14. Make this a VPC port-channel and bring it up.
vpc 12
no shutdown
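With the peer link and vPCs 11 and 12 configured on both switches, the overall vPC state can be verified with the commands below; the peer status should show the adjacency as formed, and the per-vPC status comes up once the Cisco UCS port channels described later in this document are configured (output formats vary by NX-OS release):
show vpc
show vpc consistency-parameters global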
You can define a dedicated port channel for each network type to provide dedicated bandwidth. The following example creates a port channel for the Backup network; these cables are connected to the storage used for backup. The example assumes two ports (Ethernet 1/29 and 1/30) are connected to a dedicated NFS storage system used to back up SAP HANA.
1. Define a port description for the interface connecting to <<var_node01>>.
interface Eth1/29
description <<var_backup_node01>>:<<Port_Number>>
2. Apply it to a port channel and bring up the interface.
interface eth1/29
channel-group 21 mode active
no shutdown
3. Define a description for the port-channel connecting to <<var_backup_node01>>.
interface Po21
description <<var_backup_node01>>
4. Make the port-channel a switchport, and configure a trunk to allow NFS VLAN for DATA.
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_backup_vlan_id>>
5. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
6. Set the MTU to be 9216 to support jumbo frames.
mtu 9216
7. Make this a VPC port-channel and bring it up.
vpc 21
no shutdown
8. Define a port description for the interface connecting to <<var_node02>>.
interface Eth1/30
description <<var_backup_node02>>:<<Port_Number>>
9. Apply it to a port channel and bring up the interface.
channel-group 22 mode active
no shutdown
10. Define a description for the port-channel connecting to <<var_node02>>.
interface Po22
description <<var_backup_node02>>
11. Make the port-channel a switchport, and configure a trunk to allow NFS VLAN for DATA
switchport
switchport mode trunk
switchport trunk allowed vlan <<var_backup_vlan_id>>
12. Make the port channel and associated interfaces spanning tree edge ports.
spanning-tree port type edge trunk
13. Set the MTU to be 9216 to support jumbo frames.
mtu 9216
14. Make this a VPC port-channel and bring it up.
vpc 22
no shutdown
This is an optional step that you can use to implement management plane access for the Cisco UCS servers.
To enable management access across the IP switching environment, complete the following steps:
You may want to create a dedicated Switch Virtual Interface (SVI) on the Nexus data plane to test and troubleshoot the management plane. If an L3 interface is deployed, be sure it is deployed on both Cisco Nexus 9000 switches to ensure Type-2 vPC consistency.
1. Define a port description for the interface connecting to the management plane.
interface Eth1/<<interface_for_in_band_mgmt>>
description IB-Mgmt:<<mgmt_uplink_port>>
2. Apply it to a port channel and bring up the interface.
channel-group 6 mode active
no shutdown
3. Define a description for the port-channel connecting to management switch.
interface Po6
description IB-Mgmt
4. Make the port channel and associated interfaces normal spanning tree ports.
spanning-tree port type normal
5. Make this a VPC port-channel and bring it up.
vpc 6
no shutdown
6. Save the running configuration to start-up in both Nexus 9000s.
copy run start
Depending on the available network infrastructure, several methods and features can be used to uplink the SAP HANA Storage TDI environment. If an existing Cisco Nexus environment is present, Cisco recommends using vPCs to uplink the Cisco Nexus 9000 switches in the SAP HANA Storage TDI environment to the existing infrastructure. The previously described procedures can be used to create an uplink vPC to the existing environment. Make sure to run copy run start to save the configuration on each switch after configuration is completed.
This section describes the specific configurations on Cisco UCS servers to address SAP HANA Storage TDI requirements.
The following sections detail how to configure the Cisco Unified Computing System (Cisco UCS) for use in the SAP HANA Scale-Out solution environment. These steps are necessary to provision the Cisco UCS C-Series servers to meet the SAP HANA Storage TDI requirements.
To configure the Cisco UCS Fabric Interconnect A, complete the following steps:
1. Connect to the console port on the first Cisco UCS 6332 Fabric Interconnect.
Enter the configuration method: console
Enter the setup mode; setup newly or restore from backup.(setup/restore)? setup
You have chosen to setup a new fabric interconnect? Continue? (y/n): y
Enforce strong passwords? (y/n) [y]: y
Enter the password for "admin": <<var_password>>
Enter the same password for "admin": <<var_password>>
Is this fabric interconnect part of a cluster (select 'no' for standalone)? (yes/no) [n]: y
Which switch fabric (A|B): A
Enter the system name: <<var_ucs_clustername>>
Physical switch Mgmt0 IPv4 address: <<var_ucsa_mgmt_ip>>
Physical switch Mgmt0 IPv4 netmask: <<var_ucsa_mgmt_mask>>
IPv4 address of the default gateway: <<var_ucsa_mgmt_gateway>>
Cluster IPv4 address: <<var_ucs_cluster_ip>>
Configure DNS Server IPv4 address? (yes/no) [no]: y
DNS IPv4 address: <<var_nameserver_ip>>
Configure the default domain name? y
Default domain name: <<var_dns_domain_name>>
Join centralized management environment (UCS Central)? (yes/no) [n]: Enter
2. Review the settings printed to the console. If they are correct, answer yes to apply and save the configuration.
3. Wait for the login prompt to make sure that the configuration has been saved.
To configure the Cisco UCS Fabric Interconnect B, complete the following steps:
1. Connect to the console port on the second Cisco UCS 6332 Fabric Interconnect.
Enter the configuration method: console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Do you want to continue {y|n}? y
Enter the admin password for the peer fabric interconnect: <<var_password>>
Physical switch Mgmt0 IPv4 address: <<var_ucsb_mgmt_ip>>
Apply and save the configuration (select ‘no’ if you want to re-enter)? (yes/no): y
2. Wait for the login prompt to make sure that the configuration has been saved.
To log in to the Cisco Unified Computing System (UCS) environment, complete the following steps:
1. Open a web browser and navigate to the Cisco UCS 6332 Fabric Interconnect cluster address.
2. Click the Launch Cisco UCS Manager link to download the Cisco UCS Manager software.
3. If prompted to accept security certificates, accept as necessary.
4. When prompted, enter admin as the user name and enter the administrative password.
5. Click Login to log in to the Cisco UCS Manager.
Figure 12 Cisco UCS Manager Log In
This document assumes the use of Cisco UCS Manager Software version 3.1(1e). To upgrade the Cisco UCS Manager software and the Cisco UCS 6332 Fabric Interconnect software to version 3.1(1e), refer to Cisco UCS Manager Install and Upgrade Guides.
To create a block of IP addresses for server Keyboard, Video, Mouse (KVM) access in the Cisco UCS environment, complete the following steps:
This block of IP addresses should be in the same subnet as the management IP addresses for the Cisco UCS Manager.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root > IP Pools > IP Pool ext-mgmt.
3. In the Actions pane, select Create Block of IP Addresses.
4. Enter the starting IP address of the block and the number of IP addresses required, and the subnet and gateway information.
5. Click OK to create the IP block.
6. Click OK in the confirmation message.
To synchronize the Cisco UCS environment to the NTP server, complete the following steps:
1. In Cisco UCS Manager, click the Admin tab in the navigation pane.
2. Select All > Timezone Management.
3. In the Properties pane, select the appropriate time zone in the Timezone menu.
4. Click Save Changes, and then click OK.
5. Click Add NTP Server.
6. Enter <<var_global_ntp_server_ip>> and click OK.
7. Click OK.
For the Cisco UCS 6300 Series Fabric Extenders, two configuration options are available: pinning and port-channel. Each SAP HANA node communicates with every other SAP HANA node using multiple I/O streams, which makes the port-channel option the most suitable configuration.
Setting the discovery policy simplifies the addition of Cisco UCS B-Series chassis and of additional fabric extenders for further C-Series connectivity.
To modify the chassis discovery policy, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane and select Equipment in the list on the left.
2. In the right pane, click the Policies tab.
3. Under Global Policies, set the Chassis/FEX Discovery Policy to match the number of uplink ports that are cabled between the chassis or fabric extenders (FEXes) and the fabric interconnects.
4. Set the Link Grouping Preference to Port Channel.
5. Click Save Changes.
6. Click OK.
Figure 13 Chassis Discovery Policy
To enable the server and uplink ports, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand Ethernet Ports.
4. Select the ports that are connected to the Cisco UCS C-Series servers, right-click them, and select Configure as Server Port.
5. Click Yes to confirm server ports and click OK.
6. Verify that the ports connected to the Cisco UCS C-Series servers are now configured as server ports.
Figure 14 Cisco UCS – Port Configuration Example
7. Select the ports that are connected to the Cisco Nexus switches, right-click them, and select Configure as Uplink Port.
8. Click Yes to confirm uplink ports and click OK.
9. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate) > Fixed Module.
10. Expand Ethernet Ports.
11. Select the ports that are connected to the Cisco UCS C-Series servers, right-click them, and select Configure as Server Port.
12. Click Yes to confirm server ports and click OK.
13. Select ports that are connected to the Cisco Nexus switches, right-click them, and select Configure as Uplink Port.
14. Click Yes to confirm the uplink ports and click OK.
The 40 GbE ports on the Cisco UCS 6332 Fabric Interconnects can be configured as four 10 GbE breakout ports using a supported breakout cable.
Configuring breakout ports requires rebooting the Fabric Interconnect. Any existing configuration on a port is erased. It is recommended to break out all required ports in a single transaction.
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand Ethernet Ports.
4. Select ports that are connected to the Cisco Nexus 3548 switches, right-click them, and select Configure Breakout Port.
5. Click Yes to confirm the Breakout ports and click OK.
6. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate) > Fixed Module.
7. Expand Ethernet Ports.
8. Select ports that are connected to the Cisco Nexus 3548 switches, right-click them, and select Configure Breakout Port.
9. Click Yes to confirm the Breakout ports and click OK.
When you configure a breakout port, you can configure each 10 GB sub-port as server, uplink, FCoE uplink, FCoE storage or appliance as required. Unified Ports cannot be configured as Breakout Ports.
To acknowledge all Cisco UCS rack mount servers, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Expand Rack Mounts and FEX.
3. Right-click each server that is listed and select Acknowledge Server.
4. Click Yes and then click OK to complete acknowledging the rack mount servers.
Separate uplink port channels are defined for each of the network zones specified by SAP. For example, port channel 11 is created on fabric interconnect A and port channel 12 on fabric interconnect B for the Client zone network.
To configure the necessary port channels out of the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. In this procedure, two port channels are created: one from fabric A to both Cisco Nexus switches and one from fabric B to both Cisco Nexus switches.
3. Under LAN > LAN Cloud, expand the Fabric A tree.
4. Right-click Port Channels.
5. Select Create Port Channel.
6. Enter 11 as the unique ID of the port channel.
7. Enter vPC-11-Nexus as the name of the port channel.
Figure 15 Cisco UCS – Port Channel Wizard
8. Click Next.
9. Select the following ports to be added to the port channel:
— Slot ID 1 and port 1
— Slot ID 1 and port 2
10. If breakout cables are used for uplink connectivity, select the following ports to be added to the port channel instead:
— Slot ID 1, Aggregated Port ID 1 and port 1
— Slot ID 1, Aggregated Port ID 1 and port 2
— Slot ID 1, Aggregated Port ID 1 and port 3
— Slot ID 1, Aggregated Port ID 1 and port 4
— Slot ID 1, Aggregated Port ID 2 and port 1
— Slot ID 1, Aggregated Port ID 2 and port 2
— Slot ID 1, Aggregated Port ID 2 and port 3
— Slot ID 1, Aggregated Port ID 2 and port 4
11. Click >> to add the ports to the port channel.
12. Click Finish to create the port channel.
13. Click OK.
14. In the navigation pane, under LAN > LAN Cloud, expand the fabric B tree.
15. Right-click Port Channels.
16. Select Create Port Channel.
17. Enter 12 as the unique ID of the port channel.
18. Enter vPC-12-NEXUS as the name of the port channel.
19. Click Next.
20. Select the following ports to be added to the port channel:
— Slot ID 1 and port 1
— Slot ID 1 and port 2
21. If breakout cables are used for uplink connectivity, select the following ports to be added to the port channel instead:
— Slot ID 1, Aggregated Port ID 1 and port 1
— Slot ID 1, Aggregated Port ID 1 and port 2
— Slot ID 1, Aggregated Port ID 1 and port 3
— Slot ID 1, Aggregated Port ID 1 and port 4
— Slot ID 1, Aggregated Port ID 2 and port 1
— Slot ID 1, Aggregated Port ID 2 and port 2
— Slot ID 1, Aggregated Port ID 2 and port 3
— Slot ID 1, Aggregated Port ID 2 and port 4
22. Click >> to add the ports to the port channel.
23. Click Finish to create the port channel.
24. Click OK.
Repeat steps 1-24 to create additional port channels for each network zone based on your data center requirements.
If you are using a single Cisco Nexus 3548 for fabric interconnect failover, create port channel 5 on fabric interconnect A and port channel 6 on fabric interconnect B for the Internal network zone.
To configure the necessary port channels out of the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. In this procedure, two port channels are created: one from fabric A to the Cisco Nexus 3548 and one from fabric B to the Cisco Nexus 3548.
3. Under LAN > LAN Cloud, expand the Fabric A tree.
4. Right-click Port Channels.
5. Select Create Port Channel.
6. Enter 5 as the unique ID of the port channel.
7. Enter N3k-Uplink as the name of the port channel.
8. Click Next.
9. Select the following ports to be added to the port channel:
— Slot ID 1, Aggregated Port ID 3 and port 1
— Slot ID 1, Aggregated Port ID 3 and port 2
— Slot ID 1, Aggregated Port ID 3 and port 3
— Slot ID 1, Aggregated Port ID 3 and port 4
10. Click >> to add the ports to the port channel.
11. Click Finish to create the port channel.
12. Click OK.
13. In the navigation pane, under LAN > LAN Cloud, expand the fabric B tree.
14. Right-click Port Channels.
15. Select Create Port Channel.
16. Enter 6 as the unique ID of the port channel.
17. Enter N3k-Uplink as the name of the port channel.
18. Click Next.
19. Select the following ports to be added to the port channel:
— Slot ID 1, Aggregated Port ID 3 and port 1
— Slot ID 1, Aggregated Port ID 3 and port 2
— Slot ID 1, Aggregated Port ID 3 and port 3
— Slot ID 1, Aggregated Port ID 3 and port 4
20. Click >> to add the ports to the port channel.
21. Click Finish to create the port channel.
22. Click OK.
For secure multi-tenancy within the Cisco UCS domain, a logical entity known as an organization is created.
To create an organization unit, complete the following steps:
1. In Cisco UCS Manager, on the Tool bar click New.
2. From the drop-down menu select Create Organization.
3. Enter the Name as HANA.
4. (Optional) Enter the Description as Org for HANA.
5. Click OK to create the Organization.
To configure the necessary MAC address pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root > Sub-Organization > HANA.
3. In this procedure, two MAC address pools are created, one for each switching fabric.
4. Right-click MAC Pools under the HANA organization.
5. Select Create MAC Pool to create the MAC address pool.
6. Enter FI-A as the name of the MAC pool.
7. (Optional) Enter a description for the MAC pool.
8. Choose Assignment Order Sequential.
9. Click Next.
10. Click Add.
11. Specify a starting MAC address.
12. The recommendation is to place 0A in the next-to-last octet of the starting MAC address to identify all of the MAC addresses as Fabric Interconnect A addresses.
13. Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.
Figure 16 Cisco UCS – Create MAC Pool for Fabric A
14. Click OK.
15. Click Finish.
16. In the confirmation message, click OK.
17. Right-click MAC Pools under the HANA organization.
18. Select Create MAC Pool to create the MAC address pool.
19. Enter FI-B as the name of the MAC pool.
20. (Optional) Enter a description for the MAC pool.
21. Click Next.
22. Click Add.
23. Specify a starting MAC address.
The recommendation is to place 0B in the next to last octet of the starting MAC address to identify all the MAC addresses in this pool as fabric B addresses.
24. Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.
Figure 17 Cisco UCS – Create MAC Pool for Fabric B
25. Click OK.
26. Click Finish.
27. In the confirmation message, click OK.
You can also define a separate MAC address pool for each network zone. Follow steps 1-16 above to create a MAC address pool for each network zone.
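As a purely illustrative example of the FI-A and FI-B pools described above (the 00:25:B5 prefix is a common Cisco UCS convention, and the block start and size of 64 are assumptions rather than requirements of this design):
FI-A: starting MAC address 00:25:B5:1A:0A:00, size 64
FI-B: starting MAC address 00:25:B5:1A:0B:00, size 64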
To configure the necessary universally unique identifier (UUID) suffix pool for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Pools > root.
3. Right-click UUID Suffix Pools.
4. Select Create UUID Suffix Pool.
5. Enter UUID_Pool as the name of the UUID suffix pool.
6. (Optional) Enter a description for the UUID suffix pool.
7. Keep the Prefix as the Derived option.
8. Select Sequential for Assignment Order
9. Click Next.
10. Click Add to add a block of UUIDs.
11. Keep the From field at the default setting.
12. Specify a size for the UUID block that is sufficient to support the available blade or server resources.
Figure 18 Cisco UCS – Create UUID Block
13. Click OK.
14. Click Finish.
15. Click OK.
To run Cisco UCS with two independent power distribution units, the redundancy must be configured as Grid. Complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane and select Equipment in the list on the left.
2. In the right pane, click the Policies tab.
3. Under Global Policies, set the Redundancy field in Power Policy to Grid.
4. Click Save Changes.
5. Click OK.
Figure 19 Power Policy
The Power Capping feature in Cisco UCS is designed to save power in legacy data center use cases. It does not benefit the high-performance behavior of SAP HANA and the MapR Converged Data Platform. By choosing the "No Cap" option for the power control policy, the server nodes are not restricted in their power draw. This power control policy is recommended to ensure a sufficient power supply for high-performance, critical applications such as SAP HANA and the MapR Converged Data Platform.
To create a power control policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Power Control Policies.
4. Select Create Power Control Policy.
5. Enter NoCap as the Power Control Policy name.
6. Choose option “Any” as Fan Speed Policy.
7. Change the Power Capping setting to No Cap.
8. Click OK to create the power control policy.
9. Click OK.
Figure 20 Power Control Policy for SAP HANA Nodes
Firmware management policies allow the administrator to select the corresponding packages for a given server configuration. These policies often include packages for adapter, BIOS, board controller, FC adapters, host bus adapter (HBA) option ROM, and storage controller properties.
To create a firmware management policy for a given server configuration in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Host Firmware Packages.
4. Select Create Host Firmware Package.
5. Enter HANA-FW as the name of the host firmware package.
6. Leave Simple selected.
7. Select the version 3.1(1e) for both the Blade and Rack Packages.
8. Click OK to create the host firmware package.
9. Click OK.
Figure 21 Host Firmware Package
A local disk configuration policy configures SAS local drives that have been installed on a server through the onboard I/O controller of the local drive.
To create a local disk configuration policy for MapR server nodes, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Local Disk Config Policies.
4. Select Create Local Disk Configuration Policy.
5. Enter MapR as the local disk configuration policy name.
6. Change the mode to Any Configuration.
Figure 22 Local Disk Configuration Policy for MapR Server
7. Click OK to create the local disk configuration policy.
To create a server BIOS policy for the Cisco UCS environment for MapR server nodes, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click BIOS Policies.
4. Select Create BIOS Policy.
5. Enter MapR-BIOS as the BIOS policy name.
6. Change the Quiet Boot setting to Disabled.
7. Click Next.
8. Enable Turbo Boost, Enhanced Intel SpeedStep, and Hyper Threading.
9. The recommendation is to disable all Processor C-States. This forces the CPU to stay at its maximum frequency and allows the MapR server nodes to run at their best performance.
10. Set CPU Performance to High Throughput, and set Power Technology, Energy Performance, and DRAM Clock Throttling for maximum performance.
Figure 23 Processor Settings in BIOS Policy
11. Click Next.
12. Keep default values at the Intel Direct IO.
13. Click Next.
14. In the RAS Memory settings, select Maximum Performance and enable NUMA.
Figure 24 BIOS Policy – Advanced RAS Memory
15. Click Next.
16. Keep default values for rest of the options.
17. Click Next.
18. Click Finish to Create BIOS Policy.
The Serial over LAN policy is required to get console access to all the MapR server nodes through SSH from the management network. This is used in case of a server hang or a Linux kernel crash, where a dump is required. Configure the speed in the Server Management tab of the BIOS policy.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click the Serial over LAN Policies.
4. Select Create Serial over LAN Policy.
5. Enter SoL-Console as the Policy name.
6. Set the Serial over LAN State to Enable.
7. Change the Speed to 115200.
8. Click OK.
Figure 25 Serial Over LAN Policy
It is recommended to update the default Maintenance Policy with the Reboot Policy “User Ack” for the MapR server nodes. This policy will wait for the administrator to acknowledge the server reboot for the configuration changes to take effect.
To update the default Maintenance Policy, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Select Maintenance Policies > default.
4. Change the Reboot Policy to User Ack.
5. Click Save Changes.
6. Click OK to accept the change.
Figure 26 Maintenance Policy
The core network requirements for SAP HANA are covered by the Cisco UCS defaults. Cisco UCS is based on 40 GbE and provides redundancy through the dual-fabric concept. The service profile is configured to distribute the traffic across Fabric Interconnect A and B. During normal operation, the MapR inter-node traffic flows on fabric interconnect A and the storage traffic flows on fabric interconnect B. The MapR inter-node traffic flows from a rack-mount server to fabric interconnect A and back to the other rack-mount servers. The storage traffic flows from a MapR server node to the uplinks.
To configure jumbo frames and enable quality of service in the Cisco UCS fabric, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. In the MTU column, enter 9216 in the box.
5. Click Save Changes in the bottom of the window.
6. Click OK.
Figure 27 Cisco UCS – Setting Jumbo Frames
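Once the operating system is installed on the MapR server nodes (described later in this document), end-to-end jumbo frame support can be spot-checked from Linux. A minimal sketch, assuming the host interface carrying storage traffic is configured with an MTU of 9000 and <peer_ip> is another node on the same VLAN:
# 8972 bytes of payload = 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header; -M do forbids fragmentation
ping -M do -s 8972 -c 3 <peer_ip>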
CDP needs to be enabled to learn the MAC address of the End Point. To update default Network Control Policy, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > Policies > root > Network Control Policies > default.
3. In the right pane, click the General tab.
4. For CDP: select Enabled radio button.
5. Click Save Changes in the bottom of the window.
6. Click OK.
Figure 28 Network Control Policy to Enable CDP
In order to keep the vNIC links up in case of a Cisco Nexus 3548 failure, create the Network Control Policy for Internal Network. To create Network Control Policy for Internal Network, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > Policies > root > Network Control Policies > right-click and Select Create Network Control Policy.
3. Enter Internal as the Name of the Policy.
4. For CDP: select the Enabled radio button.
5. For Action on Uplink Fail: select the Warning radio button.
6. Click OK.
Figure 29 Network Control Policy
Within Cisco UCS, all the network types for an SAP HANA system are reflected by defined VLANs. The network design from SAP has seven SAP HANA related networks and two infrastructure related networks. The VLAN IDs can be changed if required to match the VLAN IDs in the data center network; for example, ID 221 for backup should match the VLAN ID configured at the data center network switches. Even though nine VLANs are defined, it is not necessary to create a VLAN for a network that the solution will not use. For example, if the Replication network is not used in the solution, the HANA-Replication VLAN ID does not need to be created; if the HANA nodes run outside the Cisco UCS domain, the HANA-Internal VLAN ID is not needed, and so on.
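As a purely illustrative example of such a VLAN ID assignment (all IDs below are hypothetical placeholders except the backup example of 221 mentioned above, and must be agreed with the data center network team):
HANA-Internal 220, HANA-Backup 221, HANA-Client 222, HANA-AppServer 223, HANA-DataSource 224, HANA-Replication 225, HANA-Storage 226, Management 76, MapR-01 110, MapR-02 111, MapR-03 112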
To configure the necessary VLANs for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
In this procedure, the VLANs for the SAP HANA networks and the MapR internal networks are created.
2. Select LAN > LAN Cloud.
3. Right-click VLANs.
4. Select Create VLANs.
5. Enter HANA-Internal as the name of the VLAN to be used for HANA Node to Node network.
6. Keep the Common/Global option selected for the scope of the VLAN.
7. Enter <<var_internal_vlan_id>> as the ID of the HANA Node to Node network.
8. Keep the Sharing Type as None.
9. Click OK, and then click OK again.
Figure 30 Create VLAN for Internode
10. Repeat the Steps 1-9 for each VLAN.
11. Create VLAN for HANA-AppServer.
Figure 31 Create VLAN for AppServer
12. Create VLAN for HANA-Backup.
Figure 32 Create VLAN for Backup
13. Create VLAN for HANA-Client.
Figure 33 Create VLAN for Client Network
14. Create VLAN for HANA-DataSource.
Figure 34 Create VLAN for Data Source
15. Create VLAN for HANA-Replication.
Figure 35 Create VLAN for Replication
16. Create VLAN for HANA-Storage.
Figure 36 Create VLAN for Storage Access
17. Create VLAN for Management.
Figure 37 Create VLAN for Management
18. Create VLAN for MapR-01.
Figure 38 Create VLAN for MapR-01 VLAN
19. Create VLAN for MapR-02.
Figure 39 Create VLAN for MapR-02 Internal VLAN
20. Create VLAN for MapR-03.
Figure 40 Create VLAN for MapR-03 Internal VLAN
Figure 41 shows the list of created VLANs.
Figure 41 VLAN Definition in Cisco UCS
For easier management and bandwidth allocation to a dedicated uplink on the Fabric Interconnect, VLAN Groups are created within the Cisco UCS. SAP HANA uses the following VLAN groups:
· Admin Zone
· Client Zone
· Internal Zone
· Backup Network
· Replication Network
To configure the necessary VLAN Groups for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
In this procedure, five VLAN groups are created. Create VLAN groups based on the solution requirements; it is not required to create all five VLAN groups.
2. Select LAN > LAN Cloud.
3. Right-click VLAN Groups.
4. Select Create VLAN Groups.
5. Enter Admin-Zone as the name of the VLAN Group used for Infrastructure network.
6. Select Management.
Figure 42 Create VLAN Group for Admin Zone
7. Click Next
8. Click Next on Add Uplink Ports, since you will use port-channel.
9. Choose port-channels created for Admin Network.
Figure 43 Add Port-Channel for VLAN Group Admin Zone
10. Click Finish.
11. Follow the steps 1-10 for each VLAN Group.
12. Create VLAN Groups for Internal Zone.
Figure 44 Create VLAN Group for Internal Zone
13. Click Next.
14. Click Next on Add Uplink Ports, since you will use port-channel.
15. Choose port-channels created for Internal Zone.
Figure 45 Add Port-Channel for VLAN Group Internal Zone
16. Click Finish.
17. Create VLAN Groups for Client Zone.
Figure 46 Create VLAN Group for Client Zone
18. Click Next.
19. Click Next on Add Uplink Ports, since you will use port-channel.
20. Choose port-channels created for Client Zone.
Figure 47 Add Port-Channel for VLAN Group Client Zone
21. Click Finish.
22. Create VLAN Groups for Backup Network.
Figure 48 Create VLAN Group for Backup Network
23. Click Next.
24. Click Next on Add Uplink Ports, since we will use Port-Channel.
25. Choose port-channels created for Backup Network.
26. Click Finish.
27. Create VLAN Groups for Replication Network.
Figure 49 Create VLAN Group for Replication Network
28. Click Next.
29. Click Next on Add Uplink Ports, since we will use Port-Channel.
30. Choose port-channels created for Replication Network.
31. Click Finish.
VLAN groups created in the Cisco UCS are shown in Figure 50.
Figure 50 VLAN Groups in Cisco UCS
For each VLAN Group a dedicated or shared Ethernet Uplink Port or Port Channel can be selected.
Each VLAN is mapped to a vNIC template to specify the characteristic of a specific network. The vNIC template configuration settings include MTU size, Failover capabilities and MAC-Address pools.
To create vNIC templates for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click vNIC Templates.
4. Select Create vNIC Template.
5. Enter HANA-Storage as the vNIC template name.
6. Keep Fabric A selected.
7. Check the Enable Failover checkbox.
8. Under Target, make sure that the VM checkbox is unchecked.
9. Select Updating Template as the Template Type.
10. Under VLANs, check the checkboxes for HANA-Storage.
11. Set HANA-Storage as the native VLAN.
12. For MTU, enter 9000.
13. In the MAC Pool list, select FI-A.
14. For Network Control Policy Select Internal from drop-down menu.
Figure 51 Create vNIC Template for HANA-Internal
15. Click OK to create the vNIC template.
16. Click OK.
For most SAP HANA use cases, the network traffic is well distributed across the two fabrics (Fabric A and Fabric B) using the default setup. In special cases, it can be required to rebalance this distribution for better overall performance. This can be done in the vNIC template with the Fabric ID setting. Note that the MTU settings must match the configuration in the customer data center. An MTU setting of 9000 is recommended for best performance.
17. Follow steps 1-15 above to create a vNIC template for each network interface.
18. Create a vNIC template for Storage Network via Fabric A.
Figure 52 Create vNIC Template for Storage Access via Fabric A
19. Create a vNIC template for Storage Network through Fabric B.
Figure 53 Create vNIC Template for Storage Access through Fabric B
20. Create a vNIC template for AppServer Network.
Figure 54 Create vNIC Template for AppServer Network
21. Create a vNIC template for Backup Network.
Figure 55 Create vNIC Template for Backup Network
22. Create a vNIC template for Client Network.
Figure 56 Create vNIC Template for Client Network
23. Create a vNIC template for DataSource Network.
Figure 57 Create vNIC Template for DataSource Network
24. Create a vNIC template for Replication Network.
Figure 58 Create vNIC Template for Replication Network
25. Create a vNIC template for Management Network.
Figure 59 Create vNIC Template for Management Network
26. Create a vNIC template for MapR-01 Internal Network.
Figure 60 Create a vNIC Template for MapR-01 Internal Network
27. Create a vNIC template for MapR-02 Internal Network.
Figure 61 Create a vNIC Template for MapR-02 Internal Network
28. Create a vNIC template for MapR-03 Internal Network.
Figure 62 Create a vNIC Template for MapR-03 Internal Network
Figure 63 provides the list of created vNIC Templates for SAP HANA.
Figure 63 vNIC Templates Overview
To create Local boot policies, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root > Sub-Organization > HANA.
3. Right-click Boot Policies.
4. Select Create Boot Policy.
5. Enter Local-Boot as the name of the boot policy.
6. (Optional) Enter a description for the boot policy.
7. Expand the Local Devices drop-down menu and select Add CD/DVD.
8. Expand the Local Devices drop-down menu and select Add Local Disk.
9. Click OK to save the boot policy. Click OK to close the Boot Policy Window.
Figure 64 Create Local Boot Policy
Two different service profile templates have to be created. One uses the HANA-Storage network on fabric A; the other uses the HANA-Storage network on fabric B. The goal is to evenly distribute the HANA-Storage network interfaces across fabric A and fabric B. For example, if you would like to set up four MapR server nodes, deploy two of them from service profile template MapR-A and two of them from service profile template MapR-B.
To create the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA.
3. Right-click HANA.
4. Select Create Service Profile Template to open the Create Service Profile Template wizard.
5. Identify the service profile template:
a. Enter MapR-A as the name of the service profile template.
To create a service profile template using fabric B for HANA-Storage network, enter MapR-B as the name of the service profile template.
b. Select the Updating Template option.
c. Under UUID, select UUID_Pool (the UUID suffix pool created earlier) as the UUID pool.
d. Click Next.
6. Configure the Storage Provisioning:
a. Click the Local Disk Configuration Policy tab.
b. Select MapR for the Local Storage field from the drop-down menu.
c. Click Next.
Figure 65 Service Profile Template for MapR Storage Provisioning
7. Configure the Networking options:
a. Keep the default settings for Dynamic vNIC Connection Policy.
b. Select the Expert option for "How would you like to configure LAN connectivity?"
c. Click the Add button to add a vNIC to the template.
d. In the Create vNIC dialog box, enter MapR-01 as the name of the vNIC.
e. Check the Use vNIC Template checkbox.
f. In the vNIC Template list, select MapR-01.
g. In the Adapter Policy list, select Linux.
h. Click OK to add this vNIC to the template.
Figure 66 Service Profile Template for MapR vNIC MapR-01
i. Repeat steps c-h above for each vNIC.
8. Add vNIC for MapR-02.
9. Add vNIC for MapR-03.
10. Add vNIC for HANA-Storage-A.
When creating the service profile template using fabric B for the HANA-Storage network, add a vNIC for HANA-Storage-B instead.
11. Add vNIC for Mgmt.
12. Review the table in the Networking page to make sure that all vNICs were created.
Figure 67 Service Profile Template for MapR Networking
13. Click Next.
14. Configure the SAN connectivity: select the No vHBAs option for the "How would you like to configure SAN connectivity?" field.
15. Click Next.
16. Do not configure any zoning options and click Next.
17. For the vNIC/vHBA placement, keep the default values.
Figure 68 Service Profile Template for MapR vNIC Placement
18. Click Next.
19. Keep the default values on the vMedia Policy, click Next.
20. Set the server boot order: Select Local-Boot for Boot Policy.
Figure 69 Service Profile Template for MapR vNIC Service Boot Order
21. Click Next.
22. For Maintenance policy: Select the default Maintenance Policy.
23. Click Next.
24. Specify the server assignment: Select Down as the power state to be applied when the profile is associated with the server.
25. Expand Firmware Management at the bottom of the page and select HANA-FW from the Host Firmware list.
26. Click Next.
27. For Operational policies, In the BIOS Policy list, select MapR-BIOS.
28. Expand Management IP Address.
29. In the Outband IPv4 tab, choose ext-mgmt as the Management IP Address Policy.
30. Expand Power Control Policy Configuration and select NoCap in the Power Control Policy list.
Figure 70 Service Profile Template for MapR Operational Policies
31. Click Finish to create the service profile template.
32. Click OK in the confirmation message.
To create the service profile template that uses fabric B for the HANA-Storage network, repeat the above procedure and follow the two inline remarks (use a different service profile template name and add the HANA-Storage-B vNIC instead).
Now you have two service profile templates:
· MapR-A
· MapR-B
To create one service profile on fabric A, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA > Service Template MapR-A.
3. Right-click MapR-A and select Create Service Profiles from Template.
4. Enter an appropriate name for the service profile prefix, for example, MapR-0.
5. Enter 1 as Name Suffix Starting Number.
6. Enter 1 as Number of Instances.
7. Click OK to create the service profile.
To create one service profile on fabric B, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Sub-Organization > HANA > Service Template MapR-B.
3. Right-click MapR-B and select Create Service Profiles from Template.
4. Enter an appropriate name for the service profile prefix, for example, MapR-0.
5. Enter 2 as Name Suffix Starting Number.
6. Enter 1 as Number of Instances.
7. Click OK to create the service profile.
To associate each service profile with a specific server, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile > root > Sub-Organization > HANA > MapR-01.
3. Right-click MapR-01 and select Change Service Profile Association.
4. For Server Assignment, choose Select existing Server from the drop-down.
5. Click All Servers.
6. Select the C240 M4 Rack Mount Server with Rack ID 1.
7. Repeat the above steps 1-6 for each MapR server node.
This section describes the procedure for installing the MapR Converged Data Platform on Cisco UCS C240 M4 servers. Each server has 2 x internal SSD boot drives on which the operating system is installed with software RAID 1. For the MapR storage pools, 24 x 1.8 TB 10k rpm 4Kn disks are configured as 4 x RAID 5 groups with 6 disks each.
To configure RAID on Cisco UCS C240 M4 Servers used for MapR Data Platform, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile > root > Sub-Organization > HANA > MapR-01.
3. Click KVM Console.
4. When the KVM Console is launched, click Boot Server.
Figure 71 Cisco UCS KVM Console
5. When prompted, press <Ctrl><R> to run the MegaRAID Configuration Utility.
Figure 72 Enter MegaRAID Configuration Utility
6. In the VD Mgmt view, press F2 for Operations.
Figure 73 MegaRAID Configuration Utility VD Mgmt
7. Select Create Virtual Drive.
Figure 74 MegaRAID Configuration Utility Create VD
8. Choose the following options to create a RAID 5 virtual drive with 6 disks:
a. For RAID Level, choose RAID-5.
b. Choose the first 6 disks.
c. Give a name for virtual drive 1.
Figure 75 MegaRAID Configuration Utility Create RAID 5
9. Choose the Advanced option:
a. For Strip Size, choose 1MB.
b. For Read Policy, choose Ahead.
c. For Write Policy, choose Write Back with BBU.
d. For I/O Policy, choose Direct.
e. Keep the default Disk Cache Policy and Emulation.
f. Choose Initialize.
g. Click OK.
Figure 76 MegaRAID Configuration Utility Create RAID 5 Advanced Options
10. To create the 2nd virtual drive, press F2 for Operations and select Create Virtual Drive.
11. Create a RAID 5 virtual drive with 6 disks:
a. For RAID Level, choose RAID-5.
b. Choose the next 6 disks.
c. Give a name for virtual drive 2.
d. Choose the Advanced option.
e. For Strip Size, choose 1MB.
f. For Read Policy, choose Ahead.
g. For Write Policy, choose Write Back with BBU.
h. For I/O Policy, choose Direct.
i. Keep the default Disk Cache Policy and Emulation.
j. Choose Initialize.
k. Click OK.
12. To create the 3rd virtual drive, press F2 for Operations and select Create Virtual Drive.
13. Create a RAID 5 virtual drive with 6 disks:
a. For RAID Level, choose RAID-5.
b. Choose the next 6 disks.
c. Give a name for virtual drive 3.
d. Choose the Advanced option.
e. For Strip Size, choose 1MB.
f. For Read Policy, choose Ahead.
g. For Write Policy, choose Write Back with BBU.
h. For I/O Policy, choose Direct.
i. Keep the default Disk Cache Policy and Emulation.
j. Choose Initialize.
k. Click OK.
14. To create the 4th virtual drive, press F2 for Operations and select Create Virtual Drive.
15. Create a RAID 5 virtual drive with 6 disks:
a. For RAID Level, choose RAID-5.
b. Choose the last 6 disks.
c. Give a name for virtual drive 4.
d. Choose the Advanced option.
e. For Strip Size, choose 1MB.
f. For Read Policy, choose Ahead.
g. For Write Policy, choose Write Back with BBU.
h. For I/O Policy, choose Direct.
i. Keep the default Disk Cache Policy and Emulation.
j. Choose Initialize.
k. Click OK.
16. Press Esc to exit the MegaRAID Configuration Utility.
Figure 77 MegaRAID Configuration Utility with Virtual Drives
17. Press Ctrl+Alt+Del to reboot the server.
18. Repeat steps 1-17 for each MapR server.
To install the RedHat Enterprise Linux 6.7 operating system on the internal boot drives, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile > root > Sub-Organization > HANA > MapR-01.
3. Click KVM Console.
4. When the KVM Console is launched, click Boot Server.
5. Click Virtual Media > Activate Virtual Devices.
a. Select the option Accept this Session for Unencrypted Virtual Media Session and then click Apply.
b. Click Virtual Media and Choose Map CD/DVD.
c. Click Browse to navigate to the ISO media location.
d. Click Map Device.
6. On the initial screen, click Next to begin the installation process.
7. Choose Language and click OK.
8. Choose Keyboard layout and click OK.
9. Select Specialized Storage Devices and click Next.
10. Under Basic Devices, select the two "ATA INTEL" hard drives and click Next.
Figure 78 Boot Drive Selection
11. Enter the hostname, for example, mapr01.ciscolab.local.
12. Click Configure Network.
13. To configure the network interface on the OS, you must identify the mapping between the Ethernet device on the OS and the vNIC interface in Cisco UCS.
a. In Cisco UCS Manager, click the Servers tab in the navigation pane.
b. Expand Servers > Service Profile > root > Sub-Organization > HANA > MapR-01.
c. Click + to Expand.
d. Click vNICs.
e. The right pane lists the vNICs with their MAC addresses.
Figure 79 Cisco UCS vNIC MAC Address
f. Note the MAC address of the Mgmt vNIC, “00:25:B5:2A:00:22”.
g. Based on the vNIC order in the service profile, the vNIC for Management should map to eth4.
14. Under Wired, select System eth4 and click Edit.
15. By comparing the MAC addresses on the OS and in Cisco UCS, confirm that eth4 on the OS carries the VLAN for Management.
16. Select the Connect automatically check box.
17. Select the Available to all users check box.
18. Click IPv4 Settings:
a. Choose Manual for Method from the dropdown box.
b. In the Address field enter <<Management IP address>>.
c. In the Netmask field enter <<subnet mask for Management Interface>>.
d. In the Gateway field enter <<gateway IP address>>.
e. In the DNS servers field enter <<DNS server1, DNS server2>>.
f. In the Search domains field enter <<domain1.com,domain2.com>>.
g. Check Require IPv4 addressing for this connection to complete.
h. Click Apply.
Figure 80 RedHat OS Network Configuration
19. Click Close and click Next.
20. Select the appropriate Time Zone and click Next.
21. Enter the root password and confirm. Click Next.
22. Select Create Custom Layout. Click Next.
23. In the Please Select A Device window click Create.
24. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
25. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the first drive, example: sdd
c. For Size (MB), enter 1024.
d. For Additional Size Options select Fixed Size radio button.
e. Select Force to be a primary partition.
f. Click OK.
Figure 81 RedHat OS Add Partition for Boot
26. In the Please Select A Device window click Create.
27. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
28. In the Add Partition Window:
a. Select software RAID from dropdown menu for File System Type.
b. For Allowable Drives select the first drive, example: sdd
c. For Size (MB), enter 2048.
d. For Additional Size Options select Fixed Size radio button.
e. Select Force to be a primary partition.
f. Click OK.
Figure 82 RedHat OS Add Partition for Swap Volume
29. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
30. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the first drive, example: sdd
c. For Additional Size Options select Fill to maximum allowable size radio button.
d. Select Force to be a primary partition.
e. Click OK.
Figure 83 RedHat OS Add Partition for Root Volume
31. Create three more RAID partitions for the second drive.
32. In the Please Select A Device window click Create.
33. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
34. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the second drive, example: sde
c. For Size (MB), enter 1024.
d. For Additional Size Options select Fixed Size radio button.
e. Select Force to be a primary partition.
f. Click OK.
35. In the Please Select A Device window click Create.
36. In the Create Storage Window, Under Create Software RAID, select RAID Partition and click Create.
37. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the second drive, example: sde
c. For Size (MB), enter 2048.
d. For Additional Size Options select Fixed Size radio button.
e. Select Force to be a primary partition.
f. Click OK.
38. In the Create Storage Window, under Create Software RAID, select RAID Partition and click Create.
39. In the Add Partition Window:
a. Select software RAID from drop-down menu for File System Type.
b. For Allowable Drives select the second drive, example: sde
c. For Additional Size Options select Fill to maximum allowable size radio button.
d. Select Force to be a primary partition.
e. Click OK.
40. There should be six RAID partitions, three on each drive. Continue to create the RAID devices.
41. In the Please Select A Device window click Create.
42. In the Create Storage Window, under Create Software RAID, select RAID Device and click Create.
43. In the Make RAID Device:
a. Enter /boot for Mount Point.
b. Select ext3 from drop-down menu for File System Type.
c. For RAID Device select md0 from dropdown menu.
d. For RAID Level select RAID1 from dropdown menu.
e. For RAID Members select the two partitions which are 1024 MB in size.
f. Click OK.
Figure 84 RedHat OS Boot RAID Device
44. In the Create Storage Window, under Create Software RAID, select RAID Device and click Create.
45. In the Make RAID Device:
a. Select swap from drop-down menu for File System Type.
b. For RAID Device select md1 from drop-down menu.
c. For RAID Level select RAID1 from drop-down menu.
d. For RAID Members select the two partitions which are 2048 MB in size.
e. Click OK.
Figure 85 RedHat OS Swap RAID Device
46. In the Create Storage Window, under Create Software RAID, select RAID Device and click Create.
47. In the Make RAID Device:
a. Enter / for Mount Point.
b. Select ext3 from drop-down menu for File System Type.
c. For RAID Device select md2 from drop-down menu.
d. For RAID Level select RAID1 from drop-down menu.
e. For RAID Members, select the two remaining partitions with the biggest size.
f. Click OK.
Figure 86 RedHat OS Root RAID Device
48. Click Next.
49. Confirm Format Warnings and click Format.
50. Confirm the Writing storage configuration to disk Warnings and click Write changes to disk.
51. Select Install boot loader on /dev/sdd.
52. Click Next to proceed with the next step of the installation.
53. Select the installation mode as Minimal.
54. Keep Default Values and click Next.
Figure 87 RedHat OS Software Selection
55. The installer starts the installation process.
56. When the installation is completed the server requires a reboot. Click Reboot.
Follow the steps above to install RedHat Enterprise Linux on all the MapR servers.
To customize the servers in preparation for the MapR installation, complete the following steps:
1. The operating system must be configured in such a way that the command ‘hostname’ returns the short name of the server and ‘hostname -d’ returns the domain name (see the verification example after step 4).
2. ssh to the Server using Management IP address assigned to the server during installation.
3. Log in as root with the root password.
4. Edit the hostname.
vi /etc/sysconfig/network
HOSTNAME=<<hostname>>.<<Domain Name>>
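The following is a minimal verification sketch; the host and domain names shown are examples from this setup, and the new name may only be fully reflected after a re-login or reboot.
hostname        # expected short name, for example: mapr21
hostname -d     # expected domain name, for example: ciscolab.local
hostname -f     # expected fully qualified name, for example: mapr21.ciscolab.local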
Each MapR server is configured with 5 vNIC devices. Table 10 lists the IP address information required to configure the IP addresses in the operating system. The IP addresses and subnet masks below are examples only; configure the IP addresses for your environment.
Table 10 IP Addresses for the MapR Servers
vNIC Name | VLAN ID | IP Address Range | Subnet Mask
MapR-01 | <<var_mapr-01_vlan_id>> | 10.21.21.101 to 10.21.21.104 | 255.255.255.0
MapR-02 | <<var_mapr-02_vlan_id>> | 10.22.22.101 to 10.22.22.104 | 255.255.255.0
MapR-03 | <<var_mapr-03_vlan_id>> | 10.23.23.101 to 10.23.23.104 | 255.255.255.0
HANA-Storage | <<var_storage_vlan_id>> | 192.168.110.101 to 192.168.110.104 | 255.255.255.0
Management | <<var_mgmt_vlan_id>> | 192.168.196.101 to 192.168.196.104 | 255.255.0.0
1. To configure the network interfaces on the OS, you must identify the mapping between the Ethernet devices on the OS and the vNIC interfaces in Cisco UCS.
2. From the OS, execute the command below to list the Ethernet devices with their MAC addresses.
ifconfig -a |grep HWaddr
eth0 Link encap:Ethernet HWaddr 00:25:B5:2A:00:20
eth1 Link encap:Ethernet HWaddr 00:25:B5:2B:00:20
eth2 Link encap:Ethernet HWaddr 00:25:B5:2A:00:21
eth3 Link encap:Ethernet HWaddr 00:25:B5:2B:00:21
eth4 Link encap:Ethernet HWaddr 00:25:B5:2A:00:22
3. In Cisco UCS Manager, click the Servers tab in the navigation pane.
4. Expand Servers > Service Profile > root > Sub-Organization > HANA > MapR-01.
5. Click + to Expand. Click vNICs.
6. The right pane lists the vNICs with their MAC addresses.
Figure 88 Cisco UCS vNIC MAC Address
7. Note that the MAC address of the MapR-01 vNIC is “00:25:B5:2A:00:20”.
8. By comparing the MAC addresses on the OS and in Cisco UCS, you can see that eth0 on the OS carries the VLAN for MapR-01.
9. Go to the network configuration directory and create a configuration for eth0.
/etc/sysconfig/network-scripts/
vi ifcfg-eth0
##
# Mapr-01 Network
##
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPV6INIT=no
USERCTL=no
TYPE=Ethernet
NM_CONTROLLED=no
NETWORK=<<IP subnet for MapR-01 example:192.168.201.0>>
NETMASK=<<subnet mask for MapR-01 255.255.255.0>>
IPADDR=<<IP address for MapR-01 192.168.201.102>>
10. Repeat the steps above for each vNIC interface; an example for the HANA-Storage interface is shown below.
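As a hedged illustration of this repetition, an ifcfg file for the HANA-Storage interface could look like the following. The device name eth3 follows the example MAC-address mapping described later in this section, and the IP values follow Table 10; both are assumptions that must be adapted to your environment.
vi /etc/sysconfig/network-scripts/ifcfg-eth3
##
# HANA-Storage Network (example values, adjust to your environment)
##
DEVICE=eth3
ONBOOT=yes
BOOTPROTO=static
IPV6INIT=no
USERCTL=no
TYPE=Ethernet
NM_CONTROLLED=no
NETWORK=192.168.110.0
NETMASK=255.255.255.0
IPADDR=192.168.110.101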
11. Add default gateway.
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=<<HOSTNAME>>
GATEWAY=<<IP Address of default gateway>>
Domain Name Service configuration must be done based on the local requirements.
Configuration Example
Add the DNS servers if internet access is required:
vi /etc/resolv.conf
nameserver <<IP of DNS Server1>>
nameserver <<IP of DNS Server2>>
search <<Domain_name>>
All nodes should be able to resolve the internal network IP addresses. Below is an example of a 4-node host file with the entire network defined in /etc/hosts.
Example:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
#
## MapR 01
#
10.21.21.101 mapr21.ciscolab.local mapr21
10.21.21.102 mapr22.ciscolab.local mapr22
10.21.21.103 mapr23.ciscolab.local mapr23
10.21.21.104 mapr24.ciscolab.local mapr24
#
## MapR 02
#
10.22.22.101 mapr21b.ciscolab.local mapr21b
10.22.22.102 mapr22b.ciscolab.local mapr22b
10.22.22.103 mapr23b.ciscolab.local mapr23b
10.22.22.104 mapr24b.ciscolab.local mapr24b
#
## MapR 03
#
10.23.23.101 mapr21c.ciscolab.local mapr21c
10.23.23.102 mapr22c.ciscolab.local mapr22c
10.23.23.103 mapr23c.ciscolab.local mapr23c
10.23.23.104 mapr24c.ciscolab.local mapr24c
#
## HANA Storage Network
#
192.168.110.201 cishana21s.ciscolab.local cishana21s
192.168.110.202 cishana22s.ciscolab.local cishana22s
192.168.110.203 cishana23s.ciscolab.local cishana23s
192.168.110.204 cishana24s.ciscolab.local cishana24s
192.168.110.205 cishana25s.ciscolab.local cishana25s
192.168.110.206 cishana26s.ciscolab.local cishana26s
192.168.110.207 cishana27s.ciscolab.local cishana27s
192.168.110.208 cishana28s.ciscolab.local cishana28s
#
## MapR Storage
#
192.168.110.101 mapr21s.ciscolab.local mapr21s
192.168.110.102 mapr22s.ciscolab.local mapr22s
192.168.110.103 mapr23s.ciscolab.local mapr23s
192.168.110.104 mapr24s.ciscolab.local mapr24s
#
## MapR Storage Virtual IP
#
192.168.110.81 maprvip01
192.168.110.82 maprvip02
192.168.110.83 maprvip03
192.168.110.84 maprvip04
192.168.110.85 maprvip05
192.168.110.86 maprvip06
192.168.110.87 maprvip07
192.168.110.88 maprvip08
#
## Management
192.168.196.101 mapr21m.ciscolab.local mapr21m
192.168.196.102 mapr22m.ciscolab.local mapr22m
192.168.196.103 mapr23m.ciscolab.local mapr23m
192.168.196.104 mapr24m.ciscolab.local mapr24m
To update and customize the RedHat system for the MapR servers, complete the following steps:
1. Subscribe all nodes to get access to the standard RHEL channels.
subscription-manager list --available --all
subscription-manager subscribe --pool=Pool_Id
2. Update only the OS kernel and firmware packages to the latest release that appeared in RHEL 6.7. Set the release version to 6.7.
subscription-manager release --set=6.7
3. Update the OS.
yum -y update
4. Install additional groups and packages.
yum -y groupinstall base
yum -y install nfs-utils bash tuned rpcbind dmidecode glibc glibc-common glibc-headers glibc-devel hdparm initscripts iputils irqbalance libgcc libstdc++ redhat-lsb-core rpm-libs sdparm shadow-utils syslinux unzip zip nc mtools syslinux-nonlinux nss python-pycurl openssh-clients openssh-server openssl sshpass sudo wget which java-1.8.0-openjdk java-1.8.0-openjdk-headless java-1.8.0-openjdk-devel ipmitool
5. Disable SELinux.
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's/^SELINUX=permissive/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
sed -i 's/^SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config
6. Modify /boot/grub/grub.conf and append this parameter to the kernel command line (a sed sketch follows).
transparent_hugepage=never
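The following is a minimal sketch for appending the parameter to every kernel line; it assumes the GRUB legacy layout used by RHEL 6 and the result should be verified manually.
# back up the file before changing it
cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
# append the parameter to each "kernel ..." boot entry
sed -i '/^[[:space:]]*kernel /s/$/ transparent_hugepage=never/' /boot/grub/grub.conf
# verify the change
grep "kernel " /boot/grub/grub.conf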
7. Add the mapr user and group. Make sure that <uid> and <gid> are greater than 1000 and the same on all MapR cluster nodes. The MapR defaults of 2000 for both uid and gid ensure consistency with other default MapR clusters for inter-cluster mirroring; a concrete example follows the commands below.
groupadd mapr -g <gid>
useradd mapr -u <uid> -g <gid>
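A concrete example using the MapR default uid/gid of 2000 (an assumption; use your own values if 2000 is already taken) would be:
groupadd mapr -g 2000
useradd mapr -u 2000 -g 2000
# verify that the uid and gid match on every node
id mapr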
8. Activate rpcbind.
service rpcbind start
chkconfig rpcbind on
9. Append following parameters in /etc/sysctl.conf.
sunrpc.tcp_slot_table_entries = 128
sunrpc.tcp_max_slot_table_entries = 128
net.ipv4.tcp_slow_start_after_idle = 0
10. Add the following lines to /etc/modprobe.d/sunrpc.conf; create the file if it does not exist.
options sunrpc tcp_slot_table_entries=128
options sunrpc tcp_max_slot_table_entries=128
11. Enable tuned profile.
tuned-adm profile enterprise-storage
12. Disable Firewall.
service iptables stop
chkconfig iptables off
13. Reboot the OS by issuing reboot command.
14. Optional: old kernels can be removed after OS update.
package-cleanup --oldkernels --count=1 -y
To download the Cisco UCS Drivers ISO bundle, which contains most of the Cisco UCS Virtual Interface Card drivers, complete the following steps:
1. In a web browser, navigate to http://www.cisco.com.
2. Under Support, click All Downloads.
3. In the product selector, click Products, then click Server - Unified Computing.
4. If prompted, enter your Cisco.com username and password to log in.
You must be signed in to download Cisco Unified Computing System (UCS) drivers.
5. Cisco UCS drivers are available for both Cisco UCS B-Series Blade Server Software and Cisco UCS C-Series Rack-Mount UCS-Managed Server Software.
6. Click UCS B-Series Blade Server Software.
7. Click Cisco Unified Computing System (UCS) Drivers.
The latest release version is selected by default. This document is built on Version 3.1(1).
8. Click 3.1(1) Version.
9. Download ISO image of Cisco UCS-related drivers.
10. Choose your download method and follow the prompts to complete your driver download.
11. After the download completes, browse to UCS-B-Series/UCS_3.1.1/ucs-bxxx-drivers-combo.3.1.1/Drivers/Linux/Network/Cisco/VIC/RHEL/RHEL6.7 and copy kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm to each server.
12. ssh to the Server on the Management IP as root.
13. Update the enic driver.
[root@mapr21 ~]# rpm -Uvh /tmp/kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm
Preparing... ########################################### [100%]
1:kmod-enic ########################################### [100%]
14. Update the enic driver on all the MapR servers; a loop sketch for the remaining nodes is shown below.
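The following is a minimal sketch for distributing and installing the driver on the remaining nodes from the first node; the hostnames are examples from this setup, and the loop assumes the rpm is in /tmp and that ssh access (password or key based) is available.
for host in mapr22 mapr23 mapr24; do
  scp /tmp/kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm ${host}:/tmp/
  ssh ${host} "rpm -Uvh /tmp/kmod-enic-2.3.0.18-rhel6u7.el6.x86_64.rpm"
done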
The configuration of NTP is important and must be performed on all systems. To configure network time, complete the following steps:
1. Install NTP-server with utilities.
yum -y install ntp ntpdate
2. Configure NTP by adding at least one NTP server to the NTP config file /etc/ntp.conf.
vi /etc/ntp.conf
server <NTP-SERVER1 IP>
server <NTP-SERVER2 IP>
3. Stop the NTP service and synchronize the time with an NTP server.
service ntpd stop
ntpdate ntp.example.com
4. Start NTP service and configure it to be started automatically.
service ntpd start
chkconfig ntpd on
chkconfig ntpdate on
The SSH Keys must be exchanged between all MapR servers for user ‘root’. To exchange the SSH keys, complete the following steps:
1. Generate the rsa public key by executing the command ssh-keygen -b 2048
[root@mapr21 ~]# ssh-keygen -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
62:8a:0b:c4:4c:d7:de:8d:dc:e7:b7:17:ca:c7:02:94 root@mapr21.ciscolab.local
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . |
| . . . . |
|+ . . o + E |
| + .o+So.. |
|. . o . o. . |
|. . . .o.o .|
| . . .+.+ |
| . .+ |
+-----------------+
2. Exchange the rsa public key by executing the command below from the first server to the rest of the servers. This ensures that every host can establish a password-less ssh connection to every other host:
ssh-copy-id -i /root/.ssh/id_rsa.pub mapr21
[root@mapr21 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub mapr22
Repeat the steps above for all the servers in the MapR cluster; a loop sketch is shown below.
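The following is a minimal sketch that distributes the key from one node to all cluster nodes; the hostnames are examples from this setup, the loop must be run on every node, and you will be prompted for the root password of each target until the keys are in place.
for host in mapr21 mapr22 mapr23 mapr24; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub ${host}
done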
The installation of the MapR software needs a repository with the MapR software packages. If the MapR cluster nodes have internet access, it is recommended to prepare an online repository. If the MapR cluster nodes do not have internet access, you can download the packages from a computer which has internet access, unzip and copy them to a server which is reachable by the MapR cluster nodes or to a MapR cluster node itself and prepare an offline repository.
The following information is required to set up the MapR cluster.
Cluster Name: example: mapr500
For high availability and network bandwidth distribution, virtual IPs are created on the MapR cluster. These virtual IPs are bound to the HANA-Storage Ethernet device. The HANA servers use these virtual IPs to mount the NFS volumes. If a MapR node fails, its virtual IPs move to other active nodes and the HANA servers continue to access the NFS volumes without disruption. It is recommended to create two virtual IP addresses per MapR server; for example, if there are 4 MapR nodes in the cluster, create 8 virtual IPs.
IP Address Range: example: 192.168.110.81-88
Subnet Mask: example: 255.255.255.0 or /24
On every MapR cluster node, create the file /etc/yum.repos.d/mapr.repo with the following content. If a proxy server is not necessary, delete the two lines beginning with "proxy=":
[maprtech]
name=MapR Technologies
baseurl=http://package.mapr.com/releases/v5.0.0/redhat/
enabled=1
gpgcheck=0
protect=1
proxy=http://proxy:port
[maprecosystem]
name=MapR Technologies
baseurl=http://package.mapr.com/releases/ecosystem-5.x/redhat
enabled=1
gpgcheck=0
protect=1
proxy=http://proxy:port
Verify the repository:
yum repolist
If the MapR cluster nodes do not have internet access, you need to create an internal repository. This can either be done on one administrative Linux server that can be reached by all the MapR cluster nodes, or at one MapR cluster node itself.
1. The following packages have to be installed on the server which is providing the repository.
yum -y install createrepo deltarpm python-deltarpm apr apr-util apr-util-ldap httpd httpd-tools
2. Enable the web server.
service httpd start
chkconfig httpd on
3. Create directory.
mkdir -p /var/www/html/yum/base
4. Download the archive from the internet and copy it to the directory /var/www/html/yum/base.
http://package.mapr.com/releases/v5.0.0/redhat/mapr-v5.0.0GA.rpm.tgz
5. Extract the file.
tar -xvzf mapr-v5.0.0GA.rpm.tgz
6. Create the base repository headers.
createrepo /var/www/html/yum/base
7. On every MapR cluster node, create the file /etc/yum.repos.d/mapr.repo with the following content.
[maprtech]
name=MapR Technologies
baseurl=http://<host>/yum/base
enabled=1
gpgcheck=0
8. Verify the repository.
yum repolist
The MapR Packages are installed based on the role of the node. To install the MapR packages, complete the following steps:
1. Install MapR NFS and MapR Fileserver on all nodes in the MapR cluster.
yum -y install mapr-nfs mapr-fileserver
2. Install the zookeeper service on the last three nodes of the MapR cluster.
yum -y install mapr-zookeeper
3. Install the cldb service on the first two nodes of the MapR cluster.
yum -y install mapr-cldb
4. Install the webserver service on all the nodes (a minimum of two nodes) in the MapR cluster.
yum -y install mapr-webserver
After all packages are installed, follow the steps below to configure the MapR cluster.
WARNING: Any changes after these steps without re-running the configuration script might result in an unstable cluster state!
1. Increase memory usage of MapR instance on all nodes.
sed -i 's/service.command.mfs.heapsize.percent=.*/service.command.mfs.heapsize.percent=60/g' /opt/mapr/conf/warden.conf
2. Define the MapR subnets to use on all nodes.
sed -i 's/#export MAPR_SUBNETS=/export MAPR_SUBNETS=<<internal_network_01>>\/<<subnet_short>>,<<internal_network_02>>\/<<subnet_short>>,<<internal_network_03>>\/<<subnet_short>>/g' /opt/mapr/conf/env.sh
Example
sed -i 's/#export MAPR_SUBNETS=/export MAPR_SUBNETS=10.21.21.0\/24,10.22.22.0\/24,10.23.23.0\/24/g' /opt/mapr/conf/env.sh
3. Modify the DrCache size of the NFS server on all nodes.
sed -i 's/#DrCacheSize =.*/DrCacheSize = 614400/g' /opt/mapr/conf/nfsserver.conf
4. Create the /mapr directory and the /opt/mapr/conf/mapr_fstab file on every MapR node.
mkdir -p /mapr
echo "localhost:/mapr /mapr hard,nolock" > /opt/mapr/conf/mapr_fstab
5. Run the configuration script. -C is followed by the cldb nodes, -Z is followed by the zookeeper nodes, -N gives the cluster name, and -no-autostart prevents the MapR software from starting automatically. If you use hostnames instead of IP addresses for -C and -Z, be sure to use hostnames that resolve to the internal cluster network IP addresses.
/opt/mapr/server/configure.sh -C <<cldbnode1>>,<<cldbnode2>> -Z <<zookeepernode1>>,<<zookeepernode2>>,<<zookeepernode3>> -N <<clustername>> -no-autostart
Example
/opt/mapr/server/configure.sh -C mapr21,mapr22 -Z mapr22,mapr23,mapr24 -N mapr500 -no-autostart
6. Create a file with a disk list containing all the RAID 5 virtual drives created on each node.
lsblk | grep 8.2T | awk '{print "/dev/"$1}' > /tmp/disk.list
7. Set up the disks in the disk.list file with a storage pool of 1 disk each on all nodes.
WARNING: The disks will be wiped immediately!
/opt/mapr/server/disksetup -W 1 -F /tmp/disk.list
To start the cluster, complete the following steps:
1. On zookeeper nodes.
service mapr-zookeeper start
2. On all nodes, including zookeeper nodes.
service mapr-warden start
3. Wait for about 5 minutes for the cluster to come up; the commands below can help verify the cluster state.
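The following commands are a hedged verification sketch based on standard MapR tooling; the exact output varies by release.
# on a zookeeper node: check that the zookeeper quorum is running
service mapr-zookeeper qstatus
# on any node: list the cluster nodes and the services running on them
maprcli node list -columns hostname,svc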
To add a license to the MapR Cluster, complete the following steps:
1. When MapR Cluster is installed and running, log in to any MapR cluster node and retrieve your unique MapR cluster ID.
maprcli license showid
2. Send the cluster ID and the number of MapR nodes, along with your customer information, to MapR to get your MapR license.
3. If your cluster has internet access, install your license directly from the MapR License Server. Select "Manage Licenses" in the upper right corner of the MapR Control System GUI.
4. Select "Add licenses via Web", then "Apply Licenses".
5. Alternatively, if your cluster does not have internet access, you can copy your license file to any MapR cluster node, and install it via the command line interface.
maprcli license add -is_file true -license <license_file>
The MapR license file is a text file starting with the line "-----BEGIN SIGNED MESSAGE-----" and ending with the line "-----END MESSAGE HASH-----". You can also use "Add licenses via upload" or "Add licenses via copy/paste" in the "Manage Licenses" dialog in the MapR Control System GUI to install the license from a file.
To create virtual IP addresses for the MapR NFS server, you need the MAC address of the HANA-Storage vNIC on each MapR node, the IP address range for the virtual IPs, and the network subnet.
From each MapR server, execute the ifconfig -a | grep HWaddr command to list the Ethernet devices with their MAC addresses. By comparing the MAC addresses on the OS and in the Cisco UCS service profile, you can see that eth3 on the OS carries the VLAN for HANA-Storage.
ifconfig -a |grep HWaddr
eth0 Link encap:Ethernet HWaddr 00:25:B5:2A:00:20
eth1 Link encap:Ethernet HWaddr 00:25:B5:2B:00:20
eth2 Link encap:Ethernet HWaddr 00:25:B5:2A:00:21
eth3 Link encap:Ethernet HWaddr 00:25:B5:2B:00:21
eth4 Link encap:Ethernet HWaddr 00:25:B5:2A:00:22
Below is the command to create the virtual IPs on the MapR cluster; a verification command follows the example.
maprcli virtualip add -macs <mac_addr_node1> <mac_addr_node2> <mac_addr_nodeN> -virtualipend <<last_virtual_ip>> -netmask <<subnet_mask>> -virtualip <<first_virtual_ip>>
Example
maprcli virtualip add -macs 00:25:b5:fb:00:20 00:25:b5:fb:00:22 00:25:b5:fb:00:24 00:25:b5:fb:00:26 -virtualipend 192.168.110.88 -netmask 255.255.255.0 -virtualip 192.168.110.81
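To verify the assignment, the virtual IPs and the MAC addresses they are bound to can be listed; this is a hedged check using the standard maprcli command.
maprcli virtualip list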
The MapR cluster needs an internal directory structure for its volumes. It is recommended to store the volumes in the /apps folder of the MapR cluster directory structure. The MapR file system is mounted on the MapR cluster nodes under /mapr, so the following directory structure should be created on one of the MapR cluster nodes.
1. Confirm MapR file system is mounted at /mapr.
mount | grep mapr
localhost:/mapr on /mapr type nfs (rw,nolock,addr=127.0.0.1)
2. Create HANA directory.
mkdir -p /mapr/<clustername>/apps/hana/<SID>
Example
mkdir -p /mapr/mapr500/apps/hana/ANA
When creating a volume, the volume name can differ from the directory name in the MapR internal file system.
3. Create volume for /hana/shared.
maprcli volume create -name <volumename> -path /apps/hana/<SID>/<volumename> -replicationtype high_throughput -replication 2 -minreplication 2
4. Create volumes for /hana/data for every single storage partition.
maprcli volume create -name <volumename> -path /apps/hana/<SID>/<datamountpoint_storage_partition> -replicationtype high_throughput -replication 2 -minreplication 2
5. Create volumes for /hana/log for every single storage partition.
maprcli volume create -name <volumename> -path /apps/hana/<SID>/<logmountpoint_storage_partition> -replicationtype low_latency -replication 2 -minreplication 2
6. Turn off the compression for log volumes.
hadoop mfs -setcompression off /mapr/<clustername>/apps/hana/<SID>/<volumename>
It is recommended to reflect the SID in the path for the three different volume types: shared, data, and log.
Example for creating a volume for /hana/shared for SID "ANA"
maprcli volume create -name shared -path /apps/hana/ANA/shared -replicationtype high_throughput -replication 2 -minreplication 2
Example for creating eight volumes for /hana/data for SID "ANA"
maprcli volume create -name data00001 -path /apps/hana/ANA/data00001 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00002 -path /apps/hana/ANA/data00002 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00003 -path /apps/hana/ANA/data00003 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00004 -path /apps/hana/ANA/data00004 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00005 -path /apps/hana/ANA/data00005 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00006 -path /apps/hana/ANA/data00006 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00007 -path /apps/hana/ANA/data00007 -replicationtype high_throughput -replication 2 -minreplication 2
maprcli volume create -name data00008 -path /apps/hana/ANA/data00008 -replicationtype high_throughput -replication 2 -minreplication 2
Example for creating eight volumes for /hana/log for SID "ANA"
maprcli volume create -name log00001 -path /apps/hana/ANA/log00001 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00002 -path /apps/hana/ANA/log00002 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00003 -path /apps/hana/ANA/log00003 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00004 -path /apps/hana/ANA/log00004 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00005 -path /apps/hana/ANA/log00005 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00006 -path /apps/hana/ANA/log00006 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00007 -path /apps/hana/ANA/log00007 -replicationtype low_latency -replication 2 -minreplication 2
maprcli volume create -name log00008 -path /apps/hana/ANA/log00008 -replicationtype low_latency -replication 2 -minreplication 2
Example to turn off compression on eight log volumes for SID "ANA"
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00001
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00002
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00003
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00004
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00005
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00006
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00007
hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log00008
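As a compact, hedged alternative to the explicit commands above, the eight data and log volumes (including the compression setting) can be created in a loop; this sketch assumes the cluster name mapr500 and SID ANA used in the examples.
for i in $(seq 1 8); do
  n=$(printf '%05d' "$i")
  maprcli volume create -name data${n} -path /apps/hana/ANA/data${n} -replicationtype high_throughput -replication 2 -minreplication 2
  maprcli volume create -name log${n} -path /apps/hana/ANA/log${n} -replicationtype low_latency -replication 2 -minreplication 2
  hadoop mfs -setcompression off /mapr/mapr500/apps/hana/ANA/log${n}
done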
This section provides the procedure for installing the RedHat Enterprise Linux operating system and customizing it for the SAP HANA requirements.
For the latest information on SAP HANA installation and OS customization requirements, see the SAP HANA installation guide at: http://www.saphana.com/
The NFS mount options for SAP HANA data and log volumes differ from the default Linux settings. The following is an example of the /etc/fstab entries for an eight-node SAP HANA Scale-Out system with SID ANA.
maprvip01:/mapr/mapr500/apps/hana/ANA/shared /hana/shared nfs nolock,hard,timeo=600 0 0
maprvip01:/mapr/mapr500/apps/hana/ANA/data001 /hana/data/ANA/mnt00001 nfs nolock,hard,timeo=600 0 0
maprvip02:/mapr/mapr500/apps/hana/ANA/data002 /hana/data/ANA/mnt00002 nfs nolock,hard,timeo=600 0 0
maprvip03:/mapr/mapr500/apps/hana/ANA/data003 /hana/data/ANA/mnt00003 nfs nolock,hard,timeo=600 0 0
maprvip04:/mapr/mapr500/apps/hana/ANA/data004 /hana/data/ANA/mnt00004 nfs nolock,hard,timeo=600 0 0
maprvip05:/mapr/mapr500/apps/hana/ANA/data005 /hana/data/ANA/mnt00005 nfs nolock,hard,timeo=600 0 0
maprvip06:/mapr/mapr500/apps/hana/ANA/data006 /hana/data/ANA/mnt00006 nfs nolock,hard,timeo=600 0 0
maprvip07:/mapr/mapr500/apps/hana/ANA/data007 /hana/data/ANA/mnt00007 nfs nolock,hard,timeo=600 0 0
maprvip08:/mapr/mapr500/apps/hana/ANA/data008 /hana/data/ANA/mnt00008 nfs nolock,hard,timeo=600 0 0
maprvip08:/mapr/mapr500/apps/hana/ANA/log001 /hana/log/ANA/mnt00001 nfs nolock,hard,timeo=600 0 0
maprvip07:/mapr/mapr500/apps/hana/ANA/log002 /hana/log/ANA/mnt00002 nfs nolock,hard,timeo=600 0 0
maprvip06:/mapr/mapr500/apps/hana/ANA/log003 /hana/log/ANA/mnt00003 nfs nolock,hard,timeo=600 0 0
maprvip05:/mapr/mapr500/apps/hana/ANA/log004 /hana/log/ANA/mnt00004 nfs nolock,hard,timeo=600 0 0
maprvip04:/mapr/mapr500/apps/hana/ANA/log005 /hana/log/ANA/mnt00005 nfs nolock,hard,timeo=600 0 0
maprvip03:/mapr/mapr500/apps/hana/ANA/log006 /hana/log/ANA/mnt00006 nfs nolock,hard,timeo=600 0 0
maprvip02:/mapr/mapr500/apps/hana/ANA/log007 /hana/log/ANA/mnt00007 nfs nolock,hard,timeo=600 0 0
maprvip01:/mapr/mapr500/apps/hana/ANA/log008 /hana/log/ANA/mnt00008 nfs nolock,hard,timeo=600 0 0
Create the required directories to mount the /hana/shared, /hana/data, and /hana/log volumes. Mount all the volumes from /etc/fstab using "mount -a". Check the status of all mounted volumes using the "df -h" command. A sketch of these commands is shown below, followed by example output.
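The following is a minimal sketch of those commands for the eight-partition layout with SID ANA used in this example; adapt the SID and partition count to your system.
mkdir -p /hana/shared
for i in $(seq 1 8); do
  mkdir -p /hana/data/ANA/mnt0000${i} /hana/log/ANA/mnt0000${i}
done
mount -a
df -h | grep hana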
maprvip01:/mapr/mapr500/apps/hana/ANA/shared
125T 194G 125T 1% /hana/shared
maprvip01:/mapr/mapr500/apps/hana/ANA/data001
125T 194G 125T 1% /hana/data/ANA/mnt00001
maprvip02:/mapr/mapr500/apps/hana/ANA/data002
125T 194G 125T 1% /hana/data/ANA/mnt00002
maprvip03:/mapr/mapr500/apps/hana/ANA/data003
125T 194G 125T 1% /hana/data/ANA/mnt00003
maprvip04:/mapr/mapr500/apps/hana/ANA/data004
125T 194G 125T 1% /hana/data/ANA/mnt00004
maprvip05:/mapr/mapr500/apps/hana/ANA/data005
125T 194G 125T 1% /hana/data/ANA/mnt00005
maprvip06:/mapr/mapr500/apps/hana/ANA/data006
125T 194G 125T 1% /hana/data/ANA/mnt00006
maprvip07:/mapr/mapr500/apps/hana/ANA/data007
125T 194G 125T 1% /hana/data/ANA/mnt00007
maprvip08:/mapr/mapr500/apps/hana/ANA/data008
125T 194G 125T 1% /hana/data/ANA/mnt00008
maprvip08:/mapr/mapr500/apps/hana/ANA/log001
125T 194G 125T 1% /hana/log/ANA/mnt00001
maprvip07:/mapr/mapr500/apps/hana/ANA/log002
125T 194G 125T 1% /hana/log/ANA/mnt00002
maprvip06:/mapr/mapr500/apps/hana/ANA/log003
125T 194G 125T 1% /hana/log/ANA/mnt00003
maprvip05:/mapr/mapr500/apps/hana/ANA/log004
125T 194G 125T 1% /hana/log/ANA/mnt00004
maprvip04:/mapr/mapr500/apps/hana/ANA/log005
125T 194G 125T 1% /hana/log/ANA/mnt00005
maprvip03:/mapr/mapr500/apps/hana/ANA/log006
125T 194G 125T 1% /hana/log/ANA/mnt00006
maprvip02:/mapr/mapr500/apps/hana/ANA/log007
125T 194G 125T 1% /hana/log/ANA/mnt00007
maprvip01:/mapr/mapr500/apps/hana/ANA/log008
125T 194G 125T 1% /hana/log/ANA/mnt00008
Change the directory permissions BEFORE installing HANA. Use the chmod command after the file systems are mounted on each HANA node:
chmod -R 777 /hana/data
chmod -R 777 /hana/log
After the OS for MapR and HANA is installed, you might consider using clustershell for easier administration. With this tool, you can execute commands simultaneously on all nodes or on a specified subset of nodes. Clustershell can be installed on an independent Linux desktop or administration server that has access to the MapR and HANA cluster nodes. It can also run on one or several nodes within the HANA and/or MapR cluster.
Clustershell is part of the Extra Packages for Enterprise Linux (EPEL) repository and is therefore used at your own risk:
https://access.redhat.com/solutions/3358
For the latest information about Clustershell, see the EPEL repository for RHEL 6:
https://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/clustershell.html
MapR provides some Clustershell information:
https://www.mapr.com/developercentral/code/clush-and-cluster-auditing
Before installing Clustershell, create and exchange SSH keys on every host; that is, on the host where clustershell runs and on all MapR and HANA cluster nodes. Refer to the "SSH Keys" chapters for the MapR and HANA cluster nodes.
Install EPEL repository
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
rpm -ivh epel-release-latest-6.noarch.rpm
Verify EPEL repository
yum repolist
Install clustershell. The dependencies are libyaml and PyYAML. If these dependencies are not fulfilled, you first need to register with RHN.
yum install clustershell
After installing clustershell, configure host groups in /etc/clustershell/groups.
In this example, eight HANA nodes and four MapR nodes are part of the clush environment: the group "mapr" contains four nodes named mapr01 - mapr04, and the group "hana" contains eight nodes named cishana01 - cishana08. You can set up more or different groups as you see fit. An example groups file is shown below.
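A hedged example of /etc/clustershell/groups for this setup could look like the following; the node names are the examples used in this guide.
# /etc/clustershell/groups
mapr: mapr[01-04]
hana: cishana[01-08]
all: @mapr,@hana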
The usage of clustershell is quite simple. This example executes "date" on all nodes:
clush -a date
This example shows the version of the compat-sap-c++ package on nodes cishana02 - cishana08:
clush -w cishana[02-08] "rpm -qa|grep compat-sap"
This example executes sed on the mapr node group (nodes mapr01 - mapr04), which changes the line beginning with "#DrCacheSize =" into "DrCacheSize = 614400". Pay attention to how double and single quotation marks are used:
clush -gmapr sed -i "'s/#DrCacheSize =.*/DrCacheSize = 614400/g' /opt/mapr/conf/nfsserver.conf"
Shailendra Mruthunjaya is a Technical Marketing Engineer with the Cisco UCS Solutions and Performance Group. Shailendra has over four years of experience with SAP HANA on the Cisco UCS platform. He has designed several SAP landscapes in public and private cloud environments. Currently, his focus is on developing and validating infrastructure best practices for SAP applications on Cisco UCS servers, Cisco Nexus products, and storage technologies.
Matthias Schlarb is part of the Cisco SAP Competence Center in Walldorf, Germany where he conducts workshops with customers and partners. As a Technical Marketing Engineer, he develops best practices for SAP Cloud Infrastructure on Cisco Unified Computing System.
For their support and contribution to the design, validation, and creation of this CVD, we would like to thank:
· Ralf Klahr, Cisco Systems, Inc.
· Ulrich Kleidon, Cisco Systems, Inc.
· Erik Lillestolen, Cisco Systems, Inc.
· Pramod Ramamurthy, Cisco Systems, Inc.
· Andy Lerner, MapR Technologies, Inc.
· James Sun, MapR Technologies, Inc.