Deployment Guide for Cisco ACI, IBM SVC, FlashSystem 900, and Storwize V5030 with vSphere 6.0U2
Last Updated: February 27, 2017
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2016 Cisco Systems, Inc. All rights reserved.
Table of Contents
Dedicated Management Node Connectivity to ACI Fabric
Cisco UCS Connectivity to ACI Fabric
IBM SAN Volume Controller connectivity to ACI Fabric
Cisco UCS connectivity to SAN Fabric
IBM FlashSystem 900 Base Configuration
Creating a replacement USB key
IBM FlashSystem 900 Initial Configuration
IBM Storwize V5030 Base Configuration
IBM Storwize V5000 Initial Configuration
IBM SAN Volume Controller Base Configuration
IBM SAN Volume Controller Initial Configuration
IBM SAN Volume Controller GUI Setup
Adding External Storage to the SVC
Cisco UCS Initial Configuration
Upgrade Cisco UCS Manager Software to Version 3.1(2b)
Add a Block of Management IP Addresses for KVM Access
Enable Server and Uplink Ports
Acknowledge Cisco UCS Chassis and FEX
Create VSAN for the Fibre Channel Interfaces
Create Port Channels for the Fibre Channel Interfaces
Create Port Channels for Ethernet Uplinks
Create a WWNN Address Pool for FC based Storage Access
Create WWPN Address Pools for FC Based Storage Access
Create IQN Pools for iSCSI Boot and LUN Access
Create IP Pools for iSCSI Boot and LUN Access
Set Jumbo Frames in Cisco UCS Fabric
Create Local Disk Configuration Policy
Create Network Control Policy for Link Layer Discovery Protocol
Create Server Pool Qualification Policy (Optional)
Update Default Maintenance Policy
Create vNIC/vHBA Placement Policy
Create LAN Connectivity Policies
Adding iSCSI vNICs in LAN policy
Create vHBA Templates for FC Connectivity
Create FC SAN Connectivity Policies
Create iSCSI Boot Service Profile Template
Configure Storage Provisioning:
Configure Operational Policies
Create FC Boot Service Profile Template
Configure Storage Provisioning:
Configure Operational Policies
Backup the Cisco UCS Manager Configuration
Gather Necessary WWPN Information (FC Deployment)
Gather Necessary IQN Information (iSCSI Deployment)
IBM SVC iSCSI Storage Configuration
Create Volumes on the Storage System
IBM SVC Fibre Channel Storage Configuration
Cisco MDS 9396S SAN Zoning for UCS Hosts
Create Volumes on the Storage System
Network (Cisco ACI) Configuration
Cisco Application Policy Infrastructure Controller (APIC) Setup.
Setting up Out of Band Management IP Addresses for Leaf and Spine Switches.
Setting Time Zone and NTP Server
Set up Fabric Access Policy Setup
Create LLDP Interface Policies
Create BPDU Filter/Guard Policies
Create Virtual Port Channels (vPCs)
VPC – UCS Fabric Interconnects
Configure individual ports for iSCSI Access (iSCSI setup only)
Configuring Common Tenant for Management Access
Create Application Profile for Management Access
Create Application Profile for ESXi Connectivity
VMware vSphere Setup for Management Nodes
Log in to Cisco CIMC and setup FlexFlash
Install ESXi on the UCS Servers
Existing Management Switch Configuration
ESXi Host management configuration
Download VMware vSphere Client
Log in to VMware ESXi Hosts Using VMware vSphere Client
Set Up Management VMkernel Ports and Configure Virtual Switch 0
Set Up VMkernel Ports and Configure Virtual Switch 1
VMware vSphere Setup for UCS Host Environment
Install ESXi on the UCS Servers
Set Up Management Networking for ESXi Hosts
Download VMware vSphere Client
Log in to VMware ESXi Hosts Using VMware vSphere Client
Set Up VMkernel Ports and Configure Virtual Switch
Set Up iSCSI VMkernel Ports and vSwitches (iSCSI deployment Only)
Install VMware Drivers for the Cisco Virtual Interface Card (VIC)
Install the Client Integration Plug-in
Building the VMware vCenter Server Appliance
Setup Datacenter, Cluster, DRS and HA for management nodes
Setup Datacenter, Cluster, DRS and HA for compute nodes
ESXi Dump Collector Setup for iSCSI Hosts (iSCSI configuration only)
Cisco ACI – Virtual Machine Networking
Deploying VM Networking for vSphere Distributed Switch (VDS)
Add VMware ESXi Host Servers to VDS
Deploying VM Networking for Cisco Application Virtual Switch (AVS)
Install Cisco Virtual Switch Update Manager (VSUM) Virtual Appliance
Add VM Networking for AVS in APIC
Add VMware ESXi Host Servers to AVS
Add Second VXLAN Tunnel Endpoint (VTEP) to Each ESXi Host for Load Balancing
Connectivity to Existing Infrastructure – Shared L3 Out
Nexus 7000 – Sample Configuration
Configuring ACI Shared Layer 3 Out in Tenant Common
Configure External Routed Domain
Configure Leaf Switch Interfaces
Configure External Routed Networks under Tenant common
Onboarding an Application Tenant
Web-Tier to Shared L3 Out Contract
Configuring L4-L7 Services – Network Only Stitching Mode
Cisco ASA – Sample Configuration
Cisco ASA Context for Application App-A
Create Gateway for ASA Outside interfaces in tenant common
Create Subnet under Bridge Domain
L3 Configuration for the Subnet
Create Port-Channels for Cisco ASA Devices
Create Tenant L4-L7 Device and Service Graph
Remove the Existing Connectivity through L3 Out
Create L4-L7 Service Graph Template
Cisco Validated Designs (CVDs) deliver systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of the customers and to guide them from design to deployment.
Customers looking to deploy applications using shared data center infrastructure face a number of challenges. A recurrent infrastructure challenge is to achieve the levels of IT agility and efficiency that can effectively meet the business objectives. Addressing these challenges requires having an optimal solution with the following key characteristics:
· Availability: Helps ensure applications and services availability at all times with no single point of failure
· Flexibility: Ability to support new services without requiring underlying infrastructure modifications
· Efficiency: Facilitate efficient operation of the infrastructure through re-usable policies
· Manageability: Ease of deployment and ongoing management to minimize operating costs
· Scalability: Ability to expand and grow with significant investment protection
· Compatibility: Minimize risk by ensuring compatibility of integrated components
Cisco and IBM have partnered to deliver a series of VersaStack solutions that enable strategic data center platforms with the above characteristics. The VersaStack solution delivers an integrated architecture that incorporates compute, storage, and network design best practices, minimizing IT risk by validating the integrated architecture to ensure compatibility between the various components. The solution also addresses IT pain points by providing documented design guidance, deployment guidance, and support that can be used in various stages (planning, design, and implementation) of a deployment.
The Cisco Application Centric Infrastructure (ACI) and IBM SAN Volume Controller (SVC) based VersaStack solution, covered in this CVD, delivers a converged infrastructure platform specifically designed for software defined networking (SDN) enabled data centers. In this deployment, SVC standardizes storage functionality across different arrays and provides a single point of control for virtualized storage. The design showcases:
· Cisco ACI enabled Cisco Nexus 9000 switching architecture
· IBM SVC providing single point of management and control for IBM FlashSystem 900 and IBM Storwize V5030
· Cisco Unified Computing System (UCS) servers with Intel Broadwell processors
· Storage designs supporting both Fibre Channel and iSCSI based storage access
· VMware vSphere 6.0U2 hypervisor
· Cisco MDS Fibre Channel (FC) switches for SAN connectivity
The VersaStack solution is a pre-designed, integrated, and validated data center architecture that combines Cisco UCS servers, the Cisco Nexus family of switches, Cisco MDS fabric switches, and IBM Storwize and FlashSystem storage arrays into a single, flexible architecture. VersaStack is designed for high availability, with no single point of failure, while maintaining cost-effectiveness and flexibility in the design to support a wide variety of workloads.
The VersaStack design can support different hypervisor options and bare metal servers, and can also be sized and optimized based on customer workload requirements. The VersaStack design discussed in this document has been validated for resiliency (under fair load) and fault tolerance during system upgrades, component failures, and partial as well as total power loss scenarios.
The VersaStack with Cisco ACI and IBM SVC solution is designed to simplify the data center evolution to a shared cloud-ready infrastructure based on an application driven policy model. Utilizing the Cisco ACI functionality, the VersaStack platform delivers an application centric architecture with centralized automation that combines software flexibility with hardware performance. With the addition of IBM SVC to the solution design, storage administrators can now perform system configuration, system management, and service tasks in a consistent manner across multiple storage arrays from a single easy-to-use graphical user interface, thereby reducing the risk of inconsistent configuration.
The intended audience of this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and enable IT innovation.
This document provides step-by-step configuration and implementation guidelines for setting up the VersaStack with ACI system. The following design elements distinguish this version of VersaStack from previous models:
· Validation of the Cisco ACI release 2.0
· Support for Cisco Tetration ready Nexus 93180YC leaf switches
· Cisco UCS Multi-system validation for added scalability
· Validation of ACI direct-attached Cisco UCS C-Series based dedicated management infrastructure
· IBM SVC 2145-DH8 and 2145-SV1 release 7.7.1.3
· IBM FlashSystem 900 release 1.4.5.0
· IBM Storwize V5030 release 7.7.1.3
· Support for the Cisco UCS release 3.1.2 and updated HTML5 based UCS Manager
· Support for Fibre Channel storage utilizing Cisco MDS 9396S
· Validation of IP-based storage design supporting iSCSI based storage access
· Application design guidance for multi-tiered application using Cisco ACI application profiles and policies
· Support for application segregation utilizing ACI multi-tenancy
· Integration of Cisco ASA firewall appliance for enhanced application security
For more information on previous VersaStack models, please refer to the VersaStack guides at:
VersaStack with Cisco ACI and IBM SVC architecture aligns with the converged infrastructure configurations and best practices as identified in the previous VersaStack releases. The system includes hardware and software compatibility support between all components and aligns to the configuration best practices for each of these components. All the core hardware components and software releases are listed and supported on both:
Cisco compatibility list:
http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html
IBM Interoperability Matrix:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
The system supports high availability at network, compute and storage layers such that no single point of failure exists in the design. The system utilizes 10 and 40 Gbps Ethernet jumbo-frame based connectivity combined with port aggregation technologies such as virtual port-channels (vPC) for non-blocking LAN traffic forwarding. A dual SAN 8/16 Gbps environment provides redundant storage access from compute devices to the storage controllers.
This VersaStack with Cisco ACI and IBM SVC solution utilizes the Cisco UCS platform with Cisco UCS B200 M4 half-width blades and Cisco UCS C220 M4 rack-mount servers connected and managed through Cisco UCS 6248 Fabric Interconnects and the integrated Cisco UCS Manager. These high performance servers are configured as stateless compute nodes where the ESXi 6.0 U2 hypervisor is loaded using SAN (iSCSI and FC) boot. The boot disks that store the ESXi hypervisor image and configuration, along with the block and file based datastores that host application Virtual Machines (VMs), are provisioned on the IBM storage devices.
As in the non-ACI designs of VersaStack, link aggregation technologies play an important role in the VersaStack with ACI solution, providing improved aggregate bandwidth and link resiliency across the solution stack. The Cisco UCS and Cisco Nexus 9000 platforms support active port channeling using the 802.3ad standard Link Aggregation Control Protocol (LACP). In addition, the Cisco Nexus 9000 series features virtual Port Channel (vPC) capability, which allows links that are physically connected to two different Cisco Nexus devices to appear as a single "logical" port channel. Each Cisco UCS Fabric Interconnect (FI) is connected to both Cisco Nexus 93180 leaf switches using vPC-enabled 10GbE uplinks for a total aggregate system bandwidth of 40Gbps. Additional ports can easily be added to the design for increased throughput. Each Cisco UCS 5108 chassis is connected to the UCS FIs using a pair of 10GbE ports from each IO Module for a combined 40Gbps uplink. Each of the Cisco UCS C220 servers connects directly to both UCS FIs using a 10Gbps converged link for an aggregate bandwidth of 20Gbps per server.
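For readers more familiar with standalone NX-OS, the minimal sketch below illustrates the LACP/vPC concept described above. In this ACI-based design, the leaf-switch vPCs toward the UCS Fabric Interconnects are defined through APIC fabric access policies (see the Create Virtual Port Channels (vPCs) procedures later in this guide) rather than through the switch CLI, so this snippet is illustrative only; the domain ID, keepalive addresses, and peer-link port-channel number are assumptions, and Eth1/27 simply mirrors the FI-A uplink shown in Table 4.
feature lacp
feature vpc
!
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
!
interface port-channel 10
  description vPC peer link (the ACI fabric itself plays this role in ACI mode)
  switchport mode trunk
  vpc peer-link
!
interface port-channel 27
  description vPC toward UCS FI-A
  switchport mode trunk
  vpc 27
!
interface Ethernet1/27
  description Uplink to UCS FI-A Eth1/27
  switchport mode trunk
  channel-group 27 mode active
  no shutdown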
To provide compute to storage system connectivity, this design guide highlights two different storage connectivity options:
· Option 1: iSCSI based storage access through Cisco ACI Fabric
· Option 2: FC based storage access through Cisco MDS 9396S
The solution also showcases two stand-alone (not managed through UCS Manager) Cisco UCS C220 M4 rack servers configured as a dedicated management cluster to support core infrastructure service virtual machines (VMs) such as vCenter, Active Directory, and Cisco UCS Performance Manager. These Cisco UCS C220 servers are configured to boot the ESXi hypervisor from internal storage using FlexFlash Secure Digital cards and are connected directly to the Cisco Nexus 93180YC leaf switches. The network configuration allows iSCSI based shared storage access from the management cluster to the IBM SVC for VM deployment.
The Cisco ACI and IBM SVC based VersaStack design option is shown in Figure 1. The IBM SVC nodes, IBM FlashSystem 900, and IBM Storwize V5030 are all connected using a Cisco MDS 9396S based redundant FC fabric. To provide FC based storage access to the compute nodes, the Cisco UCS Fabric Interconnects are connected to the same pair of Cisco MDS 9396S switches and zoned appropriately. To provide iSCSI based storage access, the IBM SVC is connected directly to the Cisco Nexus 93180 leaf switches. A 10GbE port from each IBM SVC node is connected to each of the two Cisco Nexus 93180 leaf switches, providing an aggregate bandwidth of 40Gbps.
Figure 1 VersaStack iSCSI and FC Storage Design with IBM SVC
The reference architecture covered in this document leverages:
· One* Cisco UCS 5108 Blade Server chassis with 2200 Series Fabric Extenders (FEX)
· Four* Cisco UCS B200-M4 Blade Servers
· Two* Cisco UCS C220-M4 Rack-Mount Servers
· Two* Cisco UCS C220-M4 Rack-Mount Servers with FlexFlash
· Two Cisco UCS 6248UP Fabric Interconnects (FI)
· Three Cisco Application Policy Infrastructure Controllers (APIC-M2)
· Two Cisco Nexus 9336 ACI Spine Switches
· Two Cisco Nexus 93180YC-EX ACI leaf Switches
· Two Cisco MDS 9396S Fabric Switches
· Two** IBM SAN Volume Controller 2145-DH8 nodes
· Two** IBM SAN Volume Controller 2145-SV1 nodes
· One IBM FlashSystem 900
· One dual controller IBM Storwize V5030
· VMware vSphere 6.0 update 2
· Cisco Application Virtual Switch (AVS) ***
* The actual number of servers in a customer environment will vary.
** This design guide showcases two IBM 2145-DH8 and two 2145-SV1 nodes setup as a four-node cluster. This configuration can be customized for customer specific deployments.
*** This deployment guide covers both VMware Virtual Distributed Switch (VDS) and Cisco AVS.
This document guides customers through the low-level steps for deploying the base architecture. These procedures cover everything from physical cabling to network, compute and storage device configurations.
For detailed information regarding the design of VersaStack, see: http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/versastack_aci_svc_vmw6_design.html
Table 1 below outlines the hardware and software versions used for the solution validation. It is important to note that Cisco, IBM, and VMware have interoperability matrices that should be referenced to determine support for any specific implementation of VersaStack. See the following links for more information:
· IBM System Storage Interoperation Center
· Cisco UCS Hardware and Software Interoperability Tool
Table 1 Hardware and software revisions
Layer | Device | Image | Comments
Compute | Cisco UCS Fabric Interconnects 6200 Series, Cisco UCS B200 M4, Cisco UCS C220 M4 | 3.1(2b) | Includes the Cisco UCS-IOM 2208XP, Cisco UCS Manager, and Cisco UCS VIC 1340
Compute | Cisco ESXi eNIC Driver | 2.3.0.10 | Ethernet driver for Cisco VIC
Compute | Cisco ESXi fnic Driver | 1.6.0.28 | FCoE driver for Cisco VIC
Network | Cisco Nexus Switches | 12.0(2h) | iNXOS
Network | Cisco APIC | 2.0(2h) | ACI release
Network | Cisco MDS 9396S | 7.3(0)D1(1) | FC switch firmware version
Storage | IBM SVC | 7.7.1.3 | Software version
Storage | IBM Storwize V5030 | 7.7.1.3 | Software version
Storage | IBM FlashSystem 900 | 1.4.5.0 | Software version
Software | VMware vSphere ESXi | 6.0 update 2 | Software version
Software | VMware vCenter | 6.0 update 2 | Software version
Software | Cisco Virtual Switch Update Manager | 2.1 | Software version
Software | Cisco AVS | 5.2(1)SV3(2.2) | Software version
This document provides details for configuring a fully redundant, highly available configuration. Therefore, appropriate references are provided to indicate the component being configured at each step, such as 01 and 02 or A and B. For example, the Cisco UCS Fabric Interconnects are identified as FI-A or FI-B. This document is intended to enable customers and partners to fully configure the customer environment and, during this process, various steps may require the use of customer-specific naming conventions, IP addresses, and VLAN schemes, as well as the recording of appropriate MAC addresses.
This document covers network (ACI), compute (UCS), virtualization (VMware) and related storage configurations (host to storage system connectivity).
The information in this section is provided as a reference for cabling the equipment in the Cisco ACI with IBM SVC VersaStack environment. To simplify the documentation, the architecture shown in Figure 1 is broken down into network, compute, and storage related physical connectivity details.
This document assumes that the out-of-band management ports are plugged into an existing management infrastructure at the deployment site. These interfaces will be used in various configuration steps.
Customers can choose interfaces and ports of their liking, but failure to follow the exact connectivity shown in the figures below will result in changes to the deployment procedures, since specific port information is used in various configuration steps.
Figure 2 and Table 2 provide physical connectivity details of the Cisco ACI network.
Figure 2 Physical Connectivity - ACI Network
Table 2 ACI Network Connectivity
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco APIC 1-3 | Mgmt./CIMC | GbE | Existing OOB Mgmt. Switch | Any
Cisco APIC 1-3 | Eth1/1 | GbE | Existing OOB Mgmt. Switch | Any
Cisco APIC 1-3 | Eth1/2 | GbE | Existing OOB Mgmt. Switch (select a different switch for redundancy) | Any
Cisco Nexus 93180 (A and B) | Mgmt0 | GbE | Existing OOB Mgmt. Switch | Any
Cisco Nexus 93180 A | Eth1/1 | 10GbE | Cisco APIC 1 | Eth2/1
Cisco Nexus 93180 A | Eth1/2 | 10GbE | Cisco APIC 2 | Eth2/1
Cisco Nexus 93180 A | Eth1/3 | 10GbE | Cisco APIC 3 | Eth2/1
Cisco Nexus 93180 B | Eth1/1 | 10GbE | Cisco APIC 1 | Eth2/2
Cisco Nexus 93180 B | Eth1/2 | 10GbE | Cisco APIC 2 | Eth2/2
Cisco Nexus 93180 B | Eth1/3 | 10GbE | Cisco APIC 3 | Eth2/2
Cisco Nexus 93180 A | Eth1/49 | 40GbE | Cisco Nexus 9336 A | Eth1/1
Cisco Nexus 93180 A | Eth1/50 | 40GbE | Cisco Nexus 9336 B | Eth1/1
Cisco Nexus 93180 B | Eth1/49 | 40GbE | Cisco Nexus 9336 A | Eth1/2
Cisco Nexus 93180 B | Eth1/50 | 40GbE | Cisco Nexus 9336 B | Eth1/2
This deployment uses two Cisco UCS C-Series servers connected directly to the ACI leaf switches to act as the management nodes. These management nodes are connected to the ACI fabric using 10Gbps ports. Management access to these hosts is provided by a 1Gbps connection to the existing management infrastructure.
Figure 3 Management Node Connectivity
Table 3 Management Node ACI Connectivity
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS C220-M4 1 | Port 1 | 10GbE | Cisco Nexus 93180 A | Eth1/15
Cisco UCS C220-M4 1 | Port 2 | 10GbE | Cisco Nexus 93180 B | Eth1/15
Cisco UCS C220-M4 2 | Port 1 | 10GbE | Cisco Nexus 93180 A | Eth1/16
Cisco UCS C220-M4 2 | Port 2 | 10GbE | Cisco Nexus 93180 B | Eth1/16
For physical connectivity details of Cisco UCS to the Cisco ACI fabric, refer to Figure 4 through Figure 6 below.
Figure 4 Cisco UCS connectivity to the ACI leaf switches
Table 4 UCS connectivity to ACI Fabric
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS Fabric Interconnect A | Eth1/1 | 10GbE | Cisco UCS Chassis FEX A | IOM 1/1
Cisco UCS Fabric Interconnect A | Eth1/2 | 10GbE | Cisco UCS Chassis FEX A | IOM 1/2
Cisco UCS Fabric Interconnect A | Eth1/27 | 10GbE | Cisco Nexus 93180 A | Eth1/27
Cisco UCS Fabric Interconnect A | Eth1/28 | 10GbE | Cisco Nexus 93180 B | Eth1/27
Cisco UCS Fabric Interconnect B | Eth1/1 | 10GbE | Cisco UCS Chassis FEX B | IOM 1/1
Cisco UCS Fabric Interconnect B | Eth1/2 | 10GbE | Cisco UCS Chassis FEX B | IOM 1/2
Cisco UCS Fabric Interconnect B | Eth1/27 | 10GbE | Cisco Nexus 93180 A | Eth1/28
Cisco UCS Fabric Interconnect B | Eth1/28 | 10GbE | Cisco Nexus 93180 B | Eth1/28
Figure 5 Cisco UCS C-Series server connectivity to UCS Fabric Interconnects
Table 5 Cisco UCS C-Series connectivity to UCS Fabric Interconnects
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS C220-M4 1 | Port 1 | 10GbE | Cisco UCS Fabric A | Eth1/15
Cisco UCS C220-M4 1 | Port 2 | 10GbE | Cisco UCS Fabric B | Eth1/15
Cisco UCS C220-M4 2 | Port 1 | 10GbE | Cisco UCS Fabric A | Eth1/16
Cisco UCS C220-M4 2 | Port 2 | 10GbE | Cisco UCS Fabric B | Eth1/16
For added scalability, additional UCS chassis can be added to any of the open ports on the ACI fabric leaf as shown in Figure 6.
Figure 6 Connectivity for 2nd UCS Chassis (optional)
Table 6 Second UCS Chassis connectivity to ACI Fabric (optional)
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS Fabric Interconnect A | Eth1/1 | 10GbE | Cisco UCS Chassis FEX A | IOM 1/1
Cisco UCS Fabric Interconnect A | Eth1/2 | 10GbE | Cisco UCS Chassis FEX A | IOM 1/2
Cisco UCS Fabric Interconnect A | Eth1/27 | 10GbE | Cisco Nexus 93180 A | Eth1/29
Cisco UCS Fabric Interconnect A | Eth1/28 | 10GbE | Cisco Nexus 93180 B | Eth1/29
Cisco UCS Fabric Interconnect B | Eth1/1 | 10GbE | Cisco UCS Chassis FEX B | IOM 1/1
Cisco UCS Fabric Interconnect B | Eth1/2 | 10GbE | Cisco UCS Chassis FEX B | IOM 1/2
Cisco UCS Fabric Interconnect B | Eth1/27 | 10GbE | Cisco Nexus 93180 A | Eth1/30
Cisco UCS Fabric Interconnect B | Eth1/28 | 10GbE | Cisco Nexus 93180 B | Eth1/30
For physical connectivity details of SVC nodes to the Cisco ACI fabric, refer to Figure 7. This deployment shows connectivity for four SVC nodes, a pair of IBM 2145-DH8 nodes and a pair of IBM 2145-SV1 nodes. Additional nodes can be connected to open ports on ACI fabric as needed.
Figure 7 IBM SVC connectivity to ACI Leaf switches
Table 7 IBM SVC connectivity to the ACI Fabric
Local Device | Local Port | Connection | Remote Device | Remote Port
IBM 2145-DH8 node 1 | Port 4 | 10GbE | Cisco Nexus 93180 A | Eth1/17
IBM 2145-DH8 node 1 | Port 6 | 10GbE | Cisco Nexus 93180 B | Eth1/17
IBM 2145-DH8 node 2 | Port 4 | 10GbE | Cisco Nexus 93180 A | Eth1/18
IBM 2145-DH8 node 2 | Port 6 | 10GbE | Cisco Nexus 93180 B | Eth1/18
IBM 2145-SV1 node 1 | Port 4 | 10GbE | Cisco Nexus 93180 A | Eth1/19
IBM 2145-SV1 node 1 | Port 6 | 10GbE | Cisco Nexus 93180 B | Eth1/19
IBM 2145-SV1 node 2 | Port 4 | 10GbE | Cisco Nexus 93180 A | Eth1/20
IBM 2145-SV1 node 2 | Port 6 | 10GbE | Cisco Nexus 93180 B | Eth1/20
For physical connectivity details of Cisco UCS to the Cisco MDS 9396S based redundant SAN fabric, refer to Figure 8 and Figure 9.
Figure 8 UCS connectivity to Cisco MDS switches
Table 8 Cisco UCS connectivity to Cisco MDS Switches
Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS Fabric Interconnect A | FC1/31 | 8Gbps | Cisco MDS 9396S A | FC1/31
Cisco UCS Fabric Interconnect A | FC1/32 | 8Gbps | Cisco MDS 9396S A | FC1/32
Cisco UCS Fabric Interconnect B | FC1/31 | 8Gbps | Cisco MDS 9396S B | FC1/31
Cisco UCS Fabric Interconnect B | FC1/32 | 8Gbps | Cisco MDS 9396S B | FC1/32
Figure 9 shows FC connectivity for IBM SVC, IBM Storwize V5030 and IBM FS900. Additional nodes can be connected and configured by following the same design guidelines.
Figure 9 IBM SVC and Storage System FC Connectivity
Table 9 IBM SVC and Storage System FC Connectivity
Local Device | Local Ports | Connection | Remote Device | Remote Ports
IBM 2145-DH8 Node 1 | Ports 1,3 | 16Gbps | Cisco MDS 9396S A | FC1/5, FC1/9
IBM 2145-DH8 Node 1 | Ports 2,4 | 16Gbps | Cisco MDS 9396S B | FC1/5, FC1/9
IBM 2145-DH8 Node 2 | Ports 1,3 | 16Gbps | Cisco MDS 9396S A | FC1/7, FC1/11
IBM 2145-DH8 Node 2 | Ports 2,4 | 16Gbps | Cisco MDS 9396S B | FC1/7, FC1/11
IBM 2145-SV1 Node 3 | Ports 1,3 | 16Gbps | Cisco MDS 9396S A | FC1/17, FC1/18
IBM 2145-SV1 Node 3 | Ports 2,4 | 16Gbps | Cisco MDS 9396S B | FC1/17, FC1/18
IBM 2145-SV1 Node 4 | Ports 1,3 | 16Gbps | Cisco MDS 9396S A | FC1/19, FC1/20
IBM 2145-SV1 Node 4 | Ports 2,4 | 16Gbps | Cisco MDS 9396S B | FC1/19, FC1/20
IBM FS900 Canister 1 | Ports 1,3 | 16Gbps | Cisco MDS 9396S A | FC1/13-14
IBM FS900 Canister 1 | Ports 2,4 | 16Gbps | Cisco MDS 9396S B | FC1/13-14
IBM FS900 Canister 2 | Ports 1,3 | 16Gbps | Cisco MDS 9396S A | FC1/15-16
IBM FS900 Canister 2 | Ports 2,4 | 16Gbps | Cisco MDS 9396S B | FC1/15-16
IBM V5030 Controller 1 | Port 3 | 16Gbps | Cisco MDS 9396S A | FC1/21
IBM V5030 Controller 1 | Port 4 | 16Gbps | Cisco MDS 9396S B | FC1/21
IBM V5030 Controller 2 | Port 3 | 16Gbps | Cisco MDS 9396S A | FC1/22
IBM V5030 Controller 2 | Port 4 | 16Gbps | Cisco MDS 9396S B | FC1/22
This section covers the initial setup of the IBM FlashSystem 900. Configuring the IBM FlashSystem 900 is a two-stage setup: a USB key and the IBM setup software are used for the initial configuration and IP assignment, and the web-based management GUI is used to complete the configuration.
Begin this procedure only after the physical installation of the FlashSystem 900 has been completed. The computer used to initialize the FlashSystem 900 must have a USB port and a network connection to the FlashSystem 900. A USB key for running the setup was included with the FlashSystem 900. The key is provided with the original shipment, but the software can be downloaded and copied to a blank replacement USB drive if required, as described below.
In the event the original USB key is unavailable, a replacement USB key can be created by following the steps below to download the System Initialization software from the IBM Fix Central support website.
1. Go to https://www.ibm.com/support/fixcentral/
2. Enter the information below into the ‘Find Product’ search tool.
3. Click Continue.
4. Scroll down the results page and locate the latest firmware release for the FlashSystem 900 as depicted below.
5. Click Continue to proceed to the download page.
You will need your IBM login account to download software.
6. Scroll down the web page and select the hyperlink for the InitTool, as detailed below.
7. Extract the contents of the zip archive file to the root directory of any USB key formatted with a FAT32, ext2 or ext3 file system.
To complete this initial configuration process, you need access to the powered-on FlashSystem 900, the USB flash drive that was shipped with the system, the network details (IP address, subnet mask, and gateway) for the system, and a personal computer.
1. Run the System Initialization tool from the USB key. For Windows clients, run InitTool.bat located in the root directory of the USB key. For Mac, Red Hat, and Ubuntu clients, locate the root directory of the USB flash drive (that is, /Volumes/) and type: sh InitTool.sh.
2. Click Next to continue.
3. Select Yes to ‘Are you configuring the first control enclosure in a new system?’ and click Next.
4. Input the IP address for your FlashSystem 900 as well as the required subnet mask and gateway. Click Apply, then click Next.
5. Connect both power supply units, as depicted above. Wait for the status LED to come on, flash, and then remain solidly lit. The process can take up to 10 minutes.
6. Safely eject the USB key from the computer and insert it into the left USB port on the FlashSystem 900, as pictured above. The blue Identify LED will turn on, then off. This process can take up to 3 minutes. Click Next.
7. Remove the USB key from the FlashSystem 900 and reinsert it into the computer.
8. If the Ethernet ports of the newly initialized FlashSystem 900 are attached to the same network as the computer where InitTool was run, InitTool checks connectivity to the system and displays the result of the system initialization process.
9. The initialization software indicates that the operation has completed successfully. Click Finish. The FlashSystem 900 management GUI should now be available for further setup.
After completing the initial tasks above, launch the management GUI and configure the IBM FlashSystem 900. At the time of writing, the following browsers (or later) are supported with the management GUI: Firefox 32, Internet Explorer 10, and Google Chrome 37.
The following IBM Redbook publication provides in-depth knowledge of the IBM FlashSystem 900 product architecture, software and hardware, implementation, and hints and tips: ‘Implementing IBM FlashSystem 900’
1. Open the management GUI of the FlashSystem 900 by browsing to the IP address assigned in the initialization steps above.
2. Log in as superuser with the default password passw0rd. Click Log in.
3. The system will prompt you to change the password for superuser. Make a note of the new password and then click Log in.
4. In the Welcome to System Setup screen click Next.
5. Enter the System Name and click Apply and Next to proceed.
6. Configure the system date and time; it is recommended to configure the system with an NTP server. Click Apply and Next.
It is highly recommended to configure email event notifications, which will automatically notify IBM support centers when problems occur.
7. Enter the complete company name and address, and then click Next.
8. Enter the information of the contact person for the support center and click Apply and Next.
9. Enter the IP address and port for one or more email servers for the Call Home email notification.
10. Review the final summary page, and click Finish to complete the System Setup wizard.
11. In the Setup Complete pop-up window, click Close.
During system setup, the FlashSystem 900 discovers the quantity and size of the flash modules installed in the system. As the final step of the system setup procedure, when Finish is clicked, a single RAID 5 array is created on these flash modules. One flash module is reserved to act as an active spare while the remaining modules form the RAID 5 array.
12. The System view for IBM FS900, as shown above, is now available.
13. Along the left side of the management GUI, hover over each of the icons in the Navigation Dock to become familiar with the options.
14. Select the Settings icon from the Navigation Dock and choose Network.
15. On the Network screen, highlight the Management IP Addresses section. Change the IP address if necessary and click OK. The application might need to close and redirect the browser to the new IP address.
16. While still on the Network screen, select Service IP Addresses from the list on the left and change the service IP addresses for Node 1 and Node 2 as required.
17. Click Access in the Navigation Dock to the left and select Users to access the Users screen.
18. Select Create User.
19. Enter a new name for an alternative admin account. Leave the SecurityAdmin default as the User Group, and input the new password, then click Create. Optionally, an SSH Public Key generated on a Unix server through the command “ssh-keygen -t rsa” can be copied to a public key file and associated with this user through the Choose File button.
20. Logout from the superuser account and log back in with the new account created during the last step.
21. Select Volumes in the Navigation Dock and then select Volumes.
22. Click Create Volumes.
23. Create a volume to be used as the Gold storage tier. This volume will be used by the SAN Volume Controller for high I/O, low latency, mission-critical applications and data. Click OK.
24. Click Create Volumes and repeat the process to provision some of the flash storage for a hybrid storage pool. Using the SAN Volume Controller, the slower Enterprise class disks from the IBM Storwize V5000 will be combined with the flash modules provisioned here to create an Easy Tier storage pool.
25. Validate the volumes you have created on your FlashSystem 900.
Configuring the IBM Storwize V5000 Second Generation is a two-stage setup. The technician port (T) will be used for the initial configuration and IP assignment, and the management GUI will be used to complete the configuration.
For a more in-depth look at installing the IBM Storwize V5000 Second Generation hardware, refer to the Redbook publication: Implementing the IBM Storwize V5000 Gen2
Begin this procedure only after the physical installation of the IBM Storwize V5000 has been completed. The computer used to initialize the IBM Storwize V5000 must have an Ethernet cable connected to the technician port of the IBM Storwize V5000 as well as a supported browser installed. At the time of writing, the following browsers or later are supported with the management GUI: Firefox 32, Internet Explorer 10, and Google Chrome 37.
Do not connect the technician port to a switch. If a switch is detected, the technician port connection might shut down, causing a 746 node error.
1. Power on the IBM Storwize V5000 control enclosure. Use the supplied power cords to connect both power supply units. The enclosure does not have power switches.
When deploying expansion enclosures, the expansion enclosures must be powered on before powering on the control enclosure.
2. From the rear of the control enclosure, check the LEDs on each node canister. The canister is ready with no critical errors when Power is illuminated, Status is flashing, and Fault is off. See figure below for reference.
3. Configure an Ethernet port, on the computer used to connect to the control enclosure, to enable Dynamic Host Configuration Protocol (DHCP) configuration of its IP address and DNS settings.
If DHCP cannot be enabled, configure the PC networking as follows: specify the static IPv4 address 192.168.0.2, subnet mask 255.255.255.0, gateway 192.168.0.1, and DNS 192.168.0.1.
4. Locate the Ethernet port that is labelled T on the rear of the IBM Storwize V5000 node canister. For the IBM Storwize V5030 system, there is a dedicated technician port.
5. Connect an Ethernet cable between the port of the computer that is configured in step 3 and the technician port. After the connection is made, the system will automatically configure the IP and DNS settings for the personal computer if DHCP is available.
6. After the Ethernet port of the personal computer is connected, open a supported browser and browse to the address http://install. (If DHCP is not enabled, open a supported browser and go to the static IP address 192.168.0.1.) The browser will automatically be directed to the initialization tool.
In case of a problem due to a change in system states, wait 5-10 seconds and then try again.
7. Click Next in the System Initialization welcome screen.
8. Click Next to continue with 'As the first node in a new system'.
9. Complete all of the fields with the networking details for managing the system. This will be referred to as the System or Cluster IP address. Click Next.
10. The setup task completes and you are provided a view of the generated satask CLI command, as shown above. Click Close. The storage enclosure will now reboot.
11. The system takes approximately 10 minutes to reboot and reconfigure the Web Server. After this time, click Next to proceed to the final step.
12. After completing the initialization process, disconnect the cable between the computer and the technician port, as instructed above. Re-establish the connection to the customer network and click Finish to be redirected to the management address provided during the configuration.
After completing the initial tasks mentioned above, launch the management GUI and continue configuring the IBM Storwize V5000 system.
The following e-Learning module introduces the IBM Storwize V5000 management interface and provides an overview of the system setup tasks, including configuring the system, migrating and configuring storage, creating hosts, creating and mapping volumes, and configuring email notifications: Getting Started
1. Log in to the management GUI using the cluster IP address configured above.
2. Read and accept the license agreement. Click Accept.
3. Login as superuser with the password of passw0rd. Click Log in.
4. The system will prompt you to change the password for superuser. Change the password for superuser and make a note of the password. Click Log in.
5. On the Welcome to System Setup screen click Next.
6. Enter the System Name and click Apply and Next to proceed.
7. Select the license that was purchased, and enter the number of enclosures that will be used for FlashCopy, Remote Mirroring, Easy Tier, and External Virtualization. Click Apply and Next to proceed.
8. Configure the date and time settings, inputting NTP server details if available. Click Apply and Next to proceed.
9. Enable the Encryption feature (or leave it disabled). Click Next to proceed.
It is highly recommended to configure email event notifications which will automatically notify IBM support centers when problems occur.
10. Enter the complete company name and address, and then click Next.
11. Enter the contact person for the support center calls. Click Apply and Next.
12. Enter the IP address and server port for one or more of the email servers for the Call Home email notification. Click Apply and Next.
13. Review the final summary page, and click Finish to complete the System Setup wizard.
14. Setup Completed. Click Close.
15. The System view of IBM Storwize V5000 is now available, as depicted above.
16. In the left side menu, hover over each of the icons in the Navigation Dock to become familiar with the options.
17. Select the Setting icon from the Navigation Dock and choose Network.
18. On the Network screen, highlight the Management IP Addresses section. Then click the number 1 interface on the left-hand side to bring up the Ethernet port IP menu. Change the IP address if necessary and click OK. The application might need to close and redirect the browser to the new IP address.
19. While still on the Network screen, select Service IP Addresses from the list on the left, select Node Canister 'left', and then change the IP address for port 1. Click OK.
20. Repeat this process for port 1 on the right node canister (and for port 2 on the left and right canisters if you have cabled those ports).
21. Click Access in the Navigation Dock to the left and select Users to access the Users screen.
22. Select Create User.
23. Enter a new name for an alternative admin account. Leave the ‘SecurityAdmin’ default as the User Group, and input the new password, then click Create. Optionally, an SSH Public Key generated on a Unix server through the command “ssh-keygen -t rsa” can be copied to a public key file and associated with this user through the Choose File button.
24. Logout from the superuser account and log back in as the new account.
25. Select Pools from the Navigation Dock and select MDisks by Pools.
26. Click Create Pool, and enter the name of the new storage pool. Click Create.
27. Select Add Storage.
28. Select Internal, review the drive assignments and then select Assign.
Depending on the customer configuration, select 'Internal Custom' to manually create tiered storage pools, grouping together disks by capability. In this deployment, Flash and Enterprise class disks are utilized for the 'Silver' pool and Nearline disks are utilized for the 'Bronze' storage pool.
29. Validate the pools are online and have the relevant storage assigned.
30. Select Volumes in the Navigation Dock and then select Volumes.
31. Click Create Volumes.
32. Choose the Basic or Mirrored volume setting and select the Bronze pool that was previously added. Select Automatic for the I/O group and input the capacity, desired capacity savings, and name for the volume. Click Create and then click Close.
33. Click Create Volumes again, select the Silver storage pool, and select Automatic for the I/O group. Input the capacity, desired capacity savings, and volume name. Click Create and then click Close.
34. Validate the created volumes.
Configuring the IBM SAN Volume Controller is a two-stage setup. The technician port (T) will be used for the initial configuration and IP assignment to the configuration node, and the management GUI will be used to complete the configuration.
For an in-depth look at installing and configuring the IBM SAN Volume Controller, refer to Redbook publication: Implementing the IBM System Storage SAN Volume Controller.
Begin this procedure only after the physical installation of the IBM SAN Volume Controller has been completed. The computer used to initialize the IBM SAN Volume Controller must have an Ethernet cable connecting the personal computer to the technician port of the IBM SAN Volume Controller as well as a supported browser installed. At the time of writing, the following browsers or later are supported with the management GUI; Firefox 45, Internet Explorer 11 and Google Chrome 49.
To initialize a new system, you must connect a computer, configured as below, to the technician port on the rear of the IBM SAN Volume Controller node and then run the initialization tool. This node becomes the configuration node and provides access to the initialization GUI. Access the initialization GUI by using the management IP address through your IP network or through the technician port.
Use the initialization GUI to add each additional candidate node to the system.
1. Power on the IBM SAN Volume Controller node. Use the supplied power cords to connect both power supply units.
2. Check the LEDs on each node. The Power LED should be solidly on after a few seconds; if it continues to blink after one minute, press the power-control button. The LEDs are shown below for both the DH8 and SV1 SAN Volume Controller.
The SAN Volume Controller runs an extended series of power-on self-tests. The node might appear to be idle for up to five minutes after powering on.
3. Operator information LEDs on the front of the DH8 SAN Volume Controller.
a. Power-control button and power-on LED
b. Ethernet icon
c. System-locator button and LED (blue)
d. Ethernet activity LEDs
e. System-error LED (amber)
4. Operator information LEDS on front of the SV1 SAN Volume Controller.
1. Power-control button and power-on LED
2. Identify LED
3. Node status LED
4. Node fault LED
5. Battery status LED
5. Configure an Ethernet port, on the computer used to connect to the SAN Volume Controller, to enable Dynamic Host Configuration Protocol (DHCP) configuration of its IP address and DNS settings.
If DHCP cannot be enabled, configure the PC networking as follows: specify the static IPv4 address 192.168.0.2, subnet mask 255.255.255.0, gateway 192.168.0.1, and DNS 192.168.0.1.
Do not connect the technician port to a switch. If a switch is detected, the technician port connection might shut down, causing a 746 node error.
6. Locate the Ethernet port labelled T on the rear of the IBM SAN Volume Controller node. Refer to the appropriate figures below that show the location of the technician port, labelled 1, on each model.
7. Connect an Ethernet cable between the port of the computer that is configured in step 5 and the technician port. After the connection is made, the system will automatically configure the IP and DNS settings for the personal computer if DHCP is available. If it is not available, the system will use the values provided in the steps above.
8. After the Ethernet port of the personal computer is connected, open a supported browser and browse to the address http://install. (If DHCP is not enabled, open a supported browser and go to the static IP address 192.168.0.1.) The browser will automatically be directed to the initialization tool.
In case of a problem due to a change in system states, wait 5-10 seconds and then try again.
9. Click Next on the System Initialization welcome message.
10. Click Next to continue with 'As the first node in a new system'.
11. Complete all of the fields with the networking details for managing the system. This will be referred to as the System or Cluster IP address. Click Next.
12. The setup task completes and provides a view of the generated satask CLI command, as shown above. Click Close. The storage enclosure will now reboot.
13. The system takes approximately 10 minutes to reboot and reconfigure the Web Server. After this time, click Next to proceed to the final step.
14. After completing the initialization process, disconnect the cable between the computer and the technician port, as instructed above. Re-establish the connection to the customer network and click Finish to be redirected to the management address that was provided while configuring the system.
15. Use the management GUI to add each additional candidate IBM SAN Volume Controller node to the system.
After completing the initial tasks above, launch the management GUI and complete the following steps:
1. Log in to the management GUI using the previously configured cluster IP address.
2. Read and accept the license agreement. Click Accept
3. Resetting the superuser password is required when logging in for the first time. Make a note of the password and then click Log In.
The default password for the superuser account is passw0rd (with a zero, not the letter o).
4. On the Welcome to System Setup screen click Next.
5. Enter the System Name and click Apply and Next to proceed.
6. The Modify System Properties dialogue box shows the CLI command issued when applying the system name. Click Close to continue.
7. Select the licensed functions that were purchased with the system and enter the value (in TB) for Virtualization, FlashCopy, Remote Mirroring, and Real-time Compression. Click Apply and Next to proceed and click Close on the settings confirmation dialogue box.
The above values are shown as an example. Consider your license agreement details when populating this information.
8. Configure the date and time settings and enter the NTP server details. Click Apply and Next to proceed and click Close on the settings confirmation dialogue box.
9. Enable the encryption feature (or leave it disabled). Click Next to proceed.
10. Enter the complete company name and address, and then click Next.
11. Enter the contact information for the support center. Click Apply and Next, and click Close on the settings confirmation dialogue box.
12. Enter the IP address and server port for one or more of the email servers for the Call Home email notification. Click Apply and Next.
13. Review the final summary page, and click Finish to complete the System Setup wizard. Click Close on the settings confirmation dialogue box.
14. Setup Completed. Click Close.
15. The System view for IBM SVC, as shown above, is now available.
Before adding a node to a system, make sure that the switch zoning is configured such that the node being added is in the same zone as all other nodes in the system.
The following steps configure zoning for the WWPNs used to set up the IBM SVC nodes as well as communication between the SVC nodes and the FS900 and V5030 storage systems. WWPN information for the various nodes can be easily collected using the "show flogi database" command. Refer to Table 9 to identify the ports where the IBM nodes are connected to the MDS switches. In this configuration step, various zones will be created to enable communication between all the IBM nodes.
The configuration below assumes 4 SVC nodes have been deployed. FC ports 1 and 3 from each node are connected to the MDS-A switch and FC ports 2 and 4 are connected to the MDS-B switch. Customers can adjust the configuration according to their deployment size.
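For reference, a representative excerpt of the "show flogi database" output on the fabric A switch is shown below, as it appears once the storage-facing ports have been enabled and added to VSAN 101 (steps 1 and 2 in the following procedure). This is a sketch only: the interface-to-WWPN mapping and FCIDs will differ in each environment, and the node WWNN values are placeholders; the port WWPNs shown are the SVC Node 1 values that also appear in the zoning verification output later in this section.
VersaStack-SVC-FabA# show flogi database
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID           PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc1/5            101   0x400020  50:05:07:68:0c:21:22:71  <SVC Node1 WWNN>
fc1/9            101   0x400160  50:05:07:68:0c:11:22:71  <SVC Node1 WWNN>
<..>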
Log in to the Cisco MDS 9396S A switch and complete the following steps.
1. Configure all the relevant ports (Table 9) on Cisco MDS as follows:
interface fc1/x
port-license acquire
no shutdown
!
2. Create the VSAN and add all the ports from Table 9:
vsan database
vsan 101 interface fc1/x
vsan 101 interface fc1/x
<..>
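As a concrete illustration only, using the fabric A port assignments from Table 9, steps 1 and 2 could be filled in as shown below. The interface numbers are assumptions based on Table 9; adjust them to match the actual cabling.
interface fc1/5, fc1/7, fc1/9, fc1/11, fc1/13-22
port-license acquire
no shutdown
!
vsan database
vsan 101 interface fc1/5
vsan 101 interface fc1/7
vsan 101 interface fc1/9
vsan 101 interface fc1/11
vsan 101 interface fc1/13
vsan 101 interface fc1/14
vsan 101 interface fc1/15
vsan 101 interface fc1/16
vsan 101 interface fc1/17
vsan 101 interface fc1/18
vsan 101 interface fc1/19
vsan 101 interface fc1/20
vsan 101 interface fc1/21
vsan 101 interface fc1/22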
3. The WWPNs obtained from “show flogi database” will be used in this step. Replace the variables with actual WWPN values.
device-alias database
device-alias name SVC-Node1-FC1 pwwn <Actual PWWN for Node1 FC1>
device-alias name SVC-Node1-FC3 pwwn <Actual PWWN for Node1 FC3>
device-alias name SVC-Node2-FC1 pwwn <Actual PWWN for Node2 FC1>
device-alias name SVC-Node2-FC3 pwwn <Actual PWWN for Node2 FC3>
device-alias name SVC-Node3-FC1 pwwn <Actual PWWN for Node3 FC1>
device-alias name SVC-Node3-FC3 pwwn <Actual PWWN for Node3 FC3>
device-alias name SVC-Node4-FC1 pwwn <Actual PWWN for Node4 FC1>
device-alias name SVC-Node4-FC3 pwwn <Actual PWWN for Node4 FC3>
device-alias name FS900-Can1-FC1 pwwn <Actual PWWN for FS900 CAN1 FC1>
device-alias name FS900-Can1-FC3 pwwn <Actual PWWN for FS900 CAN1 FC3>
device-alias name FS900-Can2-FC1 pwwn <Actual PWWN for FS900 CAN2 FC1>
device-alias name FS900-Can2-FC3 pwwn <Actual PWWN for FS900 CAN2 FC3>
device-alias name V5030-Cont1-FC3 pwwn <Actual PWWN for V5030 Cont1 FC3>
device-alias name V5030-Cont2-FC3 pwwn <Actual PWWN for V5030 Cont2 FC3>
device-alias commit
4. Create the zones and add device-alias members for the SVC inter-node and SVC nodes to storage system configurations.
zone name Inter-Node vsan 101
member device-alias SVC-Node1-FC1
member device-alias SVC-Node1-FC3
member device-alias SVC-Node2-FC1
member device-alias SVC-Node2-FC3
member device-alias SVC-Node3-FC1
member device-alias SVC-Node3-FC3
member device-alias SVC-Node4-FC1
member device-alias SVC-Node4-FC3
!
zone name SVC-V5030 vsan 101
member device-alias SVC-Node1-FC1
member device-alias SVC-Node1-FC3
member device-alias SVC-Node2-FC1
member device-alias SVC-Node2-FC3
member device-alias SVC-Node3-FC1
member device-alias SVC-Node3-FC3
member device-alias SVC-Node4-FC1
member device-alias SVC-Node4-FC3
member device-alias V5030-Cont1-FC3
member device-alias V5030-Cont2-FC3
!
zone name SVC-FS900 vsan 101
member device-alias SVC-Node1-FC1
member device-alias SVC-Node1-FC3
member device-alias SVC-Node2-FC1
member device-alias SVC-Node2-FC3
member device-alias SVC-Node3-FC1
member device-alias SVC-Node3-FC3
member device-alias SVC-Node4-FC1
member device-alias SVC-Node4-FC3
member device-alias FS900-Can1-FC1
member device-alias FS900-Can1-FC3
member device-alias FS900-Can2-FC1
member device-alias FS900-Can2-FC3
!
5. Add zones to zoneset.
zoneset name versastackzoneset vsan 101
member Inter-Node
member SVC-V5030
member SVC-FS900
6. Activate the zoneset.
zoneset activate name versastackzoneset vsan 101
Validate all the HBAs are logged into the MDS switch. The SVC nodes and storage systems should be powered on.
7. Validate all the HBAs are logged into the switch using the “show zoneset active” command.
VersaStack-SVC-FabA# show zoneset active
zoneset name versastackzoneset vsan 101
zone name Inter-Node vsan 101
* fcid 0x400020 [pwwn 50:05:07:68:0c:21:22:71] [SVC-Node1-FC1]
* fcid 0x400160 [pwwn 50:05:07:68:0c:11:22:71] [SVC-Node1-FC3]
* fcid 0x400040 [pwwn 50:05:07:68:0c:21:22:67] [SVC-Node2-FC1]
* fcid 0x400120 [pwwn 50:05:07:68:0c:11:22:67] [SVC-Node2-FC3]
* fcid 0x400180 [pwwn 50:05:07:68:0c:11:6f:60] [SVC-Node3-FC1]
* fcid 0x4001a0 [pwwn 50:05:07:68:0c:12:6f:60] [SVC-Node3-FC3]
* fcid 0x4001c0 [pwwn 50:05:07:68:0c:11:6f:63] [SVC-Node4-FC1]
* fcid 0x4001e0 [pwwn 50:05:07:68:0c:12:6f:63] [SVC-Node4-FC3]
zone name SVC-V5030 vsan 101
* fcid 0x400020 [pwwn 50:05:07:68:0c:21:22:71] [SVC-Node1-FC1]
* fcid 0x400160 [pwwn 50:05:07:68:0c:11:22:71] [SVC-Node1-FC3]
* fcid 0x400040 [pwwn 50:05:07:68:0c:21:22:67] [SVC-Node2-FC1]
* fcid 0x400120 [pwwn 50:05:07:68:0c:11:22:67] [SVC-Node2-FC3]
* fcid 0x400180 [pwwn 50:05:07:68:0c:11:6f:60] [SVC-Node3-FC1]
* fcid 0x4001a0 [pwwn 50:05:07:68:0c:12:6f:60] [SVC-Node3-FC3]
* fcid 0x4001c0 [pwwn 50:05:07:68:0c:11:6f:63] [SVC-Node4-FC1]
* fcid 0x4001e0 [pwwn 50:05:07:68:0c:12:6f:63] [SVC-Node4-FC3]
* fcid 0x400200 [pwwn 50:05:07:68:0d:0c:58:f0] [V5030-Cont1-FC3]
* fcid 0x400220 [pwwn 50:05:07:68:0d:0c:58:f1] [V5030-Cont2-FC3]
zone name SVC-FS900 vsan 101
* fcid 0x400020 [pwwn 50:05:07:68:0c:21:22:71] [SVC-Node1-FC1]
* fcid 0x400160 [pwwn 50:05:07:68:0c:11:22:71] [SVC-Node1-FC3]
* fcid 0x400040 [pwwn 50:05:07:68:0c:21:22:67] [SVC-Node2-FC1]
* fcid 0x400120 [pwwn 50:05:07:68:0c:11:22:67] [SVC-Node2-FC3]
* fcid 0x400180 [pwwn 50:05:07:68:0c:11:6f:60] [SVC-Node3-FC1]
* fcid 0x4001a0 [pwwn 50:05:07:68:0c:12:6f:60] [SVC-Node3-FC3]
* fcid 0x4001c0 [pwwn 50:05:07:68:0c:11:6f:63] [SVC-Node4-FC1]
* fcid 0x4001e0 [pwwn 50:05:07:68:0c:12:6f:63] [SVC-Node4-FC3]
* fcid 0x4000a0 [pwwn 50:05:07:60:5e:83:cc:81] [FS900-Can1-FC1]
* fcid 0x4000c0 [pwwn 50:05:07:60:5e:83:cc:91] [FS900-Can1-FC3]
* fcid 0x400080 [pwwn 50:05:07:60:5e:83:cc:a1] [FS900-Can2-FC1]
* fcid 0x4000e0 [pwwn 50:05:07:60:5e:83:cc:b1] [FS900-Can2-FC3]
8. Save the configuration.
copy run start
Log into the second (Fabric B) MDS switch and complete the following steps:
1. Configure all the relevant ports (Table 9) on Cisco MDS as follows:
interface fc1/x
port-license acquire
no shutdown
!
2. Create the VSAN and add all the ports from Table 9:
vsan database
vsan 102 interface fc1/x
vsan 102 interface fc1/x
<..>
3. The WWPNs obtained from “show flogi database” will be used in this step. Replace the variables with actual WWPN values.
device-alias database
device-alias name SVC-Node1-FC2 pwwn <Actual PWWN for Node1 FC2>
device-alias name SVC-Node1-FC4 pwwn <Actual PWWN for Node1 FC4>
device-alias name SVC-Node2-FC2 pwwn <Actual PWWN for Node2 FC2>
device-alias name SVC-Node2-FC4 pwwn <Actual PWWN for Node2 FC4>
device-alias name SVC-Node3-FC2 pwwn <Actual PWWN for Node3 FC2>
device-alias name SVC-Node3-FC4 pwwn <Actual PWWN for Node3 FC4>
device-alias name SVC-Node4-FC2 pwwn <Actual PWWN for Node4 FC2>
device-alias name SVC-Node4-FC4 pwwn <Actual PWWN for Node4 FC4>
device-alias name FS900-Can1-FC2 pwwn <Actual PWWN for FS900 CAN1 FC2>
device-alias name FS900-Can1-FC4 pwwn <Actual PWWN for FS900 CAN1 FC4>
device-alias name FS900-Can2-FC2 pwwn <Actual PWWN for FS900 CAN2 FC2>
device-alias name FS900-Can2-FC4 pwwn <Actual PWWN for FS900 CAN2 FC4>
device-alias name V5030-Cont1-FC4 pwwn <Actual PWWN for V5030 Cont1 FC4>
device-alias name V5030-Cont2-FC4 pwwn <Actual PWWN for V5030 Cont2 FC4>
device-alias commit
4. Create the zones and add device-alias members for the SVC inter-node and SVC nodes to storage system configurations.
zone name Inter-Node vsan 102
member device-alias SVC-Node1-FC2
member device-alias SVC-Node1-FC4
member device-alias SVC-Node2-FC2
member device-alias SVC-Node2-FC4
member device-alias SVC-Node3-FC2
member device-alias SVC-Node3-FC4
member device-alias SVC-Node4-FC2
member device-alias SVC-Node4-FC4
!
zone name SVC-V5030 vsan 102
member device-alias SVC-Node1-FC2
member device-alias SVC-Node1-FC4
member device-alias SVC-Node2-FC2
member device-alias SVC-Node2-FC4
member device-alias SVC-Node3-FC2
member device-alias SVC-Node3-FC4
member device-alias SVC-Node4-FC2
member device-alias SVC-Node4-FC4
member device-alias V5030-Cont1-FC4
member device-alias V5030-Cont2-FC4
!
zone name SVC-FS900 vsan 102
member device-alias SVC-Node1-FC2
member device-alias SVC-Node1-FC4
member device-alias SVC-Node2-FC2
member device-alias SVC-Node2-FC4
member device-alias SVC-Node3-FC2
member device-alias SVC-Node3-FC4
member device-alias SVC-Node4-FC2
member device-alias SVC-Node4-FC4
member device-alias FS900-Can1-FC2
member device-alias FS900-Can1-FC4
member device-alias FS900-Can2-FC2
member device-alias FS900-Can2-FC4
!
5. Add zones to zoneset.
zoneset name versastackzoneset vsan 102
member Inter-Node
member SVC-V5030
member SVC-FS900
6. Activate the zoneset.
zoneset activate name versastackzoneset vsan 102
The SVC nodes and storage systems should be powered on so that all the HBAs are logged into the MDS switch.
7. Validate all the HBAs are logged into the switch using the “show zoneset active” command.
VersaStack-SVC-FabB# show zoneset active
zoneset name versastackzoneset vsan 102
zone name Inter-Node vsan 102
* fcid 0x770040 [pwwn 50:05:07:68:0c:22:22:71] [SVC-Node1-FC2]
* fcid 0x770140 [pwwn 50:05:07:68:0c:12:22:71] [SVC-Node1-FC4]
* fcid 0x770020 [pwwn 50:05:07:68:0c:22:22:67] [SVC-Node2-FC2]
* fcid 0x770100 [pwwn 50:05:07:68:0c:12:22:67] [SVC-Node2-FC4]
* fcid 0x770180 [pwwn 50:05:07:68:0c:13:6f:60] [SVC-Node3-FC2]
* fcid 0x7701a0 [pwwn 50:05:07:68:0c:14:6f:60] [SVC-Node3-FC4]
* fcid 0x770160 [pwwn 50:05:07:68:0c:13:6f:63] [SVC-Node4-FC2]
* fcid 0x770200 [pwwn 50:05:07:68:0c:14:6f:63] [SVC-Node4-FC4]
zone name SVC-V5030 vsan 102
* fcid 0x770040 [pwwn 50:05:07:68:0c:22:22:71] [SVC-Node1-FC2]
* fcid 0x770140 [pwwn 50:05:07:68:0c:12:22:71] [SVC-Node1-FC4]
* fcid 0x770020 [pwwn 50:05:07:68:0c:22:22:67] [SVC-Node2-FC2]
* fcid 0x770100 [pwwn 50:05:07:68:0c:12:22:67] [SVC-Node2-FC4]
* fcid 0x770180 [pwwn 50:05:07:68:0c:13:6f:60] [SVC-Node3-FC2]
* fcid 0x7701a0 [pwwn 50:05:07:68:0c:14:6f:60] [SVC-Node3-FC4]
* fcid 0x770160 [pwwn 50:05:07:68:0c:13:6f:63] [SVC-Node4-FC2]
* fcid 0x770200 [pwwn 50:05:07:68:0c:14:6f:63] [SVC-Node4-FC4]
* fcid 0x7701c0 [pwwn 50:05:07:68:0d:10:58:f0] [V5030-Cont1-FC4]
* fcid 0x770220 [pwwn 50:05:07:68:0d:10:58:f1] [V5030-Cont2-FC4]
zone name SVC-FS900 vsan 102
* fcid 0x770040 [pwwn 50:05:07:68:0c:22:22:71] [SVC-Node1-FC2]
* fcid 0x770140 [pwwn 50:05:07:68:0c:12:22:71] [SVC-Node1-FC4]
* fcid 0x770020 [pwwn 50:05:07:68:0c:22:22:67] [SVC-Node2-FC2]
* fcid 0x770100 [pwwn 50:05:07:68:0c:12:22:67] [SVC-Node2-FC4]
* fcid 0x770180 [pwwn 50:05:07:68:0c:13:6f:60] [SVC-Node3-FC2]
* fcid 0x7701a0 [pwwn 50:05:07:68:0c:14:6f:60] [SVC-Node3-FC4]
* fcid 0x770160 [pwwn 50:05:07:68:0c:13:6f:63] [SVC-Node4-FC2]
* fcid 0x770200 [pwwn 50:05:07:68:0c:14:6f:63] [SVC-Node4-FC4]
* fcid 0x770080 [pwwn 50:05:07:60:5e:83:cc:82] [FS900-Can1-FC2]
* fcid 0x7700e0 [pwwn 50:05:07:60:5e:83:cc:92] [FS900-Can1-FC4]
* fcid 0x7700a0 [pwwn 50:05:07:60:5e:83:cc:a2] [FS900-Can2-FC2]
* fcid 0x7700c0 [pwwn 50:05:07:60:5e:83:cc:b2] [FS900-Can2-FC4]
8. Save the configuration.
copy run start
To have a fully functional SVC system, a second node should be added to the configuration. The configuration node should now be aware of the additional nodes in the system because they have become part of the same zone on the fabric. To add a node to a clustered system, complete the following steps.
1. Right-click on the empty node space on io_grp0.
2. Select the 2nd node to be used in io_grp0.
3. Repeat the process for any additional node pairs in order to create additional I/O groups (io_grp1, io_grp2, io_grp3). Click Finish and click Close on the settings confirmation dialogue box.
4. On the left side of the management GUI, hover over each of the icons in the Navigation Dock to become familiar with the options.
5. Select the Settings icon from the Navigation Dock and choose Network.
6. On the Network screen, highlight the Management IP Addresses section. Then click the number 1 interface on the left-hand side to bring up the Ethernet port IP menu. If required, change the IP address configured during system initialization and click OK.
7. While still on the Network screen, select (1) Service IP Addresses from the list on the left and (2) Node Name node1 (io_grp0), then (3) change the IP address for port 1 to reflect the service IP allocated to this node and click OK.
8. Repeat this process for port 1 on the remaining nodes (and port 2 if cabled).
9. Click Access from the Navigation Dock on the left and select Users to access the Users screen.
10. Select Create User.
11. Enter a new name for an alternative admin account. Leave the ‘SecurityAdmin’ default as the User Group, input the new password, and then click Create. Optionally, save an SSH public key generated on a Unix server with the command “ssh-keygen -t rsa” to a file and associate the file with this user through the Choose File button (see the example after this procedure).
12. Log out from the superuser account and log back in as the new account just created.
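If key-based SSH access to the new account is desired, the key pair can be generated on any Unix or Linux workstation. The commands below are a minimal sketch; the file name svc-admin is illustrative only.
ssh-keygen -t rsa -f ~/.ssh/svc-admin
This creates the private key ~/.ssh/svc-admin (kept on the workstation) and the public key ~/.ssh/svc-admin.pub, which is the file uploaded through the Choose File button.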
The following steps cover configuring the IBM FlashSystem 900 and IBM Storwize V5030 as backend storage.
IBM FlashSystem 900
1. Log into the management GUI for IBM FlashSystem 900 to complete these steps.
2. From the management GUI for the IBM FlashSystem 900, select Hosts from the Navigation Dock, then click Hosts.
3. Click Add Host.
4. Create a host with a name representing the SAN Volume Controller and add all the WWPNs (16 in this example) of the SAN Volume Controller nodes to the new host.
5. Validate that the host has been added with the correct number of host ports and that the state is Online.
6. From the Navigation Dock, highlight Volumes, and select Volumes.
7. Select a volume by right-clicking, and then select Map to Host. In this example, the ‘Gold’ volume was selected.
8. Select the host, click Map, and click Close in the Modify Mappings dialogue box.
Switch to the IBM SVC management GUI.
9. Using the IBM SAN Volume Controller management GUI, select Pools and then click External Storage.
10. Select Actions and Discover storage to refresh any changes to the external storage.
11. Right-click the newly presented controller0 and select Rename. Enter the name for the IBM FlashSystem 900 (FlashSystem 900 in this example). Click Close in the Rename Storage System dialogue box.
12. Select the FlashSystem 900 and click the plus sign (+), to expand the view. The volumes that we created and mapped on the backend FlashSystem 900 system will be displayed in the SVC MDisk view.
13. Right-click the new mdisk0 and rename it to FS900-Gold. Click Rename and then click Close on the Rename MDisk dialogue box.
14. Return to the IBM FlashSystem 900 management GUI and, using the steps above, map the Silver volume to the IBM SAN Volume Controller host. Rename the resulting MDisk FS900-Silver.
15. Validate that there are 2 MDisks on IBM SAN Volume Controller provided by the IBM FlashSystem 900 backend storage.
IBM Storwize V5030
1. Log into the management GUI for IBM Storwize V5030 to complete these steps.
2. From the management GUI for the IBM Storwize V5030, select Hosts from the Navigation Dock, then click Hosts.
3. Click Add Host.
4. Create a host for the SAN Volume Controller and add all the WWPNs (16 in this example) of the IBM SAN Volume Controller nodes to the new host. Leave Host type and I/O groups as default. Click Add.
5. Validate that the host has been added with the correct number of host ports and that the state is Online.
6. From the Navigation Dock, highlight Volumes, and select Volumes.
7. Select a volume by right-clicking, and then select Map to Host. In this example, the ‘Bronze’ volume has been selected.
8. Select the host and click Map and click Close to the Modify Mappings dialogue box.
9. Repeat the above steps to map the Silver volume to the host.
10. Using the IBM SAN Volume Controller management GUI, select Pools and then click External Storage.
11. Select Actions and Discover storage to refresh any changes to the external storage.
12. Right-click the newly presented controller1 and select Rename. Enter the name for the IBM Storwize V5030 (Storwize V5030 N1 in this example). Click Close.
13. Repeat the above step for controller2 and rename it to Storwize V5030 N2.
The Storwize V5030 is presented as two storage controllers; each storage controller represents a canister of the V5030 control enclosure. In this deployment, ‘N1’ and ‘N2’ were used to differentiate the two controllers.
14. Select the Storwize V5030 controller and click the plus sign (+) to expand the view. The volumes that were previously created and mapped should be displayed in the SAN Volume Controller MDisk view.
15. Using the volume size as a guide, right-click the new mdisk2 and rename it to V5030-Bronze. Click Rename and then click Close. Repeat the process for mdisk3, renaming it to V5030-Silver.
16. Validate that there are 4 MDisks on the IBM SAN Volume Controller provided by both IBM FlashSystem 900 and Storwize V5030 backend storage systems.
17. Highlight Pools from the Navigation Dock, and select MDisks by Pools.
18. Click Create Pool and enter the name of your first pool: Bronze. Repeat this process for the Silver and Gold pools.
19. Select Add Storage on the Bronze pool.
20. Select the Storwize V5030 storage system and the MDisk V5030-Bronze. Set the tier type to match the type of provisioned storage.
21. Repeat the process for the Gold pool, selecting the FlashSystem 900 storage system and the MDisk FS900-Gold. Set the tier type to Flash.
22. Repeat the process for the Silver pool, selecting the FlashSystem 900 storage system and the MDisk FS900-Silver. Set the tier type to Flash.
23. Repeat the process again for the Silver pool, selecting the Storwize V5030 storage system and the MDisk V5030-Silver. Set the tier type to Enterprise.
24. Validate that the storage pools are configured as shown above.
Cisco UCS configuration requires information about the iSCSI IQNs on the IBM SVC. Therefore, as part of the initial storage configuration, iSCSI ports are configured on the IBM SVC.
Two 10GbE ports from each of the IBM SVC nodes are connected to the Nexus 93180 leaf switches. These ports are configured as shown in Table 10:
Table 10 IBM SVC iSCSI interface configuration
System | Port | Path | VLAN | IP address | Gateway
Node 1 | 4 | iSCSI-A | 3011 | 192.168.191.249/24 | 192.168.191.254
Node 1 | 6 | iSCSI-B | 3021 | 192.168.192.249/24 | 192.168.192.254
Node 2 | 4 | iSCSI-A | 3011 | 192.168.191.250/24 | 192.168.191.254
Node 2 | 6 | iSCSI-B | 3021 | 192.168.192.250/24 | 192.168.192.254
Node 3 | 4 | iSCSI-A | 3011 | 192.168.191.251/24 | 192.168.191.254
Node 3 | 6 | iSCSI-B | 3021 | 192.168.192.251/24 | 192.168.192.254
Node 4 | 4 | iSCSI-A | 3011 | 192.168.191.252/24 | 192.168.191.254
Node 4 | 6 | iSCSI-B | 3021 | 192.168.192.252/24 | 192.168.192.254
To configure the IBM SVC system for iSCSI storage access, complete the following steps:
1. Log into the IBM SVC GUI and navigate to Settings > Network.
2. Click the iSCSI icon and enter the system and node names as shown:
3. Record the resulting iSCSI Name (IQN) in Table 11; it will be used later in the configuration procedure.
Table 11 IBM SVC IQN
Node | IQN
Node 1 |
Node 2 |
Node 3 |
Node 4 |
4. Click the Ethernet Ports icon.
5. Click Actions and choose Modify iSCSI Hosts.
6. Make sure the IPv4 iSCSI hosts field is set to Enabled; if not, change the setting to Enabled and click Modify.
7. If already set, click Cancel to close the configuration box.
8. For each of the ports listed in Table 10, repeat steps 9 through 17.
9. Right-click on the appropriate port and choose Modify IP Settings.
10. Enter the IP address, Subnet Mask, and Gateway information from Table 10 (the gateway will be configured later during the ACI setup).
11. Click Modify.
12. Right-click on the newly updated port and choose Modify VLAN.
13. Check the check box to Enable VLAN.
14. Enter the appropriate VLAN from Table 10.
15. Keep the Apply change to the failover port too check box checked.
16. Click Modify.
17. Repeat the steps for all the iSCSI ports listed in Table 10.
18. Verify all ports are configured as shown below. The output below shows configuration for two SVC node pairs.
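The same IP, gateway, and VLAN settings can also be applied from the SVC CLI with the cfgportip command (the same command used below to set the MTU). The line below is a sketch for port 4 of node1 using the values in Table 10; it assumes the -node, -ip, -mask, -gw, and -vlan parameters available on current Spectrum Virtualize code levels.
VersaStack-SVC:admin>cfgportip -node node1 -ip 192.168.191.249 -mask 255.255.255.0 -gw 192.168.191.254 -vlan 3011 4
Repeat the command with the corresponding values for the remaining ports in Table 10.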
Use the cfgportip CLI command to set Jumbo Frames (MTU 9000). The default value of port MTU is 1500. An MTU of 9000 (jumbo frames) provides improved CPU utilization and increased efficiency by reducing the overhead and increasing the size of the payload.
1. In the IBM SVC management GUI, from the Settings options, select System.
2. Select I/O Groups and identify the I/O Group IDs. In this deployment, io_grp0 (ID 0) and io_grp1 (ID 1) are being utilized.
3. SSH to the IBM SVC management IP address and use following CLI command to set the MTU for ports 4 and 6:
VersaStack-SVC:admin>cfgportip -mtu 9000 -iogrp 0 4
VersaStack-SVC:admin>cfgportip -mtu 9000 -iogrp 1 4
VersaStack-SVC:admin>cfgportip -mtu 9000 -iogrp 0 6
VersaStack-SVC:admin>cfgportip -mtu 9000 -iogrp 1 6
The MTU configuration can be verified using the command: svcinfo lsportip <port number> | grep mtu
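For example, to confirm the new setting on ports 4 and 6 (output omitted; each configured port should report an MTU of 9000):
VersaStack-SVC:admin>svcinfo lsportip 4 | grep mtu
VersaStack-SVC:admin>svcinfo lsportip 6 | grep mtu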
This completes the initial configuration of the IBM systems. The next section covers the Cisco UCS configuration.
This section covers the Cisco UCS setup for VersaStack infrastructure with ACI and IBM SVC design. This section includes setup for both iSCSI as well as FC SAN boot and storage access.
If a customer environment does not require some of the storage protocols covered in this deployment guide, the relevant configuration sections can be skipped.
Table 12 shows the various VLANs, VSANs, and subnets used to set up the infrastructure (Foundation) tenant to provide connectivity between the core elements of the design.
Table 12 Infrastructure (Foundation) Tenant Configuration
VLAN Name | VLAN | Subnet
IB-MGMT | 111 | 192.168.160.0/22
Infra-iSCSI-A | 3010 | 192.168.191.0/24
infra-iSCSI-B | 3020 | 192.168.192.0/24
vMotion | 3000 | 192.168.179.0/24
Native-2 | 2 | N/A
VDS Pool | 1101-1120 | Multiple – Tenant Specific
AVS-Infra | 4093 | N/A
VSAN-A | 101 | N/A
VSAN-B | 102 | N/A
This section provides detailed procedures for configuring the Cisco Unified Computing System (Cisco UCS) for use in a VersaStack environment. The steps are necessary to provision the Cisco UCS C-Series and B-Series servers and should be followed precisely to avoid configuration errors.
To configure the Cisco UCS for use in a VersaStack environment, complete the following steps:
1. Connect to the console port on the first Cisco UCS 6248 fabric interconnect.
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup.(setup/restore)? setup
You have chosen to setup a new Fabric interconnect? Continue? (y/n): y
Enforce strong password? (y/n) [y]: y
Enter the password for "admin": <password>
Confirm the password for "admin": <password>
Is this Fabric interconnect part of a cluster(select 'no' for standalone)? (yes/no) [n]: yes
Which switch fabric (A/B)[]: A
Enter the system name: <Name of the System>
Physical Switch Mgmt0 IP address: <Mgmt. IP address for Fabric A>
Physical Switch Mgmt0 IPv4 netmask: <Mgmt. IP Subnet Mask>
IPv4 address of the default gateway: <Default GW for the Mgmt. IP >
Cluster IPv4 address: <Cluster Mgmt. IP address>
Configure the DNS Server IP address? (yes/no) [n]: y
DNS IP address: <DNS IP address>
Configure the default domain name? (yes/no) [n]: y
Default domain name: <DNS Domain Name>
Join centralized management environment (UCS Central)? (yes/no) [n]: n
Apply and save configuration (select 'no' if you want to re-enter)? (yes/no): yes
2. Wait for the login prompt to make sure that the configuration has been saved.
To configure the second Cisco UCS Fabric Interconnect for use in a VersaStack environment, complete the following steps:
1. Connect to the console port on the second Cisco UCS 6248 fabric interconnect.
Enter the configuration method. (console/gui) ? console
Installer has detected the presence of a peer Fabric interconnect. This
Fabric interconnect will be added to the cluster. Continue (y|n)? y
Enter the admin password for the peer Fabric interconnect: <Admin Password>
Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect Mgmt0 IPv4 Address: <Address provided in last step>
Peer Fabric interconnect Mgmt0 IPv4 Netmask: <Mask provided in last step>
Cluster IPv4 address : <Cluster IP provided in last step>
Peer FI is IPv4 Cluster enabled. Please Provide Local Fabric Interconnect Mgmt0 IPv4 Address
Physical switch Mgmt0 IP address: < Mgmt. IP address for Fabric B>
Apply and save the configuration (select 'no' if you want to re-enter)?
(yes/no): yes
2. Wait for the login prompt to make sure that the configuration has been saved.
To log in to the Cisco Unified Computing System (UCS) environment, complete the following steps:
1. Open a web browser and navigate to the Cisco UCS 6248 fabric interconnect cluster address.
2. Under HTML, click the Launch UCS Manager link to launch the Cisco UCS Manager HTML5 User Interface.
3. When prompted, enter admin as the user name and enter the administrative password.
4. Click Login to log in to Cisco UCS Manager.
5. Respond to the pop-up on Anonymous Reporting and click OK.
This document assumes the use of Cisco UCS 3.1(2b). To upgrade the Cisco UCS Manager software and the UCS 6248 Fabric Interconnect software to version 3.1(2b), refer to Cisco UCS Manager Install and Upgrade Guides.
Cisco highly recommends configuring Call Home in Cisco UCS Manager. Configuring Call Home accelerates resolution of support cases. To configure Call Home, complete the following steps:
1. In Cisco UCS Manager, click the Admin tab in the navigation pane on left.
2. Select All > Communication Management > Call Home.
3. Change the State to On.
4. Fill in all the fields according to your Management preferences and click Save Changes and OK to complete configuring Call Home.
To create a block of IP addresses for out of band (mgmt0) server Keyboard, Video, Mouse (KVM) access in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Expand Pools > root > IP Pools.
3. Right-click IP Pool ext-mgmt and choose Create Block of IPv4 Addresses.
4. Enter the starting IP address of the block, the number of IP addresses required, and the subnet and gateway information. Click OK.
This block of IP addresses should be in the out of band management subnet.
5. Click OK.
6. Click OK in the confirmation message.
To synchronize the Cisco UCS environment to the NTP server, complete the following steps:
1. In Cisco UCS Manager, click the Admin tab in the navigation pane.
2. Select All > Timezone Management > Timezone.
3. In the Properties pane, select the appropriate time zone in the Timezone menu.
4. Click Save Changes, and then click OK.
5. Click Add NTP Server.
6. Enter <NTP Server IP Address> and click OK.
7. Click OK.
Setting the discovery policy simplifies the addition of B-Series Cisco UCS chassis and of additional fabric extenders for further C-Series connectivity. To modify the chassis discovery policy, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane and select Equipment from the list in the left pane.
2. In the right pane, click the Policies tab.
3. Under Global Policies, set the Chassis/FEX Discovery Policy to match the minimum number of uplink ports that are cabled between any chassis IOM or fabric extender (FEX) and the fabric interconnects.
4. Set the Link Grouping Preference to Port Channel.
5. Click Save Changes.
6. Click OK.
To enable server and uplink ports, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand Fixed Module.
4. Expand and select Ethernet Ports.
5. Select the ports that are connected to the Cisco UCS 5108 chassis and UCS C-Series servers, one by one, right-click and select Configure as Server Port.
6. Click Yes to confirm server ports and click OK.
7. Verify that the ports connected to the UCS 5108 chassis and C-series servers are now configured as Server ports by selecting Fabric Interconnect A in the left and Physical Ports tab in the right pane.
8. Select the ports that are connected to the Cisco Nexus 93180 leaf switches, one by one, right-click and select Configure as Uplink Port.
9. Click Yes to confirm uplink ports and click OK.
10. Verify that the uplink ports are now configured as Network ports by selecting Fabric Interconnect A in the left and Physical Ports tab in the right pane.
11. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate) > Fixed Module.
12. Repeat steps 3-10 to configure server and uplink ports on Fabric Interconnect B.
When the Cisco UCS FI ports are configured as server ports, the UCS chassis is automatically discovered and may need to be acknowledged. To acknowledge all Cisco UCS chassis, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Expand Chassis and select each chassis that is listed.
3. Right-click each chassis and select Acknowledge Chassis.
4. Click Yes and then click OK to complete acknowledging the chassis.
If Cisco Nexus 2232PP FEXes are part of the configuration, expand Rack-Mounts and FEX and acknowledge the FEXes one by one.
The FC port and uplink configurations can be skipped if the UCS environment does not need access to the storage environment using FC.
To enable FC uplink ports, complete the following steps:
This step requires a reboot. To avoid an unnecessary switchover, configure the subordinate Fabric Interconnect first.
1. In the Equipment tab, select the Fabric Interconnect B (subordinate FI in this example), and in the Actions pane, select Configure Unified Ports, and click Yes on the splash screen.
2. Slide the lever to change ports 31-32 to Fibre Channel. Click Finish followed by Yes to the reboot message. Click OK.
3. When the subordinate FI has completed its reboot, repeat the procedure to configure the FC ports on the primary Fabric Interconnect. As before, the Fabric Interconnect will reboot after the configuration is complete.
To configure the necessary virtual storage area networks (VSANs) for FC uplinks for the Cisco UCS environment, follow these steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Expand the SAN > SAN Cloud and select Fabric A.
3. Right-click VSANs and choose Create VSAN.
4. Enter VSAN-A as the name of the VSAN for fabric A.
5. Keep the Disabled option selected for FC Zoning.
6. Click the Fabric A radio button.
7. Enter 101 as the VSAN ID for Fabric A.
8. Enter 101 as the FCoE VLAN ID for Fabric A. Click OK twice.
9. In the SAN tab, expand SAN > SAN Cloud > Fabric-B.
10. Right-click VSANs and choose Create VSAN.
11. Enter VSAN-B as the name of the VSAN for Fabric B.
12. Keep the Disabled option selected for FC Zoning.
13. Click the Fabric B radio button.
14. Enter 102 as the VSAN ID for Fabric B. Enter 102 as the FCoE VLAN ID for Fabric B. Click OK twice.
To configure the necessary port channels for the Cisco UCS environment, complete the following steps:
1. In the navigation pane, under SAN > SAN Cloud, expand the Fabric A tree.
2. Right-click FC Port Channels and choose Create Port Channel.
3. Enter 1 for the port channel ID and Po1 for the port channel name.
4. Click Next then choose ports 31 and 32 and click >> to add the ports to the port channel. Click Finish.
5. Click OK.
6. Select FC Port-Channel 1 from the menu in the left pane and from the VSAN drop-down field, select VSAN 101 in the right pane.
7. Click Save Changes and then click OK.
1. Click the SAN tab. In the navigation pane, under SAN > SAN Cloud, expand the Fabric B tree.
2. Right-click FC Port Channels and choose Create Port Channel.
3. Enter 2 for the port channel ID and Po2 for the port channel name. Click Next.
4. Choose ports 31 and 32 and click >> to add the ports to the port channel.
5. Click Finish, and then click OK.
6. Select FC Port-Channel 2 from the menu in the left pane and from the VSAN drop-down field, select VSAN 102 in the right pane.
7. Click Save Changes and then click OK.
To initiate a quick sync of the connections to the MDS switch, right-click each of the recently created port channels, disable the port channel, and then re-enable it.
To configure the necessary Ethernet port channels out of the Cisco UCS environment, complete the following steps:
In this procedure, two port channels are created, one from each Fabric Interconnect (A and B), to both of the Cisco Nexus 93180 leaf switches.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Under LAN > LAN Cloud, expand the Fabric A tree.
3. Right-click Port Channels and choose Create Port Channel.
4. Enter 11 as the unique ID of the port channel.
5. Enter Po11 as the name of the port channel and click Next.
6. Select the network uplink ports to be added to the port channel.
7. Click >> to add the ports to the port channel (27 and 28 in this design).
8. Click Finish to create the port channel and then click OK.
9. In the navigation pane, under LAN > LAN Cloud, expand the Fabric B tree.
10. Right-click Port Channels and choose Create Port Channel.
11. Enter 12 as the unique ID of the port channel.
12. Enter Po12 as the name of the port channel and click Next.
13. Select the network uplink ports (27 and 28 in this design) to be added to the port channel.
14. Click >> to add the ports to the port channel.
15. Click Finish to create the port channel and click OK.
Since the ACI fabric is not yet configured, the port channels will remain in a down state.
To configure the necessary MAC address pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root.
In this procedure, two MAC address pools are created, one for each switching fabric.
3. Right-click MAC Pools under the root organization.
4. Select Create MAC Pool to create the MAC address pool.
5. Enter MAC-Pool-A as the name of the MAC pool.
6. Optional: Enter a description for the MAC pool.
7. Select the option Sequential for the Assignment Order field and click Next.
8. Click Add.
9. Specify a starting MAC address.
It is recommended to place 0A in the second-to-last octet of the starting MAC address to identify all of the MAC addresses as Fabric A addresses (for example, a starting address of 00:25:B5:01:0A:00). It is also recommended not to change the first three octets of the MAC address.
10. Specify a size for the MAC address pool that is sufficient to support the available blade or rack server resources. Remember that multiple Cisco VIC vNICs will be created on each server and each vNIC will be assigned a MAC address.
11. Click OK and then click Finish.
12. In the confirmation message, click OK.
13. Right-click MAC Pools under the root organization.
14. Select Create MAC Pool to create the MAC address pool.
15. Enter MAC-Pool-B as the name of the MAC pool.
16. Optional: Enter a description for the MAC pool.
17. Select the Sequential Assignment Order and click Next.
18. Click Add.
19. Specify a starting MAC address.
It is recommended to place 0B in the second-to-last octet of the starting MAC address to identify all the MAC addresses in this pool as Fabric B addresses (for example, a starting address of 00:25:B5:01:0B:00). It is also recommended not to change the first three octets of the MAC address.
20. Specify a size for the MAC address pool that is sufficient to support the available blade or rack server resources.
21. Click OK and then click Finish.
22. In the confirmation message, click OK.
To configure the necessary universally unique identifier (UUID) suffix pool for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Pools > root.
3. Right-click UUID Suffix Pools and choose Create UUID Suffix Pool.
4. Enter UUID-Pool as the name of the UUID suffix pool.
5. Optional: Enter a description for the UUID suffix pool.
6. Keep the prefix at the derived option.
7. Change the Assignment Order to Sequential.
8. Click Next.
9. Click Add to add a block of UUIDs.
10. Keep the From field at the default setting.
11. Specify a size for the UUID block that is sufficient to support the available blade or rack server resources.
12. Click OK. Click Finish and then click OK.
To configure the necessary server pool for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Pools > root.
3. Right-click Server Pools and choose Create Server Pool.
4. Enter Infra-Server-Pool as the name of the server pool.
5. Optional: Enter a description for the server pool.
6. Click Next.
7. Select two or more servers to be used for setting up the VMware environment and click >> to add them to the Infra-Server-Pool server pool.
8. Click Finish and click OK.
This configuration step can be skipped if the UCS environment does not need to access the storage environment using FC.
For FC boot as well as access to FC LUNs, create a World Wide Node Name (WWNN) pool by completing the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select Pools > root.
3. Right-click WWNN Pools under the root organization and choose Create WWNN Pool to create the WWNN address pool.
4. Enter WWNN-Pool as the name of the WWNN pool.
5. Optional: Enter a description for the WWNN pool.
6. Select the Sequential Assignment Order and click Next.
7. Click Add.
8. Specify a starting WWNN address.
9. Specify a size for the WWNN address pool that is sufficient to support the available blade or rack server resources. Each server will receive one WWNN.
10. Click OK and click Finish.
11. In the confirmation message, click OK.
This configuration step can be skipped if the UCS environment does not need access to the storage environment using FC.
If you are providing FC boot or access to FC LUNs, create a World Wide Port Name (WWPN) pool for each SAN switching fabric by completing the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select Pools > root.
3. Right-click WWPN Pools under the root organization and choose Create WWPN Pool to create the first WWPN address pool.
4. Enter WWPN-Pool-A as the name of the WWPN pool.
5. Optional: Enter a description for the WWPN pool.
6. Select the Sequential Assignment Order and click Next.
7. Click Add.
8. Specify a starting WWPN address.
It is recommended to place 0A in the second-to-last octet of the starting WWPN address to identify all of the WWPN addresses as Fabric A addresses (for example, a starting address of 20:00:00:25:B5:01:0A:00).
9. Specify a size for the WWPN address pool that is sufficient to support the available blade or rack server resources. Each server’s Fabric A vHBA will receive one WWPN from this pool.
10. Click OK and click Finish.
11. In the confirmation message, click OK.
12. Right-click WWPN Pools under the root organization and choose Create WWPN Pool to create the second WWPN address pool.
13. Enter WWPN-Pool-B as the name of the WWPN pool.
14. Optional: Enter a description for the WWPN pool.
15. Select the Sequential Assignment Order and click Next.
16. Click Add.
17. Specify a starting WWPN address.
It is recommended to place 0B in the second-to-last octet of the starting WWPN address to identify all of the WWPN addresses as Fabric B addresses (for example, a starting address of 20:00:00:25:B5:01:0B:00).
18. Specify a size for the WWPN address pool that is sufficient to support the available blade or rack server resources. Each server’s Fabric B vHBA will receive one WWPN from this pool.
19. Click OK and click Finish.
20. In the confirmation message, click OK.
This configuration step can be skipped if the UCS environment does not need access to the storage environment using iSCSI.
To enable iSCSI boot and provide access to iSCSI LUNs, configure the necessary IQN pools in the Cisco UCS Manager by completing the following steps:
1. In the UCS Manager, select the SAN tab.
2. Select Pools > root.
3. Right-click IQN Pools under the root organization and choose Create IQN Suffix Pool to create the IQN pool.
4. Enter Infra-IQN-Pool for the name of the IQN pool.
5. Optional: Enter a description for the IQN pool.
6. Enter iqn.1992-08.com.cisco as the prefix.
7. Select the option Sequential for Assignment Order field. Click Next.
8. Click Add.
9. Enter an identifier with ucs-host as the suffix. A rack number can be added to the suffix to make the IQN unique within a data center (03 in the example below). The resulting IQN format is also illustrated after this procedure.
10. Enter 1 in the From field.
11. Specify a size of the IQN block sufficient to support the available server resources. Each server will receive one IQN.
12. Click OK.
13. Click Finish. In the message box that displays, click OK.
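The initiator names generated from this pool are expected to take the form <prefix>:<suffix>:<number>. With the values used above, the first IQN would look similar to the following (illustrative only):
iqn.1992-08.com.cisco:ucs-host03:1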
This configuration step can be skipped if the UCS environment does not need access to the storage environment using iSCSI.
For enabling iSCSI storage access, these steps provide details for configuring the necessary IP pools in the Cisco UCS Manager:
Two IP pools are created, one for each switching fabric.
1. In Cisco UCS Manager, select the LAN tab.
2. Select Pools > root.
3. Right-click IP Pools under the root organization and choose Create IP Pool to create the IP pool.
4. Enter iSCSI-initiator-A for the name of the IP pool.
5. Optional: Enter a description of the IP pool.
6. Select the option Sequential for the Assignment Order field. Click Next.
7. Click Add.
8. In the From field, enter the beginning of the range of iSCSI IP addresses to assign. These addresses are covered in Table 12.
9. Enter the Subnet Mask.
10. Set the size with sufficient address range to accommodate the servers. Click OK.
11. Click Next and then click Finish.
12. Click OK in the confirmation message.
13. Right-click IP Pools under the root organization and choose Create IP Pool to create the IP pool.
14. Enter iSCSI-initiator-B for the name of the IP pool.
15. Optional: Enter a description of the IP pool.
16. Select the Sequential option for the Assignment Order field. Click Next.
17. Click Add.
18. In the From field, enter the beginning of the range of iSCSI IP addresses to assign. These addresses are covered in Table 12.
19. Enter the Subnet Mask.
20. Set the size with sufficient address range to accommodate the servers. Click OK.
21. Click Next and then click Finish.
22. Click OK in the confirmation message.
To configure the necessary VLANs in the Cisco UCS Manager, complete the following steps for all the VLANs listed in Table 13:
Table 13 VLANs on the UCS System
VLAN Name | VLAN
IB-Mgmt | 111
Infra-iSCSI-A* | 3010
infra-iSCSI-B* | 3020
vMotion | 3000
Native-2 | 2
APIC-Pool- | 1101-1120**
AVS-Infra | 4093***
* Infra-iSCSI-A/B VLANs are required for iSCSI deployments only
** APIC Pool VLANs are required for VMware vDS deployments only
*** AVS-Infra VLAN is required for AVS deployment (VxLAN mode) only
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > LAN Cloud.
3. Right-click VLANs and choose Create VLANs.
4. Enter the name from the VLAN Name column.
5. Keep the Common/Global option selected for the scope of the VLAN.
6. Enter the VLAN ID associated with the name.
7. Keep the Sharing Type as None.
8. Click OK, and then click OK again.
9. Click Yes, and then click OK twice.
10. Repeat these steps for all the VLANs in Table 13.
To define a range of VLANs from a single screen, refer to the figure below. In the figure, the APIC-Pool- prefix is prepended to all the VLAN names (for example, APIC-Pool-1101, APIC-Pool-1102, and so on).
Firmware management policies allow the administrator to select the corresponding packages for a given server configuration. These policies often include packages for adapter, BIOS, board controller, FC adapters, host bus adapter (HBA) option ROM, and storage controller.
To create a firmware management policy for a given server configuration in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Host Firmware Packages and choose Create Host Firmware Package.
4. Enter Infra-FW-Pack as the name of the host firmware package.
5. Keep the Host Firmware Package as Simple.
6. Select the version 3.1(2b) for both the Blade and Rack Packages.
7. Click OK to create the host firmware package.
8. Click OK.
To configure jumbo frames in the Cisco UCS fabric, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. On the Best Effort row, enter 9216 in the box under the MTU column.
5. Click Save Changes in the bottom of the window.
6. Click OK.
When using an external storage system, a local disk configuration policy for the Cisco UCS environment is necessary because the servers in the environment will not contain a local disk.
This policy should not be applied to the servers that contain local disks.
To create a local disk configuration policy for no local disks, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Local Disk Config Policies and choose Create Local Disk Configuration Policy.
4. Enter SAN-Boot as the local disk configuration policy name.
5. Change the mode to No Local Storage.
6. Click OK to create the local disk configuration policy.
7. Click OK again.
To create a network control policy that enables Link Layer Discovery Protocol (LLDP) on virtual network ports, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click Network Control Policies and choose Create Network Control Policy.
4. Enter Enable-LLDP as the policy name.
5. For LLDP, select Enabled for both Transmit and Receive.
6. Click OK to create the network control policy.
7. Click OK.
To create a power control policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Power Control Policies and choose Create Power Control Policy.
4. Enter No-Power-Cap as the power control policy name.
5. Change the power capping setting to No Cap.
6. Click OK to create the power control policy.
7. Click OK.
To create an optional server pool qualification policy for the Cisco UCS environment, complete the following steps:
This example creates a policy for selecting a Cisco UCS B200-M4 server.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Server Pool Policy Qualifications and choose Create Server Pool Policy Qualification.
4. Enter UCSB-B200-M4 as the name for the policy.
5. Choose Create Server PID Qualifications.
6. Select UCSB-B200-M4 as the PID.
7. Click OK.
8. Click OK to create the server pool policy qualification.
To create a server BIOS policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click BIOS Policies and choose Create BIOS Policy.
4. Enter Infra-Host-BIOS as the BIOS policy name.
5. Change the Quiet Boot setting to disabled.
6. Click Next.
7. On the next screen labeled as Processor, make changes as captured in the following figure for high compute performance.
8. Click Next and under the screen labeled Intel Direct IO, make changes as captured in the following figure.
9. Click Next and under the screen labeled RAS Memory, make changes as captured in the following figure.
10. Click Finish to create the BIOS policy.
11. Click OK.
To update the default Maintenance Policy, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root and then select Maintenance Policies > default.
3. Change the Reboot Policy to User Ack.
4. Check the box to enable On Next Boot.
5. Click Save Changes.
6. Click OK to accept the change.
To create a vNIC/vHBA placement policy for the infrastructure hosts, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click vNIC/vHBA Placement Policies and choose Create Placement Policy.
4. Enter Infra-Policy as the name of the placement policy.
5. Click 1 and select Assigned Only.
6. Click OK and then click OK again.
Eight different vNIC templates are covered in Table 14 below. Not all the vNICs need to be created in all deployments. The vNIC templates covered below are for iSCSI vNICs, infrastructure (storage, management, etc.) vNICs, and data vNICs (VM traffic) for VMware VDS or Cisco AVS. Refer to the Usage column in Table 14 to determine whether a vNIC is needed for a particular ESXi host.
Table 14 vNIC Templates and associated VLANs
Name | Fabric ID | VLANs | Native VLAN | MAC Pool | Usage
Infra-vNIC-A | A | IB-Mgmt, Native-2, vMotion | Native-2 | MAC-Pool-A | All ESXi Hosts
Infra-vNIC-B | B | IB-Mgmt, Native-2, vMotion | Native-2 | MAC-Pool-B | All ESXi Hosts
Infra-iSCSI-A | A | Infra-iSCSI-A | Infra-iSCSI-A | MAC-Pool-A | iSCSI hosts only
Infra-iSCSI-B | B | Infra-iSCSI-B | Infra-iSCSI-B | MAC-Pool-B | iSCSI hosts only
Infra-vNIC-VDS-A | A | APIC-Pool-1101 through APIC-Pool-1200 | | MAC-Pool-A | All hosts using APIC controlled VDS as distributed switch
Infra-vNIC-VDS-B | B | APIC-Pool-1101 through APIC-Pool-1200 | | MAC-Pool-B | All hosts using APIC controlled VDS as distributed switch
Infra-vNIC-AVS-A | A | AVS-Infra | | MAC-Pool-A | All hosts using APIC controlled AVS as distributed switch
Infra-vNIC-AVS-B | B | AVS-Infra | | MAC-Pool-B | All hosts using APIC controlled AVS as distributed switch
Repeat the following steps to set up all the required vNIC templates for a customer deployment scenario (Table 14):
1. In Cisco UCS Manager, select the LAN tab.
2. Select Policies > root.
3. Right-click vNIC Templates and choose Create vNIC Template.
4. Enter the name (listed in Table 14) of the vNIC template.
5. Select Fabric A or B as listed in Table 14. Do not select the Enable Failover check box.
6. Leave the Redundancy Type set to No Redundancy.
7. Under Target, make sure that the VM check box is not selected.
8. Select Updating Template for Template Type.
9. Under VLANs, select all the VLANs as listed in Table 14.
10. Set the appropriate VLAN as the native VLAN; if a native VLAN is not listed in Table 14, do not change the Native VLAN parameters.
11. Under MTU, enter 9000.
12. From the MAC Pool list, select the appropriate MAC pool as listed in Table 14.
13. From the Network Control Policy list, select Enable-LLDP.
14. Click OK to complete creating the vNIC template.
15. Click OK.
16. Repeat the process to define all the necessary vNIC templates.
A LAN connectivity policy defines the vNICs that will be created as part of a service profile deployment. Depending on the storage protocol in use and the distributed switch selected, the LAN connectivity policy will differ. Refer to Table 15 for the list of vNICs that need to be created as part of the LAN connectivity policy definition.
Table 15 vNIC list for LAN connectivity policy
vNIC Name | vNIC Template | Usage
vNIC-A | Infra-vNIC-A | All ESXi Hosts
vNIC-B | Infra-vNIC-B | All ESXi Hosts
vNIC-VDS-A | Infra-vNIC-VDS-A | Hosts using APIC controlled VDS as distributed switch
vNIC-VDS-B | Infra-vNIC-VDS-B | Hosts using APIC controlled VDS as distributed switch
vNIC-AVS-A | Infra-vNIC-AVS-A | All hosts using APIC controlled AVS as distributed switch
vNIC-AVS-B | Infra-vNIC-AVS-B | All hosts using APIC controlled AVS as distributed switch
vNIC-iSCSI-A | Infra-iSCSI-A | iSCSI hosts only
vNIC-iSCSI-B | Infra-iSCSI-B | iSCSI hosts only
To configure the necessary Infrastructure LAN Connectivity Policies, complete the following steps:
The steps in this procedure are repeated to create all the necessary vNICs covered in Table 15.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > Policies > root.
3. Right-click LAN Connectivity Policies and choose Create LAN Connectivity Policy.
4. Enter Infra-LAN-pol as the name of the policy.
5. Click Add to add a vNIC.
6. In the Create vNIC dialog box, enter the name of the vNIC from Table 15.
7. Select the Use vNIC Template checkbox.
8. In the vNIC Template list, select the corresponding vNIC template from Table 15.
9. For the Adapter Policy field, select VMWare.
10. Click OK to add this vNIC to the policy.
11. Click Add to add another vNIC to the policy.
12. Repeat the above steps to add all the required vNICs as shown in Table 15.
13. Verify that the proper vNICs have been created for your VersaStack Implementation. A sample output for iSCSI hosts is shown below.
This configuration step can be skipped if the UCS environment does not need to access the storage environment using iSCSI.
Complete the following steps only if you are using iSCSI SAN access:
1. Verify that the iSCSI base vNICs have already been added as part of the vNIC implementation (steps 1-12).
2. Expand the Add iSCSI vNICs section to add the iSCSI boot vNICs.
3. Click Add in the iSCSI vNIC section to define an iSCSI boot vNIC.
4. Enter iSCSI-A as the name of the vNIC.
5. Select Infra-iSCSI-A for Overlay vNIC.
6. Set the iSCSI Adapter Policy to default.
7. Set the VLAN to Infra-iSCSI-A (native).
8. Leave the MAC Address set to None.
9. Click OK.
10. Click Add in the iSCSI vNIC section again.
11. Enter iSCSI-B as the name of the vNIC.
12. Set the Overlay vNIC to Infra-iSCSI-B.
13. Set the iSCSI Adapter Policy to default.
14. Set the VLAN to Infra-iSCSI-B (native).
15. Leave the MAC Address set to None.
16. Click OK.
17. Verify that the iSCSI vNICs are created correctly.
18. Click OK then OK again to finish creating the LAN Connectivity Policy.
This configuration step can be skipped if the UCS environment does not need to access the storage environment using FC.
To create virtual host bus adapter (vHBA) templates for the Cisco UCS environment, follow these steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click vHBA Templates and choose Create vHBA Template.
4. Enter Infra-vHBA-A as the vHBA template name.
5. Click the radio button to select Fabric A.
6. In the Select VSAN list, choose VSAN-A.
7. In the WWPN Pool list, choose WWPN-Pool-A.
8. Click OK to create the vHBA template.
9. Click OK.
10. Right-click vHBA Templates again and choose Create vHBA Template.
11. Enter Infra-vHBA-B as the vHBA template name.
12. Click the radio button to select Fabric B.
13. In the Select VSAN list, choose VSAN-B.
14. In the WWPN Pool list, choose WWPN-Pool-B.
15. Click OK to create the vHBA template.
This configuration step can be skipped if the UCS environment does not need to access the storage environment using FC.
A SAN connectivity policy defines the vHBAs that will be created as part of a service profile deployment.
To configure the necessary FC SAN Connectivity Policies, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select SAN > Policies > root.
3. Right-click SAN Connectivity Policies and choose Create SAN Connectivity Policy.
4. Enter Infra-FC-pol as the name of the policy.
5. Select WWNN-Pool from the drop-down list under World Wide Node Name.
6. Click Add. You might have to scroll down the screen to see the Add link.
7. Under Create vHBA, enter vHBA-A in the Name field.
8. Check the check box Use vHBA Template.
9. From the vHBA Template drop-down list, select Infra-vHBA-A.
10. From the Adapter Policy drop-down list, select VMWare.
11. Click OK.
12. Click Add.
13. Under Create vHBA, enter vHBA-B in the Name field.
14. Check the check box next to Use vHBA Template.
15. From the vHBA Template drop-down list, select Infra-vHBA-B.
16. From the Adapter Policy drop-down list, select VMWare.
17. Click OK.
18. Click OK again to accept creating the SAN connectivity policy.
This configuration step can be skipped if the UCS environment does not need to access the storage environment using iSCSI.
This procedure applies to a Cisco UCS environment in which the iSCSI interface on Controller A is chosen as the primary target.
To create the boot policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Boot Policies and choose Create Boot Policy.
4. Enter Boot-iSCSI-A as the name of the boot policy.
5. Optional: Enter a description for the boot policy.
6. Keep the Reboot on Boot Order Change option cleared.
7. Expand the Local Devices drop-down menu and select Add CD/DVD.
8. Expand the iSCSI vNICs section and select Add iSCSI Boot.
9. In the Add iSCSI Boot dialog box, enter iSCSI-A.
10. Click OK.
11. Select Add iSCSI Boot.
12. In the Add iSCSI Boot dialog box, enter iSCSI-B.
13. Click OK.
14. Click OK then OK again to save the boot policy.
This configuration step can be skipped if the UCS environment does not need to access the storage environment using FC.
This procedure applies to a Cisco UCS environment in which two FC interfaces are used on each of the SVC nodes for host connectivity. This procedure captures a single boot policy that defines Fabric A as the primary fabric. Customers can choose to create a second boot policy that uses Fabric B as the primary fabric to spread the boot-from-SAN traffic load across both fabrics.
WWPN information from the IBM SVC nodes is required to complete this section. This information can be found by logging into the IBM SVC management address using SSH and issuing the commands as captured below. The information can be recorded in Table 16.
Since the NPIV feature is enabled on the IBM SVC systems, the WWPNs permitted for host communication can be different from the physical WWPNs. Refer to the example below.
Verify the node_id of the SVC nodes (node1 through node4 in this example):
Use the following command to record the WWPN corresponding to ports connected to the SAN fabric:
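The exact output depends on the code level; the lines below are a minimal sketch of the CLI session, assuming the lsnode and lstargetportfc commands available on NPIV-capable IBM Spectrum Virtualize releases (output omitted). Record the virtualized (NPIV) WWPNs that have host I/O permitted for the ports connected to each fabric.
VersaStack-SVC:admin>svcinfo lsnode
VersaStack-SVC:admin>lstargetportfc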
Table 16 IBM SVC – WWPN Information
Node | Port ID | WWPN | Variable | Fabric
SVC Node 1 | 1 | | WWPN-SVC-Node1-FC1-NPIV | A
SVC Node 1 | 3 | | WWPN-SVC-Node1-FC3-NPIV | A
SVC Node 1 | 2 | | WWPN-SVC-Node1-FC2-NPIV | B
SVC Node 1 | 4 | | WWPN-SVC-Node1-FC4-NPIV | B
SVC Node 2 | 1 | | WWPN-SVC-Node2-FC1-NPIV | A
SVC Node 2 | 3 | | WWPN-SVC-Node2-FC3-NPIV | A
SVC Node 2 | 2 | | WWPN-SVC-Node2-FC2-NPIV | B
SVC Node 2 | 4 | | WWPN-SVC-Node2-FC4-NPIV | B
SVC Node 3 | 1 | | WWPN-SVC-Node3-FC1-NPIV | A
SVC Node 3 | 3 | | WWPN-SVC-Node3-FC3-NPIV | A
SVC Node 3 | 2 | | WWPN-SVC-Node3-FC2-NPIV | B
SVC Node 3 | 4 | | WWPN-SVC-Node3-FC4-NPIV | B
SVC Node 4 | 1 | | WWPN-SVC-Node4-FC1-NPIV | A
SVC Node 4 | 3 | | WWPN-SVC-Node4-FC3-NPIV | A
SVC Node 4 | 2 | | WWPN-SVC-Node4-FC2-NPIV | B
SVC Node 4 | 4 | | WWPN-SVC-Node4-FC4-NPIV | B
To create boot policies for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Policies > root.
3. Right-click Boot Policies and choose Create Boot Policy.
4. Enter Boot-Fabric-A as the name of the boot policy.
5. Optional: Enter a description for the boot policy.
6. Keep the Reboot on the Boot Order Change check box unchecked.
7. Expand the Local Devices drop-down list and Choose Add CD/DVD.
8. Expand the vHBAs drop-down list and Choose Add SAN Boot.
9. Make sure to select the Primary radio button as the Type.
10. Enter Fabric-A in the vHBA field.
11. Click OK to add the SAN boot initiator.
12. From the vHBA drop-down menu, choose Add SAN Boot Target.
13. Keep 0 as the value for Boot Target LUN.
14. Enter the WWPN <WWPN-Node-1-Fabric-A> from Table 16.
15. Keep the Primary radio button selected as the SAN boot target type.
16. Click OK to add the SAN boot target.
17. From the vHBA drop-down menu, choose Add SAN Boot Target.
18. Keep 0 as the value for Boot Target LUN.
19. Enter the WWPN <WWPN-Node-2-Fabric-A> from Table 16.
20. Click OK to add the SAN boot target.
21. From the vHBA drop-down list, choose Add SAN Boot.
22. In the Add SAN Boot dialog box, enter Fabric-B in the vHBA box.
23. The SAN boot type should automatically be set to Secondary.
24. Click OK to add the SAN boot initiator.
25. From the vHBA drop-down list, choose Add SAN Boot Target.
26. Keep 0 as the value for Boot Target LUN.
27. Enter the WWPN <WWPN-Node-1-Fabric-B> from Table 16.
28. Keep Primary as the SAN boot target type.
29. Click OK to add the SAN boot target.
30. From the vHBA drop-down list, choose Add SAN Boot Target.
31. Keep 0 as the value for Boot Target LUN.
32. Enter the WWPN <WWPN-Node-2-Fabric-B> from Table 16.
33. Click OK to add the SAN boot target.
34. Click OK, and then click OK again to create the boot policy.
35. Verify that your SAN boot configuration looks similar to the screenshot below.
Service profile template configuration for the iSCSI based SAN access is covered in this section.
This section can be skipped if iSCSI boot is not implemented in the customer environment.
To create the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root.
3. Right-click root and choose Create Service Profile Template. This opens the Create Service Profile Template wizard.
4. Enter infra-ESXi-iSCSI-Host as the name of the service profile template. This service profile template is configured to boot from storage node 1 on fabric A.
5. Select the Updating Template option.
6. Under UUID, select UUID-Pool as the UUID pool.
7. Click Next.
1. In the Storage Provisioning window, select the Local Disk Configuration Policy tab.
2. Select the option SAN-Boot for Local Storage Policy. This policy usage requires servers with no local HDDs.
3. Click Next.
1. In the Networking window, keep the default setting for Dynamic vNIC Connection Policy.
2. Select the Use Connectivity Policy option to configure the LAN connectivity.
3. Select Infra-LAN-pol as the LAN Connectivity Policy.
4. Select Infra-IQN-Pool for Initiator Name Assignment.
5. Click Next.
1. Select the No vHBAs option for the How would you like to configure SAN connectivity? field and continue on to the next section.
2. Click Next.
1. For iSCSI boot, it is not necessary to configure any zoning options. Click Next.
1. In the vNIC/vHBA Placement window, for the field Select Placement, select Infra-Policy.
2. Choose vCon 1 and assign the vHBAs/vNICs to the virtual network interfaces policy in the following order:
a. vNIC-A
b. vNIC-B
c. vNIC-VDS-A OR vNIC-AVS-A (depending on distributed switch in use)
d. vNIC-VDS-B OR vNIC-AVS-B (depending on distributed switch in use)
e. vNIC-iSCSI-A
f. vNIC-iSCSI-B
3. Click Next.
1. Do not configure a vMedia Policy.
2. Click Next.
This step assumes the ESXi boot LUNs are configured to be accessed using SVC IO_grp0 (Node 1 and Node 2).
1. Select Boot-iSCSI-A for Boot Policy.
2. In the Boot Order pane, expand iSCSI and select iSCSI-A.
3. Click Set iSCSI Boot Parameters.
4. Leave the Initiator Name Assignment dialog box <not set> to use the single Service Profile Initiator Name defined in the previous steps.
5. Set iSCSI-initiator-A as the Initiator IP address Policy.
6. Keep the iSCSI Static Target Interface selected and click Add.
7. In the Create iSCSI Static Target field, add the iSCSI target node name for Node 1 (IQN) from Table 11.
8. Enter the IP address of Node 1 iSCSI-A interface from Table 10.
9. Click OK to add the iSCSI static target.
10. Keep the iSCSI Static Target Interface option selected and click Add.
11. In the Create iSCSI Static Target dialog box, add the iSCSI target node name for Node 2 (IQN) from Table 11.
12. Enter the IP address of Node 2 iSCSI-A interface from Table 10.
13. Click OK.
14. Verify both the targets on iSCSI Path A as shown below:
15. Click OK.
16. In the Boot Order pane, select iSCSI-B.
17. Click Set iSCSI Boot Parameters.
18. In the Set iSCSI Boot Parameters dialog box, leave the “Initiator Name Assignment” set to <not set>.
19. In the Set iSCSI Boot Parameters dialog box, set the initiator IP address policy to iSCSI-initiator-B.
20. Keep the iSCSI Static Target Interface option selected and click Add.
21. In the Create iSCSI Static Target dialog box, add the iSCSI target node name for Node 1 (IQN) from Table 11.
22. Enter the IP address of Node 1 iSCSI-B interface from Table 10.
23. Click OK to add the iSCSI static target.
24. Keep the iSCSI Static Target Interface option selected and click Add.
25. In the Create iSCSI Static Target dialog box, add the iSCSI target node name for Node 2 (IQN) from Table 11.
26. Enter the IP address of Node 2 iSCSI-B interface from Table 10.
27. Click OK.
28. Verify both the targets on iSCSI Path B as shown below:
29. Click OK.
30. Review the table to make sure that all boot devices were created and identified. Verify that the boot devices are in the correct boot sequence.
31. Click Next to continue to the next section.
1. Select the default Maintenance Policy from the drop-down list.
2. Click Next.
1. In the Pool Assignment list, select Infra-Server-Pool.
2. Optional: Select a Server Pool Qualification policy.
3. Select Up as the power state to be applied when the profile is associated with the server.
4. Expand Firmware Management at the bottom of the page and select Infra-FW-Pack from the Host Firmware list.
5. Click Next.
1. In the BIOS Policy list, select Infra-Host-BIOS.
2. Expand Power Control Policy Configuration and select No-Power-Cap in the Power Control Policy list.
3. Click Finish to create the service profile template.
4. Click OK in the confirmation message.
In this procedure, a service profile template is created to use FC Fabric A as primary boot path.
This section can be skipped if FC boot is not implemented in the customer environment.
To create service profile templates, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Service Profile Templates > root.
3. Right-click root and choose Create Service Profile Template. This opens the Create Service Profile Template wizard.
4. Enter Infra-ESXi-Host as the name of the service profile template.
5. Select the Updating Template option.
6. Under UUID, select UUID-Pool as the UUID pool.
7. Click Next.
1. Select the Local Disk Configuration Policy tab.
2. Select the SAN-Boot Local Storage Policy. This policy usage requires servers with no local HDDs.
3. Click Next.
1. Keep the default setting for Dynamic vNIC Connection Policy.
2. Select the Use Connectivity Policy option to configure the LAN connectivity.
3. Select the infra-LAN-Policy as the LAN Connectivity Policy.
4. Click Next.
1. Select the Use Connectivity Policy option to configure the SAN connectivity.
2. Select the Infra-FC-Boot as the SAN Connectivity Policy.
3. Click Next.
4. It is not necessary to configure any Zoning options.
5. Click Next.
1. For the Select Placement field, select the Infra-Policy.
2. Choose vCon1 and assign the vHBAs/vNICs to the virtual network interfaces policy in the following order:
a. vHBA Fabric-A
b. vHBA Fabric-B
c. vNIC vNIC-A
d. vNIC vNIC-B
e. vNIC vNIC-VDS-A OR vNIC-AVS-A (depending on distributed switch in use)
f. vNIC vNIC-VDS-B OR vNIC-AVS-B (depending on distributed switch in use)
3. Review to verify that all vNICs and vHBAs were assigned to the policy in the appropriate order.
4. Click Next.
1. There is no need to set a vMedia Policy.
2. Click Next.
1. Select Boot-Fabric-A as the Boot Policy.
2. Verify that all the boot devices are listed correctly.
3. Click Next.
1. Choose the default Maintenance Policy.
2. Click Next.
1. For the Pool Assignment field, select Infra-Server-Pool.
2. Optional: Select a Server Pool Qualification policy.
3. Select the option Up for the power state to be applied when the profile is associated with the server.
4. Expand Firmware Management and select Infra-FW-Pack from the Host Firmware list.
5. Click Next.
6. For the BIOS Policy field, select Infra-Host-BIOS.
7. Expand Power Control Policy Configuration and select No-Power-Cap for the Power Control Policy field.
8. Click Finish to create the service profile template.
9. Click OK in the confirmation message.
To create service profiles from the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Service Profile Templates > root > Service Template Infra-ESXi-Host (Infra-ESXi-iSCSI-Host for iSCSI Deployment).
3. Right-click and choose Create Service Profiles from Template.
4. Enter Infra-ESXi-Host- (Infra-ESXi-iSCSI-Host- for iSCSI deployment) as the service profile prefix.
5. Enter 1 as the Name Suffix Starting Number.
6. Enter the number of servers to be deployed in the Number of Instances field.
Four service profiles were deployed during this validation – two on UCS C-series servers and two on UCS B-series servers.
7. Click OK to create the service profiles.
8. Click OK in the confirmation message.
9. Verify that the service profiles are successfully created and automatically associated with the servers from the pool.
It is recommended to back up the Cisco UCS Configuration. Refer to the link below for additional information:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ucs-manager/GUI-User-Guides/Admin-Management/3-1/b_Cisco_UCS_Admin_Mgmt_Guide_3_1/b_Cisco_UCS_Admin_Mgmt_Guide_3_1_chapter_01001.html
Additional server pools, service profile templates, and service profiles can be created under root or in organizations under the root. All the policies at the root level can be shared among the organizations. Any new physical blades can be added to the existing or new server pools and associated with the existing or new service profile templates.
After the Cisco UCS service profiles have been created, each infrastructure blade in the environment will be assigned certain unique configuration parameters. To proceed with the SAN configuration, this deployment specific information must be gathered from each Cisco UCS blade. Complete the following steps:
1. To gather the vHBA WWPN information, launch the Cisco UCS Manager GUI. In the navigation pane, click the Servers tab. Expand Servers > Service Profiles > root. Select each service profile and expand to see the vHBAs.
2. Click vHBAs to see the WWPNs for both HBAs.
3. Record the WWPN information that is displayed for both the Fabric A vHBA and the Fabric B vHBA for each service profile into the WWPN variable in Table 17. Please add or remove rows from the table depending on the number of ESXi hosts.
Table 17 UCS WWPN Information
Host | vHBA | Variable | Value
Infra-ESXi-Host-1 | Fabric-A | WWPN-Infra-ESXi-Host-1-A | 20:00:00:25:b5:
Infra-ESXi-Host-1 | Fabric-B | WWPN-Infra-ESXi-Host-1-B | 20:00:00:25:b5:
Infra-ESXi-Host-2 | Fabric-A | WWPN-Infra-ESXi-Host-2-A | 20:00:00:25:b5:
Infra-ESXi-Host-2 | Fabric-B | WWPN-Infra-ESXi-Host-2-B | 20:00:00:25:b5:
Infra-ESXi-Host-3 | Fabric-A | WWPN-Infra-ESXi-Host-3-A | 20:00:00:25:b5:
Infra-ESXi-Host-3 | Fabric-B | WWPN-Infra-ESXi-Host-3-B | 20:00:00:25:b5:
Infra-ESXi-Host-4 | Fabric-A | WWPN-Infra-ESXi-Host-4-A | 20:00:00:25:b5:
Infra-ESXi-Host-4 | Fabric-B | WWPN-Infra-ESXi-Host-4-B | 20:00:00:25:b5:
After the Cisco UCS service profiles have been created, each ESXi host is assigned a unique iSCSI initiator name (IQN). To proceed with the iSCSI SAN configuration, this deployment specific information must be gathered from each service profile. Complete the following steps:
1. To gather the vNIC IQN information, launch the Cisco UCS Manager GUI. In the navigation pane, click the Servers tab. Expand Servers > Service Profiles > root.
2. Click each service profile and then click the “iSCSI vNICs” tab on the right. Note the “Initiator Name” displayed at the top of the page under “Service Profile Initiator Name” and record it in Table 18.
Table 18 Cisco UCS iSCSI IQN Information
Cisco UCS Service Profile Name | iSCSI IQN
Infra-ESXi-Host-1 | iqn.1992-08.com.cisco:ucs-host____
Infra-ESXi-Host-2 | iqn.1992-08.com.cisco:ucs-host____
Infra-ESXi-Host-3 | iqn.1992-08.com.cisco:ucs-host____
Infra-ESXi-Host-4 | iqn.1992-08.com.cisco:ucs-host____
As part of IBM SVC iSCSI configuration, complete the following steps:
· Setup Volumes
· Map Volumes to Hosts
Table 19 List of Volumes on iSCSI IBM SVC*
Volume Name | Capacity (GB) | Purpose | Mapping
Infra-ESXi-iSCSI-Host-01 | 10 | Boot LUN for the Host | Infra-ESXi-iSCSI-Host-01
Infra-ESXi-iSCSI-Host-02 | 10 | Boot LUN for the Host | Infra-ESXi-iSCSI-Host-02
Infra-ESXi-iSCSI-Host-03 | 10 | Boot LUN for the Host | Infra-ESXi-iSCSI-Host-03
Infra-ESXi-iSCSI-Host-04 | 10 | Boot LUN for the Host | Infra-ESXi-iSCSI-Host-04
Infra-iSCSI-datastore-1 | 1000** | Shared volume to host VMs | All ESXi hosts: Infra-ESXi-iSCSI-Host-01 to Infra-ESXi-iSCSI-Host-04
Infra-iSCSI-swap | 300** | Shared volume to host VMware VM swap directory | All ESXi hosts: Infra-ESXi-iSCSI-Host-01 to Infra-ESXi-iSCSI-Host-04
* Customers should adjust the server and volume names and values based on their deployment
** The volume size can be adjusted based on customer requirements
1. Log into the IBM SVC GUI, select the Volumes icon on the left side of the screen, and select Volumes.
The following steps are repeated to create and map all the volumes shown in Table 19 (a CLI alternative is sketched after these steps).
2. Click Create Volumes as shown in the figure.
3. Click Basic and then select the pool (Bronze in this example) from the drop-down menu.
4. Enter a quantity of 1 and specify the capacity and name from Table 19. Select Thin-provisioned for Capacity savings and select I/O group io_grp0.
5. Click Create.
6. Repeat the steps above to create all the required volumes and verify all the volumes have successfully been created as shown in the sample output below.
7. Click Hosts.
8. Follow the procedure below to add all ESXi hosts (Table 18) to the IBM SVC system.
9. Click Add Host.
10. Select iSCSI Host.
11. Add the name of the host to match the ESXi service profile name from Table 18.
12. Type the IQN corresponding to the ESXi host from Table 18 and click Add.
13. Click Close.
14. Click Volumes.
15. Right-click the Boot LUN for the ESXi host and choose Map to Host.
16. From the drop-down menu, select the newly created iSCSI host.
17. Click Map Volumes and when the process is complete, click Close.
18. Repeat the above steps to map shared volumes from Table 19 to the host as well.
19. Repeat the steps outlined in this procedure to add all the ESXi hosts to the storage system and modify their volume mappings to add both the boot LUN as well as the shared volumes to the host.
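The same volumes, host objects, and mappings can also be created from the IBM Spectrum Virtualize CLI over SSH. The following is a minimal sketch for the first host only, using the pool (Bronze), volume names, and sizes from Table 19 and the IQN recorded in Table 18; the cluster IP, thin-provisioning parameters, and SCSI IDs are assumptions that should be adjusted for the actual deployment.
ssh superuser@<svc-cluster-ip>
mkvdisk -name Infra-ESXi-iSCSI-Host-01 -mdiskgrp Bronze -iogrp io_grp0 -size 10 -unit gb -rsize 2% -autoexpand (boot LUN)
mkvdisk -name Infra-iSCSI-datastore-1 -mdiskgrp Bronze -iogrp io_grp0 -size 1000 -unit gb -rsize 2% -autoexpand (shared VM datastore)
mkvdisk -name Infra-iSCSI-swap -mdiskgrp Bronze -iogrp io_grp0 -size 300 -unit gb -rsize 2% -autoexpand (shared swap volume)
mkhost -name Infra-ESXi-iSCSI-Host-01 -iscsiname <IQN-from-Table-18> -iogrp io_grp0
mkvdiskhostmap -host Infra-ESXi-iSCSI-Host-01 -scsi 0 Infra-ESXi-iSCSI-Host-01
mkvdiskhostmap -host Infra-ESXi-iSCSI-Host-01 Infra-iSCSI-datastore-1
mkvdiskhostmap -host Infra-ESXi-iSCSI-Host-01 Infra-iSCSI-swap
lsvdisk
lshostvdiskmap Infra-ESXi-iSCSI-Host-01
Repeat the mkvdisk, mkhost, and mkvdiskhostmap commands for the remaining hosts, keeping each boot LUN mapped only to its own host.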
As part of IBM SVC Fibre Channel configuration, complete the following steps:
· Setup Zoning on Cisco MDS switches
· Setup Volumes on IBM SVC
· Map Volumes to Hosts
The following steps will configure zoning for the WWPNs for the UCS hosts and the IBM SVC nodes. WWPN information collected from the previous steps will be used in this section. Multiple zones will be created for servers in VSAN 101 on Switch A and VSAN 102 on Switch B.
The configuration below assumes 4 UCS service profiles have been deployed. Customers can adjust the configuration according to their deployment size.
Log in to MDS switch A and complete the following steps.
1. Configure the ports and the port-channel for UCS.
interface port-channel1 (For UCS)
channel mode active
switchport rate-mode dedicated
!
interface fc1/31 (UCS Fabric A)
port-license acquire
channel-group 1 force
no shutdown
!
interface fc1/32 (UCS Fabric A)
port-license acquire
channel-group 1 force
no shutdown
2. Create the VSAN.
vsan database
vsan 101 interface port-channel1
3. The WWPNs recorded in Table 16 and Table 17 will be used in the next step. Replace the variables with actual WWPN values.
device-alias database
device-alias name Infra-ESXi-Host-01 pwwn <WWPN-Infra-ESXi-Host-1-A>
device-alias name Infra-ESXi-Host-02 pwwn <WWPN-Infra-ESXi-Host-2-A>
device-alias name Infra-ESXi-Host-03 pwwn <WWPN-Infra-ESXi-Host-3-A>
device-alias name Infra-ESXi-Host-04 pwwn <WWPN-Infra-ESXi-Host-4-A>
device-alias name SVC-Node1-FC1-NPIV pwwn <WWPN-SVC-Node1-FC1-NPIV>
device-alias name SVC-Node1-FC3-NPIV pwwn <WWPN-SVC-Node1-FC3-NPIV>
device-alias name SVC-Node2-FC1-NPIV pwwn <WWPN-SVC-Node2-FC1-NPIV>
device-alias name SVC-Node2-FC3-NPIV pwwn <WWPN-SVC-Node2-FC3-NPIV>
device-alias name SVC-Node3-FC1-NPIV pwwn <WWPN-SVC-Node3-FC1-NPIV>
device-alias name SVC-Node3-FC3-NPIV pwwn <WWPN-SVC-Node3-FC3-NPIV>
device-alias name SVC-Node4-FC1-NPIV pwwn <WWPN-SVC-Node4-FC1-NPIV>
device-alias name SVC-Node4-FC3-NPIV pwwn <WWPN-SVC-Node4-FC3-NPIV>
device-alias commit
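Before creating the zones, the port-channel state, device aliases, and fabric logins can optionally be verified from the switch prompt. These are standard MDS NX-OS show commands; the VSAN number follows this deployment example. Ports that have not completed FLOGI will not show an asterisk in the show zoneset active output later in this procedure.
show port-channel database
show device-alias database
show flogi database vsan 101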
4. Create the zones and add device-alias members for the 4 blades.
zone name Infra-ESXi-Host-01 vsan 101
member device-alias Infra-ESXi-Host-01
member device-alias SVC-Node1-FC1-NPIV
member device-alias SVC-Node1-FC3-NPIV
member device-alias SVC-Node2-FC1-NPIV
member device-alias SVC-Node2-FC3-NPIV
member device-alias SVC-Node3-FC1-NPIV
member device-alias SVC-Node3-FC3-NPIV
member device-alias SVC-Node4-FC1-NPIV
member device-alias SVC-Node4-FC3-NPIV
!
zone name Infra-ESXi-Host-02 vsan 101
member device-alias Infra-ESXi-Host-02
member device-alias SVC-Node1-FC1-NPIV
member device-alias SVC-Node1-FC3-NPIV
member device-alias SVC-Node2-FC1-NPIV
member device-alias SVC-Node2-FC3-NPIV
member device-alias SVC-Node3-FC1-NPIV
member device-alias SVC-Node3-FC3-NPIV
member device-alias SVC-Node4-FC1-NPIV
member device-alias SVC-Node4-FC3-NPIV
!
zone name Infra-ESXi-Host-03 vsan 101
member device-alias Infra-ESXi-Host-03
member device-alias SVC-Node1-FC1-NPIV
member device-alias SVC-Node1-FC3-NPIV
member device-alias SVC-Node2-FC1-NPIV
member device-alias SVC-Node2-FC3-NPIV
member device-alias SVC-Node3-FC1-NPIV
member device-alias SVC-Node3-FC3-NPIV
member device-alias SVC-Node4-FC1-NPIV
member device-alias SVC-Node4-FC3-NPIV
!
zone name Infra-ESXi-Host-04 vsan 101
member device-alias Infra-ESXi-Host-04
member device-alias SVC-Node1-FC1-NPIV
member device-alias SVC-Node1-FC3-NPIV
member device-alias SVC-Node2-FC1-NPIV
member device-alias SVC-Node2-FC3-NPIV
member device-alias SVC-Node3-FC1-NPIV
member device-alias SVC-Node3-FC3-NPIV
member device-alias SVC-Node4-FC1-NPIV
member device-alias SVC-Node4-FC3-NPIV
!
5. Add zones to zoneset.
zoneset name versastackzoneset vsan 101
member Infra-ESXi-Host-01
member Infra-ESXi-Host-02
member Infra-ESXi-Host-03
member Infra-ESXi-Host-04
6. Activate the zoneset.
zoneset activate name versastackzoneset vsan 101
Validate that all the HBAs are logged into the MDS switch. The SVC nodes and the Cisco UCS servers should be powered on. To boot the Cisco UCS servers from Cisco UCS Manager, click the Servers tab, expand Servers > Service Profiles > root, right-click each service profile, and select Boot Server.
7. Validate that the HBAs of all powered-on systems are logged into the switch using the show zoneset active command.
show zoneset active
MDS-9396S-A# show zoneset active
<SNIP>
zone name Infra-ESXi-Host-01 vsan 101
* fcid 0x400004 [pwwn 20:00:00:25:b5:00:0a:00] [Infra-ESXi-Host-01]
* fcid 0x400021 [pwwn 50:05:07:68:0c:25:22:71] [SVC-Node1-FC1-NPIV]
* fcid 0x400161 [pwwn 50:05:07:68:0c:15:22:71] [SVC-Node1-FC3-NPIV]
* fcid 0x400041 [pwwn 50:05:07:68:0c:25:22:67] [SVC-Node2-FC1-NPIV]
* fcid 0x400121 [pwwn 50:05:07:68:0c:15:22:67] [SVC-Node2-FC3-NPIV]
* fcid 0x400181 [pwwn 50:05:07:68:0c:15:6f:60] [SVC-Node3-FC1-NPIV]
* fcid 0x4001a1 [pwwn 50:05:07:68:0c:16:6f:60] [SVC-Node3-FC3-NPIV]
* fcid 0x4001c1 [pwwn 50:05:07:68:0c:15:6f:63] [SVC-Node4-FC1-NPIV]
* fcid 0x4001e1 [pwwn 50:05:07:68:0c:16:6f:63] [SVC-Node4-FC3-NPIV]
<SNIP>
8. Save the configuration.
copy run start
The configuration below assumes 4 UCS service profiles have been deployed. Customers can adjust the configuration according to their deployment.
Log in to MDS switch B and complete the following steps.
1. Configure the ports and the port-channel for UCS.
interface port-channel2 (For UCS)
channel mode active
switchport rate-mode dedicated
!
interface fc1/31 (UCS Fabric B)
port-license acquire
channel-group 2 force
no shutdown
!
interface fc1/32 (UCS Fabric B)
port-license acquire
channel-group 2 force
no shutdown
2. Create the VSAN.
vsan database
vsan 102 interface port-channel2
3. The WWPNs recorded in Table 16 and Table 17 will be used in the next step. Replace the variables with actual WWPN values.
device-alias database
device-alias name Infra-ESXi-Host-01 pwwn <WWPN-Infra-ESXi-Host-1-B>
device-alias name Infra-ESXi-Host-02 pwwn <WWPN-Infra-ESXi-Host-2-B>
device-alias name Infra-ESXi-Host-03 pwwn <WWPN-Infra-ESXi-Host-3-B>
device-alias name Infra-ESXi-Host-04 pwwn <WWPN-Infra-ESXi-Host-4-B>
device-alias name SVC-Node1-FC2-NPIV pwwn <WWPN-SVC-Node1-FC2-NPIV>
device-alias name SVC-Node1-FC4-NPIV pwwn <WWPN-SVC-Node1-FC4-NPIV>
device-alias name SVC-Node2-FC2-NPIV pwwn <WWPN-SVC-Node2-FC2-NPIV>
device-alias name SVC-Node2-FC4-NPIV pwwn <WWPN-SVC-Node2-FC4-NPIV>
device-alias name SVC-Node3-FC2-NPIV pwwn <WWPN-SVC-Node3-FC2-NPIV>
device-alias name SVC-Node3-FC4-NPIV pwwn <WWPN-SVC-Node3-FC4-NPIV>
device-alias name SVC-Node4-FC2-NPIV pwwn <WWPN-SVC-Node4-FC2-NPIV>
device-alias name SVC-Node4-FC4-NPIV pwwn <WWPN-SVC-Node4-FC4-NPIV>
device-alias commit
4. Create the zones and add device-alias members for the 4 blades.
zone name Infra-ESXi-Host-01 vsan 102
member device-alias Infra-ESXi-Host-01
member device-alias SVC-Node1-FC2-NPIV
member device-alias SVC-Node1-FC4-NPIV
member device-alias SVC-Node2-FC2-NPIV
member device-alias SVC-Node2-FC4-NPIV
member device-alias SVC-Node3-FC2-NPIV
member device-alias SVC-Node3-FC4-NPIV
member device-alias SVC-Node4-FC2-NPIV
member device-alias SVC-Node4-FC4-NPIV
!
zone name Infra-ESXi-Host-02 vsan 102
member device-alias Infra-ESXi-Host-02
member device-alias SVC-Node1-FC2-NPIV
member device-alias SVC-Node1-FC4-NPIV
member device-alias SVC-Node2-FC2-NPIV
member device-alias SVC-Node2-FC4-NPIV
member device-alias SVC-Node3-FC2-NPIV
member device-alias SVC-Node3-FC4-NPIV
member device-alias SVC-Node4-FC2-NPIV
member device-alias SVC-Node4-FC4-NPIV
!
zone name Infra-ESXi-Host-03 vsan 102
member device-alias Infra-ESXi-Host-03
member device-alias SVC-Node1-FC2-NPIV
member device-alias SVC-Node1-FC4-NPIV
member device-alias SVC-Node2-FC2-NPIV
member device-alias SVC-Node2-FC4-NPIV
member device-alias SVC-Node3-FC2-NPIV
member device-alias SVC-Node3-FC4-NPIV
member device-alias SVC-Node4-FC2-NPIV
member device-alias SVC-Node4-FC4-NPIV
!
zone name Infra-ESXi-Host-04 vsan 102
member device-alias Infra-ESXi-Host-04
member device-alias SVC-Node1-FC2-NPIV
member device-alias SVC-Node1-FC4-NPIV
member device-alias SVC-Node2-FC2-NPIV
member device-alias SVC-Node2-FC4-NPIV
member device-alias SVC-Node3-FC2-NPIV
member device-alias SVC-Node3-FC4-NPIV
member device-alias SVC-Node4-FC2-NPIV
member device-alias SVC-Node4-FC4-NPIV
!
5. Add zones to zoneset.
zoneset name versastackzoneset vsan 102
member Infra-ESXi-Host-01
member Infra-ESXi-Host-02
member Infra-ESXi-Host-03
member Infra-ESXi-Host-04
6. Activate the zoneset.
zoneset activate name versastackzoneset vsan 102
Validate that all the HBAs are logged into the MDS switch. The SVC nodes and the Cisco UCS servers should be powered on. To boot the Cisco UCS servers from Cisco UCS Manager, click the Servers tab, expand Servers > Service Profiles > root, right-click each service profile, and select Boot Server.
7. Validate that the HBAs of all powered-on systems are logged into the switch using the show zoneset active command.
show zoneset active
MDS-9396S-B# show zoneset active
<SNIP>
zone name Infra-ESXi-Host-01 vsan 102
* fcid 0x770002 [pwwn 20:00:00:25:b5:00:0b:00] [Infra-ESXi-Host-01]
* fcid 0x770041 [pwwn 50:05:07:68:0c:26:22:71] [SVC-Node1-FC2-NPIV]
* fcid 0x770141 [pwwn 50:05:07:68:0c:16:22:71] [SVC-Node1-FC4-NPIV]
* fcid 0x770021 [pwwn 50:05:07:68:0c:26:22:67] [SVC-Node2-FC2-NPIV]
* fcid 0x770101 [pwwn 50:05:07:68:0c:16:22:67] [SVC-Node2-FC4-NPIV]
* fcid 0x770181 [pwwn 50:05:07:68:0c:17:6f:60] [SVC-Node3-FC2-NPIV]
* fcid 0x7701a1 [pwwn 50:05:07:68:0c:18:6f:60] [SVC-Node3-FC4-NPIV]
* fcid 0x770161 [pwwn 50:05:07:68:0c:17:6f:63] [SVC-Node4-FC2-NPIV]
* fcid 0x770201 [pwwn 50:05:07:68:0c:18:6f:63] [SVC-Node4-FC4-NPIV]
<SNIP>
8. Save the configuration.
copy run start
As part of IBM SVC FC configuration, complete the following steps:
· Create ESXi boot Volumes (Boot LUNs for all the ESXi hosts)
· Create Shared Storage Volumes (for hosting VMs)
· Map Volumes to Hosts
In this deployment example, there are four ESXi hosts; the following volumes will be created in this process:
Table 20 List of FC volumes on IBM SVC*
Volume Name | Capacity (GB) | Purpose | Mapping
Infra-ESXi-Host-01 | 10 | Boot LUN for the Host | Infra-ESXi-Host-01
Infra-ESXi-Host-02 | 10 | Boot LUN for the Host | Infra-ESXi-Host-02
Infra-ESXi-Host-03 | 10 | Boot LUN for the Host | Infra-ESXi-Host-03
Infra-ESXi-Host-04 | 10 | Boot LUN for the Host | Infra-ESXi-Host-04
Infra-datastore-1 | 1000* | Shared volume to host VMs | All ESXi hosts: Infra-ESXi-Host-01 to Infra-ESXi-Host-04
Infra-swap | 300* | Shared volume to host VMware VM swap directory | All ESXi hosts: Infra-ESXi-Host-01 to Infra-ESXi-Host-04
* Customers should adjust the names and values based on their environment.
1. Log into the IBM SVC GUI, select the Volumes icon on the left side of the screen, and select Volumes.
The following steps are repeated to create and map all the volumes shown in Table 20 (a CLI alternative for the host and mapping steps is sketched after this procedure).
2. Click Create Volumes as shown in the figure.
3. Click Basic and then select the pool (Bronze in this example) from the drop-down menu.
4. Enter a quantity of 1 and specify the capacity and name from Table 20. Select Thin-provisioned for Capacity savings and select io_grp0 for the I/O group.
5. Click Create.
6. Repeat the steps above to create all the required volumes and verify all the volumes have successfully been created as shown in the sample output below.
7. Select the Hosts icon in the left pane and click Hosts.
8. Follow the procedure below to add all ESXi hosts (Table 17) to the IBM SVC system.
9. Click Add Host.
10. Select the Fibre Channel Host.
11. Add the name of the host to match the ESXi service profile name from Table 17.
12. From the drop-down menu, select both (Fabric A and B) WWPNs corresponding to the host in Table 17.
13. Select Host Type Generic and I/O groups All.
14. Click Add.
15. Right-click the newly created host and select Modify Volume Mappings.
16. Move the boot LUN corresponding to the host, along with the shared volumes, to the column on the right labeled Volumes Mapped to the Host.
17. Click Map Volumes. When the process is complete, the Host Mappings column should show Yes as shown in the screenshot below.
18. Repeat the steps above to add all the ESXi hosts in the environment and modify their mappings.
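For reference, the equivalent host and mapping operations for the FC-attached hosts can also be performed from the IBM Spectrum Virtualize CLI. The sketch below covers the first host only and assumes the volume names from Table 20 and the WWPNs recorded in Table 17 (entered as 16 hex digits without separators, with multiple WWPNs separated by a colon); adjust the names and SCSI IDs for the actual deployment.
mkhost -name Infra-ESXi-Host-01 -fcwwpn <WWPN-Infra-ESXi-Host-1-A>:<WWPN-Infra-ESXi-Host-1-B> -type generic
mkvdiskhostmap -host Infra-ESXi-Host-01 -scsi 0 Infra-ESXi-Host-01 (boot LUN)
mkvdiskhostmap -host Infra-ESXi-Host-01 Infra-datastore-1
mkvdiskhostmap -host Infra-ESXi-Host-01 Infra-swap
lshostvdiskmap Infra-ESXi-Host-01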
This section provides a detailed configuration procedure of the Cisco ACI Fabric and Infrastructure (Foundation) Tenant for the use in a VersaStack environment.
Follow the physical connectivity guidelines for VersaStack as covered in Figure 2.
In ACI, both spine and leaf switches are configured using the APIC; individual configuration of the switches is not required. Cisco APIC discovers the ACI infrastructure switches using LLDP and acts as the central control and management point for the entire configuration.
This sub-section guides you through setting up the Cisco APIC. Cisco recommends a cluster of at least 3 APICs controlling an ACI Fabric.
1. On the back of the first APIC, connect the M port, the Eth1-1 and Eth1-2 ports to the out of band management switches. The M port will be used for connectivity to the server’s Cisco Integrated Management Controller (CIMC) and the Eth1-1/2 ports will provide HTTPS and SSH access to the APIC.
Cisco recommends connecting Eth1-1 and Eth1-2 to two different management switches for redundancy.
2. Using the supplied KVM dongle cable, connect a keyboard and monitor to the first APIC. Power on the machine and press <F8> to enter the CIMC Configuration Utility. Configure the CIMC with an OOB management IP address. Make sure Dedicated NIC Mode and No NIC Redundancy are selected to put the CIMC interface on the M port. Also set the CIMC password.
3. Save the CIMC configuration using <F10> and use <ESC> to exit the configuration tool.
The following configuration can be completed using the same KVM console or the CIMC connection.
4. Press Enter to start the APIC initial setup.
5. Press <Enter> to accept the default fabric name (ACI Fabric1). This value can be changed if desired.
6. Press <Enter> to accept the default fabric ID (1). This value can be changed if desired.
7. Press <Enter> to select the default value (3) for the field Enter the number of controllers in the fabric. While the fabric can operate with a single APIC, a minimum of 3 APICs is recommended for redundancy and to avoid a split-brain condition.
8. Enter the POD ID or accept the default POD ID (1).
9. Enter the controller number currently being set up under Enter the controller ID (1-3). Note that only controller 1 allows you to set the admin password; the remaining controllers and switches sync to the admin password set on controller 1.
10. Enter the controller name or press <Enter> to accept the default (apic1).
11. Press <Enter> to select the default pool under Enter the address pool for TEP addresses (10.0.0.0/16). If this subnet is already in use in the customer environment, select a different range.
12. Enter the VLAN ID for the fabric’s infra network (the fabric’s system VLAN). A recommended ID for this VLAN is 4093, especially when using Cisco AVS.
13. Press <Enter> to select the default address pool for Bridge Domain multicast addresses (225.0.0.0/15).
14. Press <Enter> to disable IPv6 for the Out-of-Band Mgmt Interface.
15. Enter an IP and subnet length in the out of band management subnet for the Out-of-band management interface.
16. Enter the gateway IP address of the out of band management subnet.
17. Press <Enter> to select auto speed/duplex mode.
18. Press <Enter> to enable strong passwords.
19. Enter the password for the admin user.
20. Re-enter this password.
21. The complete configuration is displayed. If all values are correct, press <Enter> to accept the configuration without making changes.
22. The APIC will apply the configuration and continue booting until the login prompt appears.
23. Repeat the above steps for all APIC controllers adjusting the controller ID as necessary.
This section details the steps for Cisco ACI Fabric Discovery, where the leaf switches, spine switches, and APICs are automatically discovered in the ACI Fabric and assigned node IDs. Cisco recommends a cluster of at least 3 APICs controlling an ACI Fabric.
1. Log into the APIC Advanced GUI using a web browser by browsing to the out-of-band IP address configured for the APIC in the last step. Select Advanced Mode from the Mode drop-down list and log in with the admin user ID and password.
In this validation, Google Chrome was used as the web browser. It might take a few minutes after the initial setup before the APIC GUI is available.
2. Take appropriate action to close any warning screens.
3. At the top in the APIC home page, select the Fabric tab.
4. In the left pane, select and expand Fabric Membership.
5. A single Leaf will be listed on the Fabric Membership page as shown:
6. Connect to the two leaf and two spine switches using serial consoles and log in as admin with no password (press Enter). Use show inventory to get the serial number of each switch.
show inventory
NAME: "Chassis", DESCR: "Nexus C93180YC-EX Chassis"
PID: N9K-C93180YC-EX , VID: V01 , SN: FDO20352B6P
7. Match the serial numbers from the leaf listing to determine whether Leaf 1 or Leaf 2 has appeared under Fabric Membership.
8. In the APIC GUI, under Fabric Membership, double click the leaf in the list. Enter a Node ID and a Node Name for the Leaf switch and click Update.
9. The fabric discovery will continue and all the spines and leaves will start appearing under Fabric Membership one after another.
It may be necessary to click the refresh button to see new items in the Fabric Membership list.
10. Repeat steps 7-9 to assign Node IDs and Node Names to these switches. Continue this process until all switches have been assigned Node IDs and Node Names. All switches will also receive IPs in the TEP address space assigned during the initial setup of the APIC.
11. Click Topology in the left pane. The discovered ACI Fabric topology will appear. It may take a few minutes and you will need to click the refresh button for the complete topology to appear.
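Optionally, the fabric membership can also be verified from the APIC command line. The acidiag fnvread command is part of the standard APIC CLI toolset and lists the registered node IDs, names, serial numbers, TEP addresses, and state; SSH to the APIC as admin to run it (sample output not shown here).
acidiag fnvread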
This section details the steps for initial setup of the Cisco ACI Fabric, where the software release is validated, out of band management IPs are assigned to the leaves and spines, NTP is setup, and the fabric BGP route reflectors are set up.
1. In the APIC Advanced GUI, at the top select Admin > Firmware.
2. This document was validated with ACI software release 2.0(2h). Select Fabric Node Firmware in the left pane under Firmware Management. All switches should show the same firmware release and the release version should be at minimum n9000-12.0(2h). The switch software version should also match the APIC version.
3. Click Admin > Firmware > Controller Firmware. If all APICs are not at the same release at a minimum of 2.0(2h), follow the Cisco APIC Controller and Switch Software Upgrade and Downgrade Guide to upgrade both the APICs and switches to a minimum release of 2.0(2h) on APIC and 12.0(2h) on the switches.
1. To add out of band management interfaces for all the switches in the ACI Fabric, select Tenants > mgmt.
2. Expand Tenant mgmt on the left. Right-click Node Management Addresses and select Create Static Node Management Addresses.
3. Enter the node number range for the leaf switches (601-602 in this example).
4. Select the checkbox for Out-of-Band Addresses.
5. Select default for Out-of-Band Management EPG.
6. Considering that the IPs will be applied in a consecutive range of two IPs, enter a starting IP address and netmask in the Out-Of-Band IPV4 Address field.
7. Enter the out of band management gateway address in the Gateway field.
8. Click SUBMIT, then click YES.
9. On the left, right-click Node Management Addresses and select Create Static Node Management Addresses.
10. Enter the node number range for the spine switches (701-702).
11. Select the checkbox for Out-of-Band Addresses.
12. Select default for Out-of-Band Management EPG.
13. Considering that the IPs will be applied in a consecutive range of two IPs, enter a starting IP address and netmask in the Out-Of-Band IPV4 Address field.
14. Enter the out of band management gateway address in the Gateway field.
15. Click SUBMIT, then click YES.
16. On the left, expand Node Management Addresses and select Static Node Management Addresses. Verify the mapping of IPs to switching nodes.
17. Direct out of band access to the switches should now be available for SSH.
This procedure will allow customers to set up an NTP server for synchronizing the fabric time.
1. To set up NTP in the fabric, select and expand Fabric > Fabric Policies > Pod Policies > Policies > Date and Time.
2. Select default. In the Datetime Format - default pane, use the drop-down to select the correct Time Zone. Select the appropriate Display Format and Offset State. Click SUBMIT.
3. On the left, select Policy default.
4. On the right use the + sign to add NTP servers accessible on the out of band management subnet. Enter an IP address accessible on the out of band management subnet and select the default (Out-of-Band) Management EPG. Click Submit to add the NTP server. Repeat this process to add all NTP servers.
1. To configure optional DNS in the ACI fabric, select and expand Fabric > Fabric Policies > Global Policies > DNS Profiles > default.
2. In the Management EPG drop-down, select the default (Out-of-Band) Management EPG.
3. Use the + signs to the right of DNS Providers and DNS Domains to add DNS servers and the DNS domain name. Note that the DNS servers should be reachable from the out of band management subnet. Click SUBMIT to complete the DNS configuration.
In this ACI deployment, both the spine switches are set up as BGP route-reflectors to distribute the leaf routes throughout the fabric.
1. To configure the BGP Route Reflector, select and expand Fabric > Fabric Policies > Pod Policies > Policies > BGP Route Reflector default.
2. Select a unique Autonomous System Number for this ACI fabric. Use the + sign on the right to add the two spines to the list of Route Reflector Nodes. Click SUBMIT to complete configuring the BGP Route Reflector.
3. To enable the BGP Route Reflector, on the left just under the BGP Route Reflector default, right-click Policy Groups under Pod Policies and select Create Pod Policy Group.
4. In the Create Pod Policy Group window, name the Policy Group ppg-Pod1.
5. Select the default BGP Route Reflector Policy.
6. Click SUBMIT to complete creating the Policy Group.
7. On the left expand Profiles under Pod Policies and select default.
8. Using the drop-down, select the ppg-Pod1 Fabric Policy Group.
9. Click SUBMIT.
This section details the steps to create various access policies creating parameters for CDP, LLDP, LACP, etc. These policies will be used during vPC and VM domain creation. To define fabric access policies, complete the following steps:
1. Log into APIC Advanced GUI.
2. In the APIC Advanced GUI, select and expand Fabric > Access Policies > Interface Policies > Policies.
This procedure will create link level policies for setting up the 1Gbps and 10Gbps link speeds.
1. In the left pane, right-click Link Level and select Create Link Level Policy.
2. Name the policy as 1Gbps-Link and select the 1Gbps Speed.
3. Click SUBMIT to complete creating the policy.
4. In the left pane, right-click on Link Level and select Create Link Level Policy.
5. Name the policy 10Gbps-Link and select the 10Gbps Speed.
6. Click SUBMIT to complete creating the policy.
This procedure will create policies to enable or disable CDP on a link.
1. In the left pane, right-click CDP interface and select Create CDP Interface Policy.
2. Name the policy as CDP-Enabled and enable the Admin State.
3. Click SUBMIT to complete creating the policy.
4. In the left pane, right-click on the CDP Interface and select Create CDP Interface Policy.
5. Name the policy CDP-Disabled and disable the Admin State.
6. Click SUBMIT to complete creating the policy.
This procedure will create policies to enable or disable LLDP on a link.
1. In the left pane, right-click LLDP Interface and select Create LLDP Interface Policy.
2. Name the policy as LLDP-Enabled and enable both Transmit State and Receive State.
3. Click SUBMIT to complete creating the policy.
4. In the left pane, right-click LLDP Interface and select Create LLDP Interface Policy.
5. Name the policy as LLDP-Disabled and disable both the Transmit State and Receive State.
6. Click SUBMIT to complete creating the policy.
This procedure will create port-channel policies for the LACP Active mode, Static Mode On, and MAC Pinning configurations.
1. In the left pane, right-click the Port Channel and select Create Port Channel Policy.
2. Name the policy as LACP-Active and select LACP Active for the Mode. Do not change any of the other values.
3. Click SUBMIT to complete creating the policy.
4. In the left pane, right-click Port Channel and select Create Port Channel Policy.
5. Name the policy as MAC-Pinning and select MAC Pinning-Physical-NIC-load for the Mode. Do not change any of the other values.
6. Click SUBMIT to complete creating the policy.
7. In the left pane, right-click Port Channel and select Create Port Channel Policy.
8. Name the policy as Mode-On and select Static Channel – Mode On for the Mode. Do not change any of the other values.
9. Click SUBMIT to complete creating the policy.
This procedure will create policies to enable or disable BPDU Filter and BPDU Guard.
1. In the left pane, right-click Spanning Tree Interface and select Create Spanning Tree Interface Policy.
2. Name the policy as BPDU-FG-Enabled and select both the BPDU filter and BPDU Guard Interface Controls.
3. Click SUBMIT to complete creating the policy.
4. In the left pane, right-click Spanning Tree Interface and select Create Spanning Tree Interface Policy.
5. Name the policy as BPDU-FG-Disabled and make sure both the BPDU filter and BPDU Guard Interface Controls are cleared.
6. Click SUBMIT to complete creating the policy.
This procedure will create policies to enable global scope for all the VLANs.
1. In the left pane, right-click on the L2 Interface and select Create L2 Interface Policy.
2. Name the policy as VLAN-Scope-Global and make sure Global scope is selected.
3. Click SUBMIT to complete creating the policy.
This procedure will create a policy to disable the firewall.
1. In the left pane, right-click Firewall and select Create Firewall Policy.
2. Name the policy Firewall-Disabled and select Disabled for Mode. Do not change any of the other values.
3. Click SUBMIT to complete creating the policy.
This sub-section details the steps to set up vPCs for connectivity to the Management Network, Cisco UCS, and IBM storage.
Complete the following steps to set up vPCs for connectivity to the existing Management Network.
This deployment guide covers configuration for a pre-existing Cisco Catalyst management switch. Customers can adjust the management configuration depending on their connectivity setup.
1. In the APIC Advanced GUI, at the top select Fabric > Access Policies > Quick Start.
2. In the right pane, select Configure an interface, PC and VPC.
3. In the configuration window, configure a VPC domain between the leaf switches by clicking “+” under VPC Switch Pairs.
4. Enter a VPC Domain ID (10 in this example).
5. From the drop-down list, select Switch 1 and Switch 2 IDs to select the two leaf switches.
6. Click SAVE.
7. Click + under Configured Switch Interfaces.
8. From the Switches drop-down list on the right, select both the leaf switches.
9. Leave the system generated Switch Profile Name in place.
10. Click the big green “+” to configure the switch interfaces.
11. Configure the various fields as shown in the figure below. In this screenshot, port 1/33 on both leaf switches is connected to the Cisco Catalyst switch using 1Gbps links.
12. Click SAVE.
13. Click SAVE again to finish configuring the switch interfaces.
14. Click SUBMIT.
To validate the configuration, log into the Catalyst switch and verify that the port-channel is up (show etherchannel summary).
The two management servers boot the ESXi hypervisor from FlexFlash and are configured in a VMware cluster to host management VMs, such as vCenter and Active Directory (AD), on iSCSI-based shared datastores mounted from the IBM SVC. The ACI configuration for these two management nodes consists of a VPC using the 10Gbps ports to carry the vMotion and iSCSI VLANs.
This section covers the ACI VPC configuration for the management hosts. The VLANs configured for these hosts are:
Table 21 VLANs for Management Hosts
Name | VLAN
vMotion | <3002>
iSCSI-A | <3012>
iSCSI-B | <3022>
In the VersaStack with ACI design, different VLANs are utilized when two or more paths are mapped to the same EPG. The vMotion and iSCSI VLANs configured for management nodes are therefore different than the vMotion and iSCSI VLANs configured for UCS or IBM SVC.
1. In the APIC Advanced GUI, select Fabric > Access Policies > Quick Start.
2. In the right pane, select Configure an interface, PC and VPC.
3. Select the switches configured in the last step under Configured Switch Interfaces.
4. Click “+” on the right to add switch interfaces.
5. Configure the various fields as shown in the figure below. In this screenshot, port 1/15 on both leaf switches is connected to management node 1 using 10Gbps links. The management ESXi node (configured later) uses vSwitch-based networking; therefore, the port-channel mode is set to ON (instead of LACP).
6. Click SAVE.
7. Click SAVE again to finish configuring the switch interfaces.
8. Click SUBMIT.
9. In the right pane, select Configure an interface, PC and VPC.
10. Select the switches configured in the last step under Configured Switch Interfaces.
11. Click “+” on the right to add switch interfaces.
12. Configure the various fields as shown in the figure below. In this screenshot, port 1/16 on both leaf switches is connected to management node 2 using 10Gbps links. Instead of creating a new domain, the Physical Domain created in the last step is attached to the second management node as shown below.
13. Click SAVE.
14. Click SAVE again to finish configuring the switch interfaces.
15. Click SUBMIT.
Complete the following steps to set up vPCs for connectivity to the UCS Fabric Interconnects. The VLANs configured for UCS are shown in the table below.
Table 22 VLANs for UCS Hosts
Name | VLAN
Native | <2>
IB-Mgmt | <111>
vMotion | <3000>
iSCSI-A | <3010>
iSCSI-B | <3020>
1. In the APIC Advanced GUI, select Fabric > Access Policies > Quick Start.
2. In the right pane, select Configure an interface, PC and VPC.
3. Select the switches configured in the last step under Configured Switch Interfaces.
4. Click “+” on the right to add switch interfaces.
5. Configure the various fields as shown in the figure below. In this screenshot, port 1/27 on both leaf switches is connected to UCS Fabric Interconnect A using 10Gbps links.
6. Click SAVE.
7. Click SAVE again to finish configuring the switch interfaces.
8. Click SUBMIT.
9. In the right pane, select Configure an interface, PC and VPC.
10. Select the switches configured in the last step under Configured Switch Interfaces.
11. Click “+” on the right to add switch interfaces.
12. Configure the various fields as shown in the figure below. In this screenshot, port 1/28 on both leaf switches is connected to UCS Fabric Interconnect B using 10Gbps links. Instead of creating a new domain, the External Bridge Domain created in the last step (UCS) is attached to FI-B as shown below.
13. Click SAVE.
14. Click SAVE again to finish configuring the switch interfaces.
15. Click SUBMIT.
16. Optional: Repeat steps 9 through 15 to configure any additional UCS domains. For a uniform configuration, the External Bridge Domain (UCS) will be utilized for all the Fabric Interconnects.
This section details the steps to set up the ACI configuration for the SVC nodes to provide iSCSI connectivity. The physical connectivity between the IBM SVC nodes and the Cisco Nexus 93180 switches is shown in the figure below:
Table 23 shows the configuration parameters for setting up the iSCSI links.
Table 23 iSCSI Port Configuration Parameters
Switch | Switch Port | SVC Node | Port | Path | VLAN
Nexus 93180-1 | 15 | SVC Node 1 | Port 4 | iSCSI-A | 3011
Nexus 93180-1 | 15 | SVC Node 1 | Port 6 | iSCSI-B | 3021
Nexus 93180-2 | 16 | SVC Node 2 | Port 4 | iSCSI-A | 3011
Nexus 93180-2 | 16 | SVC Node 2 | Port 6 | iSCSI-B | 3021
Nexus 93180-1 | 17 | SVC Node 3 | Port 4 | iSCSI-A | 3011
Nexus 93180-1 | 17 | SVC Node 3 | Port 6 | iSCSI-B | 3021
Nexus 93180-2 | 18 | SVC Node 4 | Port 4 | iSCSI-A | 3011
Nexus 93180-2 | 18 | SVC Node 4 | Port 6 | iSCSI-B | 3021
1. In the APIC Advanced GUI, select Fabric > Access Policies > Quick Start.
2. In the right pane, select Configure an interface, PC and VPC.
3. Click “+” under Configured Switch Interfaces.
4. Select the first leaf switch from the Switches drop-down list.
5. Leave the system generated Switch Profile Name in place.
6. Click “+” in the right pane to add switch interfaces.
7. Configure the various fields as shown in the figure below. In this screenshot, port 1/17 is configured using 10Gbps links. The details of the port connectivity can be obtained from Table 23.
8. Click SAVE.
9. Click SAVE again to finish configuring the switch interfaces.
10. Click SUBMIT.
11. In the right pane, select Configure an interface, PC and VPC.
12. Select the switch configured in the last step under Configured Switch Interfaces.
13. Click “+” on the right to add switch interfaces.
14. Configure the various fields as shown in the figure below. In this screenshot, port 1/18 is connected to IBM SVC Node 2 using 10Gbps links. Instead of creating a new domain, the Physical Domain created in the last step (SVC-iSCSI-A) is attached to IBM SVC Node 2 as shown below.
15. Click SAVE.
16. Click SAVE again to finish configuring the switch interfaces.
17. Click SUBMIT.
18. Repeat steps 11 through 17 to add the remaining iSCSI-A SVC node port configurations.
1. In the APIC Advanced GUI, select Fabric > Access Policies > Quick Start.
2. In the right pane, select Configure an interface, PC and VPC.
3. Click “+” under Configured Switch Interfaces.
4. Select the second leaf switch from the Switches drop-down list.
5. Leave the system generated Switch Profile Name in place.
6. Click “+” in the right pane to add switch interfaces.
7. Configure the various fields as shown in the figure below. In this screenshot, port 1/17 is configured using 10Gbps links. The details of the port connectivity can be obtained from Table 23.
8. Click SAVE.
9. Click SAVE again to finish configuring the switch interfaces.
10. Click SUBMIT.
11. In the right pane, select Configure an interface, PC and VPC.
12. Select the switch configured in the last step under Configured Switch Interfaces.
13. Click “+” on the right to add switch interfaces.
14. Configure the various fields as shown in the figure below. In this screenshot, port 1/18 is connected to IBM SVC Node 2 using 10Gbps links. Instead of creating a new domain, the Physical Domain created in the last step (SVC-iSCSI-B) is attached to IBM SVC Node 2 as shown below.
15. Click SAVE.
16. Click SAVE again to finish configuring the switch interfaces.
17. Click SUBMIT.
18. Repeat steps 11 through 17 to add the remaining iSCSI-B SVC node port configurations.
This section details the steps to set up in-band management access in the Tenant common. This design allows all the other tenant EPGs to access the common management segment. The various constructs of this design are shown in the figure below:
1. In the APIC Advanced GUI, at the top select Tenants > common.
2. In the left pane, expand Tenant common and Networking.
1. Right-click on VRF and select Create VRF.
2. Enter vrf-Common-IB-Mgmt as the name of the VRF.
3. Uncheck Create a Bridge Domain.
4. Click FINISH.
5. Right-click the VRF and select Create VRF.
6. Enter vrf-Common-Outside as the name of the VRF.
7. Uncheck Create a Bridge Domain.
8. Click FINISH.
VRF vrf-Common-Outside is not required for management access but will be utilized later to provide Shared L3 access.
1. In the APIC Advanced GUI, select Tenants > common.
2. In the left pane, expand Tenant common and Networking.
3. Right-click on the Bridge Domain and select Create Bridge Domain.
4. Name the Bridge Domain as bd-Common-IB-Mgmt
5. Select common/vrf-Common-IB-Mgmt from the VRF drop-down list.
6. Select Custom under Forwarding and enable the flooding as shown in the figure.
7. Click NEXT.
8. Do not change any configuration on the next screen (L3 Configurations). Click NEXT.
9. No changes are needed on the Advanced/Troubleshooting screen. Click FINISH.
10. Right-click on the Bridge Domain and select Create Bridge Domain.
11. Name the Bridge Domain as bd-Common-Outside.
The bridge domain bd-Common-Outside is not needed for setting up the common management segment; it will be utilized later in the document when setting up the Shared L3 Out.
12. Select the common/vrf-Common-Outside.
13. Select Custom under Forwarding and enable the flooding as shown in the figure.
14. Click NEXT.
15. Do not change any configuration on the next screen (L3 Configurations). Click NEXT.
16. No changes are needed on the Advanced/Troubleshooting screen. Click FINISH.
1. In the APIC Advanced GUI, select Tenants > common.
2. In the left pane, expand Tenant common and Application Profiles.
3. Right-click Application Profiles and select Create Application Profile.
4. Enter ap-Common-Mgmt as the name of the application profile.
5. Click SUBMIT.
1. Expand the ap-Common-Mgmt application profile and right-click on the Application EPGs.
2. Select Create Application EPG.
3. Enter epg-Common-IB-Mgmt as the name of the EPG.
4. Select common/bd-Common-IB-Mgmt from the drop-down list for Bridge Domain.
5. Click FINISH.
1. Expand the newly created EPG and click Domains.
2. From the ACTIONS drop-down list, select Add L2 External Domain Association.
3. Select the Mgmt-Switch as the L2 External Domain Profile.
4. Change the Deploy Immediacy and Resolution Immediacy to Immediate.
5. Click SUBMIT.
1. In the left pane, Right-click on the Static Bindings (Paths).
2. Select Deploy Static EPG on PC, VPC, or Interface.
3. In the next screen, for the Path Type, select Virtual Port Channel and from the drop-down list, select the VPC for Mgmt Switch.
4. Enter the management VLAN under Port Encap.
5. Change Deployment Immediacy to Immediate.
6. Set the Mode to Trunk.
7. Click SUBMIT.
1. In the left pane, right-click Contracts and select Add Provided Contract.
2. In the Add Provided Contract window, select Create Contract from the drop-down list.
3. Name the Contract Allow-Common-IB-Mgmt.
4. Set the scope to Global.
5. Click + to add a Subject to the Contract.
The following steps create a contract to allow all traffic between the various tenants and the common management segment. Customers are encouraged to limit the traffic by setting restrictive filters.
6. Name the subject Allow-All-Traffic.
7. Click + under Filter Chain to add a Filter.
8. From the drop-down Name list, select default.
9. In the Create Contract Subject window, click UPDATE to add the Filter Chain to the Contract Subject.
10. Click OK to add the Contract Subject.
The Contract Subject Filter Chain can be modified later.
11. Click SUBMIT to finish creating the Contract.
12. Click SUBMIT to finish adding a Provided Contract.
This section details the steps for creating the Foundation Tenant in the ACI Fabric. This tenant will host infrastructure connectivity for the compute (VMware on UCS and management nodes) and the storage (IBM) environments. To deploy the Foundation Tenant, complete the following steps.
1. In the APIC Advanced GUI, select Tenants > Add Tenant.
2. Name the Tenant as Foundation.
3. For the VRF Name, enter Foundation. Keep the check box Take me to this tenant when I click finish, checked.
4. Click SUBMIT to finish creating the Tenant.
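The tenant and VRF can also be created programmatically through the APIC REST API, which is useful when the Foundation tenant is maintained as code. The following is a minimal sketch using curl; the APIC address and credentials are placeholders, and the payload only covers the tenant and VRF created in the steps above.
# Authenticate and store the session cookie (replace <apic-ip> and <password>)
curl -sk -c apic-cookie.txt -X POST https://<apic-ip>/api/aaaLogin.json \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}'
# Create the Foundation tenant with a VRF named Foundation
curl -sk -b apic-cookie.txt -X POST https://<apic-ip>/api/mo/uni.json \
  -d '{"fvTenant":{"attributes":{"name":"Foundation"},"children":[{"fvCtx":{"attributes":{"name":"Foundation"}}}]}}'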
1. In the left pane, expand Tenant Foundation and Networking.
2. Right-click on the Bridge Domain and select Create Bridge Domain.
3. Name the Bridge Domain bd-Foundation-Internal.
4. Select Foundation/Foundation from the VRF drop-down list.
5. Select Custom under Forwarding and enable the flooding as shown in the figure.
6. Click NEXT.
7. Do not change any configuration on the next screen (L3 Configurations). Click NEXT.
8. No changes are needed on the Advanced/Troubleshooting screen. Click FINISH to finish creating the Bridge Domain.
1. In the left pane, expand tenant Foundation, right-click on the Application Profiles and select Create Application Profile.
2. Name the Application Profile as ap-Mgmt and click SUBMIT to complete adding the Application Profile.
1. In the left pane, expand the Application Profiles and right-click the Application EPGs and select Create Application EPG.
2. Name the EPG as epg-IB-Mgmt.
3. From the Bridge Domain drop-down list, select Bridge Domain common/bd-Common-IB-Mgmt.
4. Click FINISH to complete creating the EPG.
5. In the left pane, expand the Application EPGs and select the EPG epg-IB-Mgmt.
6. Right-click on the Domains and select Add L2 External Domain Association.
7. From the drop-down list, select the previously defined domain for UCS FIs (UCS in this example) L2 External Domain Profile.
8. Select Immediate for both the Deploy Immediacy and the Resolution Immediacy.
9. Click SUBMIT to complete the L2 External Domain Association.
10. Right-click Static-Bindings (Paths) and select Deploy Static EPG on PC, VPC, or Interface.
11. In the Deploy Static EPG on PC, VPC, Or Interface Window, select the Virtual Port Channel Path Type.
12. From the drop-down list, select the VPC for UCS Fabric Interconnect A.
13. Enter <UCS Management VLAN> (111) for Port Encap.
14. Select the Immediate for Deployment Immediacy and for Mode select Trunk.
15. Click SUBMIT to complete adding the Static Path Mapping.
16. Repeat the above steps to add the Static Path Mapping for UCS Fabric Interconnect B and the remaining UCS domains.
17. In the left menu, right-click Contracts and select Add Consumed Contract.
18. From the drop-down list for the Contract, select common/Allow-Common-IB-Mgmt.
19. Click SUBMIT.
At this point, connectivity from the UCS domain(s) to the existing management switch through Tenant common should be enabled. This EPG will be utilized to provide the ESXi hosts as well as the VMs with access to the existing in-band management network.
1. In the left pane, under the Tenant Foundation, right-click Application Profiles and select Create Application Profile.
2. Name the Profile ap-ESXi-Connectivity and click SUBMIT to complete adding the Application Profile.
The following EPGs and the corresponding mappings will be created under this application profile. Depending on the customer storage design, not all of the EPGs need to be configured.
Refer to Table 24 for the information required during the following configuration. Items marked by { } will need to be updated according to Table 24. The bridge domain used for all the EPGs is Foundation/bd-Foundation-Internal.
Table 24 EPGs and mappings for ap-ESXi-Connectivity
EPG Name | Default Gateway IP | Domain | Static Path - Compute | Static Path - Storage
epg-vMotion | 192.168.179.254/24 | L2 External: UCS; Physical: Mgmt-Node | VPC for all UCS FIs, VLAN 3000; VPC for Mgmt. Nodes, VLAN 3002 |
epg-iSCSI-A (For iSCSI Deployment) | 192.168.191.254/24 | L2 External: UCS; Physical: Mgmt-Node; Physical: SVC-iSCSI-A | VPC for all UCS FIs, VLAN 3010; VPC for Mgmt. Nodes, VLAN 3012 | Leaf 93180-1, individual ports E1/17-20 for 4 IBM SVC Nodes, VLAN 3011
epg-iSCSI-B (For iSCSI Deployment) | 192.168.192.254/24 | L2 External: UCS; Physical: Mgmt-Node; Physical: SVC-iSCSI-B | VPC for all UCS FIs, VLAN 3020; VPC for Mgmt. Nodes, VLAN 3022 | Leaf 93180-2, individual ports E1/17-20 for 4 IBM SVC Nodes, VLAN 3021
1. In the left pane, expand Application Profiles. Right-click on the Application EPGs and select Create Application EPG.
2. Name the EPG {epg-vMotion}.
3. From the Bridge Domain drop-down list, select Bridge Domain Foundation/bd-Foundation-Internal
4. Click FINISH to complete creating the EPG.
5. In the left pane, expand the Application EPGs and EPG {epg-vMotion}.
6. Right-click Domains and select Add L2 External Domain Association.
7. From the drop-down list, select the previously defined {UCS} L2 External Domain Profile.
8. Select Immediate for both Deploy Immediacy and Resolution Immediacy.
9. Click SUBMIT to complete the L2 External Domain Association.
10. Repeat the Domain Association steps (6-9) to add appropriate EPG specific domains from Table 24.
11. Right-click Static-Bindings (Paths) and select Deploy Static EPG on PC, VPC, or Interface.
12. In the Deploy Static EPG on PC, VPC, Or Interface Window, select the appropriate Path Type from Table 24. For example, for UCS Fabric Interconnect, select Virtual Port Channel.
13. From the drop-down list, select the appropriate VPC(s) or the individual port(s).
14. Enter VLAN from Table 24 {3000} for Port Encap.
15. Select Immediate for Deployment Immediacy and for Mode select Trunk.
16. Click SUBMIT to complete adding the Static Path Mapping.
17. Repeat the above steps to add all the Static Path Mapping for the EPG listed in Table 24.
18. In the left pane, right-click on Subnets and select Create EPG Subnet.
19. Enter the Default Gateway IP address and subnet from the Table 24.
20. Leave the field Scope as Private to VRF.
21. Click SUBMIT.
22. Repeat these steps to create epg-iSCSI-A and epg-iSCSI-B using the values listed in Table 24.
The VPC and port mappings for epg-iSCSI-A are shown below:
The VPC and port mappings for epg-iSCSI-B are shown below:
After the EPG configuration is complete, the management network and iSCSI paths will be established for the UCS domains and the management C-series servers.
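Optionally, the same EPG objects can be created programmatically through the APIC REST API instead of the GUI. The commands below are a minimal reference sketch only, assuming an APIC reachable at <apic-ip>, admin credentials, and the tenant, bridge domain and L2 external domain names used above; the class and attribute names (fvAp, fvAEPg, fvRsBd, fvRsDomAtt, fvSubnet) follow the standard ACI object model but should be validated against the APIC version in use before scripting against a production fabric.
# Sketch: authenticate to the APIC and store the session cookie
curl -sk -c cookie.txt -X POST https://<apic-ip>/api/aaaLogin.json \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}'
# Create the application profile, the vMotion EPG, its bridge domain association,
# the L2 external domain association and the gateway subnet in a single post
curl -sk -b cookie.txt -X POST https://<apic-ip>/api/mo/uni/tn-Foundation.xml -d '
<fvAp name="ap-ESXi-Connectivity">
  <fvAEPg name="epg-vMotion">
    <fvRsBd tnFvBDName="bd-Foundation-Internal"/>
    <fvRsDomAtt tDn="uni/l2dom-UCS"/>
    <fvSubnet ip="192.168.179.254/24" scope="private"/>
  </fvAEPg>
</fvAp>'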
Proceed with the VMware configuration as covered in the next section.
This section provides detailed instructions for installing VMware ESXi 6.0 U2 on the dedicated management nodes in the VersaStack environment. After the procedures are completed, two ESXi hosts will be provisioned to host management services.
Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use the built-in keyboard, video, mouse (KVM) console and virtual media features to map remote installation media to individual servers and connect to their boot logical unit numbers (LUNs).
To setup a CIMC address and username/password, refer to: http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c/hw/C220/install/C220/install.html#46057
The management nodes in this validation were equipped with Cisco FlexFlash. The VMware ESXi 6.0 U2 image was installed on the FlexFlash of each management node.
To set up each of the Cisco C-series servers, complete the following steps:
1. Open a web browser and enter the IP address for the Cisco CIMC address for the management node.
2. Select the Storage tab and click Cisco FlexFlash.
3. Select Virtual Drive Info tab and click Enable/Disable Virtual Drive(s).
4. Check the Hypervisor checkbox and click Save.
5. Click Erase Virtual Drive(s) and select Hypervisor.
6. Click Save.
7. Click Server tab on the left and click BIOS
8. Click Configure Boot Override Priority.
9. Select Hypervisor
10. Click Apply.
11. Click Configure Boot Order and click OK for any warnings
12. Add DVD and SD Card as boot devices
13. Click SAVE.
The IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media. It is necessary to log in to the UCS c-series server to run the IP KVM.
To launch the KVM for each of the Cisco C-series servers, complete the following steps:
1. Open a web browser and enter the IP address for the Cisco CIMC address for the management node.
2. Click the keyboard icon to launch the KVM. Make sure the management station has Java installed and enabled.
3. If prompted, accept any warnings and save the .jnlp file.
4. Open the file and accept the warning prompts.
5. Boot each server by selecting Power->Power On system.
To install VMware ESXi to the FlexFlash of the host, complete the following steps on each host. The Cisco custom VMware ESXi image can be downloaded from:
1. In the KVM window, click Virtual Media.
2. Click Activate Virtual Devices.
3. If prompted to accept an Unencrypted KVM session, accept as necessary.
4. Click Virtual Media and select Map CD/DVD.
5. Browse to the ESXi installer ISO image file and click Open.
6. Click Map Device.
7. Click the KVM tab to monitor the server boot.
8. Reset the server by clicking the Reset button. Click OK.
9. Select Power Cycle on the next window and click OK and OK again.
10. On reboot, the VMware ESXi installer will be loaded.
11. From the ESXi Boot Menu, select the ESXi installer.
12. After the installer has finished loading, press Enter to continue with the installation.
13. Read and accept the end-user license agreement (EULA). Press F11 to accept and continue.
14. Select the SD Card as the installation disk for ESXi and press Enter to continue with the installation.
15. Select the appropriate keyboard layout and press Enter.
16. Enter and confirm the root password and press Enter.
17. The installer issues a warning that the selected disk will be repartitioned. Press F11 to continue with the installation.
18. After the installation is complete, press Enter to reboot the server.
19. Repeat the ESXi installation process for both management nodes.
Adding a management network for each VMware host is necessary for managing the host. In this deployment, two 1Gbps Ethernet ports are connected to existing management switches (preferably two different switches). On the management switches, these ports are configured as access ports.
interface Ethernet104/1/25 (1st c-series Gig Port – Mgmt. Switch 1)
switchport access vlan 11
interface Ethernet104/1/31 (2nd c-series Gig Port – Mgmt. Switch 2)
switchport access vlan 11
To add a management network for the VMware hosts, complete the following steps on each ESXi host.
To configure the ESXi hosts with access to the management network, complete the following steps:
1. After the server has finished post-installation rebooting, press F2 to customize the system.
2. Log in as root, enter the password chosen during the initial setup, and press Enter to log in.
3. Select the Configure Management Network option and press Enter.
4. Select Network Adapters
5. Select vmnic0 (if it is not already selected) by pressing the Space Bar.
6. Press Enter to save and exit the Network Adapters window.
7. Select IPv4 Configuration and press Enter.
8. Select the Set Static IP Address and Network Configuration option by using the Space Bar.
9. Enter the IP address for managing the ESXi host.
10. Enter the subnet mask for the management network of the ESXi host.
11. Enter the default gateway for the ESXi host.
12. Press Enter to accept the changes to the IP configuration.
13. Select the IPv6 Configuration option and press Enter.
14. Using the Space Bar, select Disable IPv6 (restart required) and press Enter.
15. Select the DNS Configuration option and press Enter.
Because the IP address is assigned manually, the DNS information must also be entered manually.
16. Enter the IP address of the primary DNS server.
17. Optional: Enter the IP address of the secondary DNS server.
18. Enter the fully qualified domain name (FQDN) for the ESXi host.
19. Press Enter to accept the changes to the DNS configuration.
20. Press Esc to exit the Configure Management Network submenu.
21. Press Y to confirm the changes and reboot the host.
22. Repeat this procedure for both the management ESXi hosts in the setup.
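The same management network settings can also be applied from the ESXi Shell or an SSH session instead of the DCUI. The commands below are a minimal sketch for reference only; the vmk0 interface name and the esxcli option syntax should be confirmed on the ESXi 6.0 U2 build in use before applying them.
# Sketch: set a static IPv4 address on the default management VMkernel port (vmk0)
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=<esxi-mgmt-ip> --netmask=<mgmt-netmask>
# Default route, DNS server and host name
esxcli network ip route ipv4 add --network=0.0.0.0/0 --gateway=<mgmt-gateway>
esxcli network ip dns server add --server=<primary-dns-ip>
esxcli system hostname set --fqdn=<esxi-fqdn>
# Disable IPv6 (takes effect after a reboot)
esxcli network ip set --ipv6-enabled=false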
To download the VMware vSphere Client, complete the following steps:
1. Open a web browser on the management workstation and navigate to the management IP address of any ESXi servers.
2. Download and install the vSphere Client for Windows.
To login to the ESXi host using the VMware vSphere Client, complete the following steps:
1. Open the recently downloaded VMware vSphere Client and enter the management IP address of the host.
2. Enter root for the user name.
3. Enter the root password configured during the installation process.
4. Click Login to connect.
5. Repeat this process to log into all the ESXi hosts.
To set up the VMkernel ports and the virtual switches on the ESXi hosts, complete the following steps:
1. From vSphere Client, select the host in the inventory.
2. Click the Configuration tab.
3. Click Networking in the Hardware pane.
4. Click Properties on the right side of vSwitch0.
5. Select the Management Network configuration and click Edit.
6. Change the network label to VMkernel-MGMT and check the Management Traffic checkbox.
7. Click OK to finalize the edits for Management Network.
8. Select the VM Network configuration and click Edit.
9. Change the network label to IB-MGMT.
10. Click OK to finalize the edits for VM Network.
11. Click Network Adapter tab and Click Add.
12. Select vmnic1 and click Next.
13. In the Failover order window, move vmnic1 down to Standby Adapters.
14. Click Next and then Click Finish.
A second vSwitch will be deployed and the 10Gbps Ethernet ports will be added as uplinks to this vSwitch. This vSwitch enables communication with the ACI fabric for vMotion and iSCSI traffic. The VLANs defined as part of the ACI configuration (Table 21) will be utilized in this configuration.
While this deployment guide covers vSwitch configuration, customers can choose to deploy a VMware distributed switch for network connectivity.
1. From vSphere Client, select the host in the inventory.
2. Click the Configuration tab.
3. Click Networking in the Hardware pane.
4. Click Add Networking from the right hand window.
5. Select VMkernel and click Next.
6. Select Create vSphere standard switch and select vmnic2 and vmnic3. Click Next.
7. Change the network label to VMkernel-vMotion and enter <<vMotion VLAN>> (3002) in the VLAN ID (Optional) field.
8. Select the Use This Port Group for vMotion checkbox.
9. Click Next to continue with the vMotion VMkernel creation.
10. Enter the IP address <<vMotion IP address>> and the subnet mask <<vMotionSubnet>> for the vMotion VLAN interface for the ESXi Host.
11. Click Next to continue with the vMotion VMkernel creation.
12. Click Finish to finalize the creation of the vMotion VMkernel interface.
13. Click Properties on the right side of vSwitch1.
14. Select the vSwitch configuration and click Edit.
15. From the General tab, change the MTU to 9000.
16. Click OK to close the properties for vSwitch1.
17. Select the VMkernel-vMotion configuration and click Edit.
18. Change the MTU to 9000.
19. Click OK to finalize the edits for the VMkernel-vMotion network.
20. Click Add to add another VMkernel port
21. In the popup, select VMkernel and click Next.
22. Set the Network Label as VMkernel-iSCSI-A.
23. Set the VLAN ID (Optional) as <<ISCSI-A VLAN>> (3012).
24. Click Next.
25. Set the IP address <<iSCSI-A IP address>> and Subnet Mask <<iSCSI-A Subnet Mask>>.
26. Click Next.
27. Click Finish.
28. Click the newly created VMkernel port, VMkernel-iSCSI-A, and click Edit.
29. Set MTU to 9000 and click OK.
30. Click Add to add another VMkernel port
31. In the popup, select VMkernel and click Next.
32. Set the Network Label as VMkernel-iSCSI-B.
33. Set the VLAN ID (Optional) as <<ISCSI-B VLAN>> (3022).
34. Click Next.
35. Set the IP address <<iSCSI-B IP address>> and Subnet Mask <<iSCSI-B Subnet Mask>>.
36. Click Next.
37. Click Finish.
38. Click the newly created VMkernel port, VMkernel-iSCSI-B, and click Edit.
39. Set MTU to 9000 and click OK.
40. Click Close to finish adding the networking elements.
41. Repeat this process for both the management ESXi servers.
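For reference, the equivalent vSwitch and VMkernel port configuration can also be scripted with esxcli from an SSH session. The sketch below creates only the vMotion VMkernel port and assumes the vmnic, VLAN and vmk numbering used above; the iSCSI-A and iSCSI-B port groups and VMkernel ports can be created the same way with their respective VLANs and IP addresses.
# Sketch: create vSwitch1 with jumbo MTU and the 10Gbps uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# vMotion port group on VLAN 3002 and its VMkernel interface
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VMkernel-vMotion
esxcli network vswitch standard portgroup set --portgroup-name=VMkernel-vMotion --vlan-id=3002
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=VMkernel-vMotion
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=<vMotion-ip> --netmask=<vMotion-netmask>
# Tag the VMkernel port for vMotion traffic
vim-cmd hostsvc/vmotion/vnic_set vmk1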
This section provides detailed instructions for installing VMware ESXi 6.0 U2 in the VersaStack UCS environment. After the procedures are completed, multiple ESXi hosts will be provisioned to host customer workloads.
Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use the built-in keyboard, video, mouse (KVM) console and virtual media features in Cisco UCS Manager to map remote installation media to individual servers and connect to their boot logical unit numbers (LUNs).
The IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media. It is necessary to log in to the UCS environment to run the IP KVM.
To log in to the Cisco UCS environment, complete the following steps:
1. Open a web browser and enter the IP address for the Cisco UCS cluster address. This step launches the Cisco UCS Manager application.
2. Under HTML, click the Launch UCS Manager link.
3. When prompted, enter admin as the user name and enter the administrative password.
4. To log in to Cisco UCS Manager, click Login.
5. From the main menu, click the Servers tab.
6. Select Servers > Service Profiles > root > Infra-ESXi-Host-01.
For iSCSI setup, the name of the profile will be Infra-ESXi-iSCSI-Host-01
7. Right-click Infra-ESXi-Host-01 and select KVM Console.
8. If prompted to accept an Unencrypted KVM session, accept as necessary.
9. Open KVM connection to all the hosts by right-clicking the Service Profile and launching the KVM console
10. Boot each server by selecting Boot Server and clicking OK. Then click OK again.
To install VMware ESXi to the boot LUN of the hosts, complete the following steps on each host. The Cisco custom VMware ESXi image can be downloaded from:
1. In the KVM window, click Virtual Media.
2. Click Activate Virtual Devices.
3. If prompted to accept an Unencrypted KVM session, accept as necessary.
4. Click Virtual Media and select Map CD/DVD.
5. Browse to the ESXi installer ISO image file and click Open.
6. Click Map Device.
7. Click the KVM tab to monitor the server boot.
8. Reset the server by clicking the Reset button. Click OK.
9. Select Power Cycle on the next window and click OK and OK again.
10. On reboot, the machine detects the presence of the boot LUNs (sample output below).
11. From the ESXi Boot Menu, select the ESXi installer.
12. After the installer has finished loading, press Enter to continue with the installation.
13. Read and accept the end-user license agreement (EULA). Press F11 to accept and continue.
14. Select the LUN that was previously set up and discovered as the installation disk for ESXi and press Enter to continue with the installation.
15. Select the appropriate keyboard layout and press Enter.
16. Enter and confirm the root password and press Enter.
17. The installer issues a warning that the selected disk will be repartitioned. Press F11 to continue with the installation.
18. After the installation is complete, press Enter to reboot the server.
19. Repeat the ESXi installation process for all the Service Profiles.
Adding a management network for each VMware host is necessary for managing the host. To add a management network for the VMware hosts, complete the following steps on each ESXi host.
To configure the ESXi hosts with access to the management network, complete the following steps:
1. After the server has finished post-installation rebooting, press F2 to customize the system.
2. Log in as root, enter the password chosen during the initial setup, and press Enter to log in.
3. Select the Configure Management Network option and press Enter.
4. Select Network Adapters
5. Select vmnic0 (if it is not already selected) by pressing the Space Bar.
6. Press Enter to save and exit the Network Adapters window.
7. Select the VLAN (Optional) and press Enter.
8. Enter the <IB Mgmt VLAN> (111) and press Enter.
9. Select IPv4 Configuration and press Enter.
10. Select the Set Static IP Address and Network Configuration option by using the Space Bar.
11. Enter the IP address for managing the ESXi host.
12. Enter the subnet mask for the management network of the ESXi host.
13. Enter the default gateway for the ESXi host.
14. Press Enter to accept the changes to the IP configuration.
15. Select the IPv6 Configuration option and press Enter.
16. Using the Space Bar, select Disable IPv6 (restart required) and press Enter.
17. Select the DNS Configuration option and press Enter.
Because the IP address is assigned manually, the DNS information must also be entered manually.
18. Enter the IP address of the primary DNS server.
19. Optional: Enter the IP address of the secondary DNS server.
20. Enter the fully qualified domain name (FQDN) for the ESXi host.
21. Press Enter to accept the changes to the DNS configuration.
22. Press Esc to exit the Configure Management Network submenu.
23. Press Y to confirm the changes and reboot the host.
24. Repeat this procedure for all the ESXi hosts in the setup.
To download the VMware vSphere Client, complete the following steps:
1. Open a web browser on the management workstation and navigate to the management IP address of any ESXi server.
2. Download and install the vSphere Client for Windows.
To log in to the ESXi host using the VMware vSphere Client, complete the following steps:
1. Open the recently downloaded VMware vSphere Client and enter the management IP address of the host.
2. Enter root for the user name.
3. Enter the root password configured during the installation process.
4. Click Login to connect.
5. Repeat this process to log into all the ESXi hosts.
To set up the VMkernel ports and the virtual switches on the ESXi hosts, complete the following steps:
1. From vSphere Client, select the host in the inventory.
2. Click the Configuration tab.
3. Click Networking in the Hardware pane.
4. Click Properties on the right side of vSwitch0.
5. Select the vSwitch configuration and click Edit.
6. From the General tab, change the MTU to 9000.
7. Click OK to close the properties for vSwitch0.
8. Select the Management Network configuration and click Edit.
9. Change the network label to VMkernel-MGMT and check the Management Traffic checkbox.
10. Click OK to finalize the edits for Management Network.
11. Select the VM Network configuration and click Edit.
12. Change the network label to IB-MGMT and enter <<Management VLAN>> (111) in the VLAN ID (Optional) field.
13. Click OK to finalize the edits for VM Network.
14. Click Add to add a network element.
15. Select VMkernel and click Next.
16. Change the network label to VMkernel-vMotion and enter <<vMotion VLAN>> (3000) in the VLAN ID (Optional) field.
17. Select the Use This Port Group for vMotion checkbox.
18. Click Next to continue with the vMotion VMkernel creation.
19. Enter the IP address <<vMotion IP address>> and the subnet mask <<vMotionSubnet>> for the vMotion VLAN interface for the ESXi Host.
20. Click Next to continue with the vMotion VMkernel creation.
21. Click Finish to finalize the creation of the vMotion VMkernel interface.
22. Select the VMkernel-vMotion configuration and click Edit.
23. Change the MTU to 9000.
24. Click OK to finalize the edits for the VMkernel-vMotion network.
25. Click the Network Adapter tab at the top of the window and click Add.
26. Select vmnic1 and click Next.
27. Make sure vmnic1 is added to the Active Adapters and click Next.
28. Click Finish.
29. Close the dialog box to finalize the ESXi host networking setup.
To set up the iSCSI VMkernel ports and the virtual switches on the ESXi hosts, complete the following steps:
1. From vSphere Client, select the host in the inventory.
2. Click the Configuration tab.
3. In the Configuration screen, select Networking in the left pane.
1. Click Properties next to the iSCSIBootvSwitch.
2. In the popup, select VMkernel and click Edit.
3. Rename the VMkernel port to VMkernel-iSCSI-A.
4. Change MTU to 9000.
It is important to not set a VLAN ID here because the iSCSI VLAN was set as the Native VLAN of the vNIC and these iSCSI packets should come from the vSwitch without a VLAN tag.
5. Click OK.
6. Select the vSwitch configuration and click Edit.
7. Change the MTU to 9000.
8. Click OK.
9. Click Close.
1. In the Networking screen select Add Networking.
2. In the popup, select VMkernel to add a VMkernel port in the Infrastructure iSCSI-B subnet. Click Next.
3. Select vmnic5 and click Next.
4. Label the Network VMkernel-iSCSI-B. Do not add a VLAN ID.
5. Click Next.
6. Enter an IP Address for this ESXi host’s iSCSI-B interface.
7. Click Next.
8. Click Finish.
9. Click Properties to the right of the newly created vSwitch.
10. Select the vSwitch configuration and click Edit.
11. Change the MTU to 9000 and click OK.
12. Select the VMkernel-iSCSI-B configuration and click Edit.
13. Change the MTU to 9000 and click OK.
14. Click Close.
1. In the left pane under the Hardware, select Storage Adapters.
2. Select the iSCSI Software Adapter and click Properties.
3. In the iSCSI Initiator Properties window, click the Dynamic Discovery tab.
4. Click Add. Enter the first iSCSI interface IP address for IBM SVC Node 1 storage from Table 10 and click OK.
5. Repeat the previous step to add all IP addresses for all the nodes.
6. Click Close.
7. Click Yes to Rescan the host bus adapter.
8. Repeat this procedure for all the iSCSI ESXi Hosts.
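The dynamic discovery targets can also be added from the command line. The following is a reference sketch only; the software iSCSI adapter name (vmhba64 in this example) varies by host and should be taken from the esxcli iscsi adapter list output, and the target addresses come from Table 10.
# Sketch: identify the software iSCSI adapter, then add one send-target entry per SVC node iSCSI IP
esxcli iscsi adapter list
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=<svc-node1-iscsi-ip>
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=<svc-node2-iscsi-ip>
# Rescan the adapter to discover the SVC LUNs
esxcli storage core adapter rescan --adapter=vmhba64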
Download and extract the following VIC Drivers to the Management workstation:
FNIC Driver version 1.6.0.28:
ENIC Driver version 2.3.0.10:
https://my.vmware.com/group/vmware/details?downloadGroup=DT-ESXI60-CISCO-ENIC-23010&productId=491
Complete the following steps to install VIC Drivers on ALL the ESXi hosts:
1. From each vSphere Client, select the host in the inventory.
2. Click the Summary tab to view the environment summary.
3. From Resources > Storage, right-click datastore1 and select Browse Datastore.
4. Click the fourth button and select Upload File.
5. Navigate to the saved location for the downloaded VIC drivers and select fnic_driver_1.6.0.28-offline_bundle-4179603.zip.
6. Click Open and Yes to upload the file to datastore1.
7. Click the fourth button and select Upload File.
8. Navigate to the saved location for the downloaded VIC drivers and select ESXi6.0_enic-2.3.0.10-offline_bundle-4303638.zip.
9. Click Open and Yes to upload the file to datastore1.
10. Make sure the files have been uploaded to both ESXi hosts.
11. In the ESXi host vSphere Client, select the Configuration tab.
12. In the Software pane, select Security Profile.
13. To the right of Services, click Properties.
14. Select SSH and click Options.
15. Click Start and OK.
The step above does not permanently enable the SSH service; the service will not be restarted when the ESXi host reboots.
16. Click OK to close the window.
17. Ensure SSH is started on each host.
18. From the management workstation, start an ssh session to each ESXi host. Login as root with the root password.
19. At the command prompt, run the following commands on each host:
esxcli software vib update -d /vmfs/volumes/datastore1/fnic_driver_1.6.0.28-offline_bundle-4179603.zip
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi6.0_enic-2.3.0.10-offline_bundle-4303638.zip
reboot
20. After each host has rebooted, log back into each host with vSphere Client.
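After the hosts reboot, the installed driver versions can be verified from an SSH session; the commands below are shown for reference:
# List the installed fnic/enic VIBs and confirm the expected versions
esxcli software vib list | grep -iE 'fnic|enic'
# Confirm the driver each physical NIC is using
esxcli network nic list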
To mount the required datastores, complete the following steps on each ESXi host:
1. From the vSphere Client, select the host in the inventory.
2. Click the Configuration tab.
3. Click Storage in the Hardware window.
4. From the Datastore area, click Add Storage to open the Add Storage wizard.
5. Select Disk/LUN and click Next.
6. Using the size of the datastore LUN to verify the selection, select the LUN configured for VM hosting and click Next.
7. Accept default VMFS setting and click Next.
8. Click Next for the disk layout.
9. Enter infra-datastore-1 as the datastore name.
10. Click Next to retain maximum available space.
11. Click Finish.
12. Repeat steps 4 and 5, then select the second LUN configured for the swap file location and click Next.
13. Accept default VMFS setting and click Next.
14. Click Next for the disk layout.
15. Enter infra_swap as the datastore name.
16. Click Next to retain maximum available space.
17. Click Finish.
18. The storage configuration should look similar to the figure shown below.
19. Repeat these steps on all the ESXi hosts.
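The mounted VMFS datastores can also be confirmed from the ESXi command line; the following commands are shown for reference:
# List VMFS extents and mounted filesystems to confirm infra-datastore-1 and infra_swap are present
esxcli storage vmfs extent list
esxcli storage filesystem list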
To configure Network Time Protocol (NTP) on the ESXi hosts, complete the following steps on each host:
1. From the vSphere Client, select the host in the inventory.
2. Click the Configuration tab.
3. Click Time Configuration in the Software pane.
4. Click Properties.
5. At the bottom of the Time Configuration dialog box, click NTP Client Enabled.
6. At the bottom of the Time Configuration dialog box, click Options.
7. In the NTP Daemon (ntpd) Options dialog box, complete the following steps:
a. Click General in the left pane and select Start and stop with host.
b. Click NTP Settings in the left pane and click Add.
c. In the Add NTP Server dialog box, enter <NTP Server IP Address> as the IP address of the NTP server and click OK.
d. In the NTP Daemon Options dialog box, select the Restart NTP service to apply changes checkbox and click OK.
e. Click OK.
8. In the Time Configuration dialog box, verify that the clock is now set to approximately the correct time.
To move the VM swap file location, complete the following steps on each ESXi host:
1. From the vSphere Client, select the host in the inventory.
2. Click the Configuration tab.
3. Click Virtual Machine Swapfile Location in the Software pane.
4. Click Edit at the upper-right side of the window.
5. Select the option Store the swapfile in a swapfile datastore selected below.
6. Select the infra_swap datastore to house the swap files.
7. Click OK to finalize the swap file location.
The procedures in the following subsections provide detailed instructions for installing the VMware vCenter 6.0 Update 2 Server Appliance in an environment. After the procedures are completed, a VMware vCenter Server will be configured.
1. Download the .iso installer for the version 6.0U2 vCenter Server Appliance and Client Integration Plug-in
2. Mount the ISO image on the management workstation.
3. In the mounted ISO directory, navigate to the vcsa directory and double-click VMware-ClientIntegrationPlugin-6.0.0.exe. The Client Integration Plug-in installation wizard appears.
4. On the Welcome page, click Next.
5. Read and accept the terms in the End-User License Agreement and click Next.
6. Click Next.
7. Click Install.
8. Click Finish.
To build the VMware vCenter virtual machine, complete the following steps:
1. In the mounted ISO main directory, double-click vcsa-setup.html.
2. Allow the plug-in to run on the browser when prompted.
3. In the Home page, click Install to start the vCenter Server Appliance deployment wizard.
4. Read and accept the license agreement, and click Next.
5. In the Connect to target server page, enter the ESXi host name, User name and Password for one of the two management ESXi servers.
6. Click Next.
7. Click Yes to accept the certificate.
8. Enter the Appliance name and password details in the Set up virtual machine page.
9. Click Next.
10. In the Select deployment type page, select the option Install vCenter Server with an embedded Platform Services Controller.
11. Click Next.
12. In the Set up Single Sign-On page, select the option Create a new SSO domain.
13. Enter the SSO password, Domain name and Site name.
14. Click Next.
15. Select the appliance size, for example, Small (up to 100 hosts, 1,000 VMs) as shown in the screenshot.
16. Click Next.
17. In the Select datastore page, select infra-datastore-1. Check the checkbox for Enable Thin Disk Mode.
18. Click Next.
19. Select Use an embedded database in the Configure database page. Click Next.
20. In the Network Settings page, configure the following:
· Choose a Network: IB-Mgmt
· IP address family: IPV4
· Network type: static
· Network address: <vcenter-ip>
· System name: <vcenter-fqdn>
· Subnet mask: <vcenter-netmask>
· Network gateway: <vcenter-gateway>
· Network DNS Servers
· Configure time sync: Use NTP servers
· Enable SSH
21. Click Next.
22. Review the configuration and click Finish.
23. The vCenter appliance installation will take a few minutes to complete.
1. Using a web browser, navigate to <vCenter IP Address>.
2. Click the link Log in to vSphere Web Client.
If prompted, run and install the VMware Remote Console Plug-in.
3. Log in as root, with the root password entered above in the vCenter installation.
To setup the vCenter Server, complete the following steps:
1. In the vSphere Web Client, navigate to the vCenter Inventory Lists > Resources > vCenter Servers.
2. Select the vCenter instance.
3. Go to Actions in the toolbar and select New Datacenter from the drop-down.
4. Enter a name for the datacenter and click OK.
5. Make sure the system takes you to the newly created Datacenter. Go to Actions in the toolbar and select New Cluster from the drop-down list.
6. In the New Cluster window, provide a cluster name, enable DRS, vSphere HA and Host monitoring.
7. Click OK.
If mixing Cisco UCS B or C-Series M2, M3 or M4 servers within a vCenter cluster, it is necessary to enable VMware Enhanced vMotion Compatibility (EVC) mode. For more information about setting up EVC mode, refer to Enhanced vMotion Compatibility (EVC) Processor Support.
To add a host to the newly created Cluster, complete the following steps:
1. Select the newly created cluster in the left.
2. Go to Actions in the menu bar and select Add Host from the drop-down list.
3. In the Add Host window, in the Name and Location screen, provide the IP address or FQDN of the host.
4. In the Connection settings screen, provide the root access credentials for the host.
5. Click Yes to accept the certificate.
6. In the Host summary screen, review the information and click Next.
7. Assign a license key to the host and click Next.
8. In the Lockdown mode screen, select the appropriate lockdown mode. For this validation, the lockdown mode was set to Disabled. Click Next.
9. In the Resource pool screen, click Next.
10. In the Ready to complete screen, review the summary and click Finish.
11. Repeat this procedure to add the 2nd management host to the cluster.
12. In vSphere, in the left pane right-click the newly created cluster, and under Storage click Rescan Storage.
To setup the vCenter Server, complete the following steps:
1. In the vSphere Web Client, navigate to the vCenter Inventory Lists > Resources > vCenter Servers.
2. Select the vCenter instance.
3. Go to Actions in the toolbar and select New Datacenter from the drop-down.
4. Enter a name for the datacenter and click OK.
5. Make sure the system takes you to the newly created Datacenter. Go to Actions in the toolbar and select New Cluster from the drop-down list.
6. In the New Cluster window, provide a cluster name, enable DRS, vSphere HA and Host monitoring.
7. Click OK.
If mixing Cisco UCS B or C-Series M2, M3 or M4 servers within a vCenter cluster, it is necessary to enable VMware Enhanced vMotion Compatibility (EVC) mode. For more information about setting up EVC mode, refer to Enhanced vMotion Compatibility (EVC) Processor Support.
To add a host to the newly created Cluster, complete the following steps:
1. Select the newly created cluster in the left.
2. Go to Actions in the menu bar and select Add Host from the drop-down list.
3. In the Add Host window, in the Name and Location screen, provide the IP address or FQDN of the host.
4. In the Connection settings screen, provide the root access credentials for the host.
5. Click Yes to accept the certificate.
6. In the Host summary screen, review the information and click Next.
7. Assign a license key to the host and click Next.
8. In the Lockdown mode screen, select the appropriate lockdown mode. For this validation, the lockdown mode was set to Disabled. Click Next.
9. In the Resource pool screen, click Next.
10. In the Ready to complete screen, review the summary and click Finish.
11. Repeat this procedure to add other Hosts to the cluster.
12. In vSphere, in the left pane right-click the newly created cluster, and under Storage click Rescan Storage.
ESXi hosts booted with iSCSI need to be configured with ESXi dump collection. The Dump Collector functionality is supported by the vCenter but is not enabled by default on the vCenter Appliance.
Make sure the account used to log in is Administrator@vsphere.local (or a system admin account).
1. In the vSphere web client, select Home.
2. In the center pane, click System Configuration.
3. In the left hand pane, select Services and select VMware vSphere ESXi Dump Collector.
4. In the Actions menu, choose Start.
5. In the Actions menu, click Edit Startup Type.
6. Select Automatic.
7. Click OK.
8. Select Home > Hosts and Clusters.
9. Expand the DataCenter and Cluster.
10. For each ESXi host, right-click the host and select Settings. Scroll down and select Security Profile. Scroll down to Services and select Edit. Select SSH and click Start. Click OK.
11. SSH to each ESXi hosts and use root for the user id and the associated password to log into the system. Type the following commands to enable dump collection:
esxcli system coredump network set --interface-name vmk0 --server-ipv4 <vcenter-ip> --server-port 6500
esxcli system coredump network set --enable true
esxcli system coredump network check
12. Optional: Turn off SSH on the host servers.
This deployment guide covers both the APIC-controlled VMware VDS and the APIC-controlled Cisco AVS in VXLAN switching mode. Customers can choose to deploy either of these two distributed switching architectures, or deploy both at the same time, preferably on different ESXi hosts. In this deployment, both Cisco AVS and VMware VDS were deployed at the same time on different ESXi clusters.
The VMware VDS is a distributed Virtual Switch (DVS) that uses VLANs for network separation and is included in vSphere with Enterprise Plus licensing. For installing the VDS in this VersaStack, complete the following steps:
To add the VDS in the APIC Advanced GUI, complete the following steps:
1. Log into the APIC Advanced GUI using the admin user.
2. At the top, click VM Networking.
3. In the left pane, select VMware.
4. In the right pane, click + to add a vCenter Domain.
5. In the Create vCenter Domain window, enter a Virtual Switch Name. The name used in the deployment below is A06-VC-VDS.
6. Make sure VMware vSphere Distributed Switch is selected.
7. Select the <UCS> AEP (UCS_AttEntityP) from the Associated Attachable Entity Profile drop-down list to associate the VDS with the UCS Domain.
1. From the VLAN Pool drop-down list select Create VLAN Pool.
2. In the Create VLAN Pool window, name the pool <Virtual Center>-VDS-vlans. The name used in this deployment is A06-VC-VDS_vlans.
3. Make sure Dynamic Allocation is selected. Click + to add a VLAN range.
4. In the Create Ranges window, enter the VLAN range that was entered in the Cisco UCS for the APIC-VDS VLANs. Refer to Table 13 to find the range.
5. Select the option Dynamic Allocation for Allocation Mode.
6. Click OK to create the VLAN range.
7. Click SUBMIT to create the VLAN Pool.
1. Click the + sign to the right of vCenter Credentials.
2. In the Create vCenter Credential window, for vCenter Credential Name, <vcenter-name>-Creds. The name used for this deployment is A06-VC-Creds.
3. For Username enter root.
4. Enter and confirm the password for root.
5. Click OK to complete adding the credential.
1. Click + on the right of vCenter/vShield to add the vCenter server for APIC.
2. In the Add vCenter/vShield Controller window, select Type vCenter.
3. Enter a name for the vCenter. The name used in this deployment is A06-VC1.
4. Enter the vCenter IP Address or Host Name.
5. For DVS Version, select DVS Version 6.0
6. Enable Stats Collection.
7. For Datacenter, enter the exact vCenter Datacenter name (A06-DC1).
8. Do not select a Management EPG.
9. For vCenter Credential Name, select the vCenter credentials created in the last step (A06-VC-Creds).
10. Click OK to add the vCenter Controller.
11. In the Create vCenter Domain Window, select the MAC Pinning-Physical-NIC-load as the Port Channel Mode.
12. Select CDP vSwitch Policy.
13. Select the Disabled for the Firewall Mode.
14. Click SUBMIT to complete creating the vCenter Domain and adding the VDS.
15. From the left menu, expand VMware and click on the newly added domain (A06-VC-VDS)
16. Scroll down in the right configuration area and select the vSwitch Policies as shown in the figure:
17. Click SUBMIT.
18. Log into the vCenter vSphere Web Client and navigate to Networking.
19. A distributed switch should have been added.
To add the VMware ESXi Hosts to the VDS, complete the following steps:
1. Log into the vSphere Web Client.
2. From the Home screen, select Networking under Inventories.
3. In the left, expand the Datacenter and the VDS folder. Select the VDS switch.
4. Right-click on the VDS switch and select Add and manage hosts.
5. In the Add and Manage Hosts window, make sure the option Add hosts is selected; click Next.
6. Click + to add New hosts.
7. In the Select new hosts window, select all of the relevant ESXi hosts.
8. Click OK to complete the host selection.
9. Click Next.
10. Select Manage physical adapters.
11. Click Next.
12. On the hosts, select the appropriate vmnics (vmnic2 and vmnic3), click Assign uplink, and then click OK.
13. Repeat this process until all vmnics (2 per host) have been assigned.
14. Click Next.
15. Verify that these changes will have no impact and click Next.
16. Click Finish to complete adding the ESXi hosts to the VDS.
17. With the VDS selected, in the center pane select the Related Objects tab.
18. Under Related Objects, select the Hosts tab. Verify the ESXi hosts are now part of the VDS.
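The VDS membership can also be confirmed from each host's command line; the following command, shown for reference, lists the distributed switches the host participates in:
esxcli network vswitch dvs vmware list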
The AVS in VXLAN switching mode uses VXLANs for network separation. For installing the AVS in this VersaStack environment, complete the following steps:
To install VSUM into your VersaStack Management Cluster, complete the following steps:
1. Download and unzip the VSUM Release 2.1 .zip file from Cisco VSUM 2.1 Download.
2. In the Nexus1000v-vsum.2.1-pkg folder that is unzipped from the downloaded zip, unzip the Nexus1000v-vsum.2.0.zip file to obtain the OVA file
3. Log into vSphere Web Client as the VersaStack Admin user.
4. From the Home screen, in the left pane, select VMs and Templates.
5. Select the vCenter and the Datacenter in the left pane and, using the Actions pulldown in the center pane, select Deploy OVF Template.
6. If a Security Prompt pops up, click Allow to allow the Client Integration Plugin to run.
7. In the Deploy OVF Template window, select Local file, then Browse and browse to the Nexus1000v-vsum.2.1.ova file downloaded and unzipped above.
8. Select the file and click Open.
9. Click Next.
10. Review the details and click Next.
11. Click Accept to accept the License Agreement and click Next.
12. Give the VM a name and select the appropriate Datacenter (Mgmt).
13. Click Next.
14. Select infra-datastore-1 and make sure the Thin Provision virtual disk format is selected. Click Next.
15. Make sure the IB-Mgmt network is chosen and click Next.
16. Fill in all IP, DNS, and vCenter properties for the VSUM Appliance and click Next.
The VSUM IP address should be in the IB-MGMT subnet.
17. Review all the values and click Finish to complete the deployment of the VSUM Appliance.
18. In the left pane, expand the vCenter and Datacenter. Right-click the VSUM VM and select Power > Power On.
19. Right-click the VSUM VM again and select Open Console. When a login prompt appears, close the console.
20. Log out and Log back in to the vSphere Web Client.
21. Verify that Cisco Virtual Switch Update Manager now appears in the center pane under Inventories.
For AVS deployment, the infrastructure VLAN must be enabled on the Attachable Access Entity Profile (AEP) for the UCS domains. To enable the VLAN, complete the following steps:
1. Log into APIC Advanced GUI using the admin user.
2. At the top, select Fabric then Access Policies.
3. On the left, expand Global Policies and Attachable Access Entity Profiles.
4. Select the AEP for UCS (UCS_AttEntityP) and in the right configuration pane, check Enable Infrastructure VLAN.
5. Click SUBMIT.
To add the AVS in the APIC Advanced GUI, complete the following steps:
1. Log into the APIC Advanced GUI using the admin user.
2. At the top, select VM Networking. In the left pane, select VMware.
3. From VM Networking > Inventory > VMware, on the right, click + to add a vCenter Domain.
4. In the Create vCenter Domain window, enter a Virtual Switch Name. A suggested name is <vcenter-name>-AVS. Select the Cisco AVS.
5. For Switching Preference, select Local Switching.
6. For Encapsulation, select VXLAN.
7. For Associated Attachable Entity Profile, select <UCS AEP> (UCS_AttEntityP).
8. For the AVS Fabric-Wide Multicast Address, enter a Multicast Address. For this deployment, 230.0.0.1 was used.
1. For the Pool of Multicast Addresses (one per-EPG), use the drop-down list to select Create Multicast Address Pool.
2. Name the pool <vcenter-name>-AVS-MCAST.
3. Click + to create an Address Block.
4. Enter a multicast address IP Range. The range used in this validation is 230.0.0.2 to 230.0.0.250.
5. Click OK to complete creating the Multicast Address Block.
6. Click SUBMIT to complete creating the Multicast Address Pool.
1. Click + to add vCenter Credentials.
2. In the Create vCenter Credential window, name the account profile <vcenter-name>-AVS-Creds. Name used in this deployment is A06-VC-AVS-Creds.
3. For Username, enter root.
4. Enter and confirm the password for root.
5. Click OK to complete adding the vCenter credentials.
1. Click + to add the vCenter server for APIC to vCenter communication.
2. In the Create vCenter Controller window, enter <vcenter-name>-AVS for Name. Name used in this deployment is A06-VC-AVS.
3. Enter the vCenter IP Address or Host Name.
4. For DVS Version, select DVS Version 6.0.
5. For Datacenter, enter the exact vCenter Datacenter name. vCenter Datacenter used in this deployment is A06-DC2.
6. Do not select a Management EPG.
7. For Associated Credential, select <vcenter-name>-AVS-Creds.
8. Click OK to complete adding the vCenter Controller.
9. For Port Channel Mode, select MAC Pinning.
10. For vSwitch Policy, select CDP. Do not select BPDU Guard or BPDU Filter.
11. For Firewall Mode, select Disabled.
12. Click SUBMIT to add Cisco AVS.
13. From the left menu, expand VMware and click on the newly added domain (A06-VC-AVS).
14. Scroll down in the right configuration area and select the vSwitch Policies as shown in the figure:
15. Click SUBMIT.
AVS does not support MAC Pinning-Physical-NIC-load as the Port Channel Policy. As shown in the above screen capture, the APIC-generated Port Channel Policy, which supports "MAC Pinning", was left unchanged.
1. Log into the vCenter vSphere Web Client and navigate to Networking.
2. A distributed switch should have been added.
To add the VMware ESXi Hosts to the AVS, complete the following steps:
1. Download Cisco AVS version 5.2(1)SV3(2.2), by going to Cisco AVS Download and navigating to version 5.2(1)SV3(2.2). Download the CiscoAVS_2.2-5.2.1.SV3.2.2-pkg.zip file, but do not unzip it.
2. Log into the vSphere Web Client as the VersaStack Admin.
3. From the Home screen, select Cisco Virtual Switch Update Manager under Inventories.
4. Under Basic Tasks, select AVS.
5. Under Image Tasks, select Upload.
6. On the right under Upload switch image, click Upload.
7. Click Choose File.
8. Navigate to the CiscoAVS_2.2-5.2.1.SV3.2.2-pkg.zip file, select it and click Open.
9. Click Upload to upload the file.
10. Click OK.
11. Close the Cisco Virtual Switch Update Manager tab in the browser and return to vSphere Web Client.
12. Click Refresh at the lower right. CiscoAVS_2.2 should now appear in the list of Manage Uploaded switch images.
13. In the left pane under the Basic Tasks, select AVS.
14. Click Configure.
15. On the right, select the Datacenter (A06-DC2 in this deployment).
16. Select the AVS for the Distributed Virtual Switch (A06-VC-AVS in this deployment)
17. Click Manage.
18. In the center pane, under Manage, select the Cisco AVS tab.
19. In the center pane, select the Add Host – AVS tab.
20. Using the pulldown, select the 5.2(1)SV3(2.2) Target Version. Click Show Host.
21. Expand VMware Cluster (A06-Cluster2) and select all the relevant ESXi hosts.
22. Click Suggest.
23. Under PNIC Selection, select the two vmnics per host set up for AVS. Make sure to select the vmnics for all the hosts.
24. Click Finish to install the Virtual Ethernet Module (VEM) on each host and add the host to the AVS. The process might take a few minutes.
25. In the left pane, select Hosts. The ESXi hosts should now show up as part of the AVS.
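The VEM installation can also be checked from an SSH session on each host. The commands below are a reference sketch only, assuming the AVS VEM VIB name contains "vem" and that the vem utility is installed with the module; verify the exact VIB name on the build in use.
# Confirm the Cisco VEM VIB is installed and the module is running
esxcli software vib list | grep -i vem
vem status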
To add a second VTEP to each ESXi host in the Cisco AVS for load balancing, complete the following steps:
1. In the vSphere Web Client, from the Home screen, select Hosts and Clusters.
2. In the left pane, expand vCenter, Datacenter, and Cluster. Select the first ESXi host.
3. In the center pane, under the Manage tab, select the Networking tab. Select VMkernel adapters.
4. In the list of VMkernel ports, make sure the vtep VMkernel port has been assigned an IP address in the ACI Fabric system subnet (10.0.0.0/16 by default).
5. Click + to add a VMkernel port.
6. In the Add Networking window, make sure VMkernel Network Adapter is selected and click Next.
7. Leave Select an existing network selected and click Browse.
8. Select vtep and click OK.
9. Make sure vtep is now in the text box and click Next.
10. Make sure the Default TCP/IP stack is selected. Do not enable any services. Click Next.
11. Leave Obtain IPv4 settings automatically selected and click Next.
12. Click Finish to complete adding the VTEP.
13. Verify that the just added VTEP obtains an IP address in the same subnet as the first VTEP.
It might take a couple of minutes for the new VTEP to obtain a new IP address.
14. Repeat this procedure to add a VTEP to all the remaining ESXi hosts.
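The VTEP addressing can also be checked from an SSH session on each host. The commands below are shown for reference; both VTEP VMkernel ports should report DHCP-assigned addresses in the ACI infrastructure subnet (10.0.0.0/16 by default).
# List the VMkernel interfaces and their IPv4 configuration; the vtep vmks should show type DHCP
esxcli network ip interface list
esxcli network ip interface ipv4 get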
This section provides a detailed procedure for setting up the Shared Layer 3 Out in tenant "common" to connect to the Nexus 7000 core switches. The configuration utilizes four interfaces between the pair of ACI leaf switches and the pair of Nexus 7000 switches. The routing protocol utilized is OSPF. Some highlights of this connectivity are:
· A dedicated bridge domain bd-Common-Outside and associated dedicated VRF vrf-Common-Outside is configured in tenant common for external connectivity.
· The shared Layer 3 Out created in Tenant common “provides” an external connectivity contract that can be “consumed” from any tenant.
· Each of the two Nexus 7000s is connected to each of the two Nexus 9000 leaf switches.
· Sub-interfaces are configured and used for external connectivity.
· The Nexus 7000s are configured to originate and send a default route to the Nexus 9000 leaf switches using OSPF.
· ACI leaf switches advertise tenant subnets back to the Nexus 7000 switches.
· The physical connectivity is shown in Figure 10.
Figure 10 ACI Shared Layer 3 Out Connectivity Details
The following configuration is a sample from the virtual device contexts (VDCs) of two Nexus 7004s.
The Nexus 7000 configuration below is not complete and is meant to be used only as a reference
feature ospf
feature interface-vlan
!
vlan 100
name OSPF-Peering
!
interface Vlan100
no shutdown
mtu 9216
no ip redirects
ip address 10.253.253.253/30
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
!
interface Ethernet3/1
description To A06-93180-1 E1/47
no shutdown
!
interface Ethernet3/1.301
description To A06-93180-1 E1/47
encapsulation dot1q 301
ip address 10.253.253.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet3/2
description To A06-93180-2 E1/47
no shutdown
!
interface Ethernet3/2.302
description To A06-93180-2 E1/47
encapsulation dot1q 302
ip address 10.253.253.6/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet3/5
switchport
switchport mode trunk
switchport trunk allowed vlan 100
mtu 9216
!
interface loopback0
ip address 10.253.254.1/32
ip router ospf 10 area 0.0.0.0
!
router ospf 10
router-id 10.253.254.1
area 0.0.0.10 nssa no-summary no-redistribution default-information-originate
!
feature ospf
feature interface-vlan
!
vlan 100
name OSPF-Peering
!
interface Vlan100
no shutdown
mtu 9216
no ip redirects
ip address 10.253.253.254/30
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
!
interface Ethernet3/1
description To A06-93180-1 E1/48
no shutdown
!
interface Ethernet3/1.303
description To A06-93180-1 E1/48
encapsulation dot1q 303
ip address 10.253.253.10/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet3/2
description To A06-93180-2 E1/48
no shutdown
!
interface Ethernet3/2.304
description To A06-93180-2 E1/48
encapsulation dot1q 304
ip address 10.253.253.14/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet3/5
switchport
switchport mode trunk
switchport trunk allowed vlan 100
mtu 9216
!
interface loopback0
ip address 10.253.254.2/32
ip router ospf 10 area 0.0.0.0
!
router ospf 10
router-id 10.253.254.2
area 0.0.0.10 nssa no-summary no-redistribution default-information-originate
!
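After the corresponding ACI Routed Outside configuration in the following steps is completed, the OSPF peering toward the leaf switches can be verified from each Nexus 7000 VDC with standard NX-OS show commands, for example:
show ip ospf neighbors
show ip ospf interface brief
show ip route ospf-10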
1. At the top, select Fabric > Access Policies.
2. In the left pane, expand Physical and External Domains.
3. Right-click External Routed Domains and select Create Layer 3 Domain.
4. Name the Domain N7K-SharedL3Out.
5. From the Associated Attachable Entity Profile drop-down list, select Create Attachable Entity Profile.
6. Name the Profile N7K-SharedL3Out and click NEXT.
7. Click FINISH to continue without specifying interfaces.
8. Back in the Create Layer 3 Domain window, use the VLAN Pool drop-down list to select Create VLAN Pool.
9. Name the VLAN Pool N7K-SharedL3Out_vlans
10. Select Static Allocation.
11. Click + to add an Encap Block.
12. In the Create Ranges window, enter the VLAN range as shown in Figure 10 (301-304).
13. Select Static Allocation.
14. Click OK to complete adding the VLAN range.
15. Click SUBMIT to complete creating the VLAN Pool.
16. Click SUBMIT to complete creating the Layer 3 Domain.
1. In the APIC Advanced GUI, select Fabric > Access Policies > Quick Start.
2. In the right pane, select Configure an interface, PC, and VPC.
3. Click the Leaf switch under Configured Switch Interfaces (601, as shown in the figure below).
4. Click + in the right pane to add switch interfaces.
5. Configure various fields as shown in the figure below. In this screen shot, port 1/47 is configured using 10Gbps links. The details of the port connectivity can be seen in Figure 10.
6. Click SAVE.
7. Click SAVE again to finish configuring the switch interfaces.
8. Click SUBMIT.
9. Repeat this process for the remaining three ports connected to the Nexus 7000 switches.
1. At the top, select Tenants > common.
2. In the left pane, expand Tenant common and Networking.
3. Right-click External Routed Networks and select Create Routed Outside.
4. Name the Routed Outside Nexus-7K-Shared.
5. Check the check box next to OSPF.
6. Enter 0.0.0.10 (configured in the Nexus 7000s) as the OSPF Area ID.
7. From the VRF drop-down list, select common/Common-Outside.
8. From the External Routed Domain drop-down list, select N7K-SharedL3Out.
9. Click + to add a Node Profile.
10. Name the Node Profile Node-601-602 (601 and 602 are Node IDs of leaf switches connected to Nexus 7000).
11. Click + to add a Node.
12. In the select Node and Configure Static Routes window, select Leaf switch 601 from the drop-down list.
13. Provide a Router ID IP address – this address will be configured as the Loopback Address. The address used in this deployment is 10.253.254.3.
14. Click OK to complete selecting the Node.
15. Click + to add another Node.
16. In the select Node window, select Leaf switch 602.
17. Provide a Router ID IP address – this address will be configured as the Loopback Address. The address used in this deployment is 10.253.254.4.
18. Click OK to complete selecting the Node.
19. Click + to create an OSPF Interface Profile.
20. Name the profile Nexus-7K-Int-Prof.
21. Using the OSPF Policy pulldown, select Create OSPF Interface Policy.
22. Name the policy ospf-Nexus-7K.
23. Select the Point-to-Point Network Type.
24. Select the MTU ignore Interface Controls.
25. Click SUBMIT to complete creating the policy.
26. Select Routed Sub-Interface under Interfaces.
27. Click + to add a routed sub-interface.
For adding Routed Sub-interfaces, refer to Figure 10 for Interface, IP and VLAN details
28. In the Select Routed Sub-Interface window, for Path, select the interface on Nexus 93180-1 (Node 601) that is connected to Nexus 7004-1.
29. Enter vlan-<interface vlan> (301) for Encap.
30. Enter the IPv4 Address as shown in Figure 10 (10.253.253.1/30)
31. Leave the MTU set to inherit.
32. Click OK to complete creating the routed sub-interface.
33. Repeat these steps until all four routed sub-interfaces shown in Figure 10 have been added. The Routed Sub-Interfaces will be similar to the figure shown below.
34. Click OK to complete creating the Interface Profile.
35. Click OK to complete creating the Node Profile.
36. Click NEXT on Create Routed Outside Screen.
37. Click + to create an External EPG Network.
38. Name the External Network Default-Route.
39. Click + to add a Subnet.
40. Enter 0.0.0.0/0 as the IP Address. Select the checkboxes for External Subnets for the External EPG, Shared Route Control Subnet, and Shared Security Import Subnet.
41. Click OK to complete creating the subnet.
42. Click OK to complete creating the external network.
43. Click FINISH to complete creating the Routed Outside.
44. In the left pane, expand Security Policies, Right-click on Contracts and select Create Contract.
45. Name the contract Allow-Shared-L3-Traffic.
46. Select the Global Scope to allow the contract to be consumed from all tenants.
47. Click + to add a contract subject.
48. Name the subject Allow-Shared-L3-Out.
49. Click + to add a filter.
50. From the drop-down list, select the default from Tenant common.
51. Click UPDATE.
52. Click OK to complete creating the contract subject.
53. Click SUBMIT to complete creating the contract.
54. In the left pane expand Tenant common, Networking, External Routed Networks, Nexus-7K-Shared, and Networks. Select Default-Route.
55. In the right pane under Policy, select Contracts.
56. Click + to add a Provided Contract.
57. Select the common/Allow-Shared-L3-Traffic contract.
58. Click UPDATE.
Tenant EPGs can now consume the Allow-Shared-L3-Traffic contract and route traffic outside the fabric. This deployment example uses the default filter to allow all traffic. More restrictive contracts can be created for tighter control of access outside the fabric.
This section details the steps for creating a sample two-tier application called App-A. This tenant comprises a Web tier and an App tier, which will be mapped to the relevant EPGs on the ACI fabric.
To deploy the Application Tenant and associate it to the VM networking, complete the following steps:
1. In the APIC Advanced GUI, select Tenants.
2. At the top select Tenants > Add Tenant.
3. Name the Tenant App-A.
4. For the VRF Name, also enter App-A. Leave the Take me to this tenant when I click finish checkbox checked.
5. Click SUBMIT to finish creating the Tenant.
1. In the left pane expand Tenant App-A > Networking.
2. Right-click on the Bridge Domain and select Create Bridge Domain.
In this deployment, two different bridge domains will be created to host Web and App application tiers to keep the external and application-internal traffic separated. Customers can choose to create a single Bridge Domain to host both.
3. Name the Bridge Domain bd-App-A-External, select Forwarding as Optimize, and click NEXT, NEXT and click SUBMIT to complete adding the Bridge Domain.
4. Repeat the steps above to add another Bridge Domain called bd-App-A-Internal.
1. In the left pane, right-click Application Profiles and select Create Application Profile.
2. Name the Application Profile App-A and click SUBMIT.
1. In the left pane expand Application Profiles > App-A.
2. Right-click Application EPGs and select Create Application EPG.
3. Name the EPG App-A-Web. Leave Intra EPG Isolation Unenforced.
4. From the Bridge Domain drop-down list, select App-A/bd-App-A-External.
5. Check the check box next to Associate to VM Domain Profiles.
6. Click NEXT.
7. Click + to Associate VM Domain Profiles.
8. From the Domain Profile drop-down list, select the VMware domain. If both VDS and AVS domains have been deployed, both domains will be visible in the drop-down list as shown below. In this example, the VMware domain for the VDS is selected to deploy the EPG.
9. Change the Deployment Immediacy and Resolution Immediacy to Immediate.
10. Click UPDATE.
11. Click FINISH to complete creating the EPG.
12. In the left pane expand EPG App-A-Web, right-click on the Subnets and select Create EPG Subnet.
13. For the Default Gateway IP, enter a gateway IP address and mask. In this deployment, the GW address configured for Web VMs is 10.10.0.254/24.
14. Since the Web VM subnet is advertised to the Nexus 7000s and shared with the App EPG, select Advertised Externally and Shared between VRFs.
15. Click SUBMIT.
At this point, a new port-group should have been created on the VMware VDS. Log into the vSphere Web Client, browse to Networking > VDS and verify.
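For reference, the Web EPG, its VMM domain association, and its subnet map to the REST objects shown below (re-using the session from the earlier sketch). The VDS domain name is a placeholder and must match the VMM domain created earlier in this document; the subnet scope public,shared corresponds to the Advertised Externally and Shared between VRFs options.
web_epg = {"fvAp": {"attributes": {"name": "App-A"}, "children": [
    {"fvAEPg": {"attributes": {"name": "App-A-Web"}, "children": [
        {"fvRsBd": {"attributes": {"tnFvBDName": "bd-App-A-External"}}},
        # VMM domain association with immediate deployment and resolution immediacy
        {"fvRsDomAtt": {"attributes": {"tDn": "uni/vmmp-VMware/dom-<VDS-domain-name>",
                                       "instrImm": "immediate", "resImm": "immediate"}}},
        # EPG subnet 10.10.0.254/24, advertised externally and shared between VRFs
        {"fvSubnet": {"attributes": {"ip": "10.10.0.254/24", "scope": "public,shared"}}}]}}]}}
session.post(f"{APIC}/api/mo/uni/tn-App-A/ap-App-A.json", json=web_epg)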
1. In the left pane expand Application Profiles > App-A.
2. Right-click Application EPGs and select Create Application EPG.
3. Name the EPG App-A-App. Leave Intra EPG Isolation Unenforced.
4. From the Bridge Domain drop-down list select App-A/bd-App-A-Internal.
5. Check the check box next to Associate to VM Domain Profiles.
6. Click NEXT.
7. Click + to Associate VM Domain Profiles.
8. From the Domain Profile drop-down list, select the VMware domain. If both VDS and AVS domains have been deployed, both domains will be visible in the drop-down list as shown below. In this example, the VMware domain for the VDS is selected to deploy the EPG.
9. Change the Deployment Immediacy and Resolution Immediacy to Immediate.
10. Click UPDATE.
11. Click FINISH to complete creating the EPG.
12. In the left pane expand EPG App-A-App, right-click on the Subnets and select Create EPG Subnet.
13. For the Default Gateway IP, enter a gateway IP address and mask. In this deployment, the GW address configured for App VMs is 10.10.1.254/24.
14. Since the App VMs only need to communicate with the Web EPG, select Private to VRF.
15. Click SUBMIT.
At this point, a new port-group should have been created on the VMware VDS. Log into the vSphere Web Client, browse to Networking > VDS and verify.
1. In the APIC Advanced GUI, select Tenants > App-A.
2. In the left pane, expand Tenant App-A > Application Profiles > App-A > Application EPGs > EPG App-A-App.
3. Right-click on Contract and select Add Provided Contract.
4. In the Add Provided Contract window, from the Contract drop-down list, select Create Contract.
5. Name the Contract Allow-App-to-Web-Comm.
6. Select Tenant for Scope.
7. Click + to add a Contract Subject.
8. Name the subject Allow-App-to-Web.
9. Click + to add a Contract filter.
10. Click + to add a new Subject.
11. For Filter Identity Name, enter Allow-App-A-All.
12. Click + to add an Entry.
13. Name the Entry Allow-All.
14. From the EtherType drop-down list, select IP.
15. Click UPDATE.
16. Click SUBMIT.
17. Click UPDATE in the Create Contract Subject window.
18. Click OK to finish creating the Contract Subject.
19. Click SUBMIT to complete creating the Contract.
20. Click SUBMIT to complete adding the Provided Contract.
1. In the left pane expand Tenant App-A > Application Profiles > App-A > Application EPGs > EPG App-A-Web.
2. Right-click on Contracts and select Add Consumed Contract.
3. In the Add Consumed Contract window, use the drop-down list to select the contract defined in the last step, App-A/Allow-App-to-Web-Comm.
4. Click SUBMIT to complete adding the Consumed Contract.
The communication between Web and App tiers of the application should be enabled now. Customers can use more restrictive contracts to replace the Allow-All contract defined in this example.
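For reference, the filter, contract, and provider/consumer relationships configured above correspond to the REST objects below, re-using the authenticated session from the earlier sketch.
app_contract = {"fvTenant": {"attributes": {"name": "App-A"}, "children": [
    # Filter with a single entry matching all IP traffic
    {"vzFilter": {"attributes": {"name": "Allow-App-A-All"}, "children": [
        {"vzEntry": {"attributes": {"name": "Allow-All", "etherT": "ip"}}}]}},
    # Tenant-scoped contract with one subject that references the filter
    {"vzBrCP": {"attributes": {"name": "Allow-App-to-Web-Comm", "scope": "tenant"}, "children": [
        {"vzSubj": {"attributes": {"name": "Allow-App-to-Web"}, "children": [
            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "Allow-App-A-All"}}}]}}]}}]}}
session.post(f"{APIC}/api/mo/uni.json", json=app_contract)

# Provide the contract on the App EPG and consume it on the Web EPG
session.post(f"{APIC}/api/mo/uni/tn-App-A/ap-App-A/epg-App-A-App.json",
             json={"fvAEPg": {"attributes": {"name": "App-A-App"}, "children": [
                 {"fvRsProv": {"attributes": {"tnVzBrCPName": "Allow-App-to-Web-Comm"}}}]}})
session.post(f"{APIC}/api/mo/uni/tn-App-A/ap-App-A/epg-App-A-Web.json",
             json={"fvAEPg": {"attributes": {"name": "App-A-Web"}, "children": [
                 {"fvRsCons": {"attributes": {"tnVzBrCPName": "Allow-App-to-Web-Comm"}}}]}})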
To enable App-A’s Web VMs to communicate outside the fabric, the Shared L3 Out contract defined in the common Tenant will be consumed by the App-A-Web EPG. Complete the following steps to enable traffic from the Web VMs to outside the fabric:
1. In the APIC Advanced GUI, select Tenants > App-A.
2. In the left pane, expand Tenant App-A > Application Profiles > App-A > Application EPGs > EPG App-A-Web.
3. Right-click on Contracts and select Add Consumed Contract.
4. In the Add Consumed Contract window, use the drop-down list to select common/Allow-Shared-L3-Traffic.
5. Click SUBMIT to complete adding the Consumed Contract.
6. Log into the core Nexus 7000 switch to verify that the App-A-Web EPG’s subnet (10.10.0.0/24) is being advertised.
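For reference, the consumed contract added above is a single REST object on the Web EPG (re-using the session from the earlier sketch); because the contract is defined with global scope in tenant common and not in tenant App-A, the APIC resolves the contract name against tenant common.
# Consume the shared L3 Out contract (defined in tenant common) on the Web EPG
session.post(f"{APIC}/api/mo/uni/tn-App-A/ap-App-A/epg-App-A-Web.json",
             json={"fvAEPg": {"attributes": {"name": "App-A-Web"}, "children": [
                 {"fvRsCons": {"attributes": {"tnVzBrCPName": "Allow-Shared-L3-Traffic"}}}]}})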
This procedure details the ACI L4-L7 VLAN Stitching feature. In this design, a pair of Cisco ASA-5585 firewall devices in a High Availability configuration is connected to the ACI fabric. The firewalls are connected to the Cisco Nexus 93180 leaf switches as shown in Figure 11.
Figure 11 Cisco ASA Physical Connectivity
VLAN Stitching does not make use of device packages; therefore, the ASA devices need to be configured using the CLI or ASDM. However, the Cisco ACI fabric will be configured as shown in this section to provide the necessary traffic/VLAN “stitching”. The VLANs and IP subnet details utilized in this setup are shown in Figure 12.
Figure 12 Cisco ASA Logical Connectivity
The following configuration is a sample from the ASA devices.
The Cisco ASA configuration below is not complete and is meant to be used only as a reference.
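! System execution space: physical interfaces, port-channel, subinterfaces, and the N1 context definition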
interface TenGigabitEthernet0/6
channel-group 2 mode active
!
interface TenGigabitEthernet0/7
channel-group 2 mode active
!
interface Port-channel2
description To Nexus 9K Leaf
lacp max-bundle 8
!
interface Port-channel2.501
description Inside Interface for Context N1
vlan 501
!
interface Port-channel2.601
description Outside Interface for Context N1
vlan 601
!
context N1
allocate-interface Port-channel2.501
allocate-interface Port-channel2.601
config-url disk0:/N1.cfg
join-failover-group 1
!
!
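! The following is configured within security context N1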
interface Port-channel2.501
nameif inside
security-level 100
ip address 10.10.0.100 255.255.255.0 standby 10.10.0.101
!
interface Port-channel2.601
nameif outside
security-level 0
ip address 192.168.249.100 255.255.255.0 standby 192.168.249.101
!
object network Inside-Net
subnet 10.10.0.0 255.255.255.0
!
access-list permit-all extended permit ip any any
access-list outside_access_in extended permit ip any any
!
object network Inside-Net
nat (any,outside) dynamic interface
!
access-group outside_access_in in interface outside
route outside 0.0.0.0 0.0.0.0 192.168.249.254 1
!
1. In the APIC Advanced GUI, select Tenants > common.
2. Expand Tenant common > Networking > Bridge Domains > bd-Common-Outside.
3. Right-click on the Subnets and select Create Subnet.
4. Enter the gateway IP address and subnet and set the scope as shown in the figure below. The GW IP address used in this validation is 192.168.249.254/24.
5. Click SUBMIT.
1. Click the Bridge Domain bd-Common-Outside and in the right pane under Policy, select L3 Configurations.
2. Click + to Associate L3 Out.
3. From the drop-down list, select common/Nexus-7K-Shared and click UPDATE.
4. Click SUBMIT.
1. At the top, select Fabric > Access Policies.
2. In the left pane, expand Physical and External Domains.
3. Right-click Physical Domains and select Create Physical Domain.
4. Name the Domain ASA5585.
5. From the Associated Attachable Entity Profile drop-down list, select Create Attachable Entity Profile.
6. Name the Profile ASA5585 and click NEXT.
7. Click FINISH to continue without specifying interfaces.
8. Back in the Create Physical Domain window, use the VLAN Pool drop-down list to select Create VLAN Pool.
9. Name the VLAN Pool ASA5585_vlans.
10. Select Static Allocation.
11. Click + to add an Encap Block.
12. In the Create Ranges window, enter the VLAN range as shown in Figure 10
13. Select Static Allocation.
14. Click OK to complete adding the VLAN range.
15. Click SUBMIT to complete creating the VLAN Pool.
In this validation, two ranges of 5 VLANs each (10 VLANs total) are added to the pool. VLANs 501-505 are used to configure up to 5 tenant context inside interfaces, and VLANs 601-605 are used for up to 5 tenant context outside interfaces. Customers can adjust these ranges based on the number of tenants.
16. Click SUBMIT to complete creating the Physical Domain.
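For reference, the VLAN pool, physical domain, and Attachable Entity Profile created above map to the REST objects below (re-using the session from the earlier sketch); the DNs shown assume the names used in this section.
# Static VLAN pool with the two ranges used for the ASA context inside/outside interfaces
session.post(f"{APIC}/api/mo/uni/infra.json",
             json={"fvnsVlanInstP": {"attributes": {"name": "ASA5585_vlans", "allocMode": "static"},
                   "children": [
                 {"fvnsEncapBlk": {"attributes": {"from": "vlan-501", "to": "vlan-505", "allocMode": "static"}}},
                 {"fvnsEncapBlk": {"attributes": {"from": "vlan-601", "to": "vlan-605", "allocMode": "static"}}}]}})

# Physical domain ASA5585 tied to the VLAN pool
session.post(f"{APIC}/api/mo/uni.json",
             json={"physDomP": {"attributes": {"name": "ASA5585"}, "children": [
                 {"infraRsVlanNs": {"attributes": {"tDn": "uni/infra/vlanns-[ASA5585_vlans]-static"}}}]}})

# Attachable Entity Profile ASA5585 associated with the physical domain
session.post(f"{APIC}/api/mo/uni/infra.json",
             json={"infraAttEntityP": {"attributes": {"name": "ASA5585"}, "children": [
                 {"infraRsDomP": {"attributes": {"tDn": "uni/phys-ASA5585"}}}]}})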
This section details setup of port-channels for the Cisco ASA-5585 devices as shown in Figure 11.
1. In the APIC Advanced GUI, at the top select Fabric > Access Policies. Select Quick Start on the left.
2. In the right pane, select Configure an interface, PC and VPC.
3. Under Configured Switch Interfaces, select the first leaf (601).
4. Click the big green “+” on the right to configure the switch interfaces.
5. Configure the various fields as shown in the figure below. In this screenshot, ports 1/45 and 1/46 on the leaf switch are connected to the Cisco ASA using a two-port 10G port-channel.
6. Click SAVE.
7. Click SAVE again to finish configuring the switch interfaces.
8. Click SUBMIT.
9. Under Configured Switch Interfaces, select the second leaf (602).
10. Click the big green “+” on the right to configure the switch interfaces.
11. Configure the various fields as shown in the figure below. In this screenshot, ports 1/45 and 1/46 on the second leaf switch are connected to the Cisco ASA using a two-port 10G port-channel.
12. Click SAVE.
13. Click SAVE again to finish configuring the switch interfaces.
14. Click SUBMIT.
At this point, the port-channels between the ACI leaf switches and the ASAs can be verified by logging into the ASA and issuing the show port-channel command.
This section covers L4-L7 Device and Service Graph setup for tenant App-A.
1. From the Cisco APIC Advanced GUI, select Tenants > App-A.
2. In the left pane expand Tenant App-A, Application Profiles, App-A, Application EPGs, and EPG App-A-Web.
The Shared L3 connectivity configured previously must be removed and replaced by ASA Firewall contracts.
1. Under EPG App-A-Web, select Contracts.
2. Right-click on the Allow-Shared-L3-Traffic contract and select Delete to remove this contract association.
3. Click YES for the confirmation.
4. In the left pane, expand Subnets and select EPG subnet.
5. Under the Properties area, uncheck the check box Advertised Externally. If the Web EPG is connected to the Core-Services, then keep the option Shared between VRFs selected. Otherwise, select Private to VRF.
6. Click SUBMIT to complete modifying the subnet.
At this point, log into the Nexus 7000 and verify that the subnet defined under the EPG has disappeared from the routing table and is no longer learned via OSPF.
1. In the left pane expand Tenant App-A and L4-L7 Services.
2. Right-click on the L4-L7 Devices and select Create L4-L7 Devices.
3. In the Create L4-L7 Devices window, uncheck the Managed check box.
4. Name the Device ASA5585-<context name>. This deployment uses ASA5585-N1 as the name.
5. Select Firewall as the Service Type.
6. For the Physical Domain, select ASA5585.
7. Select the HA Node under View.
8. For the Function Type, select GoTo.
9. Under Device 1, Click + to add the Device Interface.
10. Name the Device Interface ASA5585-1-PC.
11. From the drop-down list, select Path Type PC.
12. Select the PC for the first ASA.
13. Click UPDATE.
14. Under Device 2, click + to add the Device Interface.
15. Name the Device Interface ASA5585-2-PC.
16. From the drop-down list, select Path Type PC.
17. Select the PC for the second ASA.
18. Click UPDATE.
19. Under Cluster, click + to add a Concrete Interface.
20. Name the interface outside.
21. From the drop-down list, select both the ASA-1 and ASA-2 devices.
22. For Encap, enter vlan-<App-A-outside-VLAN-ID> (vlan-601).
23. Click UPDATE.
24. Under Cluster, click + to add a second Concrete Interface.
25. Name the interface inside.
26. From the drop-down list, select both the ASA-1 and ASA-2 devices.
27. For Encap, enter vlan-<App-A-inside-VLAN-ID> (vlan-501).
28. Click UPDATE.
29. Click FINISH to complete creating the L4-L7 Device.
1. Right-click L4-L7 Service Graph Templates and select Create L4-L7 Service Graph Template.
2. In the Create L4-L7 Service Graph Template window, name the Graph ASA5585-N1.
3. Make sure Graph Type is set to Create A New One.
4. Drag the ASA5585-N1 Firewall icon to between the two EPGs.
5. Select the Routed Firewall.
6. Click SUBMIT to complete creating the Service Graph Template.
1. In the left pane, expand L4-L7 Service Graph Templates and select the ASA5585-N1 Template.
2. Right-click the ASA5585-N1 Template and select Apply L4-L7 Service Graph Template.
3. In the Apply L4-L7 Service Graph Template to EPGs window, from the Consumer EPG drop-down list, select common/Nexus-7K-Shared/epg-Default-Route.
4. From the Provider EPG drop-down list, select App-A/App-A/epg-App-A-Web.
These EPG selections place the firewall between the Shared-L3-Out and App-A Web EPGs.
5. Under Contract Information, leave Create A New Contract selected and name the contract Allow-ASA-Traffic.
6. Click NEXT.
7. Under Consumer Connector, from the BD drop-down list, select common/bd-Common-Outside.
8. Under Consumer Connector, from the Cluster Interface drop-down list, select outside.
9. Under Provider Connector, from the BD drop-down list, select App-A/bd-App-A-External.
10. Under Provider Connector, from the Cluster Interface drop-down list, select inside.
11. Click FINISH to complete applying the Service Graph Template.
12. In the left pane expand Deployed Graph Instances and Allow-ASA-Traffic-ASA5585-N1.
13. Select Function Node – N1.
14. Verify that the Function Connectors display values for Encap and interfaces.
15. For VMs with interfaces in the Web EPG, set the default gateway to the ASA’s inside interface IP (10.10.0.100/24).
16. The ASA in this deployment was configured to NAT all the Web tier traffic to ASA’s outside interface IP address.
17. Log into the Nexus 7000 and verify the 192.168.249.0/24 subnet is not being advertised from the leaf switches.
A07-7004-1-ACI# show ip route 192.168.249.0/24
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
192.168.249.0/24, ubest/mbest: 2/0
*via 10.253.253.1, Eth3/1.301, [110/20], 4d17h, ospf-10, nssa type-2
*via 10.253.253.5, Eth3/2.302, [110/20], 4d17h, ospf-10, nssa type-2
Haseeb Niazi, Technical Marketing Engineer, Computing Systems Product Group, Cisco Systems, Inc.
Haseeb Niazi has over 17 years of experience at Cisco in Data Center, Enterprise, and Service Provider solutions and technologies. As a member of various solution teams and Advanced Services, Haseeb has helped many enterprise and service provider customers evaluate and deploy a wide range of Cisco solutions. As a technical marketing engineer in the Cisco UCS solutions group, Haseeb currently focuses on the network, compute, virtualization, storage, and orchestration aspects of various compute stacks. Haseeb holds a master's degree in Computer Engineering from the University of Southern California and is a Cisco Certified Internetwork Expert (CCIE 7848).
Adam H. Reid, Test Specialist, Systems & Technology Group, IBM
Adam H. Reid is a published author with more than 15 years of Computer Engineering experience. Focused more recently on IBM's Spectrum Virtualize, he’s been deeply involved with the testing and configuration of virtualized environments pivotal to the future of software defined storage. Adam has designed, tested and validated systems to meet the demands of a wide range of mid-range and enterprise environments.
The following individuals contributed to building this solution and participated in the writing of this design document:
· Sreenivasa Edula, Technical Marketing Engineer, Cisco Systems, Inc.