Last Updated: November 13, 2017
About Cisco Validated Designs
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, visit:
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS Manager, Cisco UCS Management Software, Cisco Unified Fabric, Cisco Application Centric Infrastructure, Cisco Nexus 9000 Series, Cisco Nexus 7000 Series, Cisco Prime Data Center Network Manager, Cisco NX-OS Software, Cisco MDS Series, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2017 Cisco Systems, Inc. All rights reserved.
Table of Contents
FlashStack Nexus Switch Configuration
Setting the NX-OS image on the switch
Cisco Nexus Basic System Configuration Dialog
Cisco Nexus Switch Configuration
Add Individual Port Descriptions for Troubleshooting
Add NTP Distribution Interface
Configure Port Channel Member Interfaces
Configure Virtual Port Channels
FlashArray Storage Configuration
FlashArray Initial Configuration
Configuring the Domain Name System (DNS) Server IP Addresses
MDS Basic System Configuration Dialog
Upgrade Cisco MDS NX-OS release 7.3(0)DY(1)
Cisco UCS Compute Configuration
Upgrade Cisco UCS Manager Software to Version 3.1(2b)
Enable Server and Uplink Ports
Create vNIC/vHBA Placement Policy for Virtual Machine Infrastructure Hosts
Configure UCS LAN Connectivity
Set Jumbo Frames in Cisco UCS Fabric
Create LAN Connectivity Policy
Configure Cisco UCS SAN Connectivity
Create SAN Connectivity Policy
Create Service Profile Template
Create Device Aliases for the Connected FlashArray Ports
Private Volumes for each ESXi Host
Pure Storage vSphere Web Client Plugin
Log in to Cisco UCS 6332-16UP Fabric Interconnect
Set Up VMware ESXi Installation
Set Up Management Networking for ESXi Hosts
Add the VMware ESXi Hosts Using the VMware vSphere Web Client
Configure ESXi Hosts in the Cluster
Install VMware Drivers for the Cisco Virtual Interface Card (VIC)
Create a VMware vDS for Application and Production Networks
Pure Storage Best Practices for vSphere
Cisco Validated Designs consist of systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of our customers.
This document details the design described in the FlashStack Virtual Server Infrastructure Design Guide for VMware vSphere 6.0 U2, which presented a validated converged infrastructure jointly developed by Cisco and Pure Storage. In this solution we will walk through the deployment of a predesigned, best-practice data center architecture with VMware vSphere built on the Cisco Unified Computing System (UCS), the Cisco Nexus® 9000 family of switches, the Cisco MDS 9000 family of Fibre Channel switches, and Pure Storage FlashArray//M all-flash storage.
When deployed, the architecture presents a robust infrastructure viable for a wide range of application workloads implemented as a virtual server infrastructure.
In the current industry there is a trend toward pre-engineered solutions that standardize data center infrastructure, offering businesses the operational efficiency, agility, and scale needed to address cloud and bimodal IT. The challenges these solutions must meet are complexity, diverse application support, efficiency, and risk, all of which FlashStack addresses with:
· Reduced complexity, automatable infrastructure, and easily deployed resources
· Robust components capable of supporting high performance and high bandwidth virtualized applications
· Efficiency through optimization of network bandwidth and in-line storage compression with de-duplication
· Risk reduction at each level of the design with resiliency built into each touch point throughout
Cisco and Pure Storage have partnered to deliver this Cisco Validated Design, which uses best of breed storage, server and network components to serve as the foundation for virtualized workloads, enabling efficient architectural designs that can be quickly and confidently deployed.
In this document we will describe a reference architecture detailing a Virtual Server Infrastructure composed of Cisco Nexus switches, Cisco UCS Compute, Cisco MDS Multilayer Fabric Switches and a Pure Storage FlashArray//M delivering a VMware vSphere 6.0 U2 hypervisor environment.
The audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and enable IT innovation.
This document details a step-by-step configuration and implementation guide for FlashStack, centered around the Cisco UCS 6332-16UP Fabric Interconnect and the Pure Storage FlashArray//M70. These components are supported by the 100G capable Cisco Nexus 93180YC-EX switch and the Cisco MDS 9148S Multilayer fabric switch to deliver a Virtual Server infrastructure on Cisco UCS B200 M4 Blade Servers running VMware vSphere 6.0 U2.
The design that will be implemented is discussed in the FlashStack Virtual Server Infrastructure Design Guide for VMware vSphere 6.0 U2 found at:
The FlashStack Virtual Server Infrastructure is a validated reference architecture, collaborated on by Cisco and Pure Storage, built to serve enterprise datacenters. The solution is built to deliver a VMware vSphere based environment, leveraging the Cisco Unified Computing System (UCS), Cisco Nexus switches, Cisco MDS Multilayer Fabric switches, and Pure Storage FlashArray.
The architecture brings together a simple, wire-once solution that is SAN-booted and highly resilient at each layer of the design. This creates an infrastructure that is ideal for a variety of virtual application deployments and that can reliably scale when growth is needed. Figure 1 shows the base physical architecture used in FlashStack Virtual Server Infrastructure.
Figure 1 FlashStack with Cisco UCS 6332-16UP and Pure Storage FlashArray//M70
The reference hardware configuration includes:
· Two Cisco Nexus 93180YC-EX Switches
· Two Cisco UCS 6332-16UP Fabric Interconnects
· Cisco UCS 5108 Chassis with two Cisco UCS 2304 Fabric Extenders
· Cisco UCS B200 M4 Blade Servers
· Two Cisco MDS 9148S Multilayer Fabric Switches
· A Pure Storage FlashArray//M70
The virtual environment this supports is VMware vSphere 6.0 U2, and it includes virtual management and automation components from Cisco and Pure Storage, built into the solution or available as optional add-ons.
This document provides a low-level example of the steps to deploy this base architecture; these may need some adjustment for a given customer environment. The steps include physical cabling, network, storage, compute, and virtual device configurations.
Table 1 lists the software versions for hardware and virtual components used in this solution.
| Layer | Device | Image | Comments |
|---|---|---|---|
| Compute | Cisco UCS Fabric Interconnects 6300 Series, UCS B-200 M4 | 3.1(2b) | Includes the Cisco UCS IOM 2304 and Cisco UCS VIC 1340 |
| | Cisco eNIC | 2.3.0.10 | |
| | Cisco fNIC | 1.6.0.28 | |
| Network | Cisco Nexus 9000 NX-OS | 7.0(3)I4(2) | |
| | Cisco Nexus 1000V | 5.2(1)SV3(2.1) | Optional |
| Storage | Cisco MDS 9148S | 7.3(0)DY(1) | Original validation occurred on 7.3(0)D1(1), but a potential data integrity issue was identified with CSCva64432, leading to a recommendation to install 7.3(0)DY(1). |
| | Pure Storage FlashArray//M70 | 4.7.4 | |
| Software | Cisco UCS Manager | 3.1(2b) | |
| | Cisco UCS Director | 5.5 | Optional |
| | Cisco Virtual Switch Update Manager | 2.0 | Only if installing Cisco Nexus 1000V |
| | VMware vSphere ESXi | 6.0 U2 | |
| | VMware vCenter | 6.0 U2 | |
| | Pure Storage vSphere Web Client Plugin | 2.1.0 | |
The Cisco Hardware Compatibility List (HCL) and the Pure Compatibility Matrix (customer credentials from Pure are required) define the currently qualified components and versions you can use to build a FlashStack. Please consult the Pure and Cisco interoperability matrixes for the latest support information regarding all hardware and software elements. It is also strongly suggested to align FlashStack deployments with the recommended releases for the Cisco MDS 9000 Series Switches and Cisco Nexus 9000 Switches used in the architecture:
This document details the step-by-step configuration of a fully redundant and highly available Virtual Server Infrastructure built on Cisco and Pure Storage components. References are made to which component is being configured with each step, using either 01 and 02 or A and B. For example, controller-1 and controller-2 are used to identify the two controllers within the Pure Storage FlashArray//M that are provisioned in this document, and Cisco Nexus A or Cisco Nexus B identifies the pair of Cisco Nexus switches that are configured. The Cisco UCS fabric interconnects are similarly configured. Additionally, this document details the steps for provisioning multiple Cisco UCS hosts; these examples are identified as VM-Host-Infra-01 and VM-Host-Infra-02, representing infrastructure and production hosts deployed to the fabric interconnects in this document. Finally, to indicate that you should include information pertinent to your environment in a given step, <<text>> appears in bold as part of the command structure. See the following example from a configuration step for both Cisco Nexus switches:
b19-93180-1&2 (config)# ntp server <<var_oob_ntp>> use-vrf management
This document is intended to enable you to fully configure the customer environment. In this process, various steps require you to insert customer-specific naming conventions, IP addresses, and VLAN schemes, as well as to record appropriate MAC addresses. Table 2 describes the VLANs necessary for deployment as outlined in this guide, and Table 3 lists the virtual machines (VMs) necessary for deployment as outlined in this guide.
Table 2 Necessary VLANs
| VLAN Name | VLAN Purpose | ID Used in Validating this Document | Customer Deployed Value |
|---|---|---|---|
| Out of Band Mgmt | VLAN for out-of-band management interfaces | 15 | |
| In-Band Mgmt | VLAN for in-band management interfaces | 115 | |
| Native | VLAN to which untagged frames are assigned | 2 | |
| vMotion | VLAN for VMware vMotion | 200 | |
| VM-App1 | VLAN for Production VM Interfaces | 201 | |
| VM-App2 | VLAN for Production VM Interfaces | 202 | |
| VM-App3 | VLAN for Production VM Interfaces | 203 | |
Table 3 Infrastructure Virtual Machines
| Virtual Machine Description | VM Name Used in Validating This Document | Customer Deployed Value |
|---|---|---|
| Active Directory | Pure-AD | |
| vCenter Server | Pure-VC | |
| N1K VSM Primary | vsm_primary_pure | |
| N1K VSM Secondary | vsm_secondary_pure | |
Table 4 Configuration Variables
| Variable | Variable Description | Customer Deployed Value |
|---|---|---|
| <<var_nexus_A_hostname>> | Nexus switch A hostname (Example: b19-93180-1) | |
| <<var_nexus_A_mgmt_ip>> | Out-of-band management IP for Nexus switch A (Example: 192.168.164.13) | |
| <<var_oob_mgmt_mask>> | Out-of-band management network netmask (Example: 255.255.255.0) | |
| <<var_oob_gateway>> | Out-of-band management network gateway (Example: 192.168.164.254) | |
| <<var_oob_ntp>> | Out-of-band management network NTP server (Example: 192.168.164.254) | |
| <<var_nexus_B_hostname>> | Nexus switch B hostname (Example: b19-93180-2) | |
| <<var_nexus_B_mgmt_ip>> | Out-of-band management IP for Nexus switch B (Example: 192.168.164.14) | |
| <<var_nexus_A_ib_ip>> | In-band management network interface for Nexus switch A (Example: 10.1.164.13) | |
| <<var_nexus_B_ib_ip>> | In-band management network interface for Nexus switch B (Example: 10.1.164.14) | |
| <<var_flasharray_hostname>> | Array hostname set during setup (Example: flashstack-1) | |
| <<var_flasharray_vip>> | Virtual IP that will answer for the active management controller (Example: 192.168.164.30) | |
| <<var_controller-1_mgmt_ip>> | Out-of-band management IP for FlashArray controller-1 (Example: 192.168.164.31) | |
| <<var_controller-1_mgmt_mask>> | Out-of-band management network netmask (Example: 255.255.255.0) | |
| <<var_controller-1_mgmt_gateway>> | Out-of-band management network default gateway (Example: 192.168.164.254) | |
| <<var_controller-2_mgmt_ip>> | Out-of-band management IP for FlashArray controller-2 (Example: 192.168.164.32) | |
| <<var_controller-2_mgmt_mask>> | Out-of-band management network netmask (Example: 255.255.255.0) | |
| <<var_controller-2_mgmt_gateway>> | Out-of-band management network default gateway (Example: 192.168.164.254) | |
| <<var_password>> | Administrative password (Example: Fl@shSt4x) | |
| <<var_dns_domain_name>> | DNS domain name (Example: flashstack.cisco.com) | |
| <<var_nameserver_ip>> | DNS server IP(s) (Example: 10.1.164.9) | |
| <<var_smtp_ip>> | Email Relay Server IP Address or FQDN (Example: smtp.flashstack.cisco.com) | |
| <<var_smtp_domain_name>> | Email Domain Name (Example: flashstack.cisco.com) | |
| <<var_timezone>> | FlashStack time zone (Example: America/New_York) | |
| <<var_oob_mgmt_vlan_id>> | Out-of-band management network VLAN ID (Example: 15) | |
| <<var_ib_mgmt_vlan_id>> | In-band management network VLAN ID (Example: 115) | |
| <<var_ib_mgmt_vlan_netmask_length>> | Length of IB-MGMT-VLAN netmask (Example: /24) | |
| <<var_ib_gateway_ip>> | In-band management network default gateway (Example: 10.1.164.254) | |
| <<var_vmotion_vlan_id>> | vMotion network VLAN ID (Example: 200) | |
| <<var_vmotion_vlan_netmask_length>> | Length of vMotion VLAN netmask (Example: /24) | |
| <<var_native_vlan_id>> | Native network VLAN ID (Example: 2) | |
| <<var_app_vlan_id>> | Example Application network VLAN ID (Example: 201) | |
| <<var_snmp_contact>> | Administrator e-mail address (Example: admin@flashstack.cisco.com) | |
| <<var_snmp_location>> | Cluster location string (Example: RTP9-B19) | |
| <<var_mds_A_mgmt_ip>> | Cisco MDS A management IP address (Example: 192.168.164.15) | |
| <<var_mds_A_hostname>> | Cisco MDS A hostname (Example: mds-9148s-a) | |
| <<var_mds_B_mgmt_ip>> | Cisco MDS B management IP address (Example: 192.168.164.16) | |
| <<var_mds_B_hostname>> | Cisco MDS B hostname (Example: mds-9148s-b) | |
| <<var_vsan_a_id>> | VSAN used for the A Fabric between the FlashArray/MDS/FI (Example: 101) | |
| <<var_vsan_b_id>> | VSAN used for the B Fabric between the FlashArray/MDS/FI (Example: 102) | |
| <<var_ucs_clustername>> | Cisco UCS Manager cluster host name (Example: ucs-6332) | |
| <<var_ucsa_mgmt_ip>> | Cisco UCS fabric interconnect (FI) A out-of-band management IP address (Example: 192.168.164.51) | |
| <<var_ucs_mgmt_vip>> | Cisco UCS fabric interconnect (FI) cluster out-of-band management IP address (Example: 192.168.164.50) | |
| <<var_ucsb_mgmt_ip>> | Cisco UCS FI B out-of-band management IP address (Example: 192.168.164.52) | |
| <<var_vm_host_infra_01_ip>> | VMware ESXi host 01 in-band management IP (Example: 10.1.164.21) | |
| <<var_vm_host_infra_02_ip>> | VMware ESXi host 02 in-band management IP (Example: 10.1.164.22) | |
| <<var_vm_host_infra_vmotion_01_ip>> | VMware ESXi host 01 vMotion IP (Example: 10.1.15.21) | |
| <<var_vm_host_infra_vmotion_02_ip>> | VMware ESXi host 02 vMotion IP (Example: 10.1.15.22) | |
| <<var_vmotion_subnet_mask>> | vMotion subnet mask (Example: 255.255.255.0) | |
| <<var_vcenter_server_ip>> | IP address of the vCenter Server (Example: 10.1.164.100) | |
This section details a cabling example for a FlashStack environment. To make connectivity clear in this example, the tables include both the local and remote port locations.
This document assumes that out-of-band management ports are plugged into an existing management infrastructure at the deployment site. The upstream network from the Nexus 93180YC-EX switches is out of scope of this document, with only the assumption that these switches will connect to the upstream switch or switches with a vPC.
Figure 2 shows the cabling configuration used in this FlashStack design.
Figure 2 FlashStack Cabling in the Validated Topology
Table 5 through Table 12 provide the connectivity information for the components in the figure above.
Table 5 Cisco Nexus 93180YC-EX-A Cabling Information
| Local Device | Local Port | Connection | Remote Device | Remote Port |
|---|---|---|---|---|
| Cisco Nexus 93180YC-EX A | Eth1/1 | 10GbE | Cisco Nexus 93180YC-EX B | Eth1/1 |
| | Eth1/2 | 10GbE | Cisco Nexus 93180YC-EX B | Eth1/2 |
| | Eth1/51 | 40GbE | Cisco UCS 6332-16UP FI A | Eth1/33 |
| | Eth1/52 | 40GbE | Cisco UCS 6332-16UP FI B | Eth1/33 |
| | Eth1/54 | 40GbE or 100GbE | Upstream Network Switch | Any |
| | MGMT0 | GbE | GbE management switch | Any |
Table 6 Cisco Nexus 93180YC-EX-B Cabling Information
| Local Device | Local Port | Connection | Remote Device | Remote Port |
|---|---|---|---|---|
| Cisco Nexus 93180YC-EX B | Eth1/1 | 10GbE | Cisco Nexus 93180YC-EX A | Eth1/1 |
| | Eth1/2 | 10GbE | Cisco Nexus 93180YC-EX A | Eth1/2 |
| | Eth1/51 | 40GbE | Cisco UCS 6332-16UP FI A | Eth1/34 |
| | Eth1/52 | 40GbE | Cisco UCS 6332-16UP FI B | Eth1/34 |
| | Eth1/54 | 40GbE or 100GbE | Upstream Network Switch | Any |
| | MGMT0 | GbE | GbE management switch | Any |
The ports Eth1/49-1/54 of the 93180YC-EX switches are ALE (Application Leaf Engine) uplink ports and do not support auto-negotiation. Devices connecting to these ports may need to have their speed forced to 40GbE on the interfaces at both ends. For the connections shown above going to the 6332-16UP FIs, BiDi (QSFP-40G-SR-BD) transceivers were used between the 93180YC-EX switches and the Fabric Interconnects to establish the 40Gb connection.
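If a connected device does not come up at the expected speed, the speed can be forced on the Nexus side per interface; a minimal sketch for the FI-facing uplinks used in this topology:
b19-93180-1(config)# interface Ethernet1/51-52
b19-93180-1(config-if-range)# speed 40000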
Table 7 Cisco UCS 6332-16UP FI A Cabling Information
| Local Device | Local Port | Connection | Remote Device | Remote Port |
|---|---|---|---|---|
| Cisco UCS 6332-16UP FI A | FC 1/1 | 16Gb FC | MDS 9148S A | FC 1/5 |
| | FC 1/2 | 16Gb FC | MDS 9148S A | FC 1/6 |
| | FC 1/3 | 16Gb FC | MDS 9148S A | FC 1/7 |
| | FC 1/4 | 16Gb FC | MDS 9148S A | FC 1/8 |
| | Eth1/17 | 40GbE | Cisco UCS Chassis 1 2304 FEX A | IOM 1/1 |
| | Eth1/18 | 40GbE | Cisco UCS Chassis 1 2304 FEX A | IOM 1/2 |
| | Eth1/33 | 40GbE | Cisco Nexus 93180YC-EX A | Eth1/51 |
| | Eth1/34 | 40GbE | Cisco Nexus 93180YC-EX B | Eth1/51 |
| | MGMT0 | GbE | GbE management switch | Any |
| | L1 | GbE | Cisco UCS 6332-16UP FI B | L1 |
| | L2 | GbE | Cisco UCS 6332-16UP FI B | L2 |
Table 8 Cisco UCS 6332-16UP FI B Cabling Information
| Local Device | Local Port | Connection | Remote Device | Remote Port |
|---|---|---|---|---|
| Cisco UCS 6332-16UP FI B | FC 1/1 | 16Gb FC | MDS 9148S B | FC 1/5 |
| | FC 1/2 | 16Gb FC | MDS 9148S B | FC 1/6 |
| | FC 1/3 | 16Gb FC | MDS 9148S B | FC 1/7 |
| | FC 1/4 | 16Gb FC | MDS 9148S B | FC 1/8 |
| | Eth1/17 | 40GbE | Cisco UCS Chassis 1 2304 FEX B | IOM 1/1 |
| | Eth1/18 | 40GbE | Cisco UCS Chassis 1 2304 FEX B | IOM 1/2 |
| | Eth1/33 | 40GbE | Cisco Nexus 93180YC-EX A | Eth1/52 |
| | Eth1/34 | 40GbE | Cisco Nexus 93180YC-EX B | Eth1/52 |
| | MGMT0 | GbE | GbE management switch | Any |
| | L1 | GbE | Cisco UCS 6332-16UP FI A | L1 |
| | L2 | GbE | Cisco UCS 6332-16UP FI A | L2 |
Table 9 Cisco MDS 9148S A Cabling Information
| Local Device | Local Port | Connection | Remote Device | Remote Port |
|---|---|---|---|---|
| Cisco MDS 9148S A | FC 1/1 | 16Gb FC | FlashArray//M70 Controller 1 | FC0 |
| | FC 1/2 | 16Gb FC | FlashArray//M70 Controller 2 | FC0 |
| | FC 1/3 | 16Gb FC | FlashArray//M70 Controller 1 | FC2 |
| | FC 1/4 | 16Gb FC | FlashArray//M70 Controller 2 | FC2 |
| | FC 1/5 | 16Gb FC | Cisco UCS 6332-16UP FI A | FC 1/1 |
| | FC 1/6 | 16Gb FC | Cisco UCS 6332-16UP FI A | FC 1/2 |
| | FC 1/7 | 16Gb FC | Cisco UCS 6332-16UP FI A | FC 1/3 |
| | FC 1/8 | 16Gb FC | Cisco UCS 6332-16UP FI A | FC 1/4 |
| | MGMT0 | GbE | GbE management switch | Any |
Table 10 Cisco MDS 9148S B Cabling Information
| Local Device | Local Port | Connection | Remote Device | Remote Port |
|---|---|---|---|---|
| Cisco MDS 9148S B | FC 1/1 | 16Gb FC | FlashArray//M70 Controller 1 | FC1 |
| | FC 1/2 | 16Gb FC | FlashArray//M70 Controller 2 | FC1 |
| | FC 1/3 | 16Gb FC | FlashArray//M70 Controller 1 | FC3 |
| | FC 1/4 | 16Gb FC | FlashArray//M70 Controller 2 | FC3 |
| | FC 1/5 | 16Gb FC | Cisco UCS 6332-16UP FI B | FC 1/1 |
| | FC 1/6 | 16Gb FC | Cisco UCS 6332-16UP FI B | FC 1/2 |
| | FC 1/7 | 16Gb FC | Cisco UCS 6332-16UP FI B | FC 1/3 |
| | FC 1/8 | 16Gb FC | Cisco UCS 6332-16UP FI B | FC 1/4 |
| | MGMT0 | GbE | GbE management switch | Any |
Table 11 Pure Storage FlashArray//M70 Controller 1 Cabling Information
| Local Device | Local Port | Connection | Remote Device | Remote Port |
|---|---|---|---|---|
| FlashArray//M70 Controller 1 | FC0 | 16Gb FC | Cisco MDS 9148S A | FC 1/1 |
| | FC1 | 16Gb FC | Cisco MDS 9148S B | FC 1/1 |
| | FC2 | 16Gb FC | Cisco MDS 9148S A | FC 1/3 |
| | FC3 | 16Gb FC | Cisco MDS 9148S B | FC 1/3 |
| | Eth0 | GbE | GbE management switch | Any |
Table 12 Pure Storage FlashArray//M70 Controller 2 Cabling Information
| Local Device | Local Port | Connection | Remote Device | Remote Port |
|---|---|---|---|---|
| FlashArray//M70 Controller 2 | FC0 | 16Gb FC | Cisco MDS 9148S A | FC 1/2 |
| | FC1 | 16Gb FC | Cisco MDS 9148S B | FC 1/2 |
| | FC2 | 16Gb FC | Cisco MDS 9148S A | FC 1/4 |
| | FC3 | 16Gb FC | Cisco MDS 9148S B | FC 1/4 |
| | Eth0 | GbE | GbE management switch | Any |
Figure 3 Cisco Nexus Configuration Workflow
Physical cabling should be completed by following the diagram and table references in the FlashStack Cabling section above.
The following procedures describe how to configure the Cisco Nexus switches for use in a base FlashStack environment. This procedure assumes the use of Nexus 93180YC-EX switches running 7.0(3)I4(2). Configuration on a differing model of Nexus 9000 series switch should be comparable, but may differ slightly with model and changes in NX-OS release. Because the Cisco Nexus 93180YC-EX switch and this NX-OS release were used in the validation of this FlashStack solution, the steps will reflect this model and release.
The following procedure includes the setup of NTP distribution on the In-Band Management VLAN. The interface-vlan feature and ntp commands are used to set this up. This procedure also assumes the default VRF will be used to route the In-Band Management VLAN.
The Cisco Nexus 93180YC-EX switch ships with the Application Centric Infrastructure (ACI) image and will need to be reinstalled with the NX-OS standalone release specified in this document. The NX-OS standalone software can be downloaded from software.cisco.com. Once downloaded, the image can be transferred to the switches via USB or SCP.
For an SCP transfer, the image will need to be accessible from a host reachable by the management interface connected to the switch. Log in as admin and configure an available IP address for the switch if it is not already on the network. Copy the image from the server it was placed on, and reload the switch.
(none)#
(none)# ifconfig eth0 inet <<var_nexus_A_mgmt_ip>> netmask <<var_oob_mgmt_mask>>
(none)# scp localadmin@192.168.164.155:/tmp/nxos* /bootflash
(none)# reload
This command will reload the chassis, Proceed (y/n)? [n]: y
During the reload, press Ctrl-C to interrupt the boot process and enter the loader prompt. From the loader prompt, boot the image that was copied over.
loader >
loader > boot nxos.7.0.3.I4.2.bin
Booting nxos.7.0.3.I4.2.bin
Trying diskboot
....
Set up the initial configuration for the Cisco Nexus A switch on <<var_nexus_A_hostname>> by stepping through the following dialog:
Abort Auto Provisioning and continue with normal setup ?(yes/no)[n]: y
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no) [y]:
Enter the password for "admin": ********
Confirm the password for "admin": ********
---- Basic System Configuration Dialog VDC: 1 ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
Please register Cisco Nexus9000 Family devices promptly with your
supplier. Failure to register may affect response times for initial
service calls. Nexus9000 devices must be registered to receive
entitled support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_nexus_A_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_nexus_A_mgmt_ip>>
Mgmt0 IPv4 netmask : <<var_oob_mgmt_mask>>
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : <<var_oob_gateway>>
Configure advanced IP options? (yes/no) [n]:
Enable the telnet service? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) [rsa]:
Number of rsa key bits <1024-2048> [1024]:
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : <<var_oob_ntp>>
Configure default interface layer (L3/L2) [L2]:
Configure default switchport interface state (shut/noshut) [noshut]: shut
Configure CoPP system profile (strict/moderate/lenient/dense) [strict]:
The following configuration will be applied:
password strength-check
switchname b19-93180-1
vrf context management
ip route 0.0.0.0/0 192.168.164.254
exit
no feature telnet
ssh key rsa 1024 force
feature ssh
ntp server 192.168.164.254
system default switchport
system default switchport shutdown
copp profile strict
interface mgmt0
ip address 192.168.164.13 255.255.255.0
no shutdown
Would you like to edit the configuration? (yes/no) [n]:
Use this configuration and save it? (yes/no) [y]:
Set up the initial configuration for the Cisco Nexus B switch on <<var_nexus_B_hostname>>, by running through the same steps followed in the above configuration, making the appropriate substitutions for <<var_nexus_B_hostname>> and <<var_nexus_B_mgmt_ip>>.
To enable IP switching features, run the following commands on each Cisco Nexus:
b19-93180-1&2 (config)# feature lacp
b19-93180-1&2 (config)# feature vpc
b19-93180-1&2 (config)# feature interface-vlan
The interface-vlan feature is required only if configuring an In-Band VLAN interface to redistribute NTP. Layer-3 routing is possible with Nexus switches after setting this feature, but is not covered in this architecture.
Additionally, configure the spanning tree and save the running configuration to start-up:
b19-93180-1&2 (config)# spanning-tree port type network default
b19-93180-1&2 (config)# spanning-tree port type edge bpduguard default
b19-93180-1&2 (config)# spanning-tree port type edge bpdufilter default
Run the following commands on both switches to set global configurations:
b19-93180-1&2 (config)# port-channel load-balance src-dst l4port
b19-93180-1&2 (config)# ip route 0.0.0.0/0 <<var_ib_gateway_ip>>
b19-93180-1&2 (config)# ntp server <<var_oob_ntp>> use-vrf management
Run the following commands on both switches to create VLANs:
b19-93180-1&2 (config)# vlan <<var_ib_mgmt_vlan_id>>
b19-93180-1&2 (config-vlan)# name IB-MGMT-VLAN
b19-93180-1&2 (config-vlan)# vlan <<var_native_vlan_id>>
b19-93180-1&2 (config-vlan)# name Native-VLAN
b19-93180-1&2 (config-vlan)# vlan <<var_vmotion_vlan_id>>
b19-93180-1&2 (config-vlan)# name vMotion-VLAN
b19-93180-1&2 (config-vlan)# vlan <<var_app_vlan_id>>
b19-93180-1&2 (config-vlan)# name VM-App1-VLAN
Continue adding VLANs as appropriate to the customer’s environment.
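For example, the second application VLAN from Table 2 would be added the same way (the <<var_app2_vlan_id>> variable shown here is illustrative, following the naming pattern of Table 4):
b19-93180-1&2 (config-vlan)# vlan <<var_app2_vlan_id>>
b19-93180-1&2 (config-vlan)# name VM-App2-VLAN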
To add individual port descriptions for troubleshooting activity and verification for switch A, enter the following commands from the global configuration mode:
b19-93180-1(config)# interface Vlan115
b19-93180-1(config-if)# description In-Band NTP Redistribution Interface VLAN 115
b19-93180-1(config-if)# interface port-channel 11
b19-93180-1(config-if)# description vPC peer-link
b19-93180-1(config-if)# interface port-channel 151
b19-93180-1(config-if)# description vPC UCS 6332-16UP-1 FI
b19-93180-1(config-if)# interface port-channel 152
b19-93180-1(config-if)# description vPC UCS 6332-16UP-2 FI
b19-93180-1(config-if)# interface port-channel 153
b19-93180-1(config-if)# description vPC Upstream Network Switch A
b19-93180-1(config-if)# interface port-channel 154
b19-93180-1(config-if)# description vPC Upstream Network Switch B
b19-93180-1(config-if)# interface Ethernet1/1
b19-93180-1(config-if)# description vPC peer-link connection to b19-93180-2 Ethernet1/1
b19-93180-1(config-if)# interface Ethernet1/2
b19-93180-1(config-if)# description vPC peer-link connection to b19-93180-2 Ethernet1/2
b19-93180-1(config-if)# interface Ethernet1/51
b19-93180-1(config-if)# description vPC 151 connection to UCS 6332-16UP-1 FI Ethernet1/33
b19-93180-1(config-if)# interface Ethernet1/52
b19-93180-1(config-if)# description vPC 152 connection to UCS 6332-16UP-2 FI Ethernet1/33
b19-93180-1(config-if)# interface Ethernet1/53
b19-93180-1(config-if)# description vPC 153 connection to Upstream Network Switch A
b19-93180-1(config-if)# interface Ethernet1/54
b19-93180-1(config-if)# description vPC 154 connection to Upstream Network Switch B
In these steps, the interface commands for the VLAN interface and Port Channel interfaces will create these interfaces if they do not already exist.
To add individual port descriptions for troubleshooting activity and verification for switch B, enter the following commands from the global configuration mode:
b19-93180-2(config)# interface Vlan115
b19-93180-2(config-if)# description In-Band NTP Redistribution Interface VLAN 115
b19-93180-2(config-if)# interface port-channel 11
b19-93180-2(config-if)# description vPC peer-link
b19-93180-2(config-if)# interface port-channel 151
b19-93180-2(config-if)# description vPC UCS 6332-16UP-1 FI
b19-93180-2(config-if)# interface port-channel 152
b19-93180-2(config-if)# description vPC UCS 6332-16UP-2 FI
b19-93180-2(config-if)# interface port-channel 153
b19-93180-2(config-if)# description vPC Upstream Network Switch A
b19-93180-2(config-if)# interface port-channel 154
b19-93180-2(config-if)# description vPC Upstream Network Switch B
b19-93180-2(config-if)# interface Ethernet1/1
b19-93180-2(config-if)# description vPC peer-link connection to b19-93180-1 Ethernet1/1
b19-93180-2(config-if)# interface Ethernet1/2
b19-93180-2(config-if)# description vPC peer-link connection to b19-93180-1 Ethernet1/2
b19-93180-2(config-if)# interface Ethernet1/51
b19-93180-2(config-if)# description vPC 151 connection to UCS 6332-16UP-1 FI Ethernet1/34
b19-93180-2(config-if)# interface Ethernet1/52
b19-93180-2(config-if)# description vPC 152 connection to UCS 6332-16UP-2 FI Ethernet1/34
b19-93180-2(config-if)# interface Ethernet1/53
b19-93180-2(config-if)# description vPC 153 connection to Upstream Network Switch A
b19-93180-2(config-if)# interface Ethernet1/54
b19-93180-2(config-if)# description vPC 154 connection to Upstream Network Switch B
Optional VLAN interfaces are created on each Nexus switch to redistribute NTP to In-Band networks from their Out-of-Band network source. 93180YC-EX A is as follows:
b19-93180-1(config)# ntp source <<var_nexus_A_ib_ip>>
b19-93180-1(config)# ntp master 3
b19-93180-1(config)# ip route 0.0.0.0/0 <<var_ib_gateway_ip>>
b19-93180-1(config)# interface Vlan115
b19-93180-1(config-if)# no shutdown
b19-93180-1(config-if)# no ip redirects
b19-93180-1(config-if)# ip address <<var_nexus_A_ib_ip>>/<<var_ib_mgmt_vlan_netmask_length>>
b19-93180-1(config-if)# no ipv6 redirects
93180YC-EX B is as follows:
b19-93180-2(config)# ntp source <<var_nexus_B_ib_ip>>
b19-93180-2(config)# ntp master 3
b19-93180-2(config)# ip route 0.0.0.0/0 <<var_ib_gateway_ip>>
b19-93180-2(config)# interface Vlan115
b19-93180-2(config-if)# no shutdown
b19-93180-2(config-if)# no ip redirects
b19-93180-2(config-if)# ip address <<var_nexus_B_ib_ip>>/<<var_ib_mgmt_vlan_netmask_length>>
b19-93180-2(config-if)# no ipv6 redirects
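With both VLAN interfaces up, NTP synchronization and the in-band interfaces can be spot-checked from either switch; a quick verification sketch using standard NX-OS show commands:
b19-93180-1# show ntp peer-status
b19-93180-1# show ip interface brief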
The vPC domain will be assigned a unique number from 1-1000 and will handle the vPC settings specified within the switches. To set the vPC domain configuration on 93180YC-EX A, run the following commands:
b19-93180-1(config)# vpc domain 10
b19-93180-1(config-vpc-domain)# peer-switch
b19-93180-1(config-vpc-domain)# role priority 10
b19-93180-1(config-vpc-domain)# peer-keepalive destination <<var_nexus_B_mgmt_ip>> source <<var_nexus_A_mgmt_ip>>
b19-93180-1(config-vpc-domain)# delay restore 150
b19-93180-1(config-vpc-domain)# peer-gateway
b19-93180-1(config-vpc-domain)# auto-recovery
b19-93180-1(config-vpc-domain)# ip arp synchronize
On the 93180YC-EX B switch run these slightly differing commands, noting that role priority and peer-keepalive commands will differ from what was previously set:
b19-93180-2(config)# vpc domain 10
b19-93180-2(config-vpc-domain)# peer-switch
b19-93180-2(config-vpc-domain)# role priority 20
b19-93180-2(config-vpc-domain)# peer-keepalive destination <<var_nexus_A_mgmt_ip>> source <<var_nexus_B_mgmt_ip>>
b19-93180-2(config-vpc-domain)# delay restore 150
b19-93180-2(config-vpc-domain)# peer-gateway
b19-93180-2(config-vpc-domain)# auto-recovery
b19-93180-2(config-vpc-domain)# ip arp synchronize
On each switch, configure the Port Channel member interfaces that will be part of the vPC Peer Link and configure the vPC Peer Link:
b19-93180-1&2 (config)# int eth 1/1-2
b19-93180-1&2 (config-if-range)# channel-group 11 mode active
b19-93180-1&2 (config-if-range)# no shut
b19-93180-1&2 (config-if-range)# int port-channel 11
b19-93180-1&2 (config-if)# switchport mode trunk
b19-93180-1&2 (config-if)# switchport trunk native vlan 2
b19-93180-1&2 (config-if)# switchport trunk allowed vlan 115,200-203
b19-93180-1&2 (config-if)# vpc peer-link
On each switch, configure the Port Channel member interfaces and the vPC Port Channels to the Cisco UCS Fabric Interconnect and the upstream network switches:
b19-93180-1&2 (config-if)# int ethernet 1/51
b19-93180-1&2 (config-if)# channel-group 151 mode active
b19-93180-1&2 (config-if)# no shut
b19-93180-1&2 (config-if)# int port-channel 151
b19-93180-1&2 (config-if)# switchport mode trunk
b19-93180-1&2 (config-if)# switchport trunk native vlan 2
b19-93180-1&2 (config-if)# switchport trunk allowed vlan 115,200-203
b19-93180-1&2 (config-if)# spanning-tree port type edge trunk
b19-93180-1&2 (config-if)# mtu 9216
b19-93180-1&2 (config-if)# load-interval counter 3 60
b19-93180-1&2 (config-if)# vpc 151
b19-93180-1&2 (config-if)# int ethernet 1/52
b19-93180-1&2 (config-if)# channel-group 152 mode active
b19-93180-1&2 (config-if)# no shut
b19-93180-1&2 (config-if)# int port-channel 152
b19-93180-1&2 (config-if)# switchport mode trunk
b19-93180-1&2 (config-if)# switchport trunk native vlan 2
b19-93180-1&2 (config-if)# switchport trunk allowed vlan 115,200-203
b19-93180-1&2 (config-if)# spanning-tree port type edge trunk
b19-93180-1&2 (config-if)# mtu 9216
b19-93180-1&2 (config-if)# load-interval counter 3 60
b19-93180-1&2 (config-if)# vpc 152
b19-93180-1&2 (config-if)# interface Ethernet1/53
b19-93180-1&2 (config-if)# channel-group 153 mode active
b19-93180-1&2 (config-if)# no shut
b19-93180-1&2 (config-if)# int port-channel 153
b19-93180-1&2 (config-if)# switchport mode trunk
b19-93180-1&2 (config-if)# switchport trunk native vlan 2
b19-93180-1&2 (config-if)# switchport trunk allowed vlan 115
b19-93180-1&2 (config-if)# vpc 153
b19-93180-1&2 (config-if)# interface Ethernet1/54
b19-93180-1&2 (config-if)# channel-group 154 mode active
b19-93180-1&2 (config-if)# no shut
b19-93180-1&2 (config-if)# int port-channel 154
b19-93180-1&2 (config-if)# switchport mode trunk
b19-93180-1&2 (config-if)# switchport trunk native vlan 2
b19-93180-1&2 (config-if)# switchport trunk allowed vlan 115
b19-93180-1&2 (config-if)# vpc 154
*** Save all configuration to this point on both Nexus Switches ***
b19-93180-1&2 (config)# copy running-config startup-config
vPC numbers have been chosen to correspond with the module and first port within a Port Channel, for example, having a first member of Ethernet 1/54 results in a vPC/Port Channel number of 154. This is optional, but it can help in identifying port to Port Channel memberships.
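Before moving on, the peer link and vPC states can be confirmed from either switch; a minimal verification sketch:
b19-93180-1# show vpc brief
b19-93180-1# show port-channel summary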
The following information should be gathered to enable the installation and configuration of the FlashArray. An official representative of Pure Storage will help rack and configure the new installation of the FlashArray.
Table 13 FlashArray Setup Information
| Global Array Settings | Customer Deployed Value |
|---|---|
| Array Name (Hostname for Pure Array): | |
| Virtual IP Address for Management: | |
| Physical IP Address for Management on Controller 0 (CT0): | |
| Physical IP Address for Management on Controller 1 (CT1): | |
| Netmask: | |
| Gateway IP Address: | |
| DNS Server IP Address(es): | |
| DNS Domain Suffix (Optional): | |
| NTP Server IP Address or FQDN: | |
| Email Relay Server (SMTP Gateway IP address or FQDN) (Optional): | |
| Email Domain Name: | |
| Alert Email Recipients Address(es) (Optional): | |
| HTTP Proxy Server and Port (For Pure1) (Optional): | |
| Time Zone: | |
When the FlashArray has completed initial configuration, it is important to configure the Cloud Assist phone-home connection to provide the best proactive support experience possible. Furthermore, this will enable the analytics functionality provided by Pure1.
The Support Connectivity sub-view allows you to view and manage the Purity remote assist, phone home, and log features.
The Remote Assist section displays the remote assist status as "Connected" or "Disconnected". By default, remote assist is disconnected. A connected remote assist status means that a remote assist session has been opened, allowing Pure Storage Support to connect to the array. Disconnect the remote assist session to close the session.
The Phone Home section manages the phone home facility. The phone home facility provides a secure direct link between the array and the Pure Storage Technical Support web site. The link is used to transmit log contents and alert messages to the Pure Storage Support team so that when diagnosis or remedial action is required, complete recent history about array performance and significant events is available.
By default, the phone home facility is enabled. If the phone home facility is enabled to send information automatically, Purity transmits log and alert information directly to Pure Storage Support via a secure network connection. Log contents are transmitted hourly and stored at the support web site, enabling detection of array performance and error rate trends. Alerts are reported immediately when they occur so that timely action can be taken.
Phone home logs can also be sent to Pure Storage Technical support on demand, with options including Today's Logs, Yesterday's Logs, or All Log History.
The Support Logs section allows you to download the Purity log contents of the specified controller to the current administrative workstation. Purity continuously logs a variety of array activities, including performance summaries, hardware and operating status reports, and administrative actions.
The Alerts sub-view is used to manage the list of addresses to which Purity delivers alert notifications, and the attributes of alert message delivery. The Alert Recipients section displays a list of email addresses that are designated to receive Purity alert messages. You can designate up to 19 alert recipients in addition to the built-in flasharray-alerts@purestorage.com address, which cannot be deleted, for a total of 20.
The Relay Host section displays the hostname or IP address of an SMTP relay host, if one is configured for the array. If you specify a relay host, Purity routes the email messages via the relay (mail forwarding) address rather than sending them directly to the alert recipient addresses.
In the Sender Domain section, the sender domain determines how Purity logs are parsed and treated by Pure Storage Support and Escalations. By default, the sender domain is set to the domain name please-configure.me.
It is crucial that you set the sender domain to the correct domain name. If the array is not a Pure Storage test array, set the sender domain to the actual customer domain name. For example, mycompany.com.
The email address that Purity uses to send alert messages includes the sender domain name and is composed of the following components:
<Array_Name>-<Controller_Name>@<Sender_Domain_Name>.com
To add an alert recipient, complete the following steps:
1. Select System > Configuration > Alerts.
2. In the Alert Recipients section, click the menu icon and select Add Alert Recipient. The Create Alert User dialog box appears.
3. In the email field, enter the email address of the alert recipient.
4. Click Save.
To configure the DNS server IP addresses, complete the following steps:
1. Select System > Configuration > Networking.
2. In the DNS section, hover over the domain name and click the pencil icon. The Edit DNS dialog box appears.
3. Complete the following fields:
a. Domain: Specify the domain suffix to be appended by the array when doing DNS lookups.
b. DNS#: Specify up to three DNS server IP addresses for Purity to use to resolve hostnames to IP addresses. Enter one IP address in each DNS# field. Purity queries the DNS servers in the order that the IP addresses are listed.
4. Click Save.
The Directory Service sub-view manages the integration of FlashArrays with an existing directory service. When the Directory Service sub-view is configured and enabled, the FlashArray leverages a directory service to perform user account and permission level searches. Configuring directory services is OPTIONAL.
The FlashArray is delivered with a single local user, named pureuser, with array-wide (Array Admin) permissions.
To support multiple FlashArray users, integrate the array with a directory service, such as Microsoft Active Directory or OpenLDAP.
Role-based access control is achieved by configuring groups in the directory that correspond to the following permission groups (roles) on the array:
· Read Only Group. Read Only users have read-only privileges to run commands that convey the state of the array. Read Only users cannot alter the state of the array.
· Storage Admin Group. Storage Admin users have all the privileges of Read Only users, plus the ability to run commands related to storage operations, such as administering volumes, hosts, and host groups. Storage Admin users cannot perform operations that deal with global and system configurations.
· Array Admin Group. Array Admin users have all the privileges of Storage Admin users, plus the ability to perform array-wide changes. In other words, Array Admin users can perform all FlashArray operations.
When a user connects to the FlashArray with a username other than pureuser, the array confirms the user's identity from the directory service. The response from the directory service includes the user's group, which Purity maps to a role on the array, granting access accordingly.
To configure the directory service settings, complete the following steps:
1. Select System > Configuration > Directory Service.
2. Configure the Directory Service fields:
a. Enabled: Select the check box to leverage the directory service to perform user account and permission level searches.
b. URI: Enter the comma-separated list of up to 30 URIs of the directory servers. The URI must include a URL scheme (ldap, or ldaps for LDAP over SSL), the hostname, and the domain. You can optionally specify a port. For example, ldap://ad.company.com configures the directory service with the hostname "ad" in the domain "company.com" while specifying the unencrypted LDAP protocol.
c. Base DN: Enter the base distinguished name (DN) of the directory service. The Base DN is built from the domain and should consist only of domain components (DCs). For example, for ldap://ad.storage.company.com, the Base DN would be: “DC=storage,DC=company,DC=com”
d. Bind User: Username used to bind to and query the directory. For Active Directory, enter the username - often referred to as sAMAccountName or User Logon Name - of the account that is used to perform directory lookups. The username cannot contain the characters " [ ] : ; | = + * ? < > / \, and cannot exceed 20 characters in length. For OpenLDAP, enter the full DN of the user. For example, "CN=John,OU=Users,DC=example,DC=com".
e. Bind Password: Enter the password for the bind user account.
f. Group Base: Enter the organizational unit (OU) to the configured groups in the directory tree. The Group Base consists of OUs that, when combined with the base DN attribute and the configured group CNs, complete the full Distinguished Name of each group. The group base should specify "OU=" for each OU and multiple OUs should be separated by commas. The order of OUs should get larger in scope from left to right. In the following example, SANManagers contains the sub-organizational unit PureGroups: "OU=PureGroups,OU=SANManagers".
g. Array Admin Group: Common Name (CN) of the directory service group containing administrators with full privileges to manage the FlashArray. Array Admin Group administrators have the same privileges as pureuser. The name should be the Common Name of the group without the "CN=" specifier. If the configured groups are not in the same OU, also specify the OU. For example, "pureadmins,OU=PureStorage", where pureadmins is the common name of the directory service group.
h. Storage Admin Group: Common Name (CN) of the configured directory service group containing administrators with storage related privileges on the FlashArray. The name should be the Common Name of the group without the "CN=" specifier. If the configured groups are not in the same OU, also specify the OU. For example, "pureusers,OU=PureStorage", where pureusers is the common name of the directory service group.
i. Read Only Group: Common Name (CN) of the configured directory service group containing users with read-only privileges on the FlashArray. The name should be the Common Name of the group without the "CN=" specifier. If the configured groups are not in the same OU, also specify the OU. For example, "purereadonly,OU=PureStorage", where purereadonly is the common name of the directory service group.
j. Check Peer: Select the check box to validate the authenticity of the directory servers using the CA Certificate. If you enable Check Peer, you must provide a CA Certificate.
k. CA Certificate: Enter the certificate of the issuing certificate authority. Only one certificate can be configured at a time, so the same certificate authority should be the issuer of all directory server certificates. The certificate must be PEM formatted (Base64 encoded) and include the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines. The certificate cannot exceed 3000 characters in total length.
3. Click Save.
4. Click Test to test the configuration settings. The LDAP Test Results pop-up window appears. Green squares represent successful checks. Red squares represent failed checks.
Purity creates a self-signed certificate and private key when you start the system for the first time. The SSL Certificate sub-view allows you to view and change certificate attributes, create a new self-signed certificate, construct certificate signing requests, import certificates and private keys, and export certificates.
Creating a self-signed certificate replaces the current certificate. When you create a self-signed certificate, include any attribute changes, specify the validity period of the new certificate, and optionally generate a new private key.
When you create the self-signed certificate, you can generate a private key and specify a different key size. If you do not generate a private key, the new certificate uses the existing key.
You can change the validity period of the new self-signed certificate. By default, self-signed certificates are valid for 3650 days.
Certificate authorities (CA) are third party entities outside the organization that issue certificates. To obtain a CA certificate, you must first construct a certificate signing request (CSR) on the array.
The CSR represents a block of encrypted data specific to your organization. You can change the certificate attributes when you construct the CSR; otherwise, Purity will reuse the attributes of the current certificate (self-signed or imported) to construct the new one. Note that the certificate attribute changes will only be visible after you import the signed certificate from the CA.
Send the CSR to a certificate authority for signing. The certificate authority returns the SSL certificate for you to import. Verify that the signed certificate is PEM formatted (Base64 encoded), includes the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" lines, and does not exceed 3000 characters in total length. When you import the certificate, also import the intermediate certificate if it is not bundled with the CA certificate.
If the certificate is signed with the CSR that was constructed on the current array and you did not change the private key, you do not need to import the key. However, if the CSR was not constructed on the current array or if the private key has changed since you constructed the CSR, you must import the private key. If the private key is encrypted, also specify the passphrase.
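Before importing, the signed certificate returned by the CA can be sanity-checked from any administrative workstation; a generic OpenSSL check (not a Purity command, and the file name is illustrative):
openssl x509 -in flasharray-signed.pem -text -noout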
This section provides detailed instructions for the configuration of the Cisco MDS 9148S Multilayer Fabric Switches used in this FlashStack solution. Some changes may be appropriate for a customer’s environment, but care should be taken when stepping outside of these instructions as it may lead to an improper configuration.
Figure 4 Cisco 9148S Multilayer Fabric Switch Configuration Workflow
Physical cabling should be completed by following the diagram and table references in the FlashStack Cabling section above.
Set up the initial configuration for the Cisco MDS A switch on <<var_mds_A_hostname>> by stepping through the following dialog:
Abort Auto Provisioning and continue with normal setup ?(yes/no)[n]: y
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no) [y]:
Enter the password for "admin":
Confirm the password for "admin":
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
Please register Cisco MDS 9000 Family devices promptly with your
supplier. Failure to register may affect response times for initial
service calls. MDS devices must be registered to receive entitled
support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : <<var_mds_A_hostname>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]:
Mgmt0 IPv4 address : <<var_mds_A_mgmt_ip>>
Mgmt0 IPv4 netmask : <<var_oob_mgmt_mask>>
Configure the default gateway? (yes/no) [y]:
IPv4 address of the default gateway : <<var_oob_gateway>>
Configure advanced IP options? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]:
Type of ssh key you would like to generate (dsa/rsa) [rsa]:
Number of rsa key bits <1024-2048> [1024]: 2048
Enable the telnet service? (yes/no) [n]:
Configure congestion/no_credit drop for fc interfaces? (yes/no) [y]:
Enter the type of drop to configure congestion/no_credit drop? (con/no) [c]:
Enter milliseconds in multiples of 10 for congestion-drop for port mode F
in range (<100-500>/default), where default is 500. [d]:
Congestion-drop for port mode E must be greater than or equal to
Congestion-drop for port mode F. Hence, Congestion drop for port
mode E will be set as default.
Enable the http-server? (yes/no) [y]:
Configure clock? (yes/no) [n]:
Configure timezone? (yes/no) [n]: y
Enter timezone config [PST/MST/CST/EST] :EST
Enter Hrs offset from UTC [-23:+23] :-5
Enter Minutes offset from UTC [0-59] :0
Configure summertime? (yes/no) [n]:
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : 192.168.164.254
Configure default switchport interface state (shut/noshut) [shut]:
Configure default switchport trunk mode (on/off/auto) [on]:
Configure default switchport port mode F (yes/no) [n]:
Configure default zone policy (permit/deny) [deny]:
Enable full zoneset distribution? (yes/no) [n]:
Configure default zone mode (basic/enhanced) [basic]:
The following configuration will be applied:
password strength-check
switchname mds-9148s-a
interface mgmt0
ip address 192.168.164.15 255.255.255.0
no shutdown
ip default-gateway 192.168.164.254
ssh key rsa 2048 force
feature ssh
no feature telnet
system timeout congestion-drop default mode F
system timeout congestion-drop default mode E
feature http-server
clock timezone EST -5 0
ntp server 192.168.164.254
system default switchport shutdown
system default switchport trunk mode on
no system default zone default-zone permit
no system default zone distribute full
no system default zone mode enhanced
Would you like to edit the configuration? (yes/no) [n]:
Use this configuration and save it? (yes/no) [y]:
Set up the initial configuration for the Cisco MDS B switch on <<var_mds_B_hostname>>, by running through the same steps followed in the above configuration, making the appropriate substitutions for <<var_mds_B_hostname>> and <<var_mds_B_mgmt_ip>>.
This document assumes the use of Cisco NX-OS 7.3(0)DY(1). To upgrade the Cisco MDS 9148S software to version 7.3(0)DY(1), refer to the Cisco MDS 9000 NX-OS Software Upgrade and Downgrade Guide, Release 7.3(x).
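As a sketch of that upgrade, assuming the images have been downloaded from software.cisco.com (the SCP host and image file names here are illustrative; confirm the correct images and procedure in the upgrade guide before installing):
mds-9148s-a# copy scp://localadmin@192.168.164.155/tmp/m9100-s5ek9-kickstart-mz.7.3.0.DY.1.bin bootflash:
mds-9148s-a# copy scp://localadmin@192.168.164.155/tmp/m9100-s5ek9-mz.7.3.0.DY.1.bin bootflash:
mds-9148s-a# install all kickstart bootflash:m9100-s5ek9-kickstart-mz.7.3.0.DY.1.bin system bootflash:m9100-s5ek9-mz.7.3.0.DY.1.bin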
On each MDS 9148S switch, enable these features:
mds-9148s-a&b(config)# feature npiv
mds-9148s-a&b(config)# feature fport-channel-trunk
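Both features should report enabled before proceeding; a quick check:
mds-9148s-a&b# show feature | include npiv
mds-9148s-a&b# show feature | include fport-channel-trunk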
On MDS 9148S A create a Port Channel that will uplink to the Cisco UCS Fabric Interconnect:
mds-9148s-a(config)# interface port-channel 1
On MDS 9148S B create a Port Channel that will uplink to the Cisco UCS Fabric Interconnect:
mds-9148s-b(config)# interface port-channel 2
On MDS 9148S A create the VSAN that will be used for connectivity to the Cisco UCS Fabric Interconnect and the Pure Storage FlashArray. Assign this VSAN to the interfaces that will connect to the Pure Storage FlashArray, as well as to the interfaces, and the Port Channel they form, that connect to the Cisco UCS Fabric Interconnect:
vsan database
vsan <<var_vsan_a_id>>
vsan <<var_vsan_a_id>> interface fc1/1-4
vsan <<var_vsan_a_id>> interface po1
interface fc1/1-8
no shut
Repeat these commands on MDS 9148S B, using the Fabric B VSAN ID and Port Channel 2:
vsan database
vsan <<var_vsan_b_id>>
vsan <<var_vsan_b_id>> interface fc1/1-4
vsan <<var_vsan_b_id>> interface po2
interface fc1/1-8
no shut
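To confirm the VSAN assignments before configuring the Port Channels, the membership can be displayed on each switch; a minimal verification, assuming the example interface numbering:
mds-9148s-a# show vsan membership
The FlashArray-facing interfaces and the Port Channel should appear under the fabric VSAN rather than the default VSAN 1.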
Configure the MDS 9148S A Port Channel and add the interfaces connecting into the Cisco UCS Fabric Interconnect into it:
interface port-channel 1
channel mode active
switchport rate-mode dedicated
interface fc1/5-8
port-license acquire
channel-group 1 force
no shutdown
Repeat these commands on MDS 9148S B, using the Fabric B Port Channel:
interface port-channel 2
channel mode active
switchport rate-mode dedicated
interface fc1/5-8
port-license acquire
channel-group 2 force
no shutdown
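At this point the Port Channel membership can be reviewed on each MDS; a quick check, assuming the example member ports fc1/5-8:
mds-9148s-a# show port-channel database
mds-9148s-a# show interface fc1/5-8 brief
The member interfaces will not come fully up until the matching FC Port Channels are created on the Cisco UCS Fabric Interconnects later in this document.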
This section provides detailed instructions for the configuration of the Cisco UCS 6332-16UP Fabric Interconnects used in this FlashStack solution. As with the Nexus and MDS switches covered beforehand, some changes may be appropriate for a customer's environment, but care should be taken when stepping outside of these instructions, as deviations may lead to an improper configuration.
Figure 5 Cisco UCS Configuration Workflow
Physical cabling should be completed by following the diagram and table references in the FlashStack Cabling section earlier in this document.
The initial configuration dialog for the UCS 6332-16UP Fabric Interconnects provides the primary information to the first fabric interconnect, with the second taking on most settings after joining the cluster.
To start the configuration of Fabric Interconnect A, connect to the console of the fabric interconnect and step through the Basic System Configuration Dialog:
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of
the system. Only minimal configuration including IP connectivity to
the Fabric interconnect and its clustering mode is performed through these steps.
Type Ctrl-C at any time to abort configuration and reboot system.
To back track or make modifications to already entered values,
complete input till end of section and answer no when prompted
to apply configuration.
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enforce strong password? (y/n) [y]:
Enter the password for "admin": ********
Confirm the password for "admin": ********
Is this Fabric interconnect part of a cluster(select 'no' for standalone)? (yes/no) [n]: yes
Enter the switch fabric (A/B) []: A
Enter the system name: <<var_ucs_6332_clustername>>
Physical Switch Mgmt0 IP address : <<var_ucsa_mgmt_ip>>
Physical Switch Mgmt0 IPv4 netmask : <<var_oob_mgmt_mask>>
IPv4 address of the default gateway : <<var_oob_gateway>>
Cluster IPv4 address : <<var_ucs_mgmt_vip>>
Configure the DNS Server IP address? (yes/no) [n]: y
DNS IP address : <<var_nameserver_ntp>>
Configure the default domain name? (yes/no) [n]: y
Default domain name : <<var_dns_domain_name>>
Join centralized management environment (UCS Central)? (yes/no) [n]:
Following configurations will be applied:
Switch Fabric=A
System Name=ucs-6332-16up-a
Enforced Strong Password=yes
Physical Switch Mgmt0 IP Address=192.168.164.51
Physical Switch Mgmt0 IP Netmask=255.255.255.0
Default Gateway=192.168.164.254
Ipv6 value=0
DNS Server=10.1.164.9
Domain Name=earthquakes.cisco.com
Cluster Enabled=yes
Cluster IP Address=192.168.164.50
NOTE: Cluster IP will be configured only after both Fabric Interconnects are initialized.
UCSM will be functional only after peer FI is configured in clustering mode.
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
Configuration file - Ok
Continue the configuration on the console of the Fabric Interconnect B:
Enter the configuration method. (console/gui) [console] ?
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y
Enter the admin password of the peer Fabric interconnect:
Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect Mgmt0 IPv4 Address: 192.168.164.51
Peer Fabric interconnect Mgmt0 IPv4 Netmask: 255.255.255.0
Cluster IPv4 address : 192.168.164.50
Peer FI is IPv4 Cluster enabled. Please Provide Local Fabric Interconnect Mgmt0 IPv4 Address
Physical Switch Mgmt0 IP address : 192.168.164.52
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
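Once both Fabric Interconnects have applied their configuration, the cluster state can be checked from the CLI of either one; a minimal verification using the local-mgmt context:
ucs-6332-16up-a# connect local-mgmt
ucs-6332-16up-a(local-mgmt)# show cluster extended-state
Both fabric interconnects should show UP; the cluster will not report HA READY until a chassis providing shared storage has been discovered.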
To log in to the Cisco Unified Computing System (UCS) environment, complete the following steps:
1. Open a web browser and navigate to the Cisco UCS fabric interconnect cluster address.
2. Click the Launch UCS Manager link within the HTML section, or select the equivalent link below the HTML section to download the Cisco UCS Manager software.
3. If prompted to accept security certificates, accept as necessary.
4. When the UCS Manager login is prompted, enter admin as the user name and enter the administrative password.
5. Click Login to log in to Cisco UCS Manager.
This document assumes the use of Cisco UCS 3.1(2b). To upgrade the Cisco UCS Manager software and the Cisco UCS Fabric Interconnect software to version 3.1(2b), refer to Cisco UCS Manager Install and Upgrade Guides.
During the first connection to the Cisco UCS Manager GUI, a pop-up window will appear allowing Anonymous Reporting to be configured, which sends usage data to Cisco to help with future development. To configure Anonymous Reporting, complete the following step:
1. In the Anonymous Reporting window, select whether to send anonymous data to Cisco for improving future products:
To enable or disable Anonymous Reporting at a later date, go to Admin -> Communication Management -> Call Home in Cisco UCS Manager; the Anonymous Reporting tab is on the far right.
To synchronize the Cisco UCS environment to the NTP server, complete the following steps:
1. In Cisco UCS Manager, click the Admin tab in the navigation pane.
2. Select All > Timezone Management.
3. In the Properties pane, select the appropriate time zone in the Timezone menu.
4. Click Save Changes, and then click OK.
5. Click Add NTP Server.
6. Enter <<var_oob_ntp>> and click OK.
7. Click OK.
Setting the discovery policy simplifies the addition of B-Series Cisco UCS chassis. To modify the chassis discovery policy, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane, and select Equipment from the list on the left.
2. In the right pane, click the Policies tab.
3. Under Global Policies, set the Chassis/FEX Discovery Policy to match the number of uplink ports that are cabled between the chassis or fabric extenders (FEXes) and the fabric interconnects.
4. Set the Link Grouping Preference to Port Channel.
5. Leave other settings alone or change if appropriate to your environment.
6. Click Save Changes.
7. Click OK.
To enable server and uplink ports, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand Ethernet Ports.
4. Select the ports that are connected to the chassis, right-click them, and select “Configure as Server Port.”
5. Click Yes to confirm server ports and click OK.
6. Verify that the ports connected to the chassis are now configured as server ports.
7. Select ports 39 and 40 that are connected to the Cisco Nexus switches, right-click them, and select Configure as Uplink Port.
The last 6 ports of the UCS 6332 and UCS 6332-16UP FIs will only work with optical based QSFP transceivers and AOC cables, so they can be better utilized as uplinks to upstream resources that might be optical only.
8. Click Yes to confirm uplink ports and click OK.
9. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate) > Fixed Module.
10. Expand Ethernet Ports.
11. Select the ports that are connected to the chassis, right-click them and select Configure as Server Port.
12. Click Yes to confirm server ports and click OK.
13. Select ports 39 and 40 that are connected to the Cisco Nexus switches, right-click them, and select Configure as Uplink Port.
14. Click Yes to confirm the uplink ports and click OK.
To acknowledge all Cisco UCS chassis, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Expand Chassis and select each chassis that is listed.
3. Right-click each chassis and select Acknowledge Chassis.
4. Click Yes and then click OK to complete acknowledging the chassis.
To configure the necessary MAC address pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root.
In this procedure, two MAC address pools are created, one for each switching fabric.
3. Right-click MAC Pools under the root organization.
4. Select Create MAC Pool to create the MAC address pool.
5. Enter MAC_Pool_A as the name of the MAC pool.
6. Optional: Enter a description for the MAC pool.
7. Select Sequential as the option for Assignment Order.
8. Click Next.
9. Click Add.
10. Specify a starting MAC address.
For Cisco UCS deployments, the recommendation is to place 0A in the next-to-last octet of the starting MAC address to identify all of the MAC addresses as fabric A addresses. In our example, we have carried forward the practice of also embedding the extra building, floor, and Cisco UCS domain number information, giving us 00:25:B5:91:1A:00 as our first MAC address.
11. Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.
12. Click OK.
13. Click Finish.
14. In the confirmation message, click OK.
15. Right-click MAC Pools under the root organization.
16. Select Create MAC Pool to create the MAC address pool.
17. Enter MAC_Pool_B as the name of the MAC pool.
18. Optional: Enter a description for the MAC pool.
19. Click Next.
20. Click Add.
21. Specify a starting MAC address.
For Cisco UCS deployments, it is recommended to place 0B in the next-to-last octet of the starting MAC address to identify all of the MAC addresses in this pool as fabric B addresses. Once again, our example carries forward the practice of embedding the extra building, floor, and Cisco UCS domain number information, giving us 00:25:B5:91:1B:00 as our first MAC address.
22. Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.
23. Click OK.
24. Click Finish.
25. In the confirmation message, click OK.
To configure the necessary universally unique identifier (UUID) suffix pool for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Pools > root.
3. Right-click UUID Suffix Pools.
4. Select Create UUID Suffix Pool.
5. Enter UUID_Pool as the name of the UUID suffix pool.
6. Optional: Enter a description for the UUID suffix pool.
7. Keep the prefix at the derived option.
8. Select Sequential for the Assignment Order.
9. Click Next.
10. Click Add to add a block of UUIDs.
11. Keep the From: field at the default setting.
12. Specify a size for the UUID block that is sufficient to support the available blade or server resources.
13. Click OK.
14. Click Finish.
15. Click OK.
To configure the necessary server pool for the Cisco UCS environment, complete the following steps:
Consider creating unique server pools to achieve the granularity that is required in your environment.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Pools > root.
3. Right-click Server Pools.
4. Select Create Server Pool.
5. Enter Infra_Pool as the name of the server pool.
6. Optional: Enter a description for the server pool.
7. Click Next.
8. Select two (or more) servers to be used for the VMware management cluster and click >> to add them to the Infra_Pool server pool.
9. Click Finish.
10. Click OK.
To create a block of IP addresses for in-band server Keyboard, Video, Mouse (KVM) access in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root > IP Pools.
3. Right-click IP Pool ext-mgmt and select Create Block of IPv4 Addresses.
4. Enter the starting IP address of the block and the number of IP addresses required, and the subnet and gateway information.
5. Click OK to create the block of IPs.
6. Click OK.
To configure the necessary WWNN pool for the Cisco UCS environment, complete the following steps on Cisco UCS Manager.
1. Select the SAN tab on the left.
2. Select Pools > root.
3. Right-click WWNN Pools under the root organization.
4. Select Create WWNN Pool to create the WWNN pool.
5. Enter WWNN_Pool for the name of the WWNN pool.
6. Optional: Enter a description for the WWNN pool.
7. Select Sequential for Assignment Order.
8. Click Next.
9. Click Add.
10. Modify the From field as necessary for the UCS Environment.
Modifications of the WWN block, as well as the WWPN and MAC addresses, can convey identifying information for the Cisco UCS domain. In our example, the sixth octet of the From field was changed from 00 to 01 to identify this as our first Cisco UCS domain.
Also, when multiple Cisco UCS domains sit in adjacency, it is important that the WWNN, WWPN, and MAC blocks hold differing values in each domain.
11. Specify a size of the WWNN block sufficient to support the available server resources.
12. Click OK.
13. Click Finish to create the WWNN Pool.
14. Click OK.
To configure the necessary WWPN pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select Pools > root.
3. In this procedure, two WWPN pools are created, one for each switching fabric.
4. Right-click WWPN Pools under the root organization.
5. Select Create WWPN Pool to create the WWPN pool.
6. Enter WWPN_Pool_A as the name of the WWPN pool.
7. Optional: Enter a description for the WWPN pool.
8. Select Sequential for Assignment Order.
9. Click Next.
10. Click Add.
11. Specify a starting WWPN.
For the FlashStack solution, the recommendation is to place 0A in the next-to-last octet of the starting WWPN to identify all of the WWPNs as fabric A addresses. Merging this with the pattern used for the WWNN gives a WWPN block starting with 20:00:00:25:B5:01:0A:00.
12. Specify a size for the WWPN pool that is sufficient to support the available blade or server resources.
13. Click OK.
14. Click Finish.
15. In the confirmation message, click OK.
16. Right-click WWPN Pools under the root organization.
17. Select Create WWPN Pool to create the WWPN pool.
18. Enter WWPN_Pool_B as the name of the WWPN pool.
19. Optional: Enter a description for the WWPN pool.
20. Select Sequential for Assignment Order.
21. Click Next.
22. Click Add.
23. Specify a starting WWPN.
For the FlashStack solution, the recommendation is to place 0B in the next-to-last octet of the starting WWPN to identify all of the WWPNs as fabric B addresses. Merging this with the pattern used for the WWNN gives a WWPN block starting with 20:00:00:25:B5:01:0B:00.
24. Specify a size for the WWPN address pool that is sufficient to support the available blade or server resources.
25. Click OK.
26. Click Finish.
27. In the confirmation message, click OK.
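The pools created in this and the preceding sections can also be reviewed from the Cisco UCS Manager CLI; a hedged sketch, assuming the default root organization:
ucs-6332-16up-a# scope org /
ucs-6332-16up-a /org # show mac-pool
ucs-6332-16up-a /org # show uuid-suffix-pool
ucs-6332-16up-a /org # show wwn-pool
The wwn-pool listing covers both the WWNN and WWPN pools.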
Firmware management policies allow the administrator to select the corresponding packages for a given server configuration. These policies often include packages for adapter, BIOS, board controller, FC adapters, host bus adapter (HBA) option ROM, and storage controller properties.
To create a firmware management policy for a given server configuration in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Expand Host Firmware Packages.
4. Select default.
5. In the Actions pane, select Modify Package Versions.
6. Select version 3.1(2b)B for the Blade Package, and leave the Rack Package not set.
7. Leave Excluded Components with only Local Disk selected.
8. Click OK to modify the host firmware package.
To create an optional server pool qualification policy for the Cisco UCS environment, complete the following steps:
This example creates a policy for Cisco UCS B-Series and Cisco UCS C-Series servers with Intel Xeon E5-2660 v4 (Broadwell) processors.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Server Pool Policy Qualifications.
4. Select Create Server Pool Policy Qualification.
5. Name the policy UCS-Broadwell.
6. Select Create CPU/Core Qualifications.
7. Select Xeon for the Processor/Architecture.
8. Select UCS-CPU-E52660E as the PID.
9. Click OK to create the CPU/Core qualification.
10. Click OK to create the policy then OK for the confirmation.
The VMware Cisco Custom Image will need to be downloaded for use during installation, either through manual access to the UCS KVM vMedia or through the vMedia Policy covered in the subsection that follows these steps. To download the Cisco Custom Image, complete the following steps:
1. Click the following link for the VMware login page.
2. Type your email or customer number and the password and then click Log in.
3. Click the following link Cisco Custom Image 6.0 U2.
4. Click Download Now.
5. Save it to your destination folder.
This ESXi 6.0 U2 Cisco custom image includes updates for the fnic and enic drivers. The versions that are part of this image are enic 2.3.0.7 and fnic 1.6.0.26. Newer versions exist for both of these drivers; their installation is covered later in this document.
A separate HTTP web server is required to automate the availability of the ESXi image to each Service Profile on first power on. The creation of this web server is not covered in this document; any existing web server capable of serving files via HTTP and reachable on the OOB network can be used to host the ESXi image.
Place the Cisco Custom Image VMware ESXi 6.0 U2 ISO on the HTTP server, then complete the following steps to create a vMedia Policy:
1. In Cisco UCS Manager, select the Servers tab.
2. Select Policies > root.
3. Right-click vMedia Policies.
4. Select Create vMedia Policy.
5. Name the policy ESXi-6.0U2-HTTP.
6. Enter “Mounts Cisco Custom ISO for ESXi 6.0 U2” in the Description field.
7. Leave “Yes” selected for Retry on Mount Failure.
8. Click Add.
9. Name the mount ESXi-6.0U2-HTTP.
10. Enter “ESXi ISO mount via HTTP” in the Description field.
11. Select the CDD Device Type.
12. Select the HTTP Protocol.
13. Enter the IP Address of the web server.
Since DNS server IPs were not entered into the KVM IP pool earlier, it is necessary to enter the IP of the web server instead of the host name.
14. Leave “None” selected for Image Name Variable.
15. Enter Vmware-ESXi-6.0.0-3620759-Custom-Cisco-6.0.2.1.iso as the Remote File name.
16. Enter the web server path to the ISO file in the Remote Path field.
17. Leave Username and Password blank.
18. Click OK to create the vMedia Mount.
19. Click OK then OK again to complete creating the vMedia Policy.
For any new servers added to the Cisco UCS environment, the vMedia policy in the service profile template can be used to install the ESXi host. On first boot, the host will boot into the ESXi installer. After ESXi is installed, the vMedia will not be referenced as long as the boot disk is accessible.
To create a server BIOS policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click BIOS Policies.
4. Select Create BIOS Policy.
5. Enter VM-Host-Infra as the BIOS policy name.
6. Leave Reboot on BIOS Setting Change unselected.
7. Change the Quiet Boot setting to disabled.
8. Change Consistent Device Naming to enabled.
9. Click Finish to create the BIOS policy.
10. Click OK.
To update the default Maintenance Policy, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Select Maintenance Policies > default.
4. Change the Reboot Policy to User Ack.
5. (Optional: Click “On Next Boot” to delegate maintenance windows to server owners).
6. Click Save Changes.
7. Click OK to accept the change.
A local disk configuration for the Cisco UCS environment is necessary if the servers in the environment do not have a local disk.
This policy should not be used on servers that contain local disks.
To create a local disk configuration policy, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Local Disk Config Policies.
4. Select Create Local Disk Configuration Policy.
5. Enter SAN-Boot as the local disk configuration policy name.
6. Change the mode to No Local Storage.
7. Click OK to create the local disk configuration policy.
8. Click OK.
To create a power control policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Power Control Policies.
4. Select Create Power Control Policy.
5. Enter No-Power-Cap as the power control policy name.
6. Change the power capping setting to No Cap.
7. Click OK to create the power control policy.
8. Click OK.
To create a vNIC/vHBA placement policy for the infrastructure hosts, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click vNIC/vHBA Placement Policies.
4. Select Create Placement Policy.
5. Enter VM-Host-Infra as the name of the placement policy.
6. Click 1 and select Assigned Only for the Selection Preference.
7. Click OK, and then click OK again.
To create a network control policy that enables Cisco Discovery Protocol (CDP) on virtual network ports, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click Network Control Policies.
4. Select Create Network Control Policy.
5. Enter Enable_CDP as the policy name.
6. For CDP, select the Enabled option.
7. Click OK to create the network control policy.
8. Click OK.
To configure the necessary port channels out of the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
In this procedure, two port channels are created: one from fabric A to both Cisco Nexus switches and one from fabric B to both Cisco Nexus switches.
2. Under LAN > LAN Cloud, expand the Fabric A tree.
3. Right-click Port Channels.
4. Select Create Port Channel.
5. Enter a unique ID for the port channel (151 in our example, corresponding to the upstream Nexus port channel).
6. With 151 selected, enter vPC-151-Nexus as the name of the port channel.
7. Click Next.
8. Select the following ports to be added to the port channel:
- Slot ID 1 and port 39
- Slot ID 1 and port 40
9. Click >> to add the ports to the port channel.
10. Click Finish to create the port channel.
11. Click OK.
12. In the navigation pane, under LAN > LAN Cloud, expand the fabric B tree.
13. Right-click Port Channels.
14. Select Create Port Channel.
15. Enter a unique ID for the port channel (152 in our example, corresponding to the upstream Nexus port channel).
16. With 152 selected, enter vPC-152-Nexus as the name of the port channel.
17. Click Next.
18. Select the following ports to be added to the port channel:
- Slot ID 1 and port 39
- Slot ID 1 and port 40
19. Click >> to add the ports to the port channel.
20. Click Finish to create the port channel.
21. Click OK.
To configure the necessary virtual local area networks (VLANs) for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
In this procedure, six unique VLANs are created. See Table 2 for a list of VLANs to be created.
2. Select LAN > LAN Cloud.
3. Right-click VLANs.
4. Select Create VLANs.
5. Enter Native-VLAN as the name of the VLAN to be used as the native VLAN.
6. Keep the Common/Global option selected for the scope of the VLAN.
7. Enter the native VLAN ID.
8. Keep the Sharing Type as None.
9. Click OK, and then click OK again.
10. Expand the list of VLANs in the navigation pane, right-click the newly created Native-VLAN and select Set as Native VLAN.
11. Click Yes, and then click OK.
12. Right-click VLANs.
13. Select Create VLANs
14. Enter IB-Mgmt as the name of the VLAN to be used for management traffic.
15. Keep the Common/Global option selected for the scope of the VLAN.
16. Enter the In-Band management VLAN ID.
17. Keep the Sharing Type as None.
18. Click OK, and then click OK again.
19. Right-click VLANs.
20. Select Create VLANs.
21. Enter vMotion as the name of the VLAN to be used for vMotion.
22. Keep the Common/Global option selected for the scope of the VLAN.
23. Enter the vMotion VLAN ID.
24. Keep the Sharing Type as None.
25. Click OK, and then click OK again.
26. Right-click VLANs.
27. Select Create VLANs.
28. Enter VM-App-1 as the name of the VLAN to be used for VM Traffic.
29. Keep the Common/Global option selected for the scope of the VLAN.
30. Enter the VM-Traffic VLAN ID.
31. Keep the Sharing Type as None.
32. Click OK and then click OK again.
33. Repeat as needed for any additional VLANs created on the upstream Nexus switches.
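The resulting VLAN list can be confirmed from the Cisco UCS Manager CLI before the vNIC templates reference it; a quick check:
ucs-6332-16up-a# scope eth-uplink
ucs-6332-16up-a /eth-uplink # show vlan
Verify that each VLAN ID matches its counterpart on the upstream Nexus switches.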
To create multiple virtual network interface card (vNIC) templates for the Cisco UCS environment, complete the following steps. A total of 6 vNIC Templates will be created.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template.
5. Enter vNIC_Mgmt_A as the vNIC template name.
6. Keep Fabric A selected.
7. Do not select the Enable Failover checkbox.
8. Select Primary Template for the Redundancy Type.
9. Leave Peer Redundancy Template as <not set>
10. Under Target, make sure that the VM checkbox is not selected.
11. Select Updating Template as the Template Type.
12. Under VLANs, select the checkboxes for IB-MGMT and Native-VLAN VLANs.
13. Set Native-VLAN as the native VLAN.
14. Leave vNIC Name selected for the CDN Source.
15. Leave 1500 for the MTU.
16. In the MAC Pool list, select MAC_Pool_A.
17. In the Network Control Policy list, select Enable_CDP.
18. Click OK to create the vNIC template.
19. Click OK.
For the vNIC_Mgmt_B Template, complete the following steps:
1. In the navigation pane, select the LAN tab.
2. Select Policies > root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template
5. Enter vNIC_Mgmt_B as the vNIC template name.
6. Select Fabric B.
7. Do not select the Enable Failover checkbox.
8. Select Secondary Template for Redundancy Type.
9. For the Peer Redundancy Template pulldown, select vNIC_Mgmt_A.
10. Under Target, make sure the VM checkbox is not selected.
11. Select Updating Template as the template type.
12. Under VLANs, select the checkboxes for IB-MGMT and Native-VLAN VLANs.
13. Set Native-VLAN as the native VLAN.
14. Leave vNIC Name selected for the CDN Source.
15. Leave 1500 for the MTU.
16. In the MAC Pool list, select MAC_Pool_B.
17. In the Network Control Policy list, select Enable_CDP.
18. Click OK to create the vNIC template.
19. Click OK.
For the vNIC_vMotion_A Template, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template.
5. Enter vNIC_vMotion_A as the vNIC template name.
6. Keep Fabric A selected.
7. Do not select the Enable Failover checkbox.
8. Select Primary Template for the Redundancy Type.
9. Leave Peer Redundancy Template as <not set>
10. Under Target, make sure that the VM checkbox is not selected.
11. Select Updating Template as the Template Type.
12. Under VLANs, select the checkbox for vMotion as the only VLAN.
13. Set vMotion as the native VLAN.
14. For MTU, enter 9000.
15. In the MAC Pool list, select MAC_Pool_A.
16. In the Network Control Policy list, select Enable_CDP.
17. Click OK to create the vNIC template.
18. Click OK.
For the vNIC_vMotion_B Template, complete the following steps:
1. In the navigation pane, select the LAN tab.
2. Select Policies > root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template
5. Enter vNIC_vMotion_B as the vNIC template name.
6. Select Fabric B.
7. Do not select the Enable Failover checkbox.
8. Select Secondary Template for Redundancy Type.
9. For the Peer Redundancy Template pulldown, select vNIC_vMotion_A.
10. Under Target, make sure the VM checkbox is not selected.
11. Select Updating Template as the template type.
12. Under VLANs, select the checkbox for the vMotion VLAN.
13. Select vNIC Name for the CDN Source.
14. For MTU, enter 9000.
15. In the MAC Pool list, select MAC_Pool_B.
16. In the Network Control Policy list, select Enable_CDP.
17. Click OK to create the vNIC template.
18. Click OK.
For the vNIC_App_A Template, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template.
5. Enter vNIC_App_A as the vNIC template name.
6. Keep Fabric A selected.
7. Do not select the Enable Failover checkbox.
8. Select Primary Template for the Redundancy Type.
9. Leave Peer Redundancy Template as <not set>
10. Under Target, make sure that the VM checkbox is not selected.
11. Select Updating Template as the Template Type.
12. Under VLANs, select the checkboxes for any application or production VLANs that should be delivered to the ESXi hosts.
13. For MTU, enter 9000.
14. In the MAC Pool list, select MAC_Pool_A.
15. In the Network Control Policy list, select Enable_CDP.
16. Click OK to create the vNIC template.
17. Click OK.
For the vNIC_App_B Template, complete the following steps:
1. In the navigation pane, select the LAN tab.
2. Select Policies > root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template
5. Enter vNIC_App_B as the vNIC template name.
6. Select Fabric B.
7. Do not select the Enable Failover checkbox.
8. Select Secondary Template for Redundancy Type.
9. For the Peer Redundancy Template pulldown, select vNIC_App_A.
10. Under Target, make sure the VM checkbox is not selected.
11. Select Updating Template as the template type.
12. Under VLANs, select the same checkboxes for the application or production VLANs selected for the vNIC_App_A vNIC Template.
13. Set default as the native VLAN.
14. Select vNIC Name for the CDN Source.
15. For MTU, enter 9000.
16. In the MAC Pool list, select MAC_Pool_B.
17. In the Network Control Policy list, select Enable_CDP.
18. Click OK to create the vNIC template.
19. Click OK.
To configure jumbo frames and enable quality of service in the Cisco UCS fabric, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. On the Best Effort row, enter 9216 in the box under the MTU column.
5. Click Save Changes in the bottom of the window.
6. Click OK
To configure the necessary Infrastructure LAN Connectivity Policy, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > Policies > root.
3. Right-click LAN Connectivity Policies.
4. Select Create LAN Connectivity Policy.
5. Enter Infra-LAN-Policy as the name of the policy.
6. Click the upper Add button to add a vNIC.
7. In the Create vNIC dialog box, enter 00-Mgmt-A as the name of the vNIC.
The numeric prefix of “00-“ and subsequent increments on the later vNICs are used in the vNIC naming to force the device ordering through Consistent Device Naming (CDN). Without this, some operating systems might not respect the device ordering that is set within Cisco UCS.
8. Select the Use vNIC Template checkbox.
9. In the vNIC Template list, select vNIC_Mgmt_A.
10. In the Adapter Policy list, select VMWare.
11. Click OK to add this vNIC to the policy.
12. Click the upper Add button to add another vNIC to the policy.
13. In the Create vNIC box, enter 01-Mgmt-B as the name of the vNIC.
14. Select the Use vNIC Template checkbox.
15. In the vNIC Template list, select vNIC_Mgmt_B.
16. In the Adapter Policy list, select VMWare.
17. Click OK to add the vNIC to the policy.
18. Click the upper Add button to add a vNIC.
19. In the Create vNIC dialog box, enter 02-vMotion-A as the name of the vNIC.
20. Select the Use vNIC Template checkbox.
21. In the vNIC Template list, select vNIC_vMotion_A.
22. In the Adapter Policy list, select VMWare.
23. Click OK to add this vNIC to the policy.
24. Click the upper Add button to add a vNIC to the policy.
25. In the Create vNIC dialog box, enter 03-vMotion-B as the name of the vNIC.
26. Select the Use vNIC Template checkbox.
27. In the vNIC Template list, select vNIC_vMotion_B.
28. In the Adapter Policy list, select VMWare.
29. Click OK to add this vNIC to the policy.
30. Click the upper Add button to add a vNIC.
31. In the Create vNIC dialog box, enter 04-App-A as the name of the vNIC.
32. Select the Use vNIC Template checkbox.
33. In the vNIC Template list, select vNIC_App_A.
34. In the Adapter Policy list, select VMWare.
35. Click OK to add this vNIC to the policy.
36. Click the upper Add button to add a vNIC to the policy.
37. In the Create vNIC dialog box, enter 05-App-B as the name of the vNIC.
38. Select the Use vNIC Template checkbox.
39. In the vNIC Template list, select vNIC_App_B.
40. In the Adapter Policy list, select VMWare.
41. Click OK to add this vNIC to the policy.
42. Click OK to create the LAN Connectivity Policy.
43. Click OK.
The Cisco UCS 6332-16UP Fabric Interconnects have a slider mechanism within the Cisco UCS Manager GUI that controls the first 16 (unified) ports. Starting from the first port, these can be configured as Fibre Channel in increments of the first 6, 12, or all 16 unified ports.
To enable the fibre channel ports, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary)
3. Select Configure Unified Ports.
4. Click Yes on the pop-up window warning that changes to the fixed module will require a reboot of the fabric interconnect and changes to the expansion module will require a reboot of that module.
5. Within the Configured Fixed Ports pop-up window move the gray slider bar from the left to the right to select either 6, 12, or 16 ports to be set as FC Uplinks.
6. Click OK to continue.
7. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate)
8. Select Configure Unified Ports.
9. Click Yes on the pop-up window warning that changes to the fixed module will require a reboot of the fabric interconnect and changes to the expansion module will require a reboot of that module.
10. Within the Configured Fixed Ports pop-up window move the gray slider bar from the left to the right to select either 6, 12, or 16 ports to be set as FC Uplinks.
11. Click OK to continue.
The Fabric Interconnects will reboot; reconnect to Cisco UCS Manager after they are back up.
To configure the necessary virtual storage area networks (VSANs) for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
In this procedure, two VSANs are created.
2. Select SAN > SAN Cloud.
3. Right-click VSANs.
4. Select Create VSAN.
5. Enter VSAN_A as the name of the VSAN to be used for Fabric A
6. Leave Disabled selected for FC Zoning.
7. Select Fabric A.
8. Enter a unique VSAN ID and a corresponding FCoE VLAN ID. It is recommended to use the same ID for both parameters, and to use something other than 1.
9. Click OK, and then click OK again.
10. Under SAN Cloud, right-click VSANs.
11. Select Create VSAN.
12. Enter VSAN_B as the name of the VSAN to be used for Fabric B.
13. Leave Disabled selected for FC Zoning.
14. Select Fabric B.
15. Enter a unique VSAN ID and a corresponding FCoE VLAN ID. It is recommended to use the same ID for both parameters, and to use something other than 1.
16. Click OK and then click OK again.
To configure the necessary port channels for the Cisco UCS environment, complete the following steps:
1. In the navigation pane, under SAN > SAN Cloud, expand the Fabric A tree.
2. Right-click FC Port Channels.
3. Select Create Port Channel.
4. Enter 1 for the ID and Po1 for the Port Channel name.
5. Click Next then choose appropriate ports and click >> to add the ports to the port channel.
6. Click Finish.
7. Click OK.
8. Under the VSAN pulldown for Port-Channel 1, select VSAN_A 101.
9. Click Save Changes and then click OK.
10. In the navigation pane, under SAN > SAN Cloud, expand the Fabric B tree.
11. Right-click FC Port Channels.
12. Select Create Port Channel.
13. Enter 2 for the ID and Po2 for the Port Channel name.
14. Click Next then choose appropriate ports and click >> to add the ports to the port channel.
15. Click Finish.
16. Click OK.
17. Under the VSAN pulldown for Port-Channel 2, select VSAN_B 102.
18. Click Save Changes and then click OK.
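With the VSANs now assigned on the Fabric Interconnect side, the SAN uplinks should come up end to end; this can be confirmed from each MDS switch (Port Channel numbering per the example topology):
mds-9148s-a# show interface port-channel 1 brief
mds-9148s-b# show interface port-channel 2 brief
Each Port Channel should show trunking with the fabric VSAN in the up state.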
To create the necessary virtual host bus adapter (vHBA) templates for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click vHBA Templates.
4. Select Create vHBA Template.
5. Enter vHBA_Template_A as the vHBA template name.
6. Keep Fabric A selected.
7. Leave Redundancy Type as No Redundancy.
8. Select VSAN_A.
9. Leave Initial Template as the Template Type.
10. Select WWPN_Pool_A as the WWPN Pool.
11. Click OK to create the vHBA template.
12. Click OK.
13. Right-click vHBA Templates.
14. Select Create vHBA Template.
15. Enter vHBA_Template_B as the vHBA template name.
16. Select Fabric B as the Fabric ID.
17. Select VSAN_B.
18. Leave Redundancy Type as No Redundancy.
19. Leave Initial Template as the Template Type.
20. Select WWPN_Pool_B as the WWPN Pool.
21. Click OK to create the vHBA template.
22. Click OK.
To configure the necessary Infrastructure SAN Connectivity Policy, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select SAN > Policies > root.
3. Right-click SAN Connectivity Policies.
4. Select Create SAN Connectivity Policy.
5. Enter Infra-SAN-Policy as the name of the policy.
6. Select the previously created WWNN_Pool for the WWNN Assignment.
7. Click the Add button at the bottom to add a vHBA.
8. In the Create vHBA dialog box, enter Fabric-A as the name of the vHBA.
9. Select the Use vHBA Template checkbox.
10. Leave Redundancy Pair unselected.
11. In the vHBA Template list, select vHBA_Template_A.
12. In the Adapter Policy list, select VMWare.
13. Click OK.
14. Click the Add button at the bottom to add a second vHBA.
15. In the Create vHBA dialog box, enter Fabric-B as the name of the vHBA.
16. Select the Use vHBA Template checkbox.
17. Leave Redundancy Pair unselected.
18. In the vHBA Template list, select vHBA_Template_B.
19. In the Adapter Policy list, select VMWare.
20. Click OK.
21. Click OK to create the SAN Connectivity Policy.
22. Click OK to confirm creation.
This procedure defines the Primary and Secondary Boot Targets for each fabric side (A/B). These are the WWNs of the first adapter of each controller on the Pure Storage FlashArray, visible from the System Health tab under the System section of the FlashArray Web GUI.
Find the FC0 adapters for each controller within the System view and record the values to be used for the Primary and Secondary Targets. In the example lab environment, these appear as the first ports on the right side of each controller shown.
Table 14 Fabric A Boot Targets for the FlashArray//m
Controller | Port Name | Target Role | WWN/WWPN Example Environment | WWN/WWPN Customer Environment
FlashArray//m Controller 0 | CT0.FC0 | Primary | 52:4a:93:7a:98:4c:3e:00 |
FlashArray//m Controller 1 | CT1.FC0 | Secondary | 52:4a:93:7a:98:4c:3e:10 |
Within the same System view, find the FC1 adapters for each controller and record the values to be used for Primary and Secondary Targets. In the example lab environment, these appear as the second ports on the right side of each controller shown.
Table 15 Fabric B Boot Targets for the FlashArray//m
Controller | Port Name | Target Role | WWN/WWPN Example Environment | WWN/WWPN Customer Environment
FlashArray//m Controller 0 | CT0.FC1 | Primary | 52:4a:93:7a:98:4c:3e:01 |
FlashArray//m Controller 1 | CT1.FC1 | Secondary | 52:4a:93:7a:98:4c:3e:11 |
To create boot policies for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Boot Policies.
4. Select Create Boot Policy.
5. Enter Boot-FC-A as the name of the boot policy.
6. Optional: Enter a description for the boot policy.
7. Keep the Reboot on Boot Order Change option cleared.
8. Expand the Local Devices drop-down menu and select Add Remote CD/DVD.
9. Expand the vHBAs drop-down menu and select Add SAN Boot.
10. In the Add SAN Boot dialog box, enter Fabric-A in the vHBA field.
11. Confirm that Primary is selected for the Type option.
12. Click OK to add the SAN boot initiator.
13. From the vHBA drop-down menu, select Add SAN Boot Target.
14. Enter 1 as the value for Boot Target LUN.
15. Enter the WWPN for CT0.FC0 recorded in Table 14.
16. Select Primary for the SAN boot target type.
17. Click OK to add the SAN boot target.
18. From the vHBA drop-down menu, select Add SAN Boot Target.
19. Enter 1 as the value for Boot Target LUN.
20. Enter the WWPN for CT1.FC0 recorded in Table 14.
21. Click OK to add the SAN boot target.
22. From the vHBA drop-down menu, select Add SAN Boot.
23. In the Add SAN Boot dialog box, enter Fabric-B in the vHBA box.
24. The SAN boot type should automatically be set to Secondary, and the Type option should be unavailable.
25. Click OK to add the SAN boot initiator.
26. From the vHBA drop-down menu, select Add SAN Boot Target.
27. Enter 1 as the value for Boot Target LUN.
28. Enter the WWPN for CT0.FC1 recorded in Table 15.
29. Select Primary for the SAN boot target type.
30. Click OK to add the SAN boot target.
31. From the vHBA drop-down menu, select Add SAN Boot Target.
32. Enter 1 as the value for Boot Target LUN.
33. Enter the WWPN for CT1.FC1 recorded in Table 15.
34. Click OK to add the SAN boot target.
35. Expand CIMC Mounted Media and select Add CIMC Mounted CD/DVD.
36. Click OK then click OK again to create the boot policy.
In this procedure, one service profile template for Infrastructure ESXi hosts is created for fabric A boot.
To create the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root.
3. Right-click root.
4. Select Create Service Profile Template to open the Create Service Profile Template wizard.
5. Enter VM-Host-Infra-A as the name of the service profile template. This service profile template is configured to boot from the FlashArray//m over fabric A.
6. Select the “Updating Template” option.
7. Under UUID, select UUID_Pool as the UUID pool.
8. Click Next.
To configure the storage provisioning, complete the following steps:
1. If you have servers with no physical disks, click the Local Disk Configuration Policy tab and select the SAN-Boot Local Storage Policy. Otherwise, select the default Local Storage Policy.
2. Click Next.
To configure the network options, complete the following steps:
1. Keep the default setting for Dynamic vNIC Connection Policy.
2. Select the “Use Connectivity Policy” option to configure the LAN connectivity.
3. Select Infra-LAN-Policy from the LAN Connectivity Policy pull-down.
4. Click Next.
To configure the SAN connectivity options, complete the following steps:
1. Select the Use Connectivity Policy option for the “How would you like to configure SAN connectivity?” field.
2. Pick the Infra-SAN-Policy option from the SAN Connectivity Policy pull-down.
3. Click Next.
To configure the zoning options, complete the following step:
1. Set no Zoning options and click Next.
Configure vNIC/HBA Placement
1. In the “Select Placement” list, leave the placement policy as “Let System Perform Placement”.
2. Click Next.
To configure the vMedia policy, complete the following steps:
1. From the vMedia Policy pulldown, select “ESXi-6.0U2-HTTP”.
2. Click Next.
To configure the server boot order, complete the following steps:
1. Select Boot-FC-A for Boot Policy.
2. Click Next to continue to the next section.
To configure the maintenance policy, complete the following steps:
1. Change the Maintenance Policy to default.
2. Click Next.
To configure server assignment, complete the following steps:
1. In the Pool Assignment list, select Infra_Pool.
2. Optional: Select a Server Pool Qualification policy.
3. Select Down as the power state to be applied when the profile is associated with the server.
4. Select “UCS-Broadwell” for the Server Pool Qualification.
5. Firmware Management at the bottom of the page can be left at its default setting, as it will use the default Host Firmware Package.
6. Click Next.
To configure the operational policies, complete the following steps:
1. In the BIOS Policy list, select VM-Host-Infra.
2. Expand Power Control Policy Configuration and select No-Power-Cap in the Power Control Policy list.
3. Click Finish to create the service profile template.
4. Click OK in the confirmation message.
To create service profiles from the service profile template, complete the following steps:
1. Connect to Cisco UCS Manager on the UCS 6332-16UP Fabric Interconnects, and click the Servers tab in the navigation pane.
2. Select Service Profile Templates > root > Service Template VM-Host-Infra-A.
3. Right-click VM-Host-Infra-A and select Create Service Profiles from Template.
4. Enter VM-Host-Infra-0 as the service profile prefix.
5. Leave 1 as “Name Suffix Starting Number.”
6. Leave 2 as the “Number of Instances.”
7. Click OK to create the service profiles.
8. Click OK in the confirmation message to provision two FlashStack Service Profiles.
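Association progress for the new Service Profiles can be followed from the Cisco UCS Manager CLI as well as the GUI; a minimal sketch, assuming the example names:
ucs-6332-16up-a# scope org /
ucs-6332-16up-a /org # show service-profile status
Both VM-Host-Infra-01 and VM-Host-Infra-02 should move to an Associated state once configuration completes.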
This section continues the configuration of the Cisco MDS 9148S Multilayer Fabric Switches now that resources are attached, to provide zoning for supported devices.
Figure 6 MDS Fabric Zoning Workflow
Gather the WWPN of the FlashArray adapters using the show flogi database command on each switch and create a spreadsheet to reference when creating device aliases on each MDS. For MDS 9148S A this will be:
mds-9148s-a# sh flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
--------------------------------------------------------------------------------
fc1/1 101 0xe00200 52:4a:93:7a:98:4c:3e:00 52:4a:93:7a:98:4c:3e:00
fc1/2 101 0xe00300 52:4a:93:7a:98:4c:3e:02 52:4a:93:7a:98:4c:3e:02
fc1/3 101 0xe00000 52:4a:93:7a:98:4c:3e:10 52:4a:93:7a:98:4c:3e:10
fc1/4 101 0xe00100 52:4a:93:7a:98:4c:3e:12 52:4a:93:7a:98:4c:3e:12
port-channel1 101 0xe00400 24:01:00:de:fb:07:c9:80 20:65:00:de:fb:07:c9:81
port-channel1 101 0xe00401 20:00:00:25:b5:01:0a:00 20:00:00:25:b5:01:00:00
port-channel1 101 0xe00406 20:00:00:25:b5:01:0a:01 20:00:00:25:b5:01:00:01
Match these values to their sources previously collected in Table 14 and Table 15 from the Pure Storage Web Portal System Health view, and to the UCS Service Profile vHBA listing for each host, found within Servers -> Service Profiles -> <Service Profile of Source Host> -> Storage -> vHBAs.
Table 16 Fabric A WWPNs for Device Aliases
Source | Switch/Port | WWPN/PWWN | Customer WWPN/PWWN
FlashArray-CT0FC0-fabricA | MDS A fc 1/1 | 52:4a:93:7a:98:4c:3e:00 |
FlashArray-CT0FC2-fabricA | MDS A fc 1/2 | 52:4a:93:7a:98:4c:3e:02 |
FlashArray-CT1FC0-fabricA | MDS A fc 1/3 | 52:4a:93:7a:98:4c:3e:10 |
FlashArray-CT1FC2-fabricA | MDS A fc 1/4 | 52:4a:93:7a:98:4c:3e:12 |
UCS Fabric Interconnect | SAN Port Channel A | 20:65:00:de:fb:07:c9:81 |
VM-Host-Infra-01-A | SAN Port Channel A | 20:00:00:25:b5:01:0a:00 |
VM-Host-Infra-02-A | SAN Port Channel A | 20:00:00:25:b5:01:0a:01 |
Create device alias database entries for each of the PWWNs mapping them to their human readable Source names:
mds-9148s-a(config-if)# device-alias database
mds-9148s-a(config-device-alias-db)# device-alias name FlashArray-CT0FC0-fabricA pwwn 52:4a:93:7a:98:4c:3e:00
mds-9148s-a(config-device-alias-db)# device-alias name FlashArray-CT0FC2-fabricA pwwn 52:4a:93:7a:98:4c:3e:02
mds-9148s-a(config-device-alias-db)# device-alias name FlashArray-CT1FC0-fabricA pwwn 52:4a:93:7a:98:4c:3e:10
mds-9148s-a(config-device-alias-db)# device-alias name FlashArray-CT1FC2-fabricA pwwn 52:4a:93:7a:98:4c:3e:12
mds-9148s-a(config-device-alias-db)# device-alias name VM-Host-Infra-01-A pwwn 20:00:00:25:b5:01:0a:00
mds-9148s-a(config-device-alias-db)# device-alias name VM-Host-Infra-02-A pwwn 20:00:00:25:b5:01:0a:01
mds-9148s-a(config-device-alias-db)# exit
mds-9148s-a(config)# device-alias commit
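The committed aliases can be verified against the logged-in devices before zoning; a quick check:
mds-9148s-a# show device-alias database
mds-9148s-a# show flogi database
With device-alias distribution enabled (the NX-OS default), the commit also propagates these entries to other switches in the fabric.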
Repeat these steps on MDS 9148S B, starting with gathering the flogi database information:
mds-9148s-b# sh flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
--------------------------------------------------------------------------------
fc1/1 102 0x640200 52:4a:93:7a:98:4c:3e:01 52:4a:93:7a:98:4c:3e:01
fc1/2 102 0x640300 52:4a:93:7a:98:4c:3e:03 52:4a:93:7a:98:4c:3e:03
fc1/3 102 0x640000 52:4a:93:7a:98:4c:3e:11 52:4a:93:7a:98:4c:3e:11
fc1/4 102 0x640100 52:4a:93:7a:98:4c:3e:13 52:4a:93:7a:98:4c:3e:13
port-channel2 102 0x640400 24:02:00:de:fb:25:f1:00 20:66:00:de:fb:25:f1:01
port-channel2 102 0x640401 20:00:00:25:b5:01:0b:00 20:00:00:25:b5:01:00:00
port-channel2 102 0x640407 20:00:00:25:b5:01:0b:01 20:00:00:25:b5:01:00:01
Table 17 Fabric B WWPNs for Device Aliases
Source | Switch/Port | WWPN/PWWN | Customer WWPN/PWWN
FlashArray-CT0FC1-fabricB | MDS B fc 1/1 | 52:4a:93:7a:98:4c:3e:01 |
FlashArray-CT0FC3-fabricB | MDS B fc 1/2 | 52:4a:93:7a:98:4c:3e:03 |
FlashArray-CT1FC1-fabricB | MDS B fc 1/3 | 52:4a:93:7a:98:4c:3e:11 |
FlashArray-CT1FC3-fabricB | MDS B fc 1/4 | 52:4a:93:7a:98:4c:3e:13 |
UCS Fabric Interconnect | SAN Port Channel B | 24:02:00:de:fb:25:f1:00 |
VM-Host-Infra-01-B | SAN Port Channel B | 20:00:00:25:b5:01:0b:00 |
VM-Host-Infra-02-B | SAN Port Channel B | 20:00:00:25:b5:01:0b:01 |
Create device alias database entries for each of the PWWNs mapping them to their human readable Source names:
mds-9148s-b(config-if)# device-alias database
mds-9148s-b(config-device-alias-db)# device-alias name FlashArray-CT0FC1-fabricB pwwn 52:4a:93:7a:98:4c:3e:01
mds-9148s-b(config-device-alias-db)# device-alias name FlashArray-CT0FC3-fabricB pwwn 52:4a:93:7a:98:4c:3e:03
mds-9148s-b(config-device-alias-db)# device-alias name FlashArray-CT1FC1-fabricB pwwn 52:4a:93:7a:98:4c:3e:11
mds-9148s-b(config-device-alias-db)# device-alias name FlashArray-CT1FC3-fabricB pwwn 52:4a:93:7a:98:4c:3e:13
mds-9148s-b(config-device-alias-db)# device-alias name VM-Host-Infra-01-B pwwn 20:00:00:25:b5:01:0b:00
mds-9148s-b(config-device-alias-db)# device-alias name VM-Host-Infra-02-B pwwn 20:00:00:25:b5:01:0b:01
mds-9148s-b(config-device-alias-db)# exit
mds-9148s-b(config)# device-alias commit
Create zones for each host using the device aliases created in the previous step:
mds-9148s-a(config)# zone name VM-Host-Infra-01-A vsan 101
mds-9148s-a(config-zone)# member device-alias VM-Host-Infra-01-A
mds-9148s-a(config-zone)# member device-alias FlashArray-CT0FC0-fabricA
mds-9148s-a(config-zone)# member device-alias FlashArray-CT0FC2-fabricA
mds-9148s-a(config-zone)# member device-alias FlashArray-CT1FC0-fabricA
mds-9148s-a(config-zone)# member device-alias FlashArray-CT1FC2-fabricA
mds-9148s-a(config-zone)# zone name VM-Host-Infra-02-A vsan 101
mds-9148s-a(config-zone)# member device-alias VM-Host-Infra-02-A
mds-9148s-a(config-zone)# member device-alias FlashArray-CT0FC0-fabricA
mds-9148s-a(config-zone)# member device-alias FlashArray-CT0FC2-fabricA
mds-9148s-a(config-zone)# member device-alias FlashArray-CT1FC0-fabricA
mds-9148s-a(config-zone)# member device-alias FlashArray-CT1FC2-fabricA
mds-9148s-b(config)# zone name VM-Host-Infra-01-B vsan 102
mds-9148s-b(config-zone)# member device-alias VM-Host-Infra-01-B
mds-9148s-b(config-zone)# member device-alias FlashArray-CT0FC1-fabricB
mds-9148s-b(config-zone)# member device-alias FlashArray-CT0FC3-fabricB
mds-9148s-b(config-zone)# member device-alias FlashArray-CT1FC1-fabricB
mds-9148s-b(config-zone)# member device-alias FlashArray-CT1FC3-fabricB
mds-9148s-b(config)# zone name VM-Host-Infra-02-B vsan 102
mds-9148s-b(config-zone)# member device-alias VM-Host-Infra-02-B
mds-9148s-b(config-zone)# member device-alias FlashArray-CT0FC1-fabricB
mds-9148s-b(config-zone)# member device-alias FlashArray-CT0FC3-fabricB
mds-9148s-b(config-zone)# member device-alias FlashArray-CT1FC1-fabricB
mds-9148s-b(config-zone)# member device-alias FlashArray-CT1FC3-fabricB
Repeat these steps on each MDS switch for each additional Cisco UCS host provisioned.
Add the zones to a zoneset on each MDS switch:
mds-9148s-a(config-zone)# zoneset name flashstack-zoneset vsan 101
mds-9148s-a(config-zoneset)# member VM-Host-Infra-01-A
mds-9148s-a(config-zoneset)# member VM-Host-Infra-02-A
mds-9148s-b(config-zone)# zoneset name flashstack-zoneset vsan 102
mds-9148s-b(config-zoneset)# member VM-Host-Infra-01-B
mds-9148s-b(config-zoneset)# member VM-Host-Infra-02-B
Activate the zonesets and save the configuration:
mds-9148s-a(config-zoneset)# zoneset activate name flashstack-zoneset vsan 101
mds-9148s-a(config)# copy run start
mds-9148s-b(config-zoneset)# zoneset activate name flashstack-zoneset vsan 102
mds-9148s-b(config)# copy run start
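To confirm zoning on each fabric, display the active zoneset; logged-in members are marked with an asterisk:
mds-9148s-a# show zoneset active vsan 101
mds-9148s-b# show zoneset active vsan 102
Each host zone should list the host initiator plus the four FlashArray target ports for that fabric.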
The Pure Storage FlashArray//m is accessible to the FlashStack, but no storage has been deployed at this point. The storage to be deployed will include:
· ESXi Fibre Channel Boot LUNs
· VMFS Datastores
The FC Boot LUNs will need to be set up from the Pure Storage Web Portal, and the VMFS datastores will be directly provisioned from the vSphere Web Client after the Pure Storage vSphere Web Client Plugin has been registered with the vCenter.
Figure 7 FlashArray//m Storage Deployment Workflow
For Host registration, complete the following steps:
1. Host entries can be made from the Pure Storage Web Portal from the STORAGE tab, by selecting the + box next to Hosts appearing in the left side column:
2. After clicking the Create Host option, a pop-up will appear to create an individual host entry on the FlashArray:
3. To create more than one host entry, click the Create Multiple… option, filling in the Name, Start Number, Count, and Number of Digits, with a “#” appearing in the name where an iterating number will appear:
4. Click Create to add the hosts.
5. For each host created, select the host from within the STORAGE tab, and click the Host Ports tab within the individual host view. From the Host Ports tab select the gear icon pull-down and select Configure Fibre Channel WWNs:
6. A pop-up will appear for Configure Fibre Channel WWNs for Host <host being configured>. Within this pop-up, click the Enter WWNs Manually button and enter the WWNs (WWPNs) for each host previously recorded in Table 16 and Table 17:
7. After adding the WWNs, click Confirm to add the Host Ports. Repeat these steps for each host created.
To create private volumes for each ESXi host, complete the following steps:
1. Volumes can be provisioned from the Pure Storage Web Portal from the STORAGE tab, by clicking the + box next to Volumes appearing in the left side column:
2. A pop-up will appear to create a volume on the FlashArray:
3. To create more than one volume, click the Create Multiple… option, filling in the Name, Provisioned Size, Starting Number, Count, and Number of Digits, with a “#” appearing in the name where an iterating number will appear:
4. Click Create to provision the volumes to be used as FC boot LUNs.
5. Go back to the Hosts section under the STORAGE tab. Click one of the hosts and select the gear icon pull-down within the Connected Volumes tab within that host.
6. Within the pull-down of the gear icon, select Connect Volumes, and a pop-up will appear:
7. Select the volume that has been provisioned for the host, click the + next to the volume and select Confirm to proceed. Repeat the steps for connecting volumes for each of the host/volume pairs configured.
The Host entries allow the individual boot LUNs to be associated to each ESXi host, but the shared volumes used as VM datastores require a Host Group so that those volumes can be shared amongst multiple hosts.
To create a Host Group in the Pure Storage Web Portal, complete the following steps:
1. Select the STORAGE tab and click the + box next to Hosts appearing in the left side column:
2. Select the Create Host Group option and provide a name for the Host Group to be used by the ESXi cluster:
3. With Hosts still selected within the STORAGE tab, click the gear icon pull-down within the Hosts tab of the Host Group created, and select Add Hosts:
4. Select the + icon next to each host, and click Confirm to add them to the Host Group:
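From the Purity CLI, an equivalent Host Group can be created in a single step; a minimal sketch, assuming a hypothetical group name of Infra-Cluster:
purehgroup create --hostlist VM-Host-Infra-01,VM-Host-Infra-02 Infra-Cluster
Volumes connected to the Host Group, rather than to individual hosts, are presented to every member host, which is what allows shared VMFS datastores across the ESXi cluster.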
The Pure Storage vSphere Web Client Plugin will be accessible through the vSphere Web Client after registration through the Pure Storage Web Portal.
To access the Pure Storage vSphere Web Client Plugin, complete the following steps:
1. Go to System -> Plugins -> vSphere:
2. Enter the vCenter Host IP or FQDN, the Administrator User to connect with, the password for the Administrator User, and click Connect. Once connected, select the Install button to register the plugin.
3. With the plugin registered, connect to the vSphere Web Client and select the Pure Storage Plugin from the Home page:
The vCenter server for this environment should be in place on an independent management cluster that is accessible to the In-Band management network the ESXi hosts will be deployed to.
If a new dedicated vCenter server is required for your environment, please follow the instructions from VMware found at: https://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.install.doc/GUID-BEFF9A8A-EEC1-4B19-B81D-D5E3F56C78CE.html.
This section provides detailed instructions for installing VMware ESXi 6.0 U2 in a FlashStack environment. After the procedures are completed, two SAN-booted ESXi hosts will be provisioned.
Figure 8 vSphere Deployment Workflow
Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use the built-in keyboard, video, mouse (KVM) console and virtual media features in Cisco UCS Manager to map remote installation media to individual servers and connect to their boot logical unit numbers (LUNs).
The IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media. It is necessary to log in to the Cisco UCS environment to run the IP KVM.
To log in to the Cisco UCS environment, complete the following steps:
1. Open a web browser to https://<<var_ucs_mgmt_vip>>
2. Select the Launch UCS Manager link in the HTML section to bring up the UCSM HTML5 GUI.
3. Enter admin for the Username, and provide the password used during setup.
4. Within UCSM, select Servers -> Service Profiles, and pick the first host provisioned, VM-Host-Infra-01.
5. Click the KVM Console option within Actions, and choose to keep the downloaded file.
6. Click the downloaded Java Network Launch Protocol (jnlp) file. If prompted for an application to open this file with, select Browse and find the Java Web Start application.
7. Click Run when prompted “Do you want to run this application?”, and select Continue past the Warning – Security pop-up for the self-signed certificate.
Skip the following steps if you are using vMedia policies; the ISO file is already connected to the KVM.
To prepare the server for the OS installation, complete the following steps on each ESXi host:
1. In the KVM window, click Virtual Media.
2. Click Activate Virtual Devices
3. If prompted to accept an Unencrypted KVM session, accept as necessary.
4. Click Virtual Media and select Map CD/DVD.
5. Browse to the ESXi installer ISO image file and click Open.
6. Click Map Device.
7. Click the KVM tab to monitor the server boot.
8. Boot the server by selecting Boot Server and clicking OK, then click OK again.
To install VMware ESXi to the FC-bootable LUN of the hosts, complete the following steps on each host:
1. On reboot, the machine detects the presence of the ESXi installation media. Select the ESXi installer from the boot menu that is displayed.
2. After the installer is finished loading, press Enter to continue with the installation.
3. Read and accept the end-user license agreement (EULA). Press F11 to accept and continue.
4. Select the LUN that was previously set up as the installation disk for ESXi and press Enter to continue with the installation.
5. Select the appropriate keyboard layout and press Enter.
6. Enter and confirm the root password and press Enter.
7. The installer issues a warning that the selected disk will be repartitioned. Press F11 to continue with the installation.
8. After the installation is complete, if using locally mapped Virtual Media, click the Virtual Media tab and clear the checkmark next to the ESXi installation media. Click Yes.
The ESXi installation image must be unmapped to make sure that the server reboots into ESXi and not into the installer. If a vMedia Policy is being used, this step is unnecessary because the vMedia will appear after the installed OS in the boot order.
9. From the KVM window, press Enter to reboot the server.
Adding a management network for each VMware host is necessary for managing the host. To configure the VM-Host-Infra-01 ESXi host with access to the management network, complete the following steps:
1. After the server has finished rebooting, press F2 to customize the system.
2. Log in as root, enter the corresponding password, and press Enter to log in.
3. Select the Configure the Management Network option and press Enter.
4. Select the Network Adapters option, leave vmnic0 selected, arrow down to vmnic1 and press the space bar to select it as well, then press Enter.
5. Select the VLAN (Optional) option and press Enter.
6. Enter the <<var_ib_mgmt_vlan_id>> and press Enter.
7. From the Configure Management Network menu, select IPv4 Configuration and press Enter.
8. Select the Set Static IP Address and Network Configuration option by using the space bar.
9. Enter <<var_vm_host_infra_01_ip>> for the IPv4 Address for managing the first ESXi host.
10. Enter <<var_ib_mgmt_vlan_netmask_length>> for the Subnet Mask for the first ESXi host.
11. Enter <<var_ib_mgmt_gateway>> for the Default Gateway for the first ESXi host.
12. Press Enter to accept the changes to the IPv4 configuration.
13. Select the IPv6 Configuration option and press Enter.
14. Using the spacebar, select Disable IPv6 (restart required) and press Enter.
15. Select the DNS Configuration option and press Enter.
Because the IP address is assigned manually, the DNS information must also be entered manually.
16. Enter the IP address of <<var_nameserver_ip>> for the Primary DNS Server.
17. Optional: Enter the IP address of the Secondary DNS Server.
18. Enter the fully qualified domain name (FQDN) for the first ESXi host.
19. Press Enter to accept the changes to the DNS configuration.
20. Press Esc to exit the Configure Management Network submenu.
21. Press Y to confirm the changes and return to the main menu.
22. The ESXi host reboots. After reboot, press F2 and log back in as root.
23. Select Test Management Network to verify that the management network is set up correctly and press Enter.
24. Press Enter to run the test.
25. Press Enter to exit the window.
26. Press Esc to log out of the VMware console.
27. Repeat steps 1-26 for VM-Host-Infra-02 and any additional hosts provisioned, using appropriate values.
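For reference, the same management network settings can be applied from the ESXi Shell with esxcli once console access is available. The following is a minimal sketch for the first host, using this document's variable conventions; <<var_ib_mgmt_netmask>> is a stand-in for the dotted-decimal form of the subnet mask, and the default vSwitch0/vmk0 management objects are assumed:
esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch0
esxcli network vswitch standard portgroup set -p "Management Network" --vlan-id <<var_ib_mgmt_vlan_id>>
esxcli network ip interface ipv4 set -i vmk0 -t static -I <<var_vm_host_infra_01_ip>> -N <<var_ib_mgmt_netmask>>
esxcli network ip route ipv4 add -n default -g <<var_ib_mgmt_gateway>>
esxcli network ip dns server add -s <<var_nameserver_ip>>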
To add the VMware ESXi Hosts using the VMware vSphere Web Client, complete the following steps:
1. Connect to the vSphere Web Client and click Hosts and Clusters from the left side Navigator window, or the Hosts and Clusters icon from the Home center window:
2. From Hosts and Clusters:
— If a new Datacenter is needed for the FlashStack resources, right-click the vCenter icon, and select New Datacenter… from the pull-down options.
— From the New Datacenter pop-up dialogue enter in a Datacenter name and click OK.
— Right-click the new or existing Datacenter within the Navigation window, and select New Cluster… from the pull-down options.
— Enter a name for the new cluster, select the DRS and HA checkboxes, and leave all other options at their defaults.
— Click OK to create the cluster.
— Right-click the newly created cluster and select the Add Host… pull-down option.
— Enter the IP or FQDN of the first ESXi host and click Next.
— Enter root for the User Name, provide the password set during initial setup, and click Next.
— Click Yes in the Security Alert pop-up to confirm the host’s certificate.
— Click Next past the Host summary dialogue.
— Provide a license by clicking the green + icon under the License title, select an existing license, or skip past the Assign license dialogue by clicking Next.
— Leave lockdown mode Disabled within the Lockdown mode dialogue window, and click Next.
— Skip past the Resource pool dialogue by clicking Next.
— Confirm the Summary dialogue and add the ESXi host to the cluster by clicking Finish.
— Repeat these steps for each ESXi host to be added to the cluster.
To configure the first host in the ESXi cluster, complete the following steps:
1. Select the ESXi host installed from VM-Host-Infra-01 within the newly created cluster, click the Manage tab within that host and Networking within the Manage tab.
2. With vSwitch0 selected, click the third icon (a green adapter card with a wrench) over from the left under Virtual switches to produce a Manage Physical Network Adapters for vSwitch0 pop-up window.
3. Select vmnic1 within the Standby adapters and click the blue up arrow under Assigned adapters to move vmnic1 from the Standby adapters to the Active adapters.
4. Click OK to commit the change.
5. Still within the Manage tab under Networking -> Virtual switches, click the far left icon under Virtual switches to Add host networking.
6. Leave VMkernel Network Adapter selected within Select connection type of the Add Networking pop-up window that is generated and click Next.
7. Within Select target device click the New standard switch option and click Next.
8. Within the Create Standard Switch dialogue, press the green + icon below Assigned adapters.
9. Select vmnic2 within the Network Adapters and click OK.
10. While still in the Create a Standard Switch dialogue, click the green + icon one more time.
11. Select vmnic3, and from the Failover order group pull-down, select Standby adapters. Click OK.
12. Click Next.
13. Within Port properties under Connection settings, set the Network label to be VMkernel vMotion, set the VLAN ID to the value for <<var_vmotion_vlan_id>>, and checkmark vMotion traffic under Available services. Click Next.
14. Enter <<var_vm_host_infra_vmotion_01_ip>> in the field for IPv4 address, and <<var_vmotion_subnet_mask>> for the Subnet mask. Click Next.
15. Confirm the values shown on the Ready to complete summary page and click Finish to create the vSwitch and VMkernel for vMotion.
16. Still within the Manage tab for the host, under Networking -> Virtual switches, make sure that vSwitch1 is selected, and click the pencil icon under the Virtual switches title to edit the vSwitch properties and adjust the MTU for the vMotion vSwitch.
17. Enter 9000 in the Properties dialogue for the vSwitch1 – Edit Settings pop-up that appears. Click OK to apply the change.
18. Click the VMkernel adapters within Manage -> Networking for the host, and with the VMkernel for vMotion (vmk1) selected, click the pencil icon to edit the VMkernel settings.
19. Click the NIC settings in the vmk1 – Edit Settings pop-up window that appears, and enter 9000 for the MTU value to use for the VMkernel. Click OK to apply the change.
20. Repeat these steps for each host being added to the cluster, changing the vMotion VMkernel IP to an appropriate unique value for each host.
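The vMotion networking above can also be scripted per host with esxcli, which may be convenient when adding several hosts. The following is a minimal sketch under the same assumptions (vmnic2 active, vmnic3 standby, vSwitch1, vmk1, and this document's variables):
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch1
esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch1
esxcli network vswitch standard policy failover set -v vSwitch1 -a vmnic2 -s vmnic3
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network vswitch standard portgroup add -p "VMkernel vMotion" -v vSwitch1
esxcli network vswitch standard portgroup set -p "VMkernel vMotion" --vlan-id <<var_vmotion_vlan_id>>
esxcli network ip interface add -i vmk1 -p "VMkernel vMotion" -m 9000
esxcli network ip interface ipv4 set -i vmk1 -t static -I <<var_vm_host_infra_vmotion_01_ip>> -N <<var_vmotion_subnet_mask>>
vim-cmd hostsvc/vmotion/vnic_set vmk1
The final vim-cmd call tags vmk1 for vMotion traffic, matching the Available services checkbox in the Web Client.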
These next steps add a datastore for placing VMs on the FlashArray//M and, optionally, a second datastore for keeping their swapfiles.
A dedicated swapfile location will not provide a performance increase over the existing all-flash datastores created from the FlashArray//M, but it can be useful to keep these files in a separate location so that they are excluded from snapshots and backups.
1. Right-click the cluster and select the Pure Storage -> Create Datastore option from the pull-down.
2. Give the Datastore Name a value appropriate for VM store in the environment, select a starting size for the Datastore Size, and click Create to provision the volume.
3. Optionally, repeat these similar steps to create a swap datastore to be used by the ESXi hosts. Right-click the cluster and select the Pure Storage -> Create Datastore option from the pull-down.
4. Give the Datastore Name a value appropriate for VM swapfiles on the ESXi host, select a starting size for the Datastore Size, and click Create to provision the volume.
A few base settings are needed for stability of the vSphere environment, as well as the optional enablement of SSH connectivity to each host for updating drivers.
To configure ESXi settings, complete the following steps:
1. Select the first ESXi host to configure with standard settings.
2. Click Manage -> Settings -> Time Configuration for the host, and click the Edit button.
3. Select Use Network Time Protocol (Enable NTP client), set the NTP Service Startup Policy to Start and stop with port usage, click Start, and enter <<var_nexus_A_ib_ip>>, <<var_nexus_B_ib_ip>> for the NTP Servers. Click OK to submit the changes.
4. (Optional) Click Security Profile under Manage -> Settings for the host.
The Security Profile settings for ESXi Shell and SSH are enabled to allow the enic and fnic drivers to be updated later. These steps are unnecessary if VMware Update Manager is being used and the drivers are handled by being included in a configured baseline.
5. Click Edit.
6. Select the ESXi Shell entry, change the Startup Policy to Start and stop with port usage, and click Start. Repeat these steps for the SSH entry. Click OK.
7. If an optional ESXi swap datastore was configured earlier, click System Swap under Manage -> Settings for the host and click Edit.
8. Checkmark the Can use datastore option, and from the pull-down select the ESXi swap datastore that was configured. Click OK.
9. Repeat these steps on each ESXi host being added into the cluster.
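If ESXi Shell and SSH are being enabled on several hosts, the equivalent Security Profile changes can also be made with vim-cmd from an existing session. Note that these calls set the services to start and stop with the host rather than the Start and stop with port usage policy selected above:
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh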
The Cisco Custom Image for VMware vSphere 6.0 U2 comes with fnic 1.6.0.26 and enic 2.3.0.7 drivers that are older than the recommended drivers stated in the Cisco UCS HW and SW Availability Interoperability Matrix at the time of this document’s writing.
For the appropriate drivers, download and extract the following VMware VIC drivers to the system the vSphere Web Client is being run from:
· enic driver version 2.3.0.10
· fnic driver version 1.6.0.28
These drivers can be applied to the new hosts using a VMware Update Manager baseline (not covered in this document), can be installed using the VMware vSphere Remote CLI, or can be installed to each host using the following procedure:
1. Select the Storage icon in the Navigator window, finding the VM datastore that was provisioned earlier.
2. Right-click the datastore and select Browse Files.
3. From this selection, the window will change to the Manage -> Files view. Click the first icon on the left within this window to download the VMware-ClientIntegrationPlugin-6.0.0.
4. With the plugin downloaded, close all browser windows, and open the plugin from the download location.
5. Click Run to acknowledge the Open File – Security Warning pop-up that will appear.
6. Click Next at the first window of the VMware Client Integration Plug-in 6.0.0.
7. Click to accept the terms of the License Agreement and click Next.
8. Adjust the Destination Folder if needed and then click Next.
9. Click Install to install the plug-in.
10. Click Finish, reopen the web browser and navigate to the vCenter address for the vSphere Web Client.
11. Navigate back to the created VM datastore, selecting Manage -> Files.
12. Re-select the icon for uploading files.
13. Select the enic driver from its download location, and click Open to upload it to the datastore.
14. Repeat steps 12 and 13 for the fnic driver that was downloaded.
15. From the vSphere Web Client, select one of the newly installed hosts from the Hosts and Clusters.
16. Right-click the host, and select Maintenance Mode -> Enter Maintenance Mode from the pull-down options.
17. Click OK in the Confirm Maintenance Mode pop-up window that appears.
18. Click Yes for any dialogue box presented.
19. Repeat steps 15-18 for each newly installed host.
20. Connect to the first ESXi host via SSH as the root account and unzip the driver bundles:
— ssh root@10.1.164.21
— unzip /vmfs/volumes/VM-Production/ESXi6.0_enic-2.3.0.10-4303638.zip
— unzip /vmfs/volumes/VM-Production/fnic_driver_1.6.0.28-4179603.zip
21. As root install the offline_bundles and reboot the host:
— esxcli software vib update -d /vmfs/volumes/VM-Production/ESXi6.0_enic-2.3.0.10-offline_bundle-4303638.zip
— esxcli software vib update -d /vmfs/volumes/VM-Production/fnic_driver_1.6.0.28-offline_bundle-4179603.zip
— reboot
22. Connect as root to each additional host being installed and repeat steps 20 and 21.
23. Wait for each host to finish rebooting.
24. Right-click each host, and from the pull-down select Maintenance Mode -> Exit Maintenance Mode.
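Before exiting Maintenance Mode, the driver versions can be confirmed on each host from the SSH session; the installed VIB versions should show 2.3.0.10 for enic and 1.6.0.28 for fnic after the reboot:
esxcli software vib list | grep enic
esxcli software vib list | grep fnic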
Production networks will be configured on a VMware vDS to allow additional configuration, as well as consistency between hosts. To configure the VMware vDS, click the right-most icon within the Navigation window, and complete the following steps:
1. Right-click the Datacenter (FlashStack-VSI in the example picture), select from the pulldown Distributed Switch -> New Distributed Switch…
2. Provide a relevant name for the Name field and click Next.
3. Leave the version selected as Distributed switch: 6.0.0 and click Next.
4. Change the Number of uplinks from 4 to 2 and click Next.
5. Review the summary in the Ready to complete page and click Finish to create the vDS.
6. Find the vDS, which appears as DSwitch FlashStack under the Network icon of the Navigator pane, then right-click the vDS and select Distributed Port Group -> New Distributed Port Group…
7. Enter an appropriate name into the Name field for application/production networks that will be carried on the vDS and click Next.
8. Select VLAN from the VLAN type pull-down and enter the appropriate VLAN number into the VLAN ID field. Click Next.
9. Confirm the summary shown on the Ready to complete page and click Finish to create the distributed port group.
10. Repeat steps 6-9 for each application network that has been configured on the Cisco UCS Fabric.
With the vDS and the distributed port groups created within the vDS in place, the ESXi hosts will be added to the vDS.
To Add the ESXi Hosts to the vDS, complete the following steps:
1. Within the Networking sub-tab of Hosts and Clusters of the Navigator window, right-click the vDS and select Add and Manage Hosts…
2. Leave Add hosts selected and click Next.
3. Click the green + icon next to New hosts…
4. In the Select new hosts pop-up that appears, select the hosts to be added, and click OK to begin joining them to the vDS.
5. Click the Configure identical network settings on multiple hosts (template mode) checkbox near the bottom of the window, and click Next.
6. Select the first host to be the template host and click Next.
7. Unselect Manage VMkernel adapters (template mode) if it is selected and click Next.
8. For each vmnic (vmnic4 and vmnic5) to be assigned from the Host/Physical Network Adapters column, select the vmnic and click Assign uplink.
9. Assign the first to Uplink 1 and assign the second to Uplink 2.
10. With both vmnics assigned, click Apply to all within the second part of this page, click OK in the Host Settings Not Applied pop-up that will appear, and click Next.
11. Proceed past the Analyze impact screen if no issues appear.
12. Review the Ready to complete summary and click Finish to add the hosts to the vDS.
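Membership can be spot-checked from the shell of any joined host; the vDS should be listed with vmnic4 and vmnic5 shown as its uplinks:
esxcli network vswitch dvs vmware list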
The Pure Storage FlashArray requires a few best practice changes for VMware ESXi. The following are the key requirements and considerations:
· Multi-pathing. By default, ESXi claims Pure Storage FlashArray devices using the Storage Array Type Plugin (SATP) of ALUA. The default Path Selection Policy (PSP) is Most Recently Used (MRU), which is not the ideal PSP for the active-active nature of the FlashArray front-end ports. The recommendation is to use the VMware Native Multipathing Plugin PSP of Round Robin. Furthermore, to promote improved balancing of I/O across physical paths and to decrease path failover time, Pure Storage recommends changing the IO Operation Limit of Round Robin for FlashArray devices from its default value of 1,000 (for a given device, NMP will switch logical paths after 1,000 I/Os) to a value of 1. The best way to set these changes is with a SATP rule. Create this rule on every ESXi host prior to provisioning any storage. This will make all FlashArray storage conform to these recommendations by default, without any per-device changes required.
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"
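After storage has been provisioned, the rule and the resulting path policy can be confirmed on each host with the standard NMP show commands; FlashArray devices should report a Path Selection Policy of VMW_PSP_RR with iops=1 in the device configuration:
esxcli storage nmp satp rule list | grep -i PURE
esxcli storage nmp device list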
· Virtual Disk Types: Pure Storage recommends thin type virtual disks for the majority of virtual machines. Thin virtual disks are the most flexible and provide benefits such as in-guest space reclamation support. For virtual machines that demand the lowest possible latency with the most consistent performance, eagerzeroedthick virtual disks should be used. The use of zeroedthick (aka “lazy” or “sparse”) is discouraged at all times.
· Virtual Machine SCSI adapter: Pure Storage recommends using the Paravirtual SCSI adapter in virtual machines to provide access to virtual disks/RDMs. The Paravirtual SCSI adapter provides the highest possible performance levels with the most efficient use of CPU during intense workloads. Virtual machines with small I/O requirements can use the default adapters if preferred.
· Volume sizing and volume count: Pure Storage has no recommendations around volume sizing or volume count. The FlashArray volumes have no artificially limited queue depth, at either the volume level or the port level. A single volume can use the entire performance of the FlashArray if needed. In the case of very large volumes, or volumes serving intense workloads, it might be necessary to increase internal queues inside of ESXi (HBA device queue, Disk.SchedNumReqOutstanding, virtual SCSI adapter queue).
· Run Space Reclamation (UNMAP) on a regular basis. This ensures the FlashArray capacity usage accurately reflects the actual usage inside of VMware. Enable the option EnableBlockDelete in ESXi version 6.0 and later to provide in-guest UNMAP capabilities.
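For example, a manual reclamation pass and the in-guest UNMAP setting can be applied from the ESXi shell with standard esxcli commands (VM-Production is the example datastore name used elsewhere in this document):
esxcli storage vmfs unmap -l VM-Production
esxcli system settings advanced set -i 1 -o /VMFS3/EnableBlockDelete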
· For iSCSI FlashArrays, disable DelayedAck and increase the Login Timeout to 30 seconds (from the default value of 5).
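The following shows the complete running configuration of Cisco Nexus switch A (b19-93180-1) used in this validation: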
version 7.0(3)I4(2)
switchname b19-93180-1
vdc b19-93180-1 id 1
limit-resource vlan minimum 16 maximum 4094
limit-resource vrf minimum 2 maximum 4096
limit-resource port-channel minimum 0 maximum 511
limit-resource u4route-mem minimum 248 maximum 248
limit-resource u6route-mem minimum 96 maximum 96
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
cfs eth distribute
feature interface-vlan
feature lacp
feature vpc
username admin password 5 $5$JXKeJeBt$AiT5ys/yITyKSslQRZJ0MX1AiaE160K89W5IwJ4r9q7 role network-admin
ip domain-lookup
system default switchport
copp profile strict
snmp-server user admin network-admin auth md5 0x6a85fb275aedd28f4481cea9cd8724e1 priv 0x6a85fb275aedd28f4481cea9cd8724e1 localizedkey
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
ntp server 192.168.164.254 use-vrf management
ntp source 10.1.164.13
ntp master 3
vlan 1-2,115,200-203
vlan 2
name Native-VLAN
vlan 115
name IB-MGMT-VLAN
vlan 200
name vMotion-VLAN
vlan 201
name VM-App1-VLAN
vlan 202
name VM-App2-VLAN
vlan 203
name VM-App3-VLAN
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
spanning-tree port type network default
vrf context management
ip route 0.0.0.0/0 192.168.164.254
port-channel load-balance src-dst l4port
vpc domain 10
peer-switch
role priority 10
peer-keepalive destination 192.168.164.14 source 192.168.164.13
delay restore 150
peer-gateway
auto-recovery
ip arp synchronize
interface Vlan1
interface Vlan115
description In-Band NTP Redistribution Interface VLAN 115
no shutdown
no ip redirects
ip address 10.1.164.13/24
no ipv6 redirects
interface port-channel11
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
spanning-tree port type network
vpc peer-link
interface port-channel151
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
spanning-tree port type edge trunk
mtu 9216
load-interval counter 3 60
vpc 151
interface port-channel152
description UCS 6332-16UP-2 FI
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
spanning-tree port type edge trunk
mtu 9216
load-interval counter 3 60
vpc 152
interface port-channel153
description Mgmt Switch
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115
spanning-tree port type network
mtu 9216
vpc 153
interface port-channel154
description Mgmt Switch
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115
spanning-tree port type network
mtu 9216
vpc 154
interface Ethernet1/1
description vPC peer-link connection to b19-93180-2 Ethernet1/1
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
channel-group 11 mode active
no shutdown
interface Ethernet1/2
description vPC peer-link connection to b19-93180-2 Ethernet1/2
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
channel-group 11 mode active
no shutdown
interface Ethernet1/3
interface Ethernet1/4
interface Ethernet1/5
interface Ethernet1/6
interface Ethernet1/7
interface Ethernet1/8
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
interface Ethernet1/13
interface Ethernet1/14
interface Ethernet1/15
interface Ethernet1/16
interface Ethernet1/17
interface Ethernet1/18
interface Ethernet1/19
interface Ethernet1/20
interface Ethernet1/21
interface Ethernet1/22
interface Ethernet1/23
interface Ethernet1/24
interface Ethernet1/25
interface Ethernet1/26
interface Ethernet1/27
interface Ethernet1/28
interface Ethernet1/29
interface Ethernet1/30
interface Ethernet1/31
interface Ethernet1/32
interface Ethernet1/33
interface Ethernet1/34
interface Ethernet1/35
interface Ethernet1/36
interface Ethernet1/37
interface Ethernet1/38
interface Ethernet1/39
interface Ethernet1/40
interface Ethernet1/41
interface Ethernet1/42
interface Ethernet1/43
interface Ethernet1/44
interface Ethernet1/45
interface Ethernet1/46
interface Ethernet1/47
interface Ethernet1/48
interface Ethernet1/49
interface Ethernet1/50
interface Ethernet1/51
description vPC 151 connection to UCS 6332-16UP-1 FI Ethernet1/33
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
mtu 9216
load-interval counter 3 60
channel-group 151 mode active
no shutdown
interface Ethernet1/52
description vPC 152 connection to UCS 6332-16UP-2 FI Ethernet1/33
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
mtu 9216
load-interval counter 3 60
channel-group 152 mode active
no shutdown
interface Ethernet1/53
description vPC 153 connection to Upstream Network Switch A
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115
mtu 9216
channel-group 153 mode active
no shutdown
interface Ethernet1/54
description vPC 154 connection to Upstream Network Switch B
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115
mtu 9216
channel-group 154 mode active
no shutdown
interface mgmt0
vrf member management
ip address 192.168.164.13/24
line console
line vty
boot nxos bootflash:/nxos.7.0.3.I4.2.bin
ip route 0.0.0.0/0 10.1.164.254
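The following shows the complete running configuration of Cisco Nexus switch B (b19-93180-2) used in this validation: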
version 7.0(3)I4(2)
switchname b19-93180-2
vdc b19-93180-2 id 1
limit-resource vlan minimum 16 maximum 4094
limit-resource vrf minimum 2 maximum 4096
limit-resource port-channel minimum 0 maximum 511
limit-resource u4route-mem minimum 248 maximum 248
limit-resource u6route-mem minimum 96 maximum 96
limit-resource m4route-mem minimum 58 maximum 58
limit-resource m6route-mem minimum 8 maximum 8
cfs eth distribute
feature interface-vlan
feature lacp
feature vpc
username admin password 5 $5$D2EmPIzj$QAlwjzc/KcandBmhkr9rkukM88F6DPxCJi02Yj2TXV8 role network-admin
ip domain-lookup
system default switchport
copp profile strict
snmp-server user admin network-admin auth md5 0xff46f80beea9e51b005db0cf74071b95 priv 0xff46f80beea9e51b005db0cf74071b95 localizedkey
rmon event 1 log trap public description FATAL(1) owner PMON@FATAL
rmon event 2 log trap public description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log trap public description ERROR(3) owner PMON@ERROR
rmon event 4 log trap public description WARNING(4) owner PMON@WARNING
rmon event 5 log trap public description INFORMATION(5) owner PMON@INFO
ntp server 192.168.164.254 use-vrf management
ntp source 10.1.164.14
ntp master 3
vlan 1-2,115,200-203
vlan 2
name Native-VLAN
vlan 115
name IB-MGMT-VLAN
vlan 200
name vMotion-VLAN
vlan 201
name VM-App1-VLAN
vlan 202
name VM-App2-VLAN
vlan 203
name VM-App3-VLAN
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
spanning-tree port type network default
vrf context management
ip route 0.0.0.0/0 192.168.164.254
port-channel load-balance src-dst l4port
vpc domain 10
peer-switch
role priority 20
peer-keepalive destination 192.168.164.13 source 192.168.164.14
delay restore 150
peer-gateway
auto-recovery
ip arp synchronize
interface Vlan1
interface Vlan115
description In-Band NTP Redistribution Interface VLAN 115
no shutdown
no ip redirects
ip address 10.1.164.14/24
no ipv6 redirects
interface port-channel11
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
spanning-tree port type network
vpc peer-link
interface port-channel151
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
spanning-tree port type edge trunk
mtu 9216
load-interval counter 3 60
vpc 151
interface port-channel152
description UCS 6332-16UP-2 FI
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
spanning-tree port type edge trunk
mtu 9216
load-interval counter 3 60
vpc 152
interface port-channel153
description Mgmt Switch
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115
spanning-tree port type network
mtu 9216
vpc 153
interface port-channel154
description Mgmt Switch
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115
spanning-tree port type network
mtu 9216
vpc 154
interface Ethernet1/1
description vPC peer-link connection to b19-93180-1 Ethernet1/1
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
channel-group 11 mode active
no shutdown
interface Ethernet1/2
description vPC peer-link connection to b19-93180-1 Ethernet1/2
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
channel-group 11 mode active
no shutdown
interface Ethernet1/3
interface Ethernet1/4
interface Ethernet1/5
interface Ethernet1/6
interface Ethernet1/7
interface Ethernet1/8
interface Ethernet1/9
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
interface Ethernet1/13
interface Ethernet1/14
interface Ethernet1/15
interface Ethernet1/16
interface Ethernet1/17
interface Ethernet1/18
interface Ethernet1/19
interface Ethernet1/20
interface Ethernet1/21
interface Ethernet1/22
interface Ethernet1/23
interface Ethernet1/24
interface Ethernet1/25
interface Ethernet1/26
interface Ethernet1/27
interface Ethernet1/28
interface Ethernet1/29
interface Ethernet1/30
interface Ethernet1/31
interface Ethernet1/32
interface Ethernet1/33
interface Ethernet1/34
interface Ethernet1/35
interface Ethernet1/36
interface Ethernet1/37
interface Ethernet1/38
interface Ethernet1/39
interface Ethernet1/40
interface Ethernet1/41
interface Ethernet1/42
interface Ethernet1/43
interface Ethernet1/44
interface Ethernet1/45
interface Ethernet1/46
interface Ethernet1/47
interface Ethernet1/48
interface Ethernet1/49
interface Ethernet1/50
interface Ethernet1/51
description vPC 151 connection to UCS 6332-16UP-1 FI Ethernet1/34
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
mtu 9216
load-interval counter 3 60
channel-group 151 mode active
no shutdown
interface Ethernet1/52
description vPC 152 connection to UCS 6332-16UP-2 FI Ethernet1/34
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115,200-203
mtu 9216
load-interval counter 3 60
channel-group 152 mode active
no shutdown
interface Ethernet1/53
description vPC 153 connection to Upstream Network Switch A
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115
mtu 9216
channel-group 153 mode active
no shutdown
interface Ethernet1/54
description vPC 154 connection to Upstream Network Switch B
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 115
mtu 9216
channel-group 154 mode active
no shutdown
interface mgmt0
vrf member management
ip address 192.168.164.14/24
line console
line vty
boot nxos bootflash:/nxos.7.0.3.I4.2.bin
ip route 0.0.0.0/0 10.1.164.254
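The following shows the complete running configuration of Cisco MDS 9148S switch A (mds-9148s-a) used in this validation: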
version 7.3(0)DY(1)
power redundancy-mode redundant
feature npiv
feature fport-channel-trunk
role name default-role
description This is a system defined role and applies to all users.
rule 5 permit show feature environment
rule 4 permit show feature hardware
rule 3 permit show feature module
rule 2 permit show feature snmp
rule 1 permit show feature system
username admin password 5 $5$VAPll/l.$7MBrRrPUWwxvuc5NSN19b2vv4N5yc09sFrofJ/6vgK3 role network-admin
ssh key rsa 2048
ip domain-lookup
ip host mds-9148s-a 192.168.164.15
aaa group server radius radius
snmp-server user admin network-admin auth md5 0x65ef5a4dec8cd0c72253a031d8595eba priv 0x65ef5a4dec8cd0c72253a031d8595eba localizedkey
rmon event 1 log description FATAL(1) owner PMON@FATAL
rmon event 2 log description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log description ERROR(3) owner PMON@ERROR
rmon event 4 log description WARNING(4) owner PMON@WARNING
rmon event 5 log description INFORMATION(5) owner PMON@INFO
snmp-server community ucspm group network-operator
ntp server 192.168.164.254
vsan database
vsan 101
device-alias database
device-alias name VM-Host-Infra-01-A pwwn 20:00:00:25:b5:01:0a:00
device-alias name VM-Host-Infra-02-A pwwn 20:00:00:25:b5:01:0a:01
device-alias name FlashArray-CT0FC0-fabricA pwwn 52:4a:93:7a:98:4c:3e:00
device-alias name FlashArray-CT0FC2-fabricA pwwn 52:4a:93:7a:98:4c:3e:02
device-alias name FlashArray-CT1FC0-fabricA pwwn 52:4a:93:7a:98:4c:3e:10
device-alias name FlashArray-CT1FC2-fabricA pwwn 52:4a:93:7a:98:4c:3e:12
device-alias commit
fcdomain fcid database
vsan 101 wwn 52:4a:93:7a:98:4c:3e:10 fcid 0xe00000 dynamic
! [FlashArray-CT1FC0-fabricA]
vsan 101 wwn 52:4a:93:7a:98:4c:3e:12 fcid 0xe00100 dynamic
! [FlashArray-CT1FC2-fabricA]
vsan 101 wwn 52:4a:93:7a:98:4c:3e:00 fcid 0xe00200 dynamic
! [FlashArray-CT0FC0-fabricA]
vsan 101 wwn 52:4a:93:7a:98:4c:3e:02 fcid 0xe00300 dynamic
! [FlashArray-CT0FC2-fabricA]
vsan 101 wwn 24:01:00:de:fb:07:c9:80 fcid 0xe00400 dynamic
vsan 101 wwn 20:00:00:25:b5:01:0a:00 fcid 0xe00401 dynamic
! [VM-Host-Infra-01-A]
vsan 101 wwn 20:00:00:25:b5:01:0a:01 fcid 0xe00406 dynamic
! [VM-Host-Infra-02-A]
!Active Zone Database Section for vsan 101
zone name VM-Host-Infra-01-A vsan 101
member pwwn 20:00:00:25:b5:01:0a:00
! [VM-Host-Infra-01-A]
member pwwn 52:4a:93:7a:98:4c:3e:00
! [FlashArray-CT0FC0-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:02
! [FlashArray-CT0FC2-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:12
! [FlashArray-CT1FC2-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:10
! [FlashArray-CT1FC0-fabricA]
zone name VM-Host-Infra-02-A vsan 101
member pwwn 20:00:00:25:b5:01:0a:01
! [VM-Host-Infra-02-A]
member pwwn 52:4a:93:7a:98:4c:3e:00
! [FlashArray-CT0FC0-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:02
! [FlashArray-CT0FC2-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:12
! [FlashArray-CT1FC2-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:10
! [FlashArray-CT1FC0-fabricA]
zoneset name flashstack-zoneset vsan 101
member VM-Host-Infra-01-A
member VM-Host-Infra-02-A
zoneset activate name flashstack-zoneset vsan 101
do clear zone database vsan 101
!Full Zone Database Section for vsan 101
zone name VM-Host-Infra-01-A vsan 101
member pwwn 20:00:00:25:b5:01:0a:00
! [VM-Host-Infra-01-A]
member pwwn 52:4a:93:7a:98:4c:3e:00
! [FlashArray-CT0FC0-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:02
! [FlashArray-CT0FC2-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:12
! [FlashArray-CT1FC2-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:10
! [FlashArray-CT1FC0-fabricA]
zone name VM-Host-Infra-02-A vsan 101
member pwwn 20:00:00:25:b5:01:0a:01
! [VM-Host-Infra-02-A]
member pwwn 52:4a:93:7a:98:4c:3e:00
! [FlashArray-CT0FC0-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:02
! [FlashArray-CT0FC2-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:12
! [FlashArray-CT1FC2-fabricA]
member pwwn 52:4a:93:7a:98:4c:3e:10
! [FlashArray-CT1FC0-fabricA]
zoneset name flashstack-zoneset vsan 101
member VM-Host-Infra-01-A
member VM-Host-Infra-02-A
interface mgmt0
ip address 192.168.164.15 255.255.255.0
interface port-channel1
channel mode active
switchport rate-mode dedicated
vsan database
vsan 101 interface port-channel1
vsan 101 interface fc1/1
vsan 101 interface fc1/2
vsan 101 interface fc1/3
vsan 101 interface fc1/4
clock timezone EST -5 0
switchname mds-9148s-a
line console
line vty
boot kickstart bootflash:/m9100-s5ek9-kickstart-mz.7.3.0.DY.1.bin
boot system bootflash:/m9100-s5ek9-mz.7.3.0.DY.1.bin
interface fc1/5
interface fc1/6
interface fc1/7
interface fc1/8
interface fc1/1
interface fc1/2
interface fc1/3
interface fc1/4
interface fc1/9
interface fc1/10
interface fc1/11
interface fc1/12
interface fc1/13
interface fc1/14
interface fc1/15
interface fc1/16
interface fc1/17
interface fc1/18
interface fc1/19
interface fc1/20
interface fc1/21
interface fc1/22
interface fc1/23
interface fc1/24
interface fc1/25
interface fc1/26
interface fc1/27
interface fc1/28
interface fc1/29
interface fc1/30
interface fc1/31
interface fc1/32
interface fc1/33
interface fc1/34
interface fc1/35
interface fc1/36
interface fc1/37
interface fc1/38
interface fc1/39
interface fc1/40
interface fc1/41
interface fc1/42
interface fc1/43
interface fc1/44
interface fc1/45
interface fc1/46
interface fc1/47
interface fc1/48
interface fc1/5
interface fc1/6
interface fc1/7
interface fc1/8
interface fc1/1
switchport description FlashArray-CT0FC0
port-license acquire
no shutdown
interface fc1/2
switchport description FlashArray-CT0FC2
port-license acquire
no shutdown
interface fc1/3
switchport description FlashArray-CT1FC0
port-license acquire
no shutdown
interface fc1/4
switchport description FlashArray-CT1FC2
port-license acquire
no shutdown
interface fc1/5
switchport description UCS 6332-16UP-A Port 1
port-license acquire
channel-group 1 force
no shutdown
interface fc1/6
switchport description UCS 6332-16UP-A Port 2
port-license acquire
channel-group 1 force
no shutdown
interface fc1/7
switchport description UCS 6332-16UP-A Port 3
port-license acquire
channel-group 1 force
no shutdown
interface fc1/8
switchport description UCS 6332-16UP-A Port 4
port-license acquire
channel-group 1 force
no shutdown
interface fc1/9
port-license acquire
interface fc1/10
port-license acquire
interface fc1/11
port-license acquire
interface fc1/12
port-license acquire
interface fc1/13
interface fc1/14
interface fc1/15
interface fc1/16
interface fc1/17
interface fc1/18
interface fc1/19
interface fc1/20
interface fc1/21
interface fc1/22
interface fc1/23
interface fc1/24
interface fc1/25
interface fc1/26
interface fc1/27
interface fc1/28
interface fc1/29
interface fc1/30
interface fc1/31
interface fc1/32
interface fc1/33
interface fc1/34
interface fc1/35
interface fc1/36
interface fc1/37
interface fc1/38
interface fc1/39
interface fc1/40
interface fc1/41
interface fc1/42
interface fc1/43
interface fc1/44
interface fc1/45
interface fc1/46
interface fc1/47
interface fc1/48
ip default-gateway 192.168.164.254
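The following shows the complete running configuration of Cisco MDS 9148S switch B (mds-9148s-b) used in this validation: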
version 7.3(0)DY(1)
power redundancy-mode redundant
feature npiv
feature fport-channel-trunk
role name default-role
description This is a system defined role and applies to all users.
rule 5 permit show feature environment
rule 4 permit show feature hardware
rule 3 permit show feature module
rule 2 permit show feature snmp
rule 1 permit show feature system
username admin password 5 $5$6BSGwPG5$SpgWoDQBQdZtf9UdnPo9fl93pRBjMBXgxIssnoJe.o9 role network-admin
ssh key rsa 2048
ip domain-lookup
ip host mds-9148s-b 192.168.164.16
aaa group server radius radius
snmp-server user admin network-admin auth md5 0x60988f2817fde405d4ec90d7cee74f1c priv 0x60988f2817fde405d4ec90d7cee74f1c localizedkey
rmon event 1 log description FATAL(1) owner PMON@FATAL
rmon event 2 log description CRITICAL(2) owner PMON@CRITICAL
rmon event 3 log description ERROR(3) owner PMON@ERROR
rmon event 4 log description WARNING(4) owner PMON@WARNING
rmon event 5 log description INFORMATION(5) owner PMON@INFO
snmp-server community ucspm group network-operator
ntp server 192.168.164.254
vsan database
vsan 102
device-alias database
device-alias name VM-Host-Harness-01 pwwn 20:00:00:25:b5:01:0b:02
device-alias name VM-Host-Harness-02 pwwn 20:00:00:25:b5:01:0b:03
device-alias name VM-Host-Harness-03 pwwn 20:00:00:25:b5:01:0b:04
device-alias name VM-Host-Harness-04 pwwn 20:00:00:25:b5:01:0b:05
device-alias name VM-Host-Harness-05 pwwn 20:00:00:25:b5:01:0b:06
device-alias name VM-Host-Harness-06 pwwn 20:00:00:25:b5:01:0b:07
device-alias name VM-Host-Infra-01-B pwwn 20:00:00:25:b5:01:0b:00
device-alias name VM-Host-Infra-02-B pwwn 20:00:00:25:b5:01:0b:01
device-alias name FlashArray-CT0FC1-fabricB pwwn 52:4a:93:7a:98:4c:3e:01
device-alias name FlashArray-CT0FC3-fabricB pwwn 52:4a:93:7a:98:4c:3e:03
device-alias name FlashArray-CT1FC1-fabricB pwwn 52:4a:93:7a:98:4c:3e:11
device-alias name FlashArray-CT1FC3-fabricB pwwn 52:4a:93:7a:98:4c:3e:13
device-alias commit
fcdomain fcid database
vsan 102 wwn 52:4a:93:7a:98:4c:3e:11 fcid 0x640000 dynamic
! [FlashArray-CT1FC1-fabricB]
vsan 102 wwn 52:4a:93:7a:98:4c:3e:13 fcid 0x640100 dynamic
! [FlashArray-CT1FC3-fabricB]
vsan 102 wwn 52:4a:93:7a:98:4c:3e:01 fcid 0x640200 dynamic
! [FlashArray-CT0FC1-fabricB]
vsan 102 wwn 52:4a:93:7a:98:4c:3e:03 fcid 0x640300 dynamic
! [FlashArray-CT0FC3-fabricB]
vsan 102 wwn 24:02:00:de:fb:25:f1:00 fcid 0x640400 dynamic
vsan 102 wwn 20:00:00:25:b5:01:0b:00 fcid 0x640401 dynamic
! [VM-Host-Infra-01-B]
vsan 102 wwn 20:00:00:25:b5:01:0b:02 fcid 0x640402 dynamic
! [VM-Host-Harness-01]
vsan 102 wwn 20:00:00:25:b5:01:0b:03 fcid 0x640403 dynamic
! [VM-Host-Harness-02]
vsan 102 wwn 20:00:00:25:b5:01:0b:04 fcid 0x640404 dynamic
! [VM-Host-Harness-03]
vsan 102 wwn 20:00:00:25:b5:01:0b:06 fcid 0x640405 dynamic
! [VM-Host-Harness-05]
vsan 102 wwn 20:00:00:25:b5:01:0b:05 fcid 0x640406 dynamic
! [VM-Host-Harness-04]
vsan 102 wwn 20:00:00:25:b5:01:0b:01 fcid 0x640407 dynamic
! [VM-Host-Infra-02-B]
vsan 102 wwn 20:00:00:25:b5:01:0b:07 fcid 0x640408 dynamic
! [VM-Host-Harness-06]
!Active Zone Database Section for vsan 102
zone name VM-Host-Infra-01-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:00
! [VM-Host-Infra-01-B]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
zone name VM-Host-Infra-02-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:01
! [VM-Host-Infra-02-B]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
zone name VM-Host-Harness-01-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:02
! [VM-Host-Harness-01]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-02-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:03
! [VM-Host-Harness-02]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-03-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:04
! [VM-Host-Harness-03]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-04-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:05
! [VM-Host-Harness-04]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-05-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:06
! [VM-Host-Harness-05]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-06-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:07
! [VM-Host-Harness-06]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zoneset name flashstack-zoneset vsan 102
member VM-Host-Infra-01-B
member VM-Host-Infra-02-B
member VM-Host-Harness-01-B
member VM-Host-Harness-02-B
member VM-Host-Harness-03-B
member VM-Host-Harness-04-B
member VM-Host-Harness-05-B
member VM-Host-Harness-06-B
zoneset activate name flashstack-zoneset vsan 102
do clear zone database vsan 102
!Full Zone Database Section for vsan 102
zone name VM-Host-Infra-01-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:00
! [VM-Host-Infra-01-B]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
zone name VM-Host-Infra-02-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:01
! [VM-Host-Infra-02-B]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
zone name VM-Host-Harness-01-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:02
! [VM-Host-Harness-01]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-02-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:03
! [VM-Host-Harness-02]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-03-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:04
! [VM-Host-Harness-03]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-04-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:05
! [VM-Host-Harness-04]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-05-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:06
! [VM-Host-Harness-05]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zone name VM-Host-Harness-06-B vsan 102
member pwwn 20:00:00:25:b5:01:0b:07
! [VM-Host-Harness-06]
member pwwn 52:4a:93:7a:98:4c:3e:01
! [FlashArray-CT0FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:03
! [FlashArray-CT0FC3-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:11
! [FlashArray-CT1FC1-fabricB]
member pwwn 52:4a:93:7a:98:4c:3e:13
! [FlashArray-CT1FC3-fabricB]
zoneset name flashstack-zoneset vsan 102
member VM-Host-Infra-01-B
member VM-Host-Infra-02-B
member VM-Host-Harness-01-B
member VM-Host-Harness-02-B
member VM-Host-Harness-03-B
member VM-Host-Harness-04-B
member VM-Host-Harness-05-B
member VM-Host-Harness-06-B
interface mgmt0
ip address 192.168.164.16 255.255.255.0
interface port-channel2
channel mode active
switchport rate-mode dedicated
vsan database
vsan 102 interface port-channel2
vsan 102 interface fc1/1
vsan 102 interface fc1/2
vsan 102 interface fc1/3
vsan 102 interface fc1/4
clock timezone EST -5 0
switchname mds-9148s-b
line console
line vty
boot kickstart bootflash:/m9100-s5ek9-kickstart-mz.7.3.0.DY.1.bin
boot system bootflash:/m9100-s5ek9-mz.7.3.0.DY.1.bin
interface fc1/5
interface fc1/6
interface fc1/7
interface fc1/8
interface fc1/1
interface fc1/2
interface fc1/3
interface fc1/4
interface fc1/9
interface fc1/10
interface fc1/11
interface fc1/12
interface fc1/13
interface fc1/14
interface fc1/15
interface fc1/16
interface fc1/17
interface fc1/18
interface fc1/19
interface fc1/20
interface fc1/21
interface fc1/22
interface fc1/23
interface fc1/24
interface fc1/25
interface fc1/26
interface fc1/27
interface fc1/28
interface fc1/29
interface fc1/30
interface fc1/31
interface fc1/32
interface fc1/33
interface fc1/34
interface fc1/35
interface fc1/36
interface fc1/37
interface fc1/38
interface fc1/39
interface fc1/40
interface fc1/41
interface fc1/42
interface fc1/43
interface fc1/44
interface fc1/45
interface fc1/46
interface fc1/47
interface fc1/48
interface fc1/5
interface fc1/6
interface fc1/7
interface fc1/8
interface fc1/1
switchport description FlashArray-CT0FC1
port-license acquire
no shutdown
interface fc1/2
switchport description FlashArray-CT0FC3
port-license acquire
no shutdown
interface fc1/3
switchport description FlashArray-CT1FC1
port-license acquire
no shutdown
interface fc1/4
switchport description FlashArray-CT1FC3
port-license acquire
no shutdown
interface fc1/5
switchport description UCS 6332-16UP-B Port 1
port-license acquire
channel-group 2 force
no shutdown
interface fc1/6
switchport description UCS 6332-16UP-B Port 2
port-license acquire
channel-group 2 force
no shutdown
interface fc1/7
switchport description UCS 6332-16UP-B Port 3
port-license acquire
channel-group 2 force
no shutdown
interface fc1/8
switchport description UCS 6332-16UP-B Port 4
port-license acquire
channel-group 2 force
no shutdown
interface fc1/9
port-license acquire
interface fc1/10
port-license acquire
interface fc1/11
port-license acquire
interface fc1/12
port-license acquire
interface fc1/13
interface fc1/14
interface fc1/15
interface fc1/16
interface fc1/17
interface fc1/18
interface fc1/19
interface fc1/20
interface fc1/21
interface fc1/22
interface fc1/23
interface fc1/24
interface fc1/25
interface fc1/26
interface fc1/27
interface fc1/28
interface fc1/29
interface fc1/30
interface fc1/31
interface fc1/32
interface fc1/33
interface fc1/34
interface fc1/35
interface fc1/36
interface fc1/37
interface fc1/38
interface fc1/39
interface fc1/40
interface fc1/41
interface fc1/42
interface fc1/43
interface fc1/44
interface fc1/45
interface fc1/46
interface fc1/47
interface fc1/48
ip default-gateway 192.168.164.254
Ramesh Isaac, Technical Marketing Engineer, Cisco Systems, Inc.
Ramesh Isaac is a Technical Marketing Engineer in the Cisco UCS Data Center Solutions Group. Ramesh has worked in data center and mixed-use lab settings since 1995. He started in information technology supporting UNIX environments and focused on designing and implementing multi-tenant virtualization solutions in Cisco labs before entering Technical Marketing. Ramesh holds certifications from Cisco, VMware, and Red Hat.
Cody Hosterman, Technical Director for Virtualization Ecosystem Integration at Pure Storage
Cody Hosterman focuses on the core VMware vSphere virtualization platform, VMware cloud and management applications and 3rd party products. He has a deep background in virtualization and storage technologies, including experience as a Solutions Engineer and Principal Virtualization Technologist. In his current position, he is responsible for VMware integration strategy, best practices, and developing new integrations and documentation. Cody has over 7 years of experience in virtualization and storage in various technical capacities. He is a VMware vExpert, and holds a bachelor’s degree from Pennsylvania State University in Information Sciences and Technology.
For their support and contribution to the design, validation, and creation of this Cisco Validated Design, the authors would like to thank:
· John George, Cisco Systems, Inc.
· Archana Sharma, Cisco Systems, Inc.