Understanding DMM SAN Topologies

Cisco MDS DMM is designed to support a variety of SAN topologies. The SAN topology influences the location of the MSM-18/4 module or MDS 9222i switch and the DMM feature configuration. The following sections describe common SAN topologies and their implications for DMM:

Overview

Cisco DMM supports homogeneous SANs (all Cisco MDS switches), as well as heterogeneous SANs (a mixture of MDS switches and other vendor switches). In a heterogeneous SAN, you must connect the existing and new storage to Cisco MDS switches.

In both homogeneous and heterogeneous SANs, Cisco MDS DMM supports dual-fabric and single-fabric SAN topologies. Dual-fabric and single-fabric topologies both support single-path and multipath configurations.

In a single-path configuration, a migration job includes only one path (represented as an initiator/target port pair). In a multipath configuration, a migration job must include all paths (represented as one initiator/target port pair per path).
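This pairing rule can be pictured as a small data model. The following Python sketch is illustrative only (the Path class and the job lists are not part of the DMM interface); it shows that a single-path job carries exactly one initiator/target pair, while a multipath job carries one pair per path:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Path:
        """One path: a server HBA (initiator) port paired with a storage (target) port."""
        initiator: str  # server HBA port, for example H1
        target: str     # existing storage port, for example ES1

    # Single-path configuration: the job holds exactly one pair.
    single_path_job = [Path("H1", "ES1")]

    # Multipath configuration: the job must hold every path, one pair per path.
    multipath_job = [Path("H1", "ES1"), Path("H2", "ES2")]

    assert len(single_path_job) == 1
    assert len(multipath_job) == 2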

FC-Redirect

When a data migration job is in progress, all traffic (in both directions) sent between the server HBA port and the existing storage is intercepted and forwarded to the MSM-18/4 module or the MDS 9222i switch, using the FC-Redirect capability.

FC-Redirect requirements for the SAN topology configuration include the following:

  • The existing storage must be connected to a switch with FC-Redirect capability. FC-Redirect capability is available on MDS 9500 Series switches and the MDS 9222i switch running Cisco SAN-OS Release 3.2(2c) or later, or Cisco NX-OS Release 4.x or later.
  • FC-Redirect capability is introduced on the Cisco MDS 9710 switch in Cisco NX-OS Release 6.2(3) and later.
  • FC-Redirect capability is introduced on the Cisco MDS 9250i switch in Cisco NX-OS Release 6.2(5) and later.
  • Server HBA ports may be connected to a switch with or without FC-Redirect capability.
  • The switches with FC-Redirect must be running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later release.
  • The server HBA port and the existing storage port must be zoned together. The default-zone policy must be configured as deny. (A conceptual prerequisite check appears after this list.)
  • The MSM-18/4 module or the MDS 9222i switch can be located anywhere in the fabric, as long as the FCNS database in the MSM-18/4 module or the MDS 9222i switch has the required information about the server HBA ports and the existing storage ports. The MSM-18/4 module or the MDS 9222i switch must be running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later release.
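A quick way to reason about these prerequisites is as a checklist. The following Python sketch is illustrative only; the function and its flags are hypothetical and do not correspond to a DMM or NX-OS interface. It reports any unmet FC-Redirect prerequisite for one initiator/target pair:

    def fc_redirect_ready(storage_switch_has_fc_redirect, ports_zoned_together,
                          default_zone_policy):
        """Return a list of unmet FC-Redirect prerequisites (empty if ready)."""
        problems = []
        if not storage_switch_has_fc_redirect:
            problems.append("existing storage is not on an FC-Redirect capable switch")
        if not ports_zoned_together:
            problems.append("server HBA port and existing storage port are not zoned together")
        if default_zone_policy != "deny":
            problems.append("default-zone policy is not deny")
        return problems

    # Example: zoning is correct but the default-zone policy is still permit.
    print(fc_redirect_ready(True, True, "permit"))
    # ['default-zone policy is not deny']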

The following examples show the server-to-storage packet flow when a data migration job is in progress. For clarity, the examples show the MSM-18/4 module or MDS 9222i switch and the existing storage connected to separate switches. The recommended practice is to connect the existing storage to the same switch as the MSM-18/4 module or MDS 9222i switch.

The server HBA port (Figure 3-1) is connected to switch A, and the existing storage is connected to switch C. Both switches have FC-Redirect capability. The MSM-18/4 module or MDS 9222i switch is installed on switch B. All three switches are running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later.

Figure 3-1 Host Connected to FC-Redirect Switch

 

When the data migration job is started, FC-Redirect is configured on switch A to divert the server traffic to the MSM-18/4 module or MDS 9222i switch. FC-Redirect is configured on switch C to redirect the storage traffic to the MSM-18/4 module or MDS 9222i switch.

The server HBA port (Figure 3-2) is connected to switch A, which either does not have FC-Redirect capability or is not running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later. The existing storage is connected to switch C, which has FC-Redirect capability. The MSM-18/4 module or MDS 9222i switch is installed on switch B. Switches B and C are running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later.

When the data migration job is started, FC-Redirect is configured on switch C to redirect the server and storage traffic to the module. This configuration introduces additional network latency and consumes additional bandwidth, because traffic from the server travels extra network hops (A to C, then C to B, then B to C). The recommended configuration (placing the MSM-18/4 module or MDS 9222i switch in switch C) avoids the increase in network latency and bandwidth consumption.
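The cost of this placement can be quantified by counting ISL hops per server frame. A minimal Python sketch, using the switch names from Figure 3-2:

    # Path of a server frame when the module is on switch B, the host on
    # switch A, and FC-Redirect is available only on switch C: A -> C -> B -> C.
    redirect_path = ["A", "C", "B", "C"]

    # Recommended placement: the module in switch C, next to the storage: A -> C.
    recommended_path = ["A", "C"]

    def hops(path):
        return len(path) - 1

    print(hops(redirect_path), hops(recommended_path))  # 3 1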

Figure 3-2 Host Not Connected to FC-Redirect Switch

 

DMM Topology Guidelines

When determining the provisioning and configuration requirements for DMM, note the following guidelines related to a SAN topology:

  • The existing and new storage must be connected to MDS switches.
  • Switches connected to the storage ports must be running MDS SAN-OS 3.2(1) or NX-OS 4.1(1b) or later release.
  • The MSM-18/4 module is supported on MDS 9500 Series switches and MDS 9200 Series switches. The switch must be running MDS SAN-OS 3.2(1) or NX-OS 4.1(1b) or a later release. The MDS 9250i switch must be running NX-OS Release 6.2(5) or a later release.
  • DMM requires a minimum of one MSM-18/4 module or MDS 9222i switch in each fabric.
  • DMM does not support migration of logical volumes. For example, if the existing storage is a logical volume with three physical LUNs, DMM treats this as three LUN-to-LUN migration sessions.
  • If you plan to deploy DMM and FCIP write acceleration together, there are restrictions in the supported topologies. Contact Cisco for assistance with designing the DMM topology.
  • The minimum supported release for the MSM-18/4 module is NX-OS Release 4.1(1b).
  • DMM is supported on NPV, NPIV, and TFPC.

Note In a storage-based migration, if a new server port tries to access the storage ports once the migration has started, storage can become corrupted.


Homogeneous SANs

A homogeneous SAN contains only Cisco MDS switches. Most topologies fit the following categories:

  • Core-Edge—Hosts at the edge of the network, and storage at the core.
  • Edge-Core—Hosts and storage at the edge of the network, and ISLs between the core switches.
  • Edge-Core-Edge—Hosts and storage connected to opposite edges of the network and core switches with ISLs.

For all of the above categories, we recommend that you locate the MSM-18/4 module or MDS 9222i switch in the switch closest to the storage devices. Following this recommendation ensures that DMM introduces no additional network traffic during data migrations.

Figure 3-3 shows a common SAN topology, with servers at the edge of the network and storage arrays in the core.

Figure 3-3 Homogeneous SAN Topology

 

In a homogeneous network, you can locate the MSM-18/4 module, MDS 9222i switch, or MDS 9250i switch on any DMM-enabled MDS switch in the fabric. It is recommended that the MSM-18/4 module or MDS 9222i switch be installed in the switch connected to the existing storage. The new storage should be connected to the same switch as the existing storage. If the MSM-18/4 module, MDS 9222i switch, or MDS 9250i switch is on a different switch from the storage, additional ISL traffic crosses the network during the migration, because all traffic between storage and server is routed through the MSM-18/4 module, MDS 9222i switch, or MDS 9250i switch.

Heterogeneous SANs

When planning Cisco MDS DMM data migration for a heterogeneous SAN, note the following guidelines:

  • The existing and new storage devices for the migration must be connected to MDS switches.
  • The path from the MSM-18/4 module, MDS 9222i, or the MDS 9250i switch to the storage-connected switch must be through a Cisco fabric.

Depending on the topology, you may need to make configuration changes prior to data migration.

Ports in a Server-Based Job

This section provides guidelines for configuring server-based migration jobs.

When creating a server-based migration job, you must include all possible paths from the host to the LUNs being migrated. All writes to a migrated LUN need to be mirrored in the new storage until the job is destroyed, so that no data writes are lost. Therefore, all active ports on the existing storage that expose the same set of LUNs to the server must be added to a single data migration job.

In a multipath configuration, two or more active storage ports expose the same set of LUNs to two HBA ports on the server (one initiator/target port pair for each path). Multipath configurations are supported in dual-fabric topologies (one path through each fabric) and in single-fabric topologies (both paths through the single fabric).

In a single-path configuration, only one active storage port exposes the LUN set to the server. The migration job includes one initiator/target port pair (DMM does not support multiple servers accessing the same LUN set).
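These rules can be expressed as a simple validation: group the active storage ports by the LUN set they expose, and require each job to cover a whole group. The following Python sketch is illustrative; the port-to-LUN map is sample data, not a DMM API:

    # Map each active existing-storage port to the set of LUNs it exposes
    # (sample data: ES1 and ES2 are an active-active multipath pair).
    port_luns = {
        "ES1": frozenset({1, 2, 3}),
        "ES2": frozenset({1, 2, 3}),
    }

    def validate_job(job_ports):
        """A job must include every active port exposing the same LUN set."""
        for port in job_ports:
            lun_set = port_luns[port]
            required = {p for p, luns in port_luns.items() if luns == lun_set}
            missing = required - job_ports
            if missing:
                raise ValueError(f"job must also include ports {sorted(missing)}")

    validate_job({"ES1", "ES2"})   # passes: both paths to the LUN set are covered
    try:
        validate_job({"ES1"})      # fails: ES2 exposes the same LUNs
    except ValueError as err:
        print(err)                 # job must also include ports ['ES2']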

The following sections describe how to apply the rules to various configurations:

Single LUN Set, Active-Active Array

The server accesses three LUNs over Fabric 1 (Figure 3-4) using storage port ES1. The server accesses the same LUNs over Fabric 2 using storage port ES2.

Both storage ports (ES1 and ES2) must be included in the same data migration job, as both ports are active and expose the same LUN set.

Figure 3-4 Single LUN Set, Active-Active Array

 

You create a data migration job with the following configuration:

 

Server Port    Existing Storage Port    New Storage Port
H1             ES1                      NS1
H2             ES2                      NS2


Note If the example in Figure 3-4 showed multipathing over a single-fabric SAN, there would be no difference in the data migration job configuration.


Multiple LUN Set, Active-Active Arrays

The server accesses three LUNs over Fabric 1 (see Figure 3-5), using storage port ES1. The server accesses the same LUNs over Fabric 2 using storage port ES2. The server accesses three different LUNs over Fabric 1 using storage port ES3, and accesses the same LUNs over Fabric 2 using storage port ES4.

Figure 3-5 Multiple LUN Set, Active-Active Arrays

 

You need to create two data migration jobs, because the server has access to two LUN sets on two different sets of storage ports. Each data migration job includes two storage ports, because they are active-active multipathing ports.

One migration job has the following configuration:

 

Server Port    Existing Storage Port    New Storage Port
H1             ES1                      NS1
H2             ES2                      NS2

This job includes three data migration sessions (for LUNs 1, 2, and 3).

The other migration job has the following configuration:

 

Server Port    Existing Storage Port    New Storage Port
H1             ES3                      NS3
H2             ES4                      NS4

This job includes three data migration sessions (for LUNs 7, 8, and 9).
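The grouping behind these two jobs can be sketched in a few lines of Python. The mapping of ports to LUN sets below is sample data from Figure 3-5, not a DMM interface:

    # Active LUN sets exposed by each existing storage port in Figure 3-5
    # (sample data matching the example in this section).
    port_luns = {
        "ES1": frozenset({1, 2, 3}),
        "ES2": frozenset({1, 2, 3}),
        "ES3": frozenset({7, 8, 9}),
        "ES4": frozenset({7, 8, 9}),
    }

    # One job per distinct LUN set; every port exposing that set joins the job.
    jobs = {}
    for port, luns in sorted(port_luns.items()):
        jobs.setdefault(luns, []).append(port)

    for luns, ports in jobs.items():
        print(f"job on ports {ports} migrates LUNs {sorted(luns)} "
              f"in {len(luns)} sessions")
    # job on ports ['ES1', 'ES2'] migrates LUNs [1, 2, 3] in 3 sessions
    # job on ports ['ES3', 'ES4'] migrates LUNs [7, 8, 9] in 3 sessions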

Single LUN Set, Active-Passive Array

In an active-passive array, the LUNs exposed by a storage port may be active or passive.

Example 1: Each controller has two active ports

The server accesses a single LUN set. (See Figure 3-6.) However, not all LUNs are active on a single storage port. The active-passive array in this example has two controllers, each with two ports. LUN 0 and LUN 1 are active on ES1 and ES2. LUN 2 and LUN 3 are active on ES3 and ES4.

Logically, the server sees two active LUN sets that are accessed from two different storage ports. Each storage port is paired for multipathing.

Figure 3-6 Example 1: Single LUN Set, Active-Passive Array

 

The server accesses LUN 0 and LUN 1 over Fabric 1 using storage port ES1. The server accesses the same LUNs over Fabric 2 using storage port ES2. The server accesses LUN 2 and LUN 3 over Fabric 1 using storage port ES3, and accesses the same LUNs over Fabric 2 using storage port ES4.

You need to create two data migration jobs, because the server has access to two LUN sets over two different pairs of storage ports. Each data migration job includes two storage ports, because both ports in the pair access the active LUNs on the storage.

Only the active LUNs and associated storage ports are included in each job (LUNs 0 and 1 in one job, and LUNs 2 and 3 in the other job).


Note You can use the Server Lunmap Discovery (SLD) tool to see the LUNs that are active on each port of an active-passive array.



Note In Cisco DMM, if a data migration job is configured for an active-passive array, only the paths on the active controller of the storage are included as part of the job. As a result, if a LUN trespass has occurred due to a controller failover, the host I/Os on the new path to the storage are not captured by DMM and are not applied to the new storage. If a LUN trespass or controller failover occurs during migration, destroy the job and recreate it to perform the migration again, ensuring that the old and new storage are synchronized.


One migration job has the following configuration:

 

Server Port    Existing Storage Port    New Storage Port
H1             ES1                      NS1
H2             ES2                      NS2

This job includes two data migration sessions (for LUNs 0 and 1).

The other migration job has the following configuration:

 

Server Port    Existing Storage Port    New Storage Port
H1             ES3                      NS3
H2             ES4                      NS4

This job includes two data migration sessions (for LUNs 2 and 3).
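The rule in Example 1, grouping LUNs by the port pair on which they are active, can be sketched as follows. This is illustrative Python whose data mirrors Figure 3-6, not a DMM interface:

    # Active ports per LUN for the active-passive array in Figure 3-6
    # (sample data: ES1/ES2 sit on one controller, ES3/ES4 on the other).
    active_ports_by_lun = {
        0: ("ES1", "ES2"),
        1: ("ES1", "ES2"),
        2: ("ES3", "ES4"),
        3: ("ES3", "ES4"),
    }

    # Group LUNs by the port pair they are active on: one job per group.
    jobs = {}
    for lun, ports in active_ports_by_lun.items():
        jobs.setdefault(ports, []).append(lun)

    for ports, luns in jobs.items():
        print(f"job on ports {ports}: LUNs {luns}")
    # job on ports ('ES1', 'ES2'): LUNs [0, 1]
    # job on ports ('ES3', 'ES4'): LUNs [2, 3]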

Example 2: Each controller has only one active port

The server accesses a single LUN set. (See Figure 3-7.) However, not all LUNs are active on a single storage port. The active-passive array in this example has two controllers, each with a single port. LUN 0 and LUN 1 are active on ES1. LUN 2 and LUN 3 are active on ES2.

Logically, the server sees two active LUN sets that are accessed from different storage ports.

Figure 3-7 Example 2: Single LUN Set, Active-Passive Array

 

The server accesses LUN 0 and LUN 1 over Fabric 1 using storage port ES1. The server accesses LUN 2 and LUN 3 over Fabric 2 using storage port ES2.

You need to create two data migration jobs, because the server has access to two LUN sets over two different storage ports. Each of the data migration jobs includes the ports from a single fabric.

One migration job has the following configuration:

Server Port    Existing Storage Port    New Storage Port
H1             ES1                      NS1

The other migration job has the following configuration:

Server Port    Existing Storage Port    New Storage Port
H2             ES2                      NS2
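Taken together, the two single-path jobs in Example 2 can be summarized as follows. This is an illustrative Python sketch with names taken from Figure 3-7, not a DMM interface:

    # Example 2 (Figure 3-7): one active port per controller, so each job is
    # a single-path job whose ports all sit in one fabric (sample names).
    jobs = [
        {"fabric": 1, "server": "H1", "existing": "ES1", "new": "NS1", "luns": [0, 1]},
        {"fabric": 2, "server": "H2", "existing": "ES2", "new": "NS2", "luns": [2, 3]},
    ]
    for job in jobs:
        print(f"Fabric {job['fabric']}: {job['server']} -> {job['existing']} "
              f"migrates LUNs {job['luns']} to {job['new']}")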