Cisco MDS 9000 Family IP storage (IPS) services extend the reach of Fibre Channel SANs by using open-standard, IP-based technology. The switch can connect separated SAN islands using Fibre Channel over IP (FCIP).
Note FCIP is supported on the MDS 9222i switch, MSM-18/4 module, 16-Port Storage Services Node (SSN-16), and IPS modules on MDS 9200 Series directors and MDS 9250i Multiservice Fabric Switch.
This section briefly describes the new and updated features for releases starting from Cisco MDS NX-OS Release 6.2(13).
Configuring FCIP Tunnels for Maximum Performance on a Cisco MDS 9250i Switch: This feature enables users to achieve maximum FCIP performance in 10-Gbps and 1-Gbps modes on a Cisco MDS 9250i Multiservice Fabric Switch.
The Fibre Channel over IP protocol (FCIP) is a tunneling protocol that transparently connects geographically distributed Fibre Channel storage area networks (SAN islands) over IP local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs) (see Figure 2-1).
Figure 2-1 Fibre Channel SANs Connected by FCIP
FCIP uses TCP as its underlying transport protocol. The DF (do not fragment) bit is set in the IP header.
Note For more information about FCIP protocols, refer to the IETF standards for IP storage at http://www.ietf.org. Also refer to Fibre Channel standards for switch backbone connection at http://www.t11.org (see FC-BB-2).
This section includes the following topics:
To configure IPS modules or MSM-18/4 modules for FCIP, you should have a basic understanding of the following concepts:
Figure 2-2 shows the internal model of FCIP in relation to Fibre Channel Inter-Switch Links (ISLs) and Cisco’s extended ISLs (EISLs).
FCIP virtual E (VE) ports operate exactly like standard Fibre Channel E ports, except that the transport in this case is FCIP instead of Fibre Channel. The only requirement is for the other end of the VE port to be another VE port.
A virtual ISL is established over an FCIP link and transports Fibre Channel traffic. Each associated virtual ISL looks like a Fibre Channel ISL with either an E port or a TE port at each end (see Figure 2-2).
Figure 2-2 FCIP Links and Virtual ISLs
See the “Configuring E Ports” section for more information.
FCIP links consist of one or more TCP connections between two FCIP link endpoints. Each link carries encapsulated Fibre Channel frames.
When the FCIP link comes up, the VE ports at both ends of the FCIP link create a virtual Fibre Channel (E)ISL and initiate the E port protocol to bring up the (E)ISL.
By default, the FCIP feature on any Cisco MDS 9000 Family switch creates two TCP connections for each FCIP link:
To enable FCIP on the IPS module or MSM-18/4 module, an FCIP profile and FCIP interface (interface FCIP) must be configured.
When the FCIP link is established between two peers, the VE port initialization operation is identical to that of a normal E port. This operation is independent of whether the link is FCIP or pure Fibre Channel, and is based on the E port discovery process (ELP, ESC).
Once the FCIP link is established, the VE port operation is identical to E port operation for all inter-switch communication (including domain management, zones, and VSANs). At the Fibre Channel layer, all VE and E port operations are identical.
The FCIP profile contains information about the local IP address and TCP parameters. The profile defines the following information:
The FCIP profile’s local IP address determines the Gigabit Ethernet port where the FCIP links terminate (see Figure 2-3).
Figure 2-3 FCIP Profile and FCIP Links
The FCIP interface is the local endpoint of the FCIP link and a VE port interface. All the FCIP and E port parameters are configured in context to the FCIP interface.
The following high-availability solutions are available for FCIP configurations:
Figure 2-4 provides an example of a PortChannel-based load-balancing configuration. To perform this configuration, you need two IP addresses on each SAN island. This solution addresses link failures.
Figure 2-4 PortChannel-Based Load Balancing
The following characteristics set Fibre Channel PortChannel solutions apart from other solutions:
Figure 2-5 displays an FSPF-based load-balancing configuration example. This configuration requires two IP addresses on each SAN island, and addresses both IP and FCIP link failures.
Figure 2-5 FSPF-Based Load Balancing
The following characteristics set FSPF solutions apart from other solutions:
Figure 2-6 displays a Virtual Router Redundancy Protocol (VRRP)-based high availability FCIP configuration example. This configuration requires at least two physical Gigabit Ethernet ports connected to the Ethernet switch on the island where you need to implement high availability using VRRP.
Figure 2-6 VRRP-Based High Availability
The following characteristics set VRRP solutions apart from other solutions:
Note PortFast must be enabled on the Cisco Catalyst 6500 Series and Cisco Nexus 7000 Series switch ports to which the Gigabit Ethernet or mgmt port is connected.
Note VRRP IPv6 is not supported on the MDS 9250i switch.
Figure 2-7 displays an Ethernet PortChannel-based high-availability FCIP example. This solution addresses the problem caused by individual Gigabit Ethernet link failures.
Figure 2-7 Ethernet PortChannel-Based High Availability
The following characteristics set Ethernet PortChannel solutions apart from other solutions:
Note Ethernet PortChannels are not supported on the IP storage ports of the MDS 9250i switch.
Ethernet PortChannels offer link redundancy between the Cisco MDS 9000 Family switch’s Gigabit Ethernet ports and the connecting Ethernet switch. Fibre Channel PortChannels offer (E)ISL link redundancy between Fibre Channel switches. An FCIP link is an (E)ISL link and can therefore be a member only of a Fibre Channel PortChannel. Beneath the FCIP level, an FCIP link can run on top of an Ethernet PortChannel or on a single Gigabit Ethernet port; this is completely transparent to the Fibre Channel layer.
An Ethernet PortChannel restriction allows only two contiguous IPS ports, such as ports 1–2 or 3–4, to be combined in one Ethernet PortChannel (see Chapter 6, “Configuring Gigabit Ethernet High Availability” for more information). This restriction applies only to Ethernet PortChannels. A Fibre Channel PortChannel (of which an FCIP link can be a member) has no restriction on which (E)ISL links can be combined, as long as they pass the compatibility check (see the Cisco Fabric Manager Interfaces Configuration Guide and Cisco MDS 9000 Family NX-OS Interfaces Configuration Guide for more information). The maximum number of Fibre Channel ports that can be put into a Fibre Channel PortChannel is 16 (see Figure 2-8).
Figure 2-8 PortChannels at the Fibre Channel and Ethernet Levels
To configure Fibre Channel PortChannels, see the Cisco MDS 9000 Family NX-OS Interfaces Configuration Guide and Cisco Fabric Manager Interfaces Configuration Guide.
To configure Ethernet PortChannels, see the Cisco Fabric Manager High Availability and Redundancy Configuration Guide and Cisco MDS 9000 Family NX-OS High Availability and Redundancy Configuration Guide.
This section describes how to configure FCIP and includes the following topics:
To begin configuring the FCIP feature, you must explicitly enable FCIP on the required switches in the fabric. By default, this feature is disabled in all switches in the Cisco MDS 9000 Family.
The configuration and verification operations commands for the FCIP feature are only available when FCIP is enabled on a switch. When you disable this feature, all related configurations are automatically discarded.
To use the FCIP feature, you need to obtain the SAN extension over IP package license (SAN_EXTN_OVER_IP or SAN_EXTN_OVER_IP_IPS4) (see the Cisco MDS 9000 Family NX-OS Licensing Guide). By default, the MDS 9222i and MDS 9250i switches are shipped with the SAN extension over IP package license.
To enable FCIP on any participating switch, follow these steps:
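On the CLI, the feature is enabled per switch before any FCIP configuration is accepted. A minimal sketch, assuming the NX-OS `feature fcip` command (earlier SAN-OS releases use `fcip enable`; verify against your release):

```
switch# configure terminal
! Enable the FCIP feature on this switch (disabled by default)
switch(config)# feature fcip
```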
Check the Use Large MTU Size (Jumbo Frames) option to use a jumbo frame MTU of 2300 bytes. Because Fibre Channel frames can be up to 2112 bytes, we recommend that you use this option. If you uncheck the box, the FCIP wizard does not set the MTU size, and the default value of 1500 bytes is used.
Note In Cisco MDS 9000 SAN-OS, Release 3.0(3), by default the Use Large MTU Size (Jumbo Frames) option is not selected.
Note The MTU size for the immediate next hop should be the same between the IPS port on a Cisco MDS switch and the Cisco Nexus 7000 Series or Cisco Catalyst 6000 Series switches. If you configure different MTU sizes (for example, 2500 on the Cisco MDS 9250i Multiservice Fabric Switch and 1500 on the Nexus 7000 Series or Cisco Catalyst 6000 Series), the FCIP tunnels can flap.
Figure 2-9 Enabling IPsec on an FCIP link
Step 1 Click Next. You see the IP Address/Route input screen.
Step 2 Select Add IP Route if you want to add an IP route; otherwise, retain the defaults (see Figure 2-10).
Figure 2-10 Specify IP Address/Route
Step 3 Click Next. You see the TCP connection characteristics.
Step 4 Set the minimum and maximum bandwidth settings and round-trip time for the TCP connections on this FCIP link, as shown in Figure 2-11.
You can measure the round-trip time between the Gigabit Ethernet endpoints by clicking the Measure button.
Figure 2-11 Specifying Tunnel Properties
Step 5 Check the Write Acceleration check box to enable FCIP write acceleration on this FCIP link.
See the “FCIP Write Acceleration” section.
Step 6 Check the Enable Optimum Compression check box to enable IP compression on this FCIP link.
See the “FCIP Compression” section.
Step 7 Check the Enable XRC Emulator check box to enable the XRC emulator on this FCIP link.
For more information on the XRC Emulator, see the Cisco Fabric Manager Fabric Configuration Guide.
Step 9 Set the Port VSAN and click the Trunk Mode radio button for this FCIP link. (See Figure 2-12).
Note If FICON is enabled and FICON VSAN is present on both the switches, Figure 2-14 is displayed, otherwise Figure 2-15 is displayed.
Figure 2-13 Enter FICON Port Address
Figure 2-15 Enter FICON Port Address
Step 10 Click Finish to create this FCIP link.
Once you have created FCIP links using the FCIP wizard, you may need to modify parameters for these links. This includes modifying the FCIP profiles as well as the FCIP link parameters. Each Gigabit Ethernet interface can have three active FCIP links at a time. On the MDS 9250i switch, each IPStorage port can have six active FCIP links at a time.
To configure an FCIP link, follow these steps on both switches:
Step 1 Configure the Gigabit Ethernet interface.
See the Cisco MDS 9000 Family NX-OS IP Services Configuration Guide.
Step 2 Create an FCIP profile and then assign the Gigabit Ethernet interface’s IP address to the profile.
Step 3 Create an FCIP interface and then assign the profile to the interface.
Step 4 Configure the peer IP address for the FCIP interface.
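The four steps above can be sketched on the CLI as follows. The interface numbers and IP addresses are illustrative, and command names are per typical MDS NX-OS releases; verify against your release's command reference:

```
switch# configure terminal
! Step 1: assign an IP address to the Gigabit Ethernet interface
switch(config)# interface gigabitethernet 3/1
switch(config-if)# ip address 10.1.1.2 255.255.255.0
switch(config-if)# no shutdown
! Step 2: create an FCIP profile bound to that local IP address
switch(config)# fcip profile 10
switch(config-profile)# ip address 10.1.1.2
! Step 3: create the FCIP interface and assign the profile to it
switch(config)# interface fcip 1
switch(config-if)# use-profile 10
! Step 4: configure the peer IP address and bring up the link
switch(config-if)# peer-info ipaddr 10.1.1.1
switch(config-if)# no shutdown
```

The same sequence is repeated on the peer switch with the local and peer addresses swapped.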
To create an FCIP profile, you must assign to it the local IP address of a Gigabit Ethernet interface or subinterface. You can assign IPv4 or IPv6 addresses to the interfaces. Figure 2-16 shows an example configuration.
Figure 2-16 Assigning Profiles to Each Gigabit Ethernet Interface
To create an FCIP profile in switch 1 in Figure 2-16, follow these steps:
Step 1 Create a profile for the FCIP connection. The valid range is from 1 to 255.
Step 2 Associate the profile (10) with the local IPv4 address of the Gigabit Ethernet interface (3/1).
To assign FCIP profile in switch 2 in Figure 2-16, follow these steps:
Associate the profile (20) with the local IPv4 address of the Gigabit Ethernet interface.
To create an FCIP profile in switch 1, follow these steps:
Step 1 Verify that you are connected to a switch that contains an IPS module.
Step 2 From Fabric Manager, choose Switches > ISLs > FCIP in the Physical Attributes pane. From Device Manager, choose FCIP from the IP menu.
Step 3 Click the Create Row button in Fabric Manager or the Create button on Device Manager to add a new profile.
Step 4 Enter the profile ID in the ProfileId field.
Step 5 Enter the IP address of the interface to which you want to bind the profile.
Step 6 Modify the optional TCP parameters, if desired. Refer to Fabric Manager Online Help for explanations of these fields.
Step 7 (Optional) Click the Tunnels tab and modify the remote IP address in the Remote IPAddress field for the endpoint to which you want to link.
Step 8 Enter the optional parameters, if required. See the “FCIP Profiles” section for information on displaying FCIP profile information.
Step 9 Click the Apply Changes icon to save these changes.
Example 2-1 Displays FCIP Profiles for SSN-16 and 18+4
Example 2-2 Displays FCIP Profiles for Cisco MDS 9250i Multiservice Fabric Switch
Example 2-3 Displays the Specified FCIP Profile Information for SSN-16 and 18+4
Example 2-4 Displays the Specified FCIP Profile Information for Cisco MDS 9250i Multiservice Fabric Switch
When two FCIP link endpoints are created, an FCIP link is established between the two IPS modules or MSM-18/4 modules. To create an FCIP link, assign a profile to the FCIP interface and configure the peer information. Configuring the peer IP address initiates (creates) an FCIP link to that peer switch (see Figure 2-17).
Figure 2-17 Assigning Profiles to Each Gigabit Ethernet Interface
To verify the FCIP interfaces and Extended Link Protocol (ELP) on Device Manager, follow these steps:
Step 1 Make sure you are connected to a switch that contains an IPS module.
Step 2 Select FCIP from the Interface menu.
Step 3 Click the Interfaces tab if it is not already selected. You see the FCIP Interfaces dialog box.
Step 4 Click the ELP tab if it is not already selected. You see the FCIP ELP dialog box.
To check the trunk status for the FCIP interface on Device Manager, follow these steps:
Step 1 Make sure you are connected to a switch that contains an IPS module.
Step 2 Select FCIP from the IP menu.
Step 3 Click the Trunk Config tab if it is not already selected. You see the FCIP Trunk Config dialog box. This shows the status of the interface.
Step 4 Click the Trunk Failures tab if it is not already selected. You see the FCIP Trunk Failures dialog box.
Cisco Transport Controller (CTC) is a task-oriented tool used to install, provision, and maintain network elements (NEs). It is also used to troubleshoot and repair NE faults.
To launch CTC using Fabric Manager, follow these steps:
Step 1 Right-click an ISL carrying optical traffic in the fabric.
Step 3 Enter the URL for the Cisco Transport Controller.
To create an FCIP link endpoint in switch 1, follow these steps:
Assign the peer IPv4 address information (10.1.1.1 for switch 2) to the FCIP interface.
To create an FCIP link endpoint in switch 2, follow these steps:
A basic FCIP configuration uses the local IP address to configure the FCIP profile. In addition to the local IP address and the local port, you can specify other TCP parameters as part of the FCIP profile configuration.
This section includes the following topics:
Note TCP send buffer size is not available for the Cisco MDS 9250i Multiservice Fabric Switch.
FCIP configuration options can be accessed from the switch(config-profile)# submode prompt.
To configure TCP listener ports, follow these steps:
Create the profile (if it does not already exist) and enter profile configuration submode. The valid range is from 1 to 255.
The default TCP port for FCIP is 3225. You can change this port by using the port command.
To change the default FCIP port number (3225), follow these steps:
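A minimal CLI sketch for changing the listener port, assuming the `port` profile subcommand (the port value is illustrative):

```
switch(config)# fcip profile 10
! Listen on TCP port 3226 instead of the default 3225
switch(config-profile)# port 3226
```

Both ends of the FCIP link must agree on the port; the peer specifies the same port in its peer configuration.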
You can control TCP behavior in a switch by configuring the TCP parameters that are described in this section.
Note When FCIP is sent over a WAN link, the default TCP settings may not be appropriate. In such cases, we recommend that you tune the FCIP WAN link by modifying the TCP parameters (specifically bandwidth, round-trip times, and CWM burst size).
You can control the minimum amount of time TCP waits before retransmitting. By default, this value is 200 milliseconds (msec).
To configure the minimum retransmit time, follow these steps:
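A hedged sketch, assuming the `tcp min-retransmit-time` profile subcommand (the value is illustrative):

```
switch(config)# fcip profile 10
! Raise the minimum TCP retransmit time from the 200-ms default to 300 ms
switch(config-profile)# tcp min-retransmit-time 300
```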
You can configure the interval that the TCP connection uses to verify that the FCIP link is functioning. This ensures that an FCIP link failure is detected quickly even when there is no traffic.
If the TCP connection is idle for more than the specified time, keepalive timeout packets are sent to ensure that the connection is active. The keepalive timeout feature can be used to tune the time taken to detect FCIP link failures.
You can configure the first interval during which the connection is idle (the default is 60 seconds). When the connection is idle for the configured interval, eight keepalive probes are sent at 1-second intervals. If no response is received for these eight probes and the connection remains idle throughout, that FCIP link is automatically closed.
Note Only the first interval (during which the connection is idle) can be changed.
To configure the first keepalive timeout interval, follow these steps:
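A hedged sketch, assuming the `tcp keepalive-timeout` profile subcommand (the value is illustrative):

```
switch(config)# fcip profile 10
! Wait 90 seconds of idle time (default 60) before sending keepalive probes
switch(config-profile)# tcp keepalive-timeout 90
```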
You can specify the maximum number of times a packet is retransmitted before TCP decides to close the connection.
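A hedged sketch, assuming the `tcp max-retransmissions` profile subcommand (the value is illustrative):

```
switch(config)# fcip profile 10
! Retransmit a packet up to 6 times before TCP closes the connection
switch(config-profile)# tcp max-retransmissions 6
```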
Path MTU (PMTU) is the minimum MTU on the IP network between the two endpoints of the FCIP link. PMTU discovery is a mechanism by which TCP learns of the PMTU dynamically and adjusts the maximum TCP segment accordingly (RFC 1191).
By default, PMTU discovery is enabled on all switches with a timeout of 3600 seconds. If TCP reduces the size of the maximum segment because of PMTU change, the reset-timeout specifies the time after which TCP tries the original MTU.
To configure PMTU, follow these steps:
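A hedged sketch, assuming the `tcp pmtu-enable` profile subcommand (the timeout value is illustrative):

```
switch(config)# fcip profile 10
! Enable PMTU discovery with a 7200-second reset timeout (default 3600)
switch(config-profile)# tcp pmtu-enable reset-timeout 7200
! Disable PMTU discovery entirely
switch(config-profile)# no tcp pmtu-enable
```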
TCP may experience poor performance when multiple packets are lost within one window. With the limited information available from cumulative acknowledgments, a TCP sender can only learn about a single lost packet per round trip. A selective acknowledgment (SACK) mechanism helps overcome the limitations of multiple lost packets during a TCP transmission.
The receiving TCP sends back SACK advertisements to the sender. The sender can then retransmit only the missing data segments. By default, SACK is enabled on Cisco MDS 9000 Family switches.
The optimal TCP window size is automatically calculated using the maximum bandwidth parameter, the minimum available bandwidth parameter, and the dynamically measured round-trip time (RTT).
Note The configured round-trip-time parameter determines the window scaling factor of the TCP connection. This parameter is only an approximation. The measured RTT value overrides the round trip time parameter for window management. If the configured round-trip-time is too small compared to the measured RTT, then the link may not be fully utilized due to the window scaling factor being too small.
The min-available-bandwidth parameter and the measured RTT together determine the threshold below which TCP aggressively maintains a window size sufficient to transmit at minimum available bandwidth.
The max-bandwidth-mbps parameter and the measured RTT together determine the maximum window size.
Note Set the maximum bandwidth to match the worst-case bandwidth available on the physical link, considering other traffic that might be going across this link (for example, other FCIP tunnels or WAN limitations). The maximum bandwidth should be the total bandwidth minus all other traffic going across that link.
To configure window management, follow these steps:
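A hedged sketch, assuming the combined `tcp max-bandwidth-mbps` profile subcommand (the values are illustrative for a 1-Gbps path):

```
switch(config)# fcip profile 10
! Size TCP windows for 900 Mbps maximum, 600 Mbps guaranteed, and 10 ms RTT
switch(config-profile)# tcp max-bandwidth-mbps 900 min-available-bandwidth-mbps 600 round-trip-time-ms 10
```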
By enabling the congestion window monitoring (CWM) parameter, you allow TCP to monitor congestion after each idle period. The CWM parameter also determines the maximum burst size allowed after an idle period. By default, this parameter is enabled and the default burst size is 50 KB.
The interaction of bandwidth parameters and CWM and the resulting TCP behavior is outlined as follows:
The software uses standard TCP rules to increase the window beyond the one required to maintain the min-available-bandwidth to reach the max-bandwidth.
Note The default burst size is 50 KB.
Tip We recommend that this feature remain enabled to realize optimal performance. Increasing the CWM burst size can result in more packet drops in the IP network, impacting TCP performance. Only if the IP network has sufficient buffering, try increasing the CWM burst size beyond the default to achieve lower transmit latency.
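A hedged sketch, assuming the `tcp cwm` profile subcommand (the burst size is illustrative):

```
switch(config)# fcip profile 10
! Increase the CWM burst size from the 50-KB default to 100 KB
switch(config-profile)# tcp cwm burstsize 100
! Disable congestion window monitoring (enabled by default)
switch(config-profile)# no tcp cwm
```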
Jitter is defined as a variation in the delay of received packets. At the sending side, packets are sent in a continuous stream with the packets spaced evenly apart. Due to network congestion, improper queuing, or configuration errors, this steady stream can become lumpy, or the delay between each packet can vary instead of remaining constant.
You can configure the maximum estimated jitter in microseconds by the packet sender. The estimated variation should not include network queuing delay. By default, this parameter is enabled in Cisco MDS switches when IPS modules or MSM-18/4 modules are present.
The default value is 1000 microseconds for FCIP interfaces.
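A hedged sketch, assuming the `tcp max-jitter` profile subcommand (the value is illustrative):

```
switch(config)# fcip profile 10
! Set the estimated maximum jitter to 1500 microseconds (default 1000)
switch(config-profile)# tcp max-jitter 1500
```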
You can define the required additional buffering, beyond the normal send window size, that TCP allows before flow controlling the switch’s egress path for the FCIP interface. The default FCIP buffer size is 0 KB.
Note Use the default if the FCIP traffic is passing through a high-throughput WAN link. If you have a mismatch in speed between the Fibre Channel link and the WAN link, time stamp errors occur in the DMA bridge. In such a situation, you can avoid time stamp errors by increasing the buffer size.
To set the buffer size, follow these steps:
Step 1 Configure the advertised buffer size to 5000 KB. The valid range is from 0 to 16384 KB.
Step 2 (Optional) Revert the switch to its factory default. The default is 0 KB.
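A hedged sketch, assuming the `tcp send-buffer-size` profile subcommand and its `no` form (the value is illustrative):

```
switch(config)# fcip profile 10
! Allow 5000 KB of buffering beyond the normal send window (default 0 KB)
switch(config-profile)# tcp send-buffer-size 5000
! Revert to the 0-KB factory default
switch(config-profile)# no tcp send-buffer-size
```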
Note TCP send buffer size is not available for Cisco MDS 9250i Multiservice Fabric Switch.
Use the show fcip profile command to display FCIP profile configuration information for the SSN-16 and 18+4.
Use the show fcip profile command to display FCIP profile configuration information for the Cisco MDS 9250i Multiservice Fabric Switch.
This section describes the options you can configure on an FCIP interface to establish connection to a peer and includes the following topics:
To establish a peer connection, you must first create the FCIP interface and enter the config-if submode.
To enter the config-if submode, follow these steps:
Note On the Cisco MDS 9250i Multiservice Fabric Switch, the maximum number of FCIP interfaces that can be created is 12.
To establish an FCIP link with the peer, you can use the peer IP address option. This option configures both ends of the FCIP link. Optionally, you can also use the peer TCP port along with the IP address.
The basic FCIP configuration uses the peer’s IP address to configure the peer information. You can also specify the peer’s port number to configure the peer information. If you do not specify a port, the default 3225 port number is used to establish connection. You can specify an IPv4 address or an IPv6 address.
To assign the peer information based on the IPv4 address and port number, follow these steps:
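A hedged sketch, assuming the `peer-info ipaddr` interface subcommand (the address and port are illustrative):

```
switch(config)# interface fcip 1
! Connect to peer 10.1.1.1 on a non-default TCP port
! (omit the "port" keyword to use the default port 3225)
switch(config-if)# peer-info ipaddr 10.1.1.1 port 3000
```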
To assign the peer information based on the IPv4 address and port number using Fabric Manager, follow these steps:
Step 1 Expand ISLs and select FCIP in the Physical Attributes pane.
You see the FCIP profiles and links in the Information pane.
Step 2 From Device Manager, choose IP > FCIP. You see the FCIP dialog box.
Step 3 Click the Tunnels tab. You see the FCIP link information.
Step 4 Click the Create Row icon in Fabric Manager or the Create button in Device Manager.
You see the FCIP Tunnels dialog box.
Step 5 Set the ProfileID and TunnelID fields.
Step 6 Set the RemoteIPAddress and RemoteTCPPort fields for the peer IP address you are configuring.
Step 7 Check the PassiveMode check box if you do not want this end of the link to initiate a TCP connection.
Step 8 (Optional) Set the NumTCPCon field to the number of TCP connections from this FCIP link.
Step 9 (Optional) Check the Enable check box in the Time Stamp section and set the Tolerance field.
Step 10 (Optional) Set the other fields in this dialog box and click Create to create this FCIP link.
To assign the peer information based on the IPv6 address and port number, follow these steps:
To assign the peer information based on the IPv6 address and port number using Fabric Manager, follow these steps:
Step 1 From Fabric Manager, choose ISLs > FCIP from the Physical Attributes pane.
You see the FCIP profiles and links in the Information pane.
From Device Manager, choose IP > FCIP. You see the FCIP dialog box.
Step 2 Click the Tunnels tab. You see the FCIP link information.
Step 3 Click the Create Row icon in Fabric Manager or the Create button in Device Manager.
You see the FCIP Tunnels dialog box.
Step 4 Set the ProfileID and TunnelID fields.
Step 5 Set the RemoteIPAddress and RemoteTCPPort fields for the peer IP address you are configuring.
Step 6 Check the PassiveMode check box if you do not want this end of the link to initiate a TCP connection.
Step 7 (Optional) Set the NumTCPCon field to the number of TCP connections from this FCIP link.
Step 8 (Optional) Check the Enable check box in the Time Stamp section and set the Tolerance field.
Step 9 (Optional) Set the other fields in this dialog box and click Create to create this FCIP link.
You can specify the number of TCP connections from an FCIP link. By default, the switch tries two (2) TCP connections for each FCIP link. You can configure either two or five TCP connections.
Note Make sure that the peer switch FCIP tunnel is also configured with the same number of TCP connections; otherwise, the FCIP tunnel will not come up.
Note On the MDS platform, 10 Gb IP Storage ports have different performance characteristics than 1 Gb Ethernet ports. To achieve maximum throughput on FCIP tunnels utilizing MDS 10 Gb IP Storage ports, set the number of TCP connections to 5 on these tunnels.
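A hedged sketch, assuming a `tcp-connection` interface subcommand that accepts 2 or 5 (verify the exact keyword on your release):

```
switch(config)# interface fcip 1
! Use 5 TCP connections per FCIP link (the default is 2); configure the
! same value on the peer end or the tunnel will not come up
switch(config-if)# tcp-connection 5
```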
You can configure the required mode for initiating a TCP connection. By default, the active mode is enabled to actively attempt an IP connection. If you enable the passive mode, the switch does not initiate a TCP connection but waits for the peer to connect to it. By default, the switch tries two TCP connections for each FCIP link.
Note Ensure that both ends of the FCIP link are not configured as passive mode. If both ends are configured as passive, the connection is not initiated.
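A hedged sketch, assuming the `passive-mode` interface subcommand; configure it on at most one end of the link:

```
switch(config)# interface fcip 1
! Wait for the peer to initiate the TCP connection; the other end of
! the link must remain in the default active mode
switch(config-if)# passive-mode
```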
You can instruct the switch to discard packets that are outside the specified time. When enabled, this feature specifies the time range within which packets can be accepted. If the packet arrived within the range specified by this option, the packet is accepted. Otherwise, it is dropped.
By default, time stamp control is disabled in all switches in the Cisco MDS 9000 Family. If a packet arrives within a 2000 millisecond interval (+ or –2000 msec) from the network time, that packet is accepted.
Note The default value for packet acceptance is 2000 milliseconds. If the time-stamp option is enabled, be sure to configure NTP on both switches (see the Cisco NX-OS Fundamentals Configuration Guide for more information).
Tip Do not enable time stamp control on an FCIP interface that has tape acceleration or write acceleration configured.
To enable or disable the time stamp control, follow these steps:
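A hedged sketch, assuming `time-stamp` interface subcommands (the tolerance value is illustrative):

```
switch(config)# interface fcip 1
! Accept packets arriving within +/-4000 ms of the network time
! (the default acceptable difference is 2000 ms)
switch(config-if)# time-stamp enable
switch(config-if)# time-stamp acceptable-diff 4000
! Disable time stamp checking (the default)
switch(config-if)# no time-stamp enable
```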
While E ports typically interconnect Fibre Channel switches, some SAN extender devices, such as Cisco’s PA-FC-1G Fibre Channel port adapter and the SN 5428-2 storage router, implement a bridge port model to connect geographically dispersed fabrics. This model uses B port as described in the T11 Standard FC-BB-2. Figure 2-18 shows a typical SAN extension over an IP network.
Figure 2-18 FCIP B Port and Fibre Channel E Port
B ports bridge Fibre Channel traffic from a local E port to a remote E port without participating in fabric-related activities such as principal switch election, domain ID assignment, and Fibre Channel fabric shortest path first (FSPF) routing. For example, Class F traffic entering a SAN extender does not interact with the B port. The traffic is transparently propagated (bridged) over a WAN interface before exiting the remote B port. This bridge results in both E ports exchanging Class F information that ultimately leads to normal ISL behavior such as fabric merging and routing.
FCIP links between B port SAN extenders do not exchange the same information as FCIP links between E ports, and are therefore incompatible. This is reflected by the terminology used in FC-BB-2: while VE ports establish a virtual ISL over an FCIP link, B ports use a B access ISL.
The IPS module and MSM-18/4 module support FCIP links that originate from a B port SAN extender device by implementing the B access ISL protocol on a Gigabit Ethernet interface. Internally, the corresponding virtual B port connects to a virtual E port that completes the end-to-end E port connectivity requirement (see Figure 2-19).
Figure 2-19 FCIP Link Terminating in a B Port Mode
The B port feature in the IPS module and MSM-18/4 module allows remote B port SAN extenders to communicate directly with a Cisco MDS 9000 Family switch, eliminating the need for local bridge devices.
When an FCIP peer is a SAN extender device that only supports Fibre Channel B ports, you need to enable the B port mode for the FCIP link. When a B port is enabled, the E port functionality is also enabled and they coexist. If the B port is disabled, the E port functionality remains enabled.
To enable B port mode, follow these steps:
Step 1 Enable the reception of keepalive responses sent by a remote peer.
Step 2 (Optional) Disable the reception of keepalive responses sent by a remote peer (the default).
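A hedged sketch, assuming the `bport` and `bport-keepalive` interface subcommands:

```
switch(config)# interface fcip 1
! Terminate this FCIP link in B port mode
switch(config-if)# bport
! Enable reception of keepalive responses sent by the remote peer
switch(config-if)# bport-keepalive
! Disable keepalive responses (the default)
switch(config-if)# no bport-keepalive
```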
To enable B port mode using Fabric Manager, follow these steps:
Step 1 Choose ISLs > FCIP from the Physical Attributes pane.
You see the FCIP profiles and links in the Information pane.
From Device Manager, choose IP > FCIP. You see the FCIP dialog box.
You see the FCIP link information.
Step 3 Click the Create Row icon in Fabric Manager or the Create button in Device Manager.
You see the FCIP Tunnels dialog box.
Step 4 Set the ProfileID and TunnelID fields.
Step 5 Set the RemoteIPAddress and RemoteTCPPort fields for the peer IP address you are configuring.
Step 6 Check the PassiveMode check box if you do not want this end of the link to initiate a TCP connection.
Step 7 (Optional) Set the NumTCPCon field to the number of TCP connections from this FCIP link.
Step 8 Check the Enable check box in the B Port section of the dialog box and optionally check the KeepAlive check box if you want a response sent to an ELS Echo frame received from the FCIP peer.
Step 9 (Optional) Set the other fields in this dialog box and click Create to create this FCIP link.
The quality of service (QoS) parameter specifies the differentiated services code point (DSCP) value to mark all IP packets (type of service—TOS field in the IP header).
If the FCIP link has only one TCP connection, the data DSCP value is applied to all packets in that connection.
To set the QoS values on FCIP interfaces, follow these steps:
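A hedged sketch, assuming the `qos control ... data ...` interface subcommand (the DSCP values are illustrative and range from 0 to 63):

```
switch(config)# interface fcip 1
! Mark the control TCP connection with DSCP 24 and the data
! connection with DSCP 26 in the IP header of outgoing packets
switch(config-if)# qos control 24 data 26
```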
See the Cisco Fabric Manager Fabric Configuration Guide and Cisco MDS 9000 Family NX-OS Fabric Configuration Guide.
See the Cisco Fabric Manager Interfaces Configuration Guide and Cisco MDS 9000 Family NX-OS Interfaces Configuration Guide.
– Multiple FCIP links can be bundled into a Fibre Channel PortChannel.
– FCIP links and Fibre Channel links cannot be combined in one PortChannel.
See the Cisco Fabric Manager Security Configuration Guide and Cisco MDS 9000 Family NX-OS Security Configuration Guide.
See the Cisco Fabric Manager Fabric Configuration Guide and Cisco MDS 9000 Family NX-OS Fabric Configuration Guide.
See the Cisco Fabric Manager System Management Configuration Guide and Cisco MDS 9000 Family NX-OS System Management Configuration Guide.
See the Cisco Fabric Manager System Management Configuration Guide and Cisco MDS 9000 Family NX-OS System Management Configuration Guide.
Use the show interface commands to view the summary, counter, description, and status of the FCIP link. Use the output of these commands to verify the administration mode, the interface status, the operational mode, the related VSAN ID, and the profile used. See Example 2-5 through Example 2-11.
Example 2-5 Displays the FCIP Summary (SSN-16 and 18+4)
Example 2-6 Displays the FCIP Summary (Cisco MDS 9250i Multiservice Fabric Switch)
Example 2-7 Displays the FCIP Interface Summary of Counters for a Specified Interface (SSN-16 and 18+4)
Example 2-8 Displays the FCIP Interface Summary of Counters for a Specified Interface (Cisco MDS 9250i Multiservice Fabric Switch)
Example 2-9 Displays Detailed FCIP Interface Standard Counter Information (SSN-16 and 18+4)
Example 2-10 Displays Detailed FCIP Interface Standard Counter Information (Cisco MDS 9250i Multiservice Fabric Switch)
Example 2-11 Displays the FCIP Interface Description
The txbytes counter is the amount of data before compression. After compression, the compressed txbytes counter shows the bytes transmitted with compression, and the uncompressed txbytes counter shows the bytes transmitted without compression. A packet may be transmitted without compression if it becomes larger after compression (see Example 2-12).
Example 2-12 Displays Brief FCIP Interface Counter Information (SSN-16 and 18+4)
Example 2-13 Displays Brief FCIP Interface Counter Information (Cisco MDS 9250i Multiservice Fabric Switch)
You can significantly improve application performance by configuring one or more of the following options for the FCIP interface:
The FCIP write acceleration feature enables you to significantly improve application write performance when storage traffic is routed over wide area networks using FCIP. When FCIP write acceleration is enabled, WAN throughput is maximized by minimizing the impact of WAN latency for write operations.
Note For FCIP tunnels that use write acceleration (WA), you must ensure that all accelerated flows pass through a single FCIP tunnel (or PortChannel). This applies to both commands and responses in both directions. If this condition is not met, FCIP WA fails. Consequently, FCIP WA cannot be used across FSPF equal-cost paths, because commands and responses could take different paths.
Note IBM Peer-to-Peer Remote Copy (PPRC) is not supported with FCIP write acceleration.
In Figure 2-20, the WRITE command without write acceleration requires two round-trip transfers (RTTs), while the WRITE command with write acceleration requires only one RTT. The maximum-sized Transfer Ready is sent back to the host from the host side of the FCIP link before the WRITE command reaches the target. This enables the host to start sending the write data without waiting for the WRITE command and Transfer Ready to traverse the high-latency FCIP link. It also eliminates the delay caused by multiple Transfer Readys for the exchange crossing the FCIP link.
Figure 2-20 FCIP Link Write Acceleration
Tip FCIP write acceleration (WA) can be enabled for multiple FCIP tunnels if the tunnels are part of a PortChannel configured with channel mode active, that is, a PortChannel constructed with the PortChannel Protocol (PCP). FCIP WA does not work if multiple non-PortChannel FCIP tunnels with equal weight exist between the initiator and target ports. Such a configuration might cause either SCSI discovery failure or failed WRITE or READ operations. When FCIP WA is used, FSPF routing should ensure that a single FCIP PortChannel or ISL is always in the path between the initiator and target ports.
Tip Do not enable time stamp control on an FCIP interface with write acceleration configured.
Note Write acceleration cannot be used across FSPF equal cost paths in FCIP deployments. Native Fibre Channel write acceleration can be used with PortChannels. Also, FCIP write acceleration can be used in PortChannels configured with channel mode active or constructed with PortChannel Protocol (PCP).
You can enable FCIP write acceleration when you create the FCIP link using the FCIP Wizard.
To enable write acceleration on an existing FCIP link, follow these steps:
Step 1 Choose ISLs > FCIP from the Physical Attributes pane on Fabric Manager.
You see the FCIP profiles and links in the Information pane.
On Device Manager, choose IP > FCIP.
Step 2 Click the Tunnels (Advanced) tab.
You see the FCIP link information (see Figure 2-21).
Figure 2-21 FCIP Tunnels (Advanced) Tab
Step 3 Check or uncheck the Write Accelerator check box.
Step 4 Choose the appropriate compression ratio from the IP Compression drop-down list.
Step 5 Click the Apply Changes icon to save these changes.
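The equivalent CLI configuration can be sketched as follows. The interface number is an example only, and write acceleration must be configured on both ends of the tunnel:

```
switch# configure terminal
switch(config)# interface fcip 51
switch(config-if)# write-accelerator      ! enable FCIP write acceleration on this tunnel
switch(config-if)# no write-accelerator   ! disable write acceleration (default)
```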
Example 2-14 through Example 2-16 show how to display information about write acceleration activity.
Example 2-14 Displays Exchanges Processed by Write Acceleration at the Specified Host End FCIP Link
Example 2-15 Displays Exchanges Processed by Write Acceleration at the Specified Target End FCIP Link
Example 2-16 Displays Detailed FCIP Interface Write Acceleration Counter Information, if Enabled
Tapes are storage devices that store and retrieve user data sequentially. Cisco MDS NX-OS provides both tape write and read acceleration.
Applications that access tape drives normally have only one SCSI WRITE or READ operation outstanding at a time. This single-command process limits the benefit of the tape acceleration feature when using an FCIP tunnel over a long-distance WAN link. It impacts backup, archive, and restore performance because each SCSI WRITE or READ operation does not complete until the host receives a good status response from the tape drive. The FCIP tape acceleration feature helps solve this problem. It improves tape backup, archive, and restore operations by allowing faster data streaming between the host and tape drive over the WAN link.
In an example of tape acceleration for write operations, the backup server in Figure 2-22 issues write operations to a drive in the tape library. Acting as a proxy for the remote tape drives, the local Cisco MDS switch proxies a Transfer Ready to signal the host to start sending data. After receiving all the data, the local Cisco MDS switch proxies the successful completion of the SCSI WRITE operation. This response allows the host to start the next SCSI WRITE operation. This proxy method results in more data being sent over the FCIP tunnel in the same time period than would be sent without proxying, which improves performance on WAN links.
Figure 2-22 FCIP Link Tape Acceleration for Write Operations
At the tape end of the FCIP tunnel, another Cisco MDS switch buffers the command and data it has received. It then acts as a backup server to the tape drive by listening to a transfer ready from the tape drive before forwarding the data.
Note In some cases such as a quick link up/down event (FCIP link, Server/Tape Port link) in a tape library environment that exports Control LUN or a Medium Changer as LUN 0 and tape drives as other LUNs, tape acceleration may not detect the tape sessions and may not accelerate these sessions. You need to keep the FCIP link disabled for a couple of minutes before enabling the link. This does not apply to tape environments where the tape drives are either direct FC attached or exported as LUN 0.
The Cisco NX-OS provides reliable data delivery to the remote tape drives using TCP/IP over the WAN. It maintains write data integrity by allowing the WRITE FILEMARKS operation to complete end-to-end without proxying. The WRITE FILEMARKS operation signals the synchronization of the buffer data with the tape library data. While tape media errors are returned to backup servers for error handling, tape busy errors are retried automatically by the Cisco NX-OS software.
In an example of tape acceleration for read operations, the restore server in Figure 2-23 issues read operations to a drive in the tape library. During the restore process, the remote Cisco MDS switch at the tape end, in anticipation of more SCSI read operations from the host, sends out SCSI read operations on its own to the tape drive. The prefetched read data is cached at the local Cisco MDS switch. On receiving SCSI read operations from the host, the local Cisco MDS switch sends out the cached data. This method results in more data being sent over the FCIP tunnel in the same time period than would be sent without read acceleration for tapes, which improves the performance for tape reads on WAN links.
Figure 2-23 FCIP Link Tape Acceleration for Read Operations
The Cisco NX-OS provides reliable data delivery to the restore application using TCP/IP over the WAN. While tape media errors during the read operation are returned to the restore server for error handling, the Cisco NX-OS software recovers from any other errors.
Note The FCIP tape acceleration feature is disabled by default and must be enabled on both sides of the FCIP link. If it is enabled on only one side of the FCIP tunnel, the tape acceleration feature is operationally turned off.
Tip FCIP tape acceleration does not work if the FCIP port is part of a PortChannel or if there are multiple paths between the initiator and the target port. Such a configuration might cause either SCSI discovery failure or broken write or read operations.
Note When you enable the tape acceleration feature for an FCIP tunnel, the tunnel is reinitialized and the write and read acceleration features are also automatically enabled.
In tape acceleration for writes, after a certain amount of data has been buffered at the remote Cisco MDS switch, the write operations from the host are flow controlled by the local Cisco MDS switch by not proxying the Transfer Ready. On completion of a write operation when some data buffers are freed, the local Cisco MDS switch resumes the proxying. Likewise, in tape acceleration for reads, after a certain amount of data has been buffered at the local Cisco MDS switch, the read operations to the tape drive are flow controlled by the remote Cisco MDS switch by not issuing any further reads. On completion of a read operation, when some data buffers are freed, the remote Cisco MDS switch resumes issuing reads.
The default flow control buffering uses the automatic option. This option takes the WAN latencies and the speed of the tape into account to provide optimum performance. You can also specify a flow control buffer size (the maximum buffer size is 12 MB).
Tip We recommend that you use the default option for flow-control buffering.
Tip Do not enable time-stamp control on an FCIP interface with tape acceleration configured.
Note If one end of the FCIP tunnel is running Cisco MDS SAN-OS Release 3.0(1) or later and NX-OS Release 4.x, and the other end is running Cisco MDS SAN-OS Release 2.x, and tape acceleration is enabled, then the FCIP tunnel will run only tape write acceleration, not tape-read acceleration.
Note In Cisco MDS NX-OS Release 4.2(1), the FCIP tape acceleration feature is not supported on FCIP back-to-back connectivity between MDS switches.
If a tape library provides logical unit (LU) mapping and FCIP tape acceleration is enabled, you must assign a unique LU number (LUN) to each physical tape drive accessible through a target port.
Figure 2-24 shows tape drives connected to Switch 2 through a single target port. If the tape library provides LUN mapping, then all four tape drives should be assigned unique LUNs.
Figure 2-24 FCIP LUN Mapping Example
For the mappings described in Table 2-2 and Table 2-3, Host 1 has access to Drive 1 and Drive 2, and Host 2 has access to Drive 3 and Drive 4.
Table 2-2 describes correct tape library LUN mapping.
Table 2-3 describes incorrect tape library LUN mapping.
Another example setup is one in which a tape drive is shared by multiple hosts through a single tape port. For instance, Host 1 has access to Drive 1 and Drive 2, and Host 2 has access to Drive 2, Drive 3, and Drive 4. A correct LUN mapping configuration for such a setup is shown in Table 2-4.
Note In an FCIP tape acceleration link, if trunk mode is on for TA-enabled tunnels, configure the trunk mode allowed VSANs such that each VSAN's traffic passes through only one tunnel. If the traffic passes through multiple tunnels, it may cause traffic failures.
To enable FCIP tape acceleration, follow these steps:
To enable FCIP tape acceleration using Fabric Manager, follow these steps:
Step 1 From Fabric Manager, choose ISLs > FCIP from the Physical Attributes pane.
You see the FCIP profiles and links in the Information pane.
From Device Manager, choose IP > FCIP.
Step 2 Click the Tunnels tab. You see the FCIP link information.
Step 3 Click the Create Row icon in Fabric Manager or the Create button in Device Manager.
You see the FCIP Tunnels dialog box.
Step 4 Set the profile ID in the ProfileID field and the tunnel ID in the TunnelID field.
Step 5 Set the RemoteIPAddress and RemoteTCPPort fields for the peer IP address you are configuring.
Step 6 Check the TapeAccelerator check box.
Step 7 (Optional) Set the other fields in this dialog box and click Create to create this FCIP link.
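In the CLI, tape acceleration is configured as an option of the write-accelerator command. The following is a sketch; the interface number and buffer size are examples only, and the same configuration must be applied at both ends of the tunnel:

```
switch# configure terminal
switch(config)# interface fcip 51
switch(config-if)# write-accelerator tape-accelerator
! optionally specify a flow-control buffer size in KB instead of the automatic default:
switch(config-if)# write-accelerator tape-accelerator flow-control-buffer-size 2048
```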
Example 2-17 through Example 2-20 show how to display information about tape acceleration activity.
Example 2-17 Displays Information About Tapes for Which Exchanges are Tape Accelerated
Example 2-18 Displays Information About Tapes for Which Exchanges are Tape Accelerated at the Host-End FCIP Link
Example 2-19 Displays Information About Tapes for Which Exchanges are Tape Accelerated at the Target-End FCIP Link
Example 2-20 Displays Detailed FCIP Interface Tape Acceleration Counter Information, if Enabled
The FCIP compression feature allows IP packets to be compressed on the FCIP link when it is enabled on that link. This feature does not increase the FCIP throughput, but it reduces the amount of IP traffic sent over the IP network. By default, FCIP compression is disabled. When enabled, the software defaults to the auto mode (if a mode is not specified).
Mode3 compression mode is deprecated in Cisco MDS NX-OS Release 5.0(1a) and later. Mode1 compression mode is supported in Cisco MDS NX-OS Release 5.2(6) and later. The 9250i, MSM-18/4, and SSN-16 modules support the Auto, Mode1, and Mode2 compression modes. All of these modes internally use the hardware compression engine in the module. Auto mode is enabled by default. Mode2 uses a larger batch size for compression than Auto mode, which results in higher compression throughput; however, Mode2 incurs a small latency because of the larger batch size. For deployments where throughput is most important, use Mode2. Mode1 gives the best compression ratio of all the modes. For deployments where compression ratio is most important, use Mode1.
Note The auto mode (default) selects the appropriate compression scheme based on the card type and bandwidth of the link (the bandwidth of the link configured in the FCIP profile’s TCP parameters).
If both ends of the FCIP link are running Cisco NX-OS Release 4.x or 5.0(1a) and later, and you enable compression at one end of the FCIP tunnel, be sure to enable it at the other end of the link.
Note Compression Mode1 is supported on the MSM-18/4 and SSN-16 line cards and the MDS 9222i Multiservice Modular Switch from Cisco MDS NX-OS Release 5.2(6) onwards. Compression Mode1 is supported on the MDS 9250i Multiservice Fabric Switch from Cisco MDS NX-OS Release 6.2(5) onwards.
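As a sketch, compression is enabled per FCIP interface with the ip-compression command; the interface number below is an example only:

```
switch# configure terminal
switch(config)# interface fcip 51
switch(config-if)# ip-compression auto    ! default: scheme chosen from card type and link bandwidth
switch(config-if)# ip-compression mode1   ! best compression ratio
switch(config-if)# ip-compression mode2   ! highest compression throughput
switch(config-if)# no ip-compression      ! disable compression (default)
```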
Example 2-21 through Example 2-23 show how to display FCIP compression information.
Example 2-21 Displays Detailed FCIP Interface Compression Information, if Enabled
Example 2-22 Displays the Compression Engine Statistics for the 9250i
Example 2-23 Displays the Compression Engine Statistics for the MSM-18/4 Module
Creation of FCIP tunnels between the Cisco MDS 9250i and the MSM-18/4, SSN-16, 24/10 port SAN Extension Module, and Cisco MDS 9222i is supported. However, we recommend that the maximum and minimum bandwidth parameters in an FCIP profile be the same on both sides.
Note FCIP tunnels with a TCP max-bandwidth-mbps of 33 Mbps or less normally get an FSPF-calculated cost of 30000, which makes the interface unusable. Starting from Cisco MDS NX-OS Releases 6.2(21) and 8.2(1), the FSPF cost for such low-bandwidth FCIP tunnels is set to 28999. Because this value is less than the FSPF maximum cost of 30000, traffic can be routed across the interface. It also allows additional FC or FCoE hops (including the FCIP hop) in the end-to-end path; however, the total FSPF cost of these additional hops must not exceed 1000, or the path will not be usable. If the FSPF cost of 28999 is not appropriate for a specific topology, configure the cost manually using the fspf cost interface configuration command. To check the FSPF cost of an interface, use the show fspf interface command. For more information on FSPF cost, see the Cisco MDS Fabric Configuration Guide.
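A minimal sketch of overriding the FSPF cost follows; the interface number, VSAN ID, and cost value are examples only:

```
switch# configure terminal
switch(config)# interface fcip 51
switch(config-if)# fspf cost 28999 vsan 10   ! override the FSPF cost for VSAN 10 on this interface
```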
To achieve maximum FCIP performance in 1 Gbps mode, the following configuration is recommended:
Step 1 Create an FCIP tunnel on the GigabitEthernet port.
Note If more than one FCIP tunnel is bound to an IP storage or GigabitEthernet interface at 1 Gbps, the combined maximum bandwidth of all tunnels bound to that interface must not exceed 1 Gbps.
Step 2 Set the TCP maximum and minimum bandwidth to 1000 Mbps and 800 Mbps, respectively.
Step 3 Configure two TCP connections on each FCIP tunnel.
Step 4 Set the MTU size to 2500 for the IP storage or GigabitEthernet port.
Step 5 Enable compression on each FCIP tunnel.
To achieve maximum FCIP performance in 1 Gbps mode, follow these configuration steps:
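Steps 1 through 5 above can be sketched in the CLI as follows. The interface and profile numbers, IP addresses, and round-trip time are placeholders; substitute values for your deployment:

```
switch# configure terminal
switch(config)# interface GigabitEthernet 1/1
switch(config-if)# ip address 192.0.2.1 255.255.255.0
switch(config-if)# switchport mtu 2500              ! Step 4: MTU 2500 on the GigabitEthernet port
switch(config-if)# no shutdown
switch(config)# fcip profile 1
switch(config-profile)# ip address 192.0.2.1
switch(config-profile)# tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 1   ! Step 2
switch(config)# interface fcip 1                    ! Step 1: FCIP tunnel bound to the profile
switch(config-if)# use-profile 1
switch(config-if)# peer-info ipaddr 192.0.2.2
switch(config-if)# tcp-connections 2                ! Step 3
switch(config-if)# ip-compression auto              ! Step 5
switch(config-if)# no shutdown
```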
To achieve maximum FCIP performance in 10 Gbps mode, the following configuration is recommended:
Step 1 Create an FCIP tunnel on the IP storage port.
Note If more than two FCIP tunnels are bound to an IP storage interface at 10 Gbps, the combined maximum bandwidth of all tunnels bound to that interface must not exceed 10 Gbps.
Note Creation of FCIP tunnels between M9250i and 18+4, SSN16, and M9222i is supported. However, we recommend that the maximum and minimum bandwidth parameters in an FCIP profile be the same on both sides.
Step 2 Set the TCP maximum and minimum bandwidth to 5000 Mbps and 4000 Mbps, respectively (the default values).
Step 3 Configure five TCP connections on each FCIP tunnel.
Step 4 Set the MTU size to 2500 on the IP storage port.
Step 5 Enable compression on each FCIP tunnel.
To achieve maximum FCIP performance in 10 Gbps mode, follow these configuration steps:
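Steps 1 through 5 above can be sketched in the CLI as follows, assuming an MDS 9250i IPStorage port. The interface and profile numbers, IP addresses, and round-trip time are placeholders; substitute values for your deployment:

```
switch# configure terminal
switch(config)# interface IPStorage 1/1
switch(config-if)# ip address 192.0.2.1 255.255.255.0
switch(config-if)# switchport mtu 2500              ! Step 4: MTU 2500 on the IP storage port
switch(config-if)# no shutdown
switch(config)# fcip profile 1
switch(config-profile)# ip address 192.0.2.1
switch(config-profile)# tcp max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4000 round-trip-time-ms 1   ! Step 2
switch(config)# interface fcip 1                    ! Step 1: FCIP tunnel bound to the profile
switch(config-if)# use-profile 1
switch(config-if)# peer-info ipaddr 192.0.2.2
switch(config-if)# tcp-connections 5                ! Step 3
switch(config-if)# ip-compression auto              ! Step 5
switch(config-if)# no shutdown
```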
To achieve maximum FCIP performance in 1 Gbps mode, the following configuration is recommended:
Step 1 Create an FCIP tunnel on the IP storage or GigabitEthernet port.
Note If more than one FCIP tunnel is bound to an IP storage or GigabitEthernet interface at 1 Gbps, the combined maximum bandwidth of all tunnels bound to that interface must not exceed 1 Gbps.
Step 2 Set the TCP maximum and minimum bandwidth to 1000 Mbps and 800 Mbps, respectively.
Step 3 Configure two TCP connections on each FCIP tunnel.
Step 4 Set the MTU size to 2500 for the IP storage or GigabitEthernet port.
Step 5 Enable compression on each FCIP tunnel.
To achieve maximum FCIP performance in 1 Gbps mode, follow these configuration steps:
Example: Recommendation for Achieving Maximum FCIP Performance in 10 Gbps Mode
Example: Recommendation for Achieving Maximum FCIP Performance in 1 Gbps Mode (Cisco MDS 9250i Multiservice Fabric Switch)
Example: Recommendation for Achieving Maximum FCIP Performance 1 Gbps Mode (SSN-16 and 18+4)
Table 2-5 lists the default settings for FCIP parameters.