Servicing a Blade Server

This chapter contains the following sections:

Replacing a Drive

The Cisco UCS B200 M4 blade server uses an optional Cisco UCS FlexStorage modular storage subsystem that provides either two drive bays with a RAID controller or NVMe-based PCIe SSD support. If you purchased the UCS B200 M4 blade server without the modular storage subsystem configured as part of the system, a pair of blanking panels may be in place. Remove these panels before installing hard drives; if the drive bays are unused, leave the panels in place to ensure proper cooling and ventilation.

You can remove and install hard drives without removing the blade server from the chassis.

The drives supported in this blade server come with the hot-plug drive sled attached. Empty hot-plug drive sled carriers (containing no drives) are not sold separately from the drives. A list of currently supported drives is in the Cisco UCS B200 M4 Blade Server Specification Sheet.

Before upgrading or adding a drive to a running blade server, check the service profile and make sure the new hardware configuration is within the parameters allowed by the service profile.
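
If you manage the blade through Cisco UCS Manager programmatically, one way to confirm which service profile governs the blade before changing drives is to query it with the open-source ucsmsdk Python SDK. The following sketch is illustrative only and is not part of the official procedure; the management address, credentials, and chassis/slot values are placeholders, and the property names reflect the SDK object model as commonly published, so verify them against your SDK release.

# Minimal sketch (assumptions noted above): find the service profile associated
# with a blade before adding or upgrading a drive.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholder values
handle.login()

blade = handle.query_dn("sys/chassis-1/blade-1")  # adjust chassis and slot
if blade is not None and blade.assigned_to_dn:
    service_profile = handle.query_dn(blade.assigned_to_dn)
    print("Associated service profile:", service_profile.name)
    # Review the local disk/storage policy referenced by this service profile
    # before changing the drive configuration.
else:
    print("Blade is not associated with a service profile.")

handle.logout()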


Note

See also 4K Sector Format SAS/SATA Drives Considerations.


Removing a Blade Server Hard Drive

To remove a hard drive from a blade server, follow these steps:

Procedure


Step 1

Push the button to release the ejector, and then pull the hard drive from its slot.

Step 2

Place the hard drive on an antistatic mat or antistatic foam if you are not immediately reinstalling it in another server.

Step 3

Install a hard disk drive blank faceplate to keep dust out of the blade server if the slot will remain empty.


Installing a Blade Server Drive

To install a drive in a blade server, follow these steps:

Procedure


Step 1

Place the drive ejector into the open position by pushing the release button.

Figure 1. Installing a Hard Drive in a Blade Server
Step 2

Gently slide the drive into the opening in the blade server until it seats into place.

Step 3

Push the drive ejector into the closed position.

You can use Cisco UCS Manager to format and configure RAID services. For details, see the Configuration Guide for the version of Cisco UCS Manager that you are using. The configuration guides are available at the following URL: http://www.cisco.com/en/US/products/ps10281/products_installation_and_configuration_guides_list.html

If you need to move a RAID cluster, see the Cisco UCS Manager Troubleshooting Reference Guide.
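
Before configuring RAID on a newly installed drive, you can confirm that Cisco UCS Manager has inventoried the drive. The following sketch uses the open-source ucsmsdk Python SDK and is illustrative only; the address, credentials, and blade DN are placeholders, and the StorageLocalDisk property names are assumptions to verify against your SDK release.

# Minimal sketch (assumptions noted above): list the local disks reported for a
# blade after seating a new drive, before configuring RAID in UCS Manager.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholder values
handle.login()

blade_dn = "sys/chassis-1/blade-1"  # adjust chassis and slot
for disk in handle.query_classid("StorageLocalDisk"):
    if disk.dn.startswith(blade_dn):
        # 'size' is typically reported in MB by UCS Manager
        print(disk.dn, disk.model, disk.size)

handle.logout()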


4K Sector Format SAS/SATA Drives Considerations

  • You must boot 4K sector format drives in UEFI mode, not legacy mode. See the procedure in this section for setting UEFI boot mode in the boot policy.

  • Do not configure 4K sector format and 512-byte sector format drives as part of the same RAID volume.

  • Operating system support for 4K sector format drives is as follows: Windows (Windows 2012 and Windows 2012 R2); Linux (RHEL 6.5, 6.6, 6.7, 7.0, 7.2, and 7.3; SLES 11 SP3; and SLES 12). ESXi/VMware is not supported.

Setting Up UEFI Mode Booting in the UCS Manager Boot Policy

Procedure

Step 1

In the Navigation pane, click Servers.

Step 2

Expand Servers > Policies.

Step 3

Expand the node for the organization where you want to create the policy.

If the system does not include multitenancy, expand the root node.

Step 4

Right-click Boot Policies and select Create Boot Policy.

The Create Boot Policy wizard displays.

Step 5

Enter a unique name and description for the policy.

This name can be between 1 and 16 alphanumeric characters. You cannot use spaces or any special characters other than - (hyphen), _ (underscore), : (colon), and . (period). You cannot change this name after the object is saved.

Step 6

(Optional) Check the Reboot on Boot Order Change check box if you want all servers that use this boot policy to reboot after you make changes to the boot order.

For boot policies applied to a server with a non-Cisco VIC adapter, the server always reboots when SAN devices are added, deleted, or reordered and the boot policy changes are saved, even if the Reboot on Boot Order Change check box is not checked.

Step 7

(Optional) Check the Enforce vNIC/vHBA/iSCSI Name check box.

  • If checked, Cisco UCS Manager displays a configuration error if one or more of the vNICs, vHBAs, or iSCSI vNICs listed in the Boot Order table do not match the server configuration in the service profile.

  • If not checked, Cisco UCS Manager uses the vNICs or vHBAs (as appropriate for the boot option) from the service profile.

Step 8

In the Boot Mode field, choose the UEFI radio button.

Step 9

Check the Boot Security check box if you want to enable UEFI boot security.

Step 10

Configure one or more of the following boot options for the boot policy and set their boot order:

  • Local Devices boot—To boot from local devices, such as local disks on the server, virtual media, or remote virtual disks, continue with Configuring a Local Disk Boot for a Boot Policy in the Cisco UCS Manager Server Management Guide for your release.

  • SAN boot—To boot from an operating system image on the SAN, continue with Configuring a SAN Boot for a Boot Policy in the Cisco UCS Manager Server Management Guide for your release.

You can specify a primary and a secondary SAN boot. If the primary boot fails, the server attempts to boot from the secondary.
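
As an alternative to the wizard steps above, the boot policy can be created programmatically. The following sketch uses the open-source ucsmsdk Python SDK and is a sketch only, not the official procedure: the address, credentials, policy name, and description are placeholders, and the boot_mode attribute is assumed to be available in the UCS Manager release and SDK version you are running (it corresponds to the Boot Mode field in Step 8).

# Minimal sketch (assumptions noted above): create a boot policy with UEFI boot
# mode under the root organization.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.lsboot.LsbootPolicy import LsbootPolicy

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholder values
handle.login()

boot_policy = LsbootPolicy(
    parent_mo_or_dn="org-root",          # organization from Step 3
    name="uefi-4k-boot",                 # example name (1 to 16 characters)
    descr="UEFI boot policy for 4K sector format drives",
    boot_mode="uefi",                    # Boot Mode field from Step 8
    reboot_on_update="no",               # Reboot on Boot Order Change, Step 6
    enforce_vnic_name="yes",             # Enforce vNIC/vHBA/iSCSI Name, Step 7
)
handle.add_mo(boot_policy, modify_present=True)
handle.commit()

handle.logout()

Local device or SAN boot entries (Step 10) still need to be added under this policy before it can be used.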


Removing a Blade Server Cover

Procedure


Step 1

Press and hold the button down as shown in the figure below.

Step 2

While holding the back end of the cover, pull the cover back and then up.

Figure 2. Opening a Cisco UCS B200 M4 Blade Server

Air Baffles

The air baffles direct and improve airflow for the server components. Two identical baffles ship with each B200 M4 server. No tools are necessary to install them; just place them over the DIMMs as shown, with the holes in the center of the baffles aligned with the corresponding motherboard standoffs.

Figure 3. Cisco UCS B200 M4 Air Baffles

Internal Components

Figure 4. Inside View of the UCS B200 M4 Blade Server

1  SD card slots

2  Modular storage subsystem connector

3  USB connector

   An internal USB 2.0 port is supported. A 16 GB USB drive (UCS-USBFLSHB-16GB) is available from Cisco. A clearance of 0.950 inches (24.1 mm) is required for the USB device to be inserted and removed.

4  DIMM slots

5  Front heat sink and CPU 1

6  CPU heat sink install guide pins

7  Rear heat sink and CPU 2

8  CMOS battery

9  Trusted Platform Module (TPM)

10  DIMM diagnostic LED button

11  Adapter slot 1

12  Adapter slot 2

Note

When the storage module is installed, the USB connector is underneath it. Use the small cutout opening in the storage module to visually determine the location of the USB connector when you need to insert or remove a USB drive.


Diagnostics Button and LEDs

At blade start-up, POST diagnostics test the CPUs, DIMMs, HDDs, and rear mezzanine modules, and any failure notifications are sent to Cisco UCS Manager. You can view these notifications in the Cisco UCS Manager System Error Log or in the output of the show tech-support command. If errors are found, an amber diagnostic LED also lights up next to the failed component. During run time, the blade BIOS and component drivers monitor for hardware faults and will light up the amber diagnostic LED as needed.

LED states are saved, and if you remove the blade from the chassis, the LED values persist for up to 10 minutes. Pressing the LED diagnostics button on the motherboard causes the LEDs that currently show a component fault to light for up to 30 seconds for easier component identification. LED fault values are reset when the blade is reinserted into the chassis and booted, and the monitoring process starts over.

If DIMM insertion errors are detected, they may cause the blade discovery process to fail, and errors are reported in the server POST information, which you can view through the Cisco UCS Manager GUI or CLI. DIMMs must be populated according to specific rules, which depend on the blade server model. Refer to the documentation for a specific blade server for those rules.

Faults on the DIMMs or rear mezzanine modules also cause the server health LED to light solid amber for minor error conditions or blinking amber for critical error conditions.
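
In addition to the System Error Log and the show tech-support output, fault records can be pulled programmatically. The following sketch uses the open-source ucsmsdk Python SDK and is illustrative only; the address, credentials, and blade DN are placeholders.

# Minimal sketch (assumptions noted above): list the current fault records for a
# blade so POST, DIMM, or mezzanine failures reported to Cisco UCS Manager can
# be reviewed without opening the GUI.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholder values
handle.login()

blade_dn = "sys/chassis-1/blade-1"  # adjust chassis and slot
for fault in handle.query_classid("FaultInst"):
    if fault.dn.startswith(blade_dn):
        print(fault.severity, fault.code, fault.descr)

handle.logout()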

Installing a CMOS Battery

All Cisco UCS blade servers use a CR2032 battery to preserve BIOS settings while the server is not installed in a powered-on chassis. Cisco supports the industry standard CR2032 battery that is available at most electronics stores.


Warning

There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions.



To install or replace the battery, follow these steps:

Procedure


Step 1

Remove the existing battery:

  1. Power off the blade, remove it from the chassis, and remove the top cover.

  2. Push the battery socket retaining clip away from the battery.

  3. Lift the battery from the socket. Use needle-nose pliers to grasp the battery if there is not enough clearance for your fingers.

Step 2

Install the replacement battery:

  1. Push the battery socket retaining clip away from where the battery fits in the housing.

  2. Insert the new battery into the socket with the battery’s positive (+) marking facing away from the retaining clip. Ensure that the retaining clip can click over the top of the battery to secure it in the housing.

  3. Replace the top cover.

  4. Replace the blade server in the chassis.

    Figure 5. Location of the Motherboard CMOS Battery

Installing the FlexStorage Module

The Cisco UCS B200 M4 blade server uses an optional Cisco UCS FlexStorage modular storage subsystem that provides either two drive bays with a RAID controller or NVMe-based PCIe SSD support.

Procedure


Step 1

Place the FlexStorage module over the two standoff posts on the motherboard at the front of the server.

Step 2

Press down on the drive bay cage where it is labeled "Press Here to Install" until the FlexStorage module clicks into place.

Figure 6. FlexStorage Module
Step 3

Using a Phillips-head screwdriver, tighten the four screws to secure the FlexStorage module. The locations of the screws are labeled "Secure Here."


Upgrading to Intel Xeon E5-2600 v4 CPUs

Before upgrading to Intel Xeon E5-2600 v4 Series CPUs, ensure that the server is running the required minimum software and firmware versions that support Intel E5-2600 v4 Series CPUs, as listed in the following table.

Software or Firmware: Minimum Version

Cisco UCS Manager: Release 3.1(1e) with the 3.1(1g) catalog (ucs-catalog.3.1.1g.T.bin), or Release 2.2(7b). See the following Note for additional supported versions.

Cisco IMC: Release 3.1(1g) or Release 2.2(7b)

BIOS: Release 3.1(1g) or Release 2.2(7b)


Note

Cisco UCS Manager Release 2.2(4) introduced a server pack feature that allows Intel E5-2600 v4 CPUs to run with Cisco UCS Manager Release 2.2(4) or later, provided that the Cisco IMC, BIOS and Capability Catalog are all running Release 2.2(7) or later.



Caution

Ensure that the server is running the required software and firmware before installing the Intel E5-2600 v4 Series CPUs. Failure to do so can result in a non-bootable CPU.


Do one of the following actions:

  • If the server software and firmware are already at the required minimum version as shown in the preceding table, replace the CPUs by using the procedure in the following section. (A version-check sketch follows this list.)
  • If the server software or firmware is not at the required minimum version, follow the instructions in the Cisco UCS B200 M4 Server Upgrade Guide for E5-2600 v4 Series CPUs to upgrade it. Then replace the CPUs by using the procedure in the following section.
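
One way to check the running versions against the minimums in the preceding table is to query them from Cisco UCS Manager. The following sketch uses the open-source ucsmsdk Python SDK and is illustrative only; the address, credentials, and blade DN are placeholders, and the FirmwareRunning property names are assumptions to verify against your SDK release.

# Minimal sketch (assumptions noted above): report the firmware versions running
# on a blade before installing Intel E5-2600 v4 CPUs.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholder values
handle.login()

blade_dn = "sys/chassis-1/blade-1"  # adjust chassis and slot
for firmware in handle.query_classid("FirmwareRunning"):
    if firmware.dn.startswith(blade_dn):
        # 'type' distinguishes BIOS, board controller, adapters, and so on
        print(firmware.dn, firmware.type, firmware.version)

handle.logout()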

Removing a Heat Sink and CPU

Procedure


Step 1

Unscrew the four captive screws.

Step 2

Remove the heat sink.

Figure 7. Removing the Heat Sink and CPU
Step 3

Unhook the self-loading socket (SLS) lever that has the unlock icon.

Step 4

Unhook the SLS lever that has the lock icon.

Step 5

Grasp the sides of the CPU carrier (indicated by the arrows in the illustration) and swing it into a standing position in the SLS plug seat.

Figure 8. CPU Carrier and SLS Plug Seat
Step 6

Pull the CPU carrier up and out of the SLS plug seat.


Installing a New CPU and Heat Sink

Before installing a new CPU in a server, verify the following:

  • A BIOS update is available and installed that supports the CPU and the given server configuration.

  • The service profile for this server in Cisco UCS Manager will recognize and allow the new CPU.

  • The CPUs and heat sinks are different and must be installed in the correct location. The front heat sink and CPU 1 can be installed only in the front of the blade server, and the rear heat sink and CPU 2 can be installed only in the rear of the blade server.

Procedure


Step 1

Hold the CPU carrier by its sides (indicated by the arrows). Insert and align the two CPU carrier pegs into the self-loading socket (SLS) plug seat. To ensure proper seating, verify that the horizontal yellow line below the word ALIGN is straight.

Figure 9. Inserting the CPU Carrier
Step 2

Press gently on the top of the CPU carrier from the exterior side until it snaps into place.

Step 3

Close the socket latch.

Step 4

Hook the self-loading socket (SLS) lever that has the lock icon.

Step 5

Hook the SLS lever that has the unlock icon.

Step 6

Thermally bond the CPU and heat sink. Using the syringe of thermal grease provided with the replacement CPU, apply 2 cubic centimeters of thermal grease to the top of the CPU where it will contact the heat sink. Apply the grease in the pattern shown in the following figure, which should use approximately half the contents of the syringe.

Figure 10. Thermal Grease Application Pattern
Step 7

Replace the heat sink. The yellow CPU heat sink install guide pins that are attached to the motherboard must align with the cutout on the heat sink to ensure proper installation of the heat sink.

Figure 11. Replacing the Heat Sink
Step 8

Tighten the four captive screws in the order shown.


Installing Memory

To install a DIMM into the blade server, follow these steps:

Procedure


Step 1

Open both DIMM connector latches.

Step 2

Press the DIMM into its slot evenly on both ends until it clicks into place.

DIMMs are keyed. If a gentle force is not sufficient, make sure the notch on the DIMM is correctly aligned.

Note 

Be sure that the notch in the DIMM aligns with the slot. If the notch is misaligned you may damage the DIMM, the slot, or both.

Step 3

Press the DIMM connector latches inward slightly to seat them fully.


Supported DIMMs

Do not use any memory DIMMs other than those listed in the specification sheet. Doing so may irreparably damage the server and require downtime.

Memory Population

The blade server contains 24 DIMM slots (12 for each CPU). Each set of 12 DIMM slots is arranged into four channels, and each channel has three DIMM slots.

Figure 12. Memory Slots In the Blade Server

1  Channels A-D for CPU 1

2  Channels E-H for CPU 2

DIMMs and Channels

Each channel is identified by a letter—A, B, C, D for CPU 1, and E, F, G, H for CPU 2. Each DIMM slot is numbered 1, 2, or 3. Note that each DIMM slot 1 is blue, each slot 2 is black, and each slot 3 is off-white or beige.

The figure below shows how DIMMs and channels are physically laid out on the blade server. The DIMM slots in the upper and lower right are associated with the second CPU (CPU shown on right in the diagram), while the DIMM slots in the upper and lower left are associated with the first CPU (CPU shown on left).

Figure 13. Physical Representation of DIMMs and Channels

The figure below shows a logical view of the DIMMs and channels.

Figure 14. Logical Representation of DIMMs and Channels

DIMMs can be used in the blade server in a one DIMM per channel (1DPC) configuration, a two DIMMs per channel (2DPC) configuration, or a three DIMMs per channel (3DPC) configuration.

The following tables show recommended DIMM population order for non-mirroring and mirroring configurations. For single-CPU configurations, read only the CPU 1 columns of the tables.

Table 1. Supported DIMM Population Order (Non-Mirroring)

1 DIMM per CPU: CPU 1 slots A1; CPU 2 slots E1

2 DIMMs per CPU: CPU 1 slots A1, B1; CPU 2 slots E1, F1

3 DIMMs per CPU: CPU 1 slots A1, B1, C1; CPU 2 slots E1, F1, G1

4 DIMMs per CPU: CPU 1 slots A1, B1, C1, D1; CPU 2 slots E1, F1, G1, H1

8 DIMMs per CPU: CPU 1 slots A1, B1, C1, D1, A2, B2, C2, D2; CPU 2 slots E1, F1, G1, H1, E2, F2, G2, H2

12 DIMMs per CPU: CPU 1 slots A1, B1, C1, D1, A2, B2, C2, D2, A3, B3, C3, D3; CPU 2 slots E1, F1, G1, H1, E2, F2, G2, H2, E3, F3, G3, H3


Note

System performance is optimized when the DIMM type and quantity are equal for both CPUs, and when each populated channel is filled equally across the CPUs in the server.


Table 2. Supported DIMM Population Order (Mirroring)

2 DIMMs per CPU: CPU 1 slots A1, B1; CPU 2 slots E1, F1

4 DIMMs per CPU: CPU 1 slots A1, B1, C1, D1; CPU 2 slots E1, F1, G1, H1

8 DIMMs per CPU: CPU 1 slots A1, B1, C1, D1, A2, B2, C2, D2; CPU 2 slots E1, F1, G1, H1, E2, F2, G2, H2

8 DIMMs (CPU 1) and 4 DIMMs (CPU 2), not recommended for performance reasons: CPU 1 slots A1, B1, C1, D1, A2, B2, C2, D2; CPU 2 slots E1, F1, E2, F2

12 DIMMs per CPU: CPU 1 slots A1, B1, C1, D1, A2, B2, C2, D2, A3, B3, C3, D3; CPU 2 slots E1, F1, G1, H1, E2, F2, G2, H2, E3, F3, G3, H3

Memory Performance

There are several factors to consider when planning the memory configuration of the blade server. For example:

  • When mixing DIMMs of different densities (capacities), populate slot 1 with the highest-density DIMM and then populate the remaining slots in descending density order.

  • Besides DIMM population and choice, the selected CPU(s) can have some effect on performance.

  • DIMMs can be run in a 1DPC, a 2DPC, or a 3DPC configuration. 1DPC and 2DPC configurations can run at the maximum speed that the CPU and DIMMs are rated for; 3DPC causes the DIMMs to run at a slower speed.

Memory Mirroring and RAS

The Intel CPUs within the blade server support memory mirroring only when an even number of channels are populated with DIMMs. Furthermore, if memory mirroring is used, the available DRAM capacity is reduced by 50 percent because the data is duplicated for reliability.

Installing a Virtual Interface Card Adapter

The Cisco Virtual Interface Card (VIC) 1340 and VIC 1240 are specialized adapters that provide dual 2 x 10 Gb of Ethernet or Fibre Channel over Ethernet (FCoE) connectivity to each blade server. They plug into the dedicated VIC connector, and they are the only adapters that can be plugged into the slot 1 connector. They provide connectivity through Cisco UCS 6100, 6200, and 6300 Series Fabric Interconnects. The Cisco VIC 1200 Series (1240 and 1280) is compatible with UCS domains that implement both UCS 6100 and 6200 Series Fabric Interconnects. The Cisco VIC 1300 Series (1340 and 1380) is compatible with UCS 6200 Series and UCS 6300 Series Fabric Interconnects.


Note

You must remove the adapter card to service it.


To install a Cisco VIC 1340 or VIC 1240 in the blade server, follow these steps:

Procedure


Step 1

Position the VIC board connector above the motherboard connector and align the captive screw to the standoff post on the motherboard.

Step 2

Firmly press the VIC board connector into the motherboard connector.

Step 3

Tighten the captive screw.

Tip 

To remove a VIC, reverse the above procedure. You might find it helpful when removing the connector from the motherboard to gently rock the board along the length of the connector until it loosens.

Figure 15. Installing a VIC mLOM Adapter

Installing an Adapter Card in Addition to the VIC mLOM Adapter

All supported adapter cards have a common installation process. A list of currently supported and available adapters for this server is in the Cisco UCS B200 M4 Blade Server Specification Sheet.

The UCS B200 M4 blade server has two adapter slots (Slots 1 [mLOM slot] and 2) that support the following VIC cards:

  • VIC 1340 and VIC 1380

  • VIC 1240 and VIC 1280

Slot 1 is for the VIC 1340 or VIC 1240 mLOM adapter cards. Slot 2 is for the VIC 1380 and VIC 1280 cards, and can also be used for the VIC port expander, the nVidia M6 GPU, the Intel Crypto accelerator card, and non-I/O mezzanine cards, such as Fusion ioMemory 3 Series.


Note

When the Cisco Nexus 2104XP Fabric Extender (FEX) module is used, the VIC 1280 and the VIC port expander cards are ignored because there are no traces on the Cisco 2104XP to connect to any VIC or IO card installed in Slot 2.


The VIC 1340 and VIC 1380 require a Cisco UCS 6200 Series Fabric Interconnect or Cisco UCS 6300 Series Fabric Interconnect, and they support the Cisco Nexus 2208XP, 2204XP, 2348UPQ FEX modules.

The VIC 1240 and VIC 1280 support Cisco UCS 6200 and 6100 Series Fabric Interconnects, and they support the Cisco Nexus 2208XP, 2204XP, and 2104XP FEX modules. When a VIC 1240 or 1280 is used with a UCS 6100 Series Fabric Interconnect, the UCS B200 M4 blade server requires a maximum software release of 2.2(x) for Cisco UCS Manager.

If you are switching from one type of adapter card to another, before you physically perform the switch make sure that you download the latest device drivers and load them into the server’s operating system. For more information, see the firmware management chapter of one of the Cisco UCS Manager software configuration guides.

Procedure


Step 1

Position the adapter board connector above the motherboard connector and align the two adapter captive screws to the standoff posts on the motherboard.

Step 2

Firmly press the adapter connector into the motherboard connector (callout 2).

Step 3

Tighten the two captive screws (callout 3).

Tip 

Removing an adapter card is the reverse of installing it. You might find it helpful when removing the connector from the motherboard to gently rock the board along the length of the connector until it loosens.

Figure 16. Installing an Adapter Card

Installing the NVIDIA M6 GPU Adapter Card

The NVIDIA M6 graphics processing unit (GPU) adapter card provides graphics and computing capabilities to the server. If you are installing the NVIDIA GPU in a B200 M4 in the field, the option kit comes with the GPU itself (GPU and heat sink), a T-shaped installation wrench, and a custom standoff to support and attach the GPU to the B200 M4 motherboard. The three components of the option kit are shown in the following figure:

Figure 17. NVIDIA M6 GPU Option Kit

1  NVIDIA M6 GPU (GPU and heat sink)

2  T-shaped wrench

3  Custom standoff

Before you begin

Before installing the NVIDIA M6 GPU:

  • Remove any adapter card, such as a VIC 1380, VIC 1280, or VIC port expander card, from slot 2. You cannot use any other card in slot 2 when the NVIDIA M6 GPU is installed.

  • Upgrade the Cisco UCS domain that the GPU will be installed in to a version of Cisco UCS Manager that supports this card. Refer to the latest version of the Release Notes for Cisco UCS Software at the following URL for information about supported hardware: http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/products-release-notes-list.html.

Procedure


Step 1

Use the T-shaped wrench that comes with the GPU to remove the existing standoff at the back end of the motherboard.

Step 2

Install the custom standoff in the same location at the back end of the motherboard.

Step 3

Position the GPU over the connector on the motherboard and align all captive screws to the standoff posts (callout 1).

Step 4

Tighten the captive screws (callout 2).

Figure 18. Installing the NVIDIA M6 GPU


The following figure shows an NVIDIA M6 GPU installed in a Cisco UCS B200 M4 blade server.
Figure 19. Installed NVIDIA M6 GPU


1  Front of server

2  Custom standoff screw


What to do next

After you complete the installation of the NVIDIA M6 GPU, see NVIDIA Licensing Information to learn how to download NVIDIA software and acquire the necessary NVIDIA license. Follow the instructions to complete these steps in order:

  1. Register your product activation keys with NVIDIA.

  2. Download the GRID software suite.

  3. Install the GRID License Server software to a host.

  4. Generate licenses on the NVIDIA Licensing Portal and download them.

  5. Manage your GRID licenses.

  6. Decide whether to use the GPU in compute mode or graphics mode.

Enabling the Trusted Platform Module

The Trusted Platform Module (TPM) is a component that can securely store artifacts used to authenticate the server. These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure that the platform remains trustworthy. Authentication (ensuring that the platform can prove that it is what it claims to be) and attestation (a process helping to prove that a platform is trustworthy and has not been breached) are necessary steps to ensure safer computing in all environments. A TPM is also a requirement for the Intel Trusted Execution Technology (TXT) security feature, which must be enabled in the BIOS settings of a server equipped with a TPM.


Note

TPM installation is supported after-factory. However, a TPM installs with a one-way screw and cannot be replaced, upgraded, or moved to another server. If a server with a TPM is returned, the replacement server must be ordered with a new TPM.

If there is no existing TPM in the server, you can install TPM 2.0. TPM 2.0 can be installed in servers running Intel Xeon E5-2600 v3 or v4 CPUs, but it requires UCS firmware that supports Intel E5-2600 v4 CPUs: Cisco UCS Manager Release 2.2(7) and later, or Release 3.1(1) and later.


Caution

If the Cisco UCS B200 M4 server (with Intel E5-2600 v4 or v3 CPUs) is running UCS firmware that added support for Intel E5-2600 v4 CPUs, then it will work with TPM version 2.0. However, if you downgrade the firmware and BIOS to a version earlier than Release 2.2(7) or earlier than Release 3.1(1), then you are vulnerable to a potential security exposure. See the following support matrix for TPM versions.


Table 3. TPM Support Matrix by Intel CPU Version

Intel E5-2600 v3 with TPM 1.2: minimum UCS Manager (UCSM) Release 2.2(3)

Intel E5-2600 v3 with TPM 2.0: minimum UCS Manager Release 2.2(7) or Release 3.1(1)

Intel E5-2600 v4 with TPM 1.2: minimum UCS Manager Release 2.2(7) or Release 3.1(1)

Intel E5-2600 v4 with TPM 2.0: minimum UCS Manager Release 2.2(7) or Release 3.1(1)

Procedure


Step 1

Install the TPM hardware.

  1. Decommission and remove the blade server from the chassis.

  2. Remove the blade server cover.

  3. Install the TPM to the TPM socket on the server motherboard and secure it using the one-way screw that is provided. See the figure below for the location of the TPM socket.

  4. Return the blade server to the chassis and allow it to be automatically reacknowledged, reassociated, and recommissioned.

  5. Continue with enabling TPM support in the server BIOS in the next step.

Figure 20. TPM Socket Location

1  Front of server

2  TPM socket on motherboard

Step 2

Enable TPM Support in the BIOS.

If TPM support was disabled for any reason, use the following procedure to enable it.

  1. In the Cisco UCS Manager Navigation pane, click the Servers tab.

  2. On the Servers tab, expand Servers > Policies.

  3. Expand the node for the organization where you want to configure the TPM.

  4. Expand BIOS Policies and select the BIOS policy for which you want to configure the TPM.

  5. In the Work pane, click the Advanced tab.

  6. Click the Trusted Platform sub-tab.

  7. To enable TPM support, click Enable or Platform Default.

  8. Click Save Changes.

  9. Continue with the next step.

Step 3

Enable TXT Support in the BIOS Policy.

Follow the procedures in the Cisco UCS Manager Configuration Guide for the release that is running on the server.
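
As an alternative to the GUI steps in Step 2 and Step 3, the TPM and TXT tokens can be set in a BIOS policy programmatically. The following sketch uses the open-source ucsmsdk Python SDK and is a sketch only: the address, credentials, and BIOS policy DN are placeholders, and the class and property names reflect the SDK object model as commonly published, so verify them against your SDK release before use.

# Minimal sketch (assumptions noted above): enable the TPM and Intel TXT tokens
# in an existing BIOS policy.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.bios.BiosVfTrustedPlatformModule import BiosVfTrustedPlatformModule
from ucsmsdk.mometa.bios.BiosVfIntelTrustedExecutionTechnology import (
    BiosVfIntelTrustedExecutionTechnology,
)

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholder values
handle.login()

bios_policy_dn = "org-root/bios-prof-tpm-policy"  # example BIOS policy DN

tpm_token = BiosVfTrustedPlatformModule(
    parent_mo_or_dn=bios_policy_dn,
    vp_trusted_platform_module_support="enabled",
)
txt_token = BiosVfIntelTrustedExecutionTechnology(
    parent_mo_or_dn=bios_policy_dn,
    vp_intel_trusted_execution_technology_support="enabled",
)
handle.add_mo(tpm_token, modify_present=True)
handle.add_mo(txt_token, modify_present=True)
handle.commit()

handle.logout()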