Performing Diagnostics


You can use diagnostics to test and verify the functionality of the hardware components of your system (chassis, supervisor engines, modules, and ASICs) while your Catalyst 4500 series switch is connected to a live network. Diagnostics consists of packet-switching tests that test hardware components and verify the data path and control signals.

Online diagnostics are categorized as bootup, on-demand, scheduled, or health-monitoring diagnostics. Bootup diagnostics run during bootup; on-demand diagnostics run from the CLI; scheduled diagnostics run at user-designated intervals or at specified times when the switch is connected to a live network; and health-monitoring diagnostics run in the background.

This chapter consists of these sections:

Configuring Online Diagnostics

Performing Diagnostics

Power-On Self-Test Diagnostics


Note For complete syntax and usage information for the switch commands used in this chapter, refer to the Catalyst 4500 Series Switch Cisco IOS Command Reference and related publications at this location:

http://www.cisco.com/en/US/products/ps6350/index.html


Configuring Online Diagnostics

These sections describe how to configure online diagnostics:

Configuring On-Demand Online Diagnostics

Scheduling Online Diagnostics

Configuring On-Demand Online Diagnostics

You can run on-demand online diagnostic tests from the CLI. You can set the execution action to either continue or stop the test when a failure is detected, and you can use the failure count setting to stop the test after a specific number of failures. The iteration setting allows you to configure a test to run multiple times.

To configure on-demand online diagnostics, perform this task:

Command
Purpose
Switch# diagnostic ondemand 
{iteration iteration_count | 
action-on-failure {continue 
[failure_count] | stop}}

Configures on-demand diagnostic tests to run, how many times to run (iterations), and what action to take when errors are found.



This example shows how to set the on-demand testing iteration count:

Switch# diagnostic ondemand iterations 3
Switch#
 
   

This example shows how to set the execution action when an error is detected:

Switch# diagnostic ondemand action-on-failure continue 2
Switch# 
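
This example sketches how to set the execution action so that testing stops as soon as a failure is detected, based on the syntax shown above (illustrative; adjust to your configuration):

Switch# diagnostic ondemand action-on-failure stop
Switch# 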

Scheduling Online Diagnostics

You can schedule online diagnostics to run at a designated time of day or on a daily, weekly, or monthly basis. You can schedule tests to run only once or to repeat at an interval. Use the no form of this command to remove the scheduling.

To configure online diagnostics, perform this task:

Command
Purpose
Switch(config)# diagnostic 
schedule module number test 
{test_id | test_id_range | all} 
[port {num | num_range | all}] 
{on mm dd yyyy hh:mm | daily 
hh:mm | weekly day_of_week hh:mm}

Schedules diagnostic tests on the specified module and port to run at a specific date and time, daily, or weekly.


This example shows how to schedule diagnostic testing on a specific date and time for a specific port on module 6:

Switch(config)# diagnostic schedule module 6 test 2 port 3 on may 23 2009 23:32 
Switch(config)# 
 
   

This example shows how to schedule diagnostic testing to occur daily:

Switch(config)# diagnostic schedule module 6 test 2 port 3 daily 12:34 
Switch(config)# 
 
   

This example shows how to schedule diagnostic testing to occur weekly:

Switch(config)# diagnostic schedule module 6 test 2 port 3 weekly friday 09:23 
Switch(config)# 
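
As noted earlier, you can use the no form of the command to remove a schedule. This example sketches how you might remove the weekly schedule configured above (illustrative only):

Switch(config)# no diagnostic schedule module 6 test 2 port 3 weekly friday 09:23
Switch(config)# 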

Performing Diagnostics

After you configure online diagnostics, you can start or stop diagnostic tests or display the test results. You can also see which tests are configured and what diagnostic tests have already run.

These sections describe how to run online diagnostic tests after they have been configured:

Starting and Stopping Online Diagnostic Tests

Displaying Online Diagnostic Tests and Test Results

Line Card Online Diagnostics

Troubleshooting with Online Diagnostics


Note Before you enable any online diagnostic tests, enable console logging or terminal monitoring so that you can observe all warning messages.
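
For example, you might enable console logging and terminal monitoring with commands like the following before starting a test (a minimal sketch that assumes default logging levels; your configuration may differ):

Switch# configure terminal
Switch(config)# logging console
Switch(config)# end
Switch# terminal monitor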



Note Run disruptive tests only when you are connected through the console. When a disruptive test completes, a warning message on the console recommends that you reload the system to return to normal operation. Strictly follow this recommendation.


Starting and Stopping Online Diagnostic Tests

After you configure diagnostic tests, you can use the start and stop keywords to begin or end a test.

To start or stop an online diagnostic command, perform one of these tasks:

Command
Purpose
Switch# diagnostic start module 
number test {test_id | 
test_id_range | minimal | complete 
| basic | per-port | 
non-disruptive | all} [port {num | 
port#_range | all}]

Starts a diagnostic test on a port or range of ports on the specified module.

Switch# diagnostic stop module 
number

Stops a diagnostic test on the specified module.


This example shows how to start a diagnostic test on module 6:

Switch# diagnostic start module 6 test 2
Diagnostic[module 6]: Running test(s) 2 Run interface level cable diags
Diagnostic[module 6]: Running test(s) 2 may disrupt normal system operation
Do you want to continue? [no]: yes
Switch#
*May 14 21:11:46.631: %DIAG-6-TEST_RUNNING: module 6: Running online-diag-tdr{ID=2} ...
*May 14 21:11:46.631: %DIAG-6-TEST_OK: module 6: online-diag-tdr{ID=2} has completed 
successfully
Switch#
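
Based on the start command syntax shown above, you can also start a group of tests instead of a single test ID. This example sketches how to start all non-disruptive tests on module 6 (illustrative; output will vary):

Switch# diagnostic start module 6 test non-disruptive
Switch# 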
 
   

This example shows how to stop a diagnostic test on module 6:

Switch# diagnostic stop module 6
Diagnostic[module 6]: Diagnostic is not active.
 
   
This message indicates that no diagnostic test is currently running on module 6.

Displaying Online Diagnostic Tests and Test Results

You can display the configured online diagnostic tests and check the results of the tests with the show diagnostic command.

To display the configured diagnostic tests, perform this task:

Command
Purpose
Switch# show diagnostic content [ 
module { num | all } ]

Displays diagnostic test information for a module.

Switch# show diagnostic 
description module num [ test 
test_id ]

Displays test description for a given test on a module.

Switch# show diagnostic events

Displays diagnostic event log.

Switch# show diagnostic ondemand 
settings

Displays the on-demand test settings.

Switch# show diagnostic result 
module { num | all } [ detail | 
failure | xml | test { test_id | 
test_range | all } [ detail ] ]

Displays diagnostic test results.

Switch# show diagnostic schedule 
module { num | all }

Displays the diagnostic schedule for a module.

Switch# show diagnostic status

Shows status of currently running diagnostic tests.


This example shows how to display the online diagnostics configured on module 6:

Switch# show diagnostic content module 6
module 6:
Diagnostics test suite attributes:
    M/C/* - Minimal bootup level test / Complete bootup level test / NA
      B/* - Basic ondemand test / NA
    P/V/* - Per port test / Per device test / NA
    D/N/* - Disruptive test / Non-disruptive test / NA
      S/* - Only applicable to standby unit / NA
      X/* - Not a health monitoring test / NA
      F/* - Fixed monitoring interval test / NA
      E/* - Always enabled monitoring test / NA
      A/I - Monitoring is active / Monitoring is inactive
      cable-tdr/* - Interface cable diags / NA
      o/* - Ongoing test, always active / NA
                                                          Test Interval   Thre-
  ID   Test Name                          Attributes      day hh:mm:ss.ms shold
  ==== ================================== ============    =============== =====
    1) linecard-online-diag ------------> M**D****I**     not configured  n/a
    2) online-diag-tdr -----------------> **PD****Icable- not configured  n/a 
 
   

This example shows how to display the test description for a given test on a module:

Switch# show diagnostic description module 6 test 1
 
   
linecard-online-diag :
        Linecard online-diagnostics run after the system boots up but
        before it starts passing traffic.  Each linecard port is placed in
        loopback, and a few packets are injected into the switching fabric
        from the cpu to the port.  If the packets are successfully
        received by the cpu, the port passes the test.  Sometimes one port
        or a group of ports sharing common components fail.  The linecard
        is then placed in partial faulty mode.  If no ports can loop back
        traffic, the board is placed in faulty state.
 
   
Switch#
 
   

This example shows how to display the online diagnostic results for module 6:

Switch# show diagnostic result module 6
 
   
Current bootup diagnostic level: minimal
 
   
module 6:   SerialNo : JAB0815059L
 
   
  Overall Diagnostic Result for module 6 : PASS
  Diagnostic level at card bootup: minimal
 
   
  Test results: (. = Pass, F = Fail, U = Untested)
 
   
    1) linecard-online-diag ------------> .
    2) online-diag-tdr:
 
   
   Port  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
   ----------------------------------------------------------------------------
         U  U  .  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U
 
   
 
   
Switch#
 
   

This example shows how to display the online diagnostic results details for module 6:

Switch# show diagnostic result module 6 detail
 
   
Current bootup diagnostic level: minimal
 
   
module 6:   SerialNo : JAB0815059L
 
   
  Overall Diagnostic Result for module 6 : PASS
  Diagnostic level at card bootup: minimal
 
   
  Test results: (. = Pass, F = Fail, U = Untested)

___________________________________________________________________________

1) linecard-online-diag ------------> .
 
   
          Error code ------------------> 0 (DIAG_SUCCESS)
          Total run count -------------> 1
          Last test testing type ------> n/a
          Last test execution time ----> Jun 01 2009 11:19:36
          First test failure time -----> n/a
          Last test failure time ------> n/a
          Last test pass time ---------> Jun 01 2009 11:19:36
          Total failure count ---------> 0
          Consecutive failure count ---> 0
 
   
Slot Ports Card Type                              Diag Status      Diag Details
---- ----- -------------------------------------- ---------------- ------------
 6    24   10/100/1000BaseT (RJ45)V, Cisco/IEEE   Passed           None 
 
   
Detailed Status
---------------
. = Pass              U = Unknown
L = Loopback failure  S = Stub failure
P = Port failure
E = SEEPROM failure   G = GBIC integrity check failure
 
   
 
   
Ports 1   2   3   4   5   6   7   8   9   10  11  12  13  14  15  16
      .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .
 
   
Ports 17  18  19  20  21  22  23  24
      .   .   .   .   .   .   .   .

___________________________________________________________________________

2) online-diag-tdr:
 
   
   Port  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
   ----------------------------------------------------------------------------
         U  U  .  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U  U
 
   
 
   
          Error code ------------------> 0 (DIAG_SUCCESS)
          Total run count -------------> 1
          Last test testing type ------> OnDemand
          Last test execution time ----> Jun 03 2009 05:39:00
          First test failure time -----> n/a
          Last test failure time ------> n/a
          Last test pass time ---------> Jun 03 2009 05:39:00
          Total failure count ---------> 0
          Consecutive failure count ---> 0
 
   
Detailed Status
---------------
Interface Speed  Local pair Cable length Remote channel Status
Gi6/3     1Gbps   1-2        N/A          Unknown       Terminated
                  3-6        N/A          Unknown       Terminated
                  4-5        N/A          Unknown       Terminated
                  7-8        N/A          Unknown       Terminated
 
   
  ___________________________________________________________________________
 
   
Switch#
 
   

This example shows how to display diagnostic events:

Switch# show diagnostic events
Diagnostic events (storage for 500 events, 3 events recorded)
Number of events matching above criteria = 3
Event Type (ET): I - Info, W - Warning, E - Error
 
   
Time Stamp         ET [Card] Event Message
------------------ -- ------ --------------------------------------------------
02/03 04:17:45.063 I  [1]    stub-rx-errors Passed
02/03 04:17:45.095 I  [1]    stub-rx-errors Passed
02/03 04:17:45.127 I  [1]    stub-rx-errors Passed
 
   

This example shows how to display ondemand test settings:

 
   
Switch# show diagnostic ondemand settings
Test iterations = 3
Action on test failure = continue until test failure limit reaches 5
 
   

This example shows how to display the test schedule for a module:

 
   
Switch# show diagnostic schedule module 1
Current Time = 07:10:53 UTC Sat Feb 3 2001
 
   
Diagnostic for module 1 is not scheduled.

This example shows how to display the status of currently running tests:

Switch# show diagnostic status

<BU> - Bootup Diagnostics, <HM> - Health Monitoring Diagnostics,
<OD> - OnDemand Diagnostics, <SCH> - Scheduled Diagnostics

====== ================================= =============================== ======
Card   Description                       Current Running Test            Run by
------ --------------------------------- ------------------------------- ------
1                                        N/A                             N/A   

====== ================================= =============================== ======

Line Card Online Diagnostics

A line card online diagnostic test verifies that all ports on a line card are working correctly. The test can detect whether the path to the front panel port on the line card is broken, but it cannot indicate where along the path the problem occurred.


Note This test is run only for line cards that have stub chips.


Line card online diagnostics run only once, when the line card boots. This can happen when you insert a line card or power up the chassis.

Line card online diagnostics are performed by sending a packet from the CPU to every port on the line card. Because this packet is marked loopback, the CPU expects to see this packet return from the port. The packet first traverses the ASICs on the supervisor engine card, then travels via the chassis backplane and the stub chip on the line cards to the PHYs. The PHY sends it back down the same path.


Note The packet does not reach or exit the front panel port.


Troubleshooting with Online Diagnostics

A line card is reported as faulty if any of the following conditions occurs:

All ports fail

All ports on a stub chip fail

Only one port fails

In all of these situations, the output of the show module command displays the status of the line card as Faulty:

Switch# show mod
Chassis Type : WS-C4507R
Power consumed by backplane : 40 Watts
 
Mod Ports Card Type                              Model              Serial No.
---+-----+--------------------------------------+------------------+-----------
 1     6  Sup II+10GE 10GE (X2), 1000BaseX (SFP) WS-X4013+10GE      JAB091502G0 
 2     6  Sup II+10GE 10GE (X2), 1000BaseX (SFP) WS-X4013+10GE      JAB091502FC 
 3    48  100BaseX (SFP)                         WS-X4248-FE-SFP    JAB093305RP 
 4    48  10/100BaseTX (RJ45)V                   WS-X4148-RJ45V     JAE070717E5 
 5    48  10/100BaseTX (RJ45)V                   WS-X4148-RJ45V     JAE061303U3 
 6    48  10/100BaseTX (RJ45)V                   WS-X4148-RJ45V     JAE061303WJ 
 7    24  10/100/1000BaseT (RJ45)V, Cisco/IEEE   WS-X4524-GB-RJ45V  JAB0815059Q 
 
 M MAC addresses                    Hw  Fw           Sw               Status
--+--------------------------------+---+------------+----------------+---------
 1 000b.5f27.8b80 to 000b.5f27.8b85 0.2 12.2(27r)SG( 12.2(37)SG Ok       
 2 000b.5f27.8b86 to 000b.5f27.8b8b 0.2 12.2(27r)SG( 12.2(37)SG Ok       
 3 0005.9a80.6810 to 0005.9a80.683f 0.4                               Ok       
 4 000c.3016.aae0 to 000c.3016.ab0f 2.6                               Ok       
 5 0008.a3a3.4e70 to 0008.a3a3.4e9f 1.6                               Ok       
 6 0008.a3a3.3fa0 to 0008.a3a3.3fcf 1.6                                 Faulty      
 7 0030.850e.3e78 to 0030.850e.3e8f 1.0                               Ok       
          
Mod  Redundancy role     Operating mode      Redundancy status
----+-------------------+-------------------+----------------------------------
 1   Active Supervisor   SSO                 Active                            
 2   Standby Supervisor  SSO                 Standby hot 
 
   

To troubleshoot a faulty line card, do the following:


Step 1 Enter the command show diagnostic result module 3.

If a faulty line card was inserted in the chassis, it will fail the diagnostics and the output will be similar to the following:

Current bootup diagnostic level: minimal
 
   
module 3:   SerialNo : JAB093305RP 
 
   Overall Diagnostic Result for module 3 : MAJOR ERROR
   Diagnostic level at card bootup: minimal
 
   Test results: (. = Pass, F = Fail, U = Untested)
 
     1) linecard-online-diag ------------> F
 
 
   
Switch#
 
   

Issue an RMA for the line card, contact TAC, and skip steps 2 and 3.

The output may display the following:

module 3: 

  Overall diagnostic result: PASS

  Test results: (. = Pass, F = Fail, U = Untested)

    1) linecard-online-diag --------------------> .


This output indicates that the line card passed online diagnostics either when it was last inserted into the chassis or when the switch was powered up (as indicated by the "."). You need to obtain additional information to determine the cause of the problem.

Step 2 Insert a different supervisor engine card and reinsert the line card.

If the line card passes the test, the original supervisor engine card is probably defective.

Issue an RMA for the supervisor engine, contact TAC, and skip step 3.

Because online diagnostics do not run on the supervisor engine card, you cannot use the show diagnostic module 1 command to test whether the supervisor engine card is faulty.

Step 3 Insert the line card in a different chassis.

If the line card passes the test, the problem is associated with the original chassis.

Issue an RMA for the chassis and contact TAC.


Power-On Self-Test Diagnostics

The following topics are discussed:

Overview

Sample POST Results

Power-On Self-Test Results for Supervisor Engine V-10GE

Troubleshooting the Test Failures

Overview

All Catalyst 4500 series switches have power-on self-test (POST) diagnostics that run whenever a supervisor engine boots. POST tests the basic hardware functionality of the supervisor switching engine, its associated packet memory, and other on-board hardware components. The results of the POST affect how the switch boots, because the health of the supervisor engine is critical to the operation of the switch. Depending on the POST results, the switch might boot in a marginal or faulty state.

POST is currently supported on the following supervisor engines:

WS-X45-SUP6-E

WS-X45-SUP6L-E

WS-X45-SUP7-E

The POST results are indicated with a period (.) or Pass for a passing test, an F for a failing test, and a U for an untested component.


Note A supervisor engine runs POST during boot up (insertion or power on). In a redundant topology, both supervisor engines run POST individually. Switchovers are allowed only after both supervisor engines have booted. During a switchover, the standby supervisor engine does not run POST again because it has already booted.


Sample POST Results

For all the supervisor engines, POST performs CPU, traffic, system, system memory, and feature tests.

For CPU tests, POST verifies appropriate activity of the supervisor SEEPROM, temperature sensor, and Ethernet out-of-band channel (EOBC), when used.

The following example illustrates the output of a CPU subsystem test on all supervisor engines except the WS-X4013+TS:

[..]
Cpu Subsystem Tests ...
seeprom: . temperature_sensor: . eobc: . 
[..]
 
   

The following example illustrates the output of a CPU subsystem test on a WS-X4013+TS supervisor engine.

[..]
Cpu Subsystem Tests ...
seeprom: . temperature_sensor: . 
[..]
 
   

For traffic tests, POST sends packets from the CPU to the switch. These packets loop several times within the switch core and validate the switching, Layer 2, and Layer 3 functionality. To isolate hardware failures accurately, the loopback is performed both inside and outside the switch ports.

The following example illustrates the output of a Layer 2 traffic test at the switch ports on the WS-X4516, WS-X4516-10GE, WS-X4013+10GE, and WS-C4948G-10GE supervisor engines:

Port Traffic: L2 Serdes Loopback ...
 0: .  1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 
12: . 13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 
24: . 25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 
36: . 37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 
 
   

The following example illustrates the output of a Layer 2 traffic test at the switch ports on the WS-X4013+TS, WS-X4515, WS-X4013+, WS-X4014, and WS-C4948G supervisor engines:

Port Traffic: L2 Serdes Loopback ...
 0: .  1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 
12: . 13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 
24: . 25: . 26: . 27: . 28: . 29: . 30: . 31:
 
   

POST also performs tests on the packet and system memory of the switch. The memory tests are numbered dynamically in ascending order, starting with 1, and represent different memories.

The following example illustrates the output from a system memory test:

Switch Subsystem Memory ...
 1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 12: . 
13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 24: . 
25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 36: . 
37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 48: . 
49: . 50: . 51: . 52: . 53: . 54: . 55: . 
 
   

POST also tests the NetFlow services card and the NetFlow services feature (Supervisor Engine 7-E). Failures from these tests are treated as marginal because they do not impact the functionality of the switch (except for the unavailability of the NetFlow features):

Netflow Services Feature ...
se: . cf: . 52: . 53: . 54: . 55: . 56: . 57: . 58: . 59: . 60: . 61: . 
62: . 63: . 64: . 65: . 
 
   

Note Supervisor Engine 6-E retains most of the POST features of earlier supervisor engines, including the CPU subsystem tests, Layer 2 and Layer 3 traffic tests, and memory tests. Redundant ports on redundant systems are not tested. All POST diagnostics are local to the supervisor engine running the tests.


 
   

The following example shows the output for a WS-X45-SUP6-E supervisor engine:

Switch# show diagnostic result module 3 detail
 
   
module 3:   SerialNo : XXXXXXXXXXX
 
   
  Overall diagnostic result: PASS
 
   
  Test results: (. = Pass, F = Fail, U = Untested)
___________________________________________________________________________
 
   
    1) supervisor-bootup ---------------> 
          Error code ------------------> 0 (DIAG_SUCCESS)
          Total run count -------------> 1
          Last test execution time ----> Oct 01 2007 17:37:04
          First test failure time -----> n/a
          Last test failure time ------> n/a
          Last test pass time ---------> Oct 01 2007 17:37:04
          Total failure count ---------> 0
          Consecutive failure count ---> 0
Power-On-Self-Test Results for ACTIVE Supervisor
prod: WS-X45-SUP6-E part: XXXXXXXXX serial: XXXXXXXXXX
Power-on-self-test for Module 3: WS-X45-SUP6-E
 Test Status: (. = Pass, F = Fail, U = Untested) 
 
   
CPU Subsystem Tests ... 
 seeprom: Pass
 
   
Traffic: L3 Loopback ... 
 Test Results: Pass 
 
   
Traffic: L2 Loopback ... 
 Test Results: Pass
 
   
Switching Subsystem Memory ... 
 Packet Memory Test Results: Pass
 
   
Module 3 Passed
___________________________________________________________________________
 
   
    2) linecard-online-diag ------------> 
          Error code ------------------> 0 (DIAG_SUCCESS)
          Total run count -------------> 1
          Last test execution time ----> Oct 01 2007 17:37:04
          First test failure time -----> n/a
          Last test failure time ------> n/a
          Last test pass time ---------> Oct 01 2007 17:37:04
          Total failure count ---------> 0
          Consecutive failure count ---> 0 
 
   
Slot Ports Card Type                              Diag Status      Diag Details
---- ----- -------------------------------------- ---------------- ------------
 3     6   Sup 6-E 10GE (X2), 1000BaseX (SFP)     Skipped          Packet memory 
Detailed Status
---------------
. = Pass              U = Unknown
L = Loopback failure  S = Stub failure
P = Port failure
E = SEEPROM failure   G = GBIC integrity check failure
 
   
Ports 1   2   3   4   5   6 
      .   .   .   .   .   . 
  ___________________________________________________________________________
Switch#
 
   

The following example shows the output for a WS-X45-SUP7-E supervisor engine:

Switch# show diagnostic result module 3 detail
 
   
Checking digital signature
/nfs/gsg-sw/interim/flo_gsbu8/newest_image/iosxe/prod/cat4500e-universal.bin: Digitally 
Signed Development Software with key version A
 
   
Rommon reg: 0x00004FA8
Reset2Reg: 0x00000F00
 
   
Image load status: 0x00000000
#####
Snowtrooper 220 controller 0x04324CF8..0x044EDFA6 Size:0x0058B0C1 Program Done!
##############
Linux version 2.6.24.4.3.3.k10 (priypras@gsg-lnx-bld6) (gcc version 4.2.1 p4 (Cisco 
c4.2.1-p4)) #1 SMP Mon Jul 18 02:35:13 PDT 2011
Starting System Services
 
   
diagsk10-post version 4.1.7.4
 
   
prod: WS-X45-SUP7-E part: 73-12064-08 serial: CAT1418L05H
 
   
 
   
Power-on-self-test for Module 1: WS-X45-SUP7-E
 Test Status: (. = Pass, F = Fail, U = Untested)
 
   
 
   
CPU Subsystem Tests ... 
 seeprom: Pass
 
   
Traffic: L3 Loopback ... 
 Test Results: Pass
 
   
Traffic: L2 Loopback ... 
 Test Results: Pass
post done
Exiting to ios...
 
   

Power-On Self-Test Results for Supervisor Engine V-10GE

For the Supervisor Engine V-10GE (WS-X4516-10GE), POST tests extra redundancy features on the 10-Gigabit ports.

The following topics are discussed:

POST on the Active Supervisor Engine

Sample POST Results on an Active Supervisor Engine

POST on a Standby Supervisor Engine

Sample Display of the POST on a Standby Supervisor Engine

POST on the Active Supervisor Engine

The active supervisor engine tests the remote redundant 10-Gigabit ports on the standby supervisor engine, if a standby is present when the active supervisor engine is booting. The status of the port is displayed as "Remote TenGigabit Port Status." If no standby supervisor engine is present, the remote port status is always displayed as Untested, and this status persists even after a new standby supervisor engine is inserted. The remaining tests are conducted using only the Gigabit ports' configuration.

If the standby supervisor engine is removed after the active supervisor engine has completed its bootup diagnostics, the remote port status changes to Untested in the overall diagnostic results.

Sample POST Results on an Active Supervisor Engine

Switch# show diagnostic result module 1 detail
 
   
module 1: 
 
   
  Overall diagnostic result: PASS
 
   
  Test results: (. = Pass, F = Fail, U = Untested)
 
   
  ___________________________________________________________________________
 
   
    1) supervisor-bootup -----------------------> .
 
   
          Error code --------------------------> 0 (DIAG_SUCCESS)
          Total run count ---------------------> 1
          Last test execution time ------------> Jul 19 2005 13:28:16
          First test failure time -------------> n/a
          Last test failure time --------------> n/a
          Last test pass time -----------------> Jul 19 2005 13:28:16
          Total failure count -----------------> 0
          Consecutive failure count -----------> 0
 
   
Power-On-Self-Test Results for ACTIVE Supervisor
 
   
 
   
Power-on-self-test for Module 1:  WS-X4516-10GE
 Port/Test Status: (. = Pass, F = Fail, U = Untested)
 Reset Reason: Software/User 
 
   
 
   
 
   
Cpu Subsystem Tests ...
seeprom: . temperature_sensor: . eobc: . 
 
   
Port Traffic: L3 Serdes Loopback ...
 0: .  1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 
12: . 13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 
24: . 25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 
36: . 37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 
 
   
Local 10GE Port 62: . 
 
   
Local 10GE Port 63: . 
 
   
Port Traffic: L2 Serdes Loopback ...
 0: .  1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 
12: . 13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 
24: . 25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 
36: . 37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 
48: . 49: . 50: . 51: . 
 
   
 
   
Port Traffic: L2 Asic Loopback ...
 0: .  1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 
12: . 13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 
24: . 25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 
36: . 37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 
48: . 49: . 50: . 51: . 
 
   
 
   
Switch Subsystem Memory ...
 1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 12: . 
13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 24: . 
25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 36: . 
37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 48: . 
49: . 50: . 51: . 
 
   
 
   
Netflow Services Feature ...
se: . cf: . 52: . 53: . 54: . 55: . 56: . 57: . 58: . 59: . 60: . 61: . 
62: . 63: . 64: . 65: . 
          
 
   
Module 1 Passed
 
   
Remote TenGigabitPort status: Passed
 
   
  ___________________________________________________________________________
 
   
    2) packet-memory-bootup --------------------> U
 
   
          Error code --------------------------> 0 (DIAG_SUCCESS)
          Total run count ---------------------> 0
          Last test execution time ------------> n/a
          First test failure time -------------> n/a
          Last test failure time --------------> n/a
          Last test pass time -----------------> n/a
          Total failure count -----------------> 0
          Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
 
   
 
   
Exhaustive packet memory tests did not run at bootup.
Bootup test results:5
No  errors.
 
   
  ___________________________________________________________________________
 
   
    3) packet-memory-ongoing -------------------> U
 
   
          Error code --------------------------> 0 (DIAG_SUCCESS)
          Total run count ---------------------> 0
          Last test execution time ------------> n/a
          First test failure time -------------> n/a
          Last test failure time --------------> n/a
          Last test pass time -----------------> n/a
          Total failure count -----------------> 0
          Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
 
   
 
   
Packet memory errors: 0 0
Current alert level: green
Per 5 seconds in the last minute: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 
Per minute in the last hour: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
Per hour in the last day: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 
Per day in the last 30 days: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
Direct memory test failures per minute in the last hour: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
Potential false positives: 0 0
  Ignored because of rx errors: 0 0
  Ignored because of cdm fifo overrun: 0 0
  Ignored because of oir: 0 0
  Ignored because isl frames received: 0 0
  Ignored during boot: 0 0
  Ignored after writing hw stats: 0 0
  Ignored on high gigaport: 0
Ongoing diag action mode: Normal
Last 1000 Memory Test Failures:
Last 1000 Packet Memory errors:
First 1000 Packet Memory errors:
 
   
  ___________________________________________________________________________
 
   
Switch#

POST on a Standby Supervisor Engine

Ports 62 and 63 of the supervisor engine always remain Untested (U). Because the standby supervisor engine never tests the remote 10-Gigabit port on the active supervisor engine, the remote 10-Gigabit port status on the standby supervisor engine is always Untested. The supervisor engine performs the remaining tests using the Gigabit ports' configuration.


Note On a redundant chassis, concurrent POST is supported on supervisor engines that are already inserted. However, if a second supervisor engine is inserted while the first one is loading, you might boot the first supervisor engine in a faulty IOS state (POST will abort and some of the POST's tests will be bypassed). This situation only happens during concurrent bootup of the supervisor engines. You should not insert any additional supervisor engines in the empty supervisor engine slot while an already seated supervisor engine is running POST. The POST sequence is completed when the "Exiting to ios..." message is displayed.


Sample Display of the POST on a Standby Supervisor Engine

Switch# show diagnostic result module 2 detail
 
   
module 2: 
 
   
  Overall diagnostic result: PASS
 
   
  Test results: (. = Pass, F = Fail, U = Untested)
 
   
  ___________________________________________________________________________
 
   
    1) supervisor-bootup -----------------------> .
 
   
          Error code --------------------------> 0 (DIAG_SUCCESS)
          Total run count ---------------------> 1
          Last test execution time ------------> Jul 19 2005 13:29:44
          First test failure time -------------> n/a
          Last test failure time --------------> n/a
          Last test pass time -----------------> Jul 19 2005 13:29:44
          Total failure count -----------------> 0
          Consecutive failure count -----------> 0
 
   
Power-On-Self-Test Results for ACTIVE Supervisor
 
   
 
   
Power-on-self-test for Module 2:  WS-X4516-10GE
 Port/Test Status: (. = Pass, F = Fail, U = Untested)
 Reset Reason: OtherSupervisor Software/User 
 
   
 
   
 
   
Cpu Subsystem Tests ...
seeprom: . temperature_sensor: . eobc: . 
 
   
Port Traffic: L3 Serdes Loopback ...
 0: .  1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 
12: . 13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 
24: . 25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 
36: . 37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 
 
   
Local 10GE Port 62: U 
 
   
Local 10GE Port 63: U 
 
   
Port Traffic: L2 Serdes Loopback ...
 0: .  1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 
12: . 13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 
24: . 25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 
36: . 37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 
48: . 49: . 50: . 51: . 
 
   
 
   
Port Traffic: L2 Asic Loopback ...
 0: .  1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 
12: . 13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 
24: . 25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 
36: . 37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 
48: . 49: . 50: . 51: . 
 
   
 
   
Switch Subsystem Memory ...
 1: .  2: .  3: .  4: .  5: .  6: .  7: .  8: .  9: . 10: . 11: . 12: . 
13: . 14: . 15: . 16: . 17: . 18: . 19: . 20: . 21: . 22: . 23: . 24: . 
25: . 26: . 27: . 28: . 29: . 30: . 31: . 32: . 33: . 34: . 35: . 36: . 
37: . 38: . 39: . 40: . 41: . 42: . 43: . 44: . 45: . 46: . 47: . 48: . 
49: . 50: . 51: . 
 
   
 
   
Netflow Services Feature ...
se: . cf: . 52: . 53: . 54: . 55: . 56: . 57: . 58: . 59: . 60: . 61: . 
62: . 63: . 64: . 65: . 
          
 
   
Module 2 Passed
 
   
Remote TenGigabitPort status: Untested
 
   
  ___________________________________________________________________________
 
   
    2) packet-memory-bootup --------------------> U
 
   
          Error code --------------------------> 0 (DIAG_SUCCESS)
          Total run count ---------------------> 0
          Last test execution time ------------> n/a
          First test failure time -------------> n/a
          Last test failure time --------------> n/a
          Last test pass time -----------------> n/a
          Total failure count -----------------> 0
          Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
 
   
 
   
Exhaustive packet memory tests did not run at bootup.
Bootup test results:5
No  errors.
 
   
  ___________________________________________________________________________
 
   
    3) packet-memory-ongoing -------------------> U
 
   
          Error code --------------------------> 0 (DIAG_SUCCESS)
          Total run count ---------------------> 0
          Last test execution time ------------> n/a
          First test failure time -------------> n/a
          Last test failure time --------------> n/a
          Last test pass time -----------------> n/a
          Total failure count -----------------> 0
          Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
 
   
 
   
Packet memory errors: 0 0
Current alert level: green
Per 5 seconds in the last minute: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 
Per minute in the last hour: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
Per hour in the last day: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 
Per day in the last 30 days: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
Direct memory test failures per minute in the last hour: 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
    0 0 0 0 0 0 0 0 0 0 
Potential false positives: 0 0
  Ignored because of rx errors: 0 0
  Ignored because of cdm fifo overrun: 0 0
  Ignored because of oir: 0 0
  Ignored because isl frames received: 0 0
  Ignored during boot: 0 0
  Ignored after writing hw stats: 0 0
  Ignored on high gigaport: 0
Ongoing diag action mode: Normal
Last 1000 Memory Test Failures:
Last 1000 Packet Memory errors:
First 1000 Packet Memory errors:
 
   
  ___________________________________________________________________________
 
   
Switch#

Note To ensure that the maximum number of ports are tested, make sure that both supervisor engines are present on power-up.


Troubleshooting the Test Failures

A failure of any of the POST tests reflects a problem with the hardware on the supervisor engine. Cisco IOS boots the supervisor engine with limited functionality, allowing you to evaluate and display the diagnostic test results. To determine the cause of the failure, do one of the following:

Evaluate whether the hardware failure is persistent by power cycling the supervisor engine to rerun the POST tests (see the example after this list).

Remove and reinsert the supervisor engine into the chassis to ensure that the seating is correct.

Call the Cisco Systems customer support team for more information.
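
After the supervisor engine finishes booting again, you can review the POST results with the show diagnostic result command described earlier in this chapter, for example (illustrative; replace the module number with the supervisor engine slot in your chassis):

Switch# show diagnostic result module 1 detail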


Note On a redundant chassis, concurrent POST is supported on supervisor engines that are already inserted. However, if a second supervisor engine is inserted while the first one is loading, you might boot the first supervisor engine in a faulty Cisco IOS state (POST will abort, and some of the POST's tests will be bypassed). This situation only happens during concurrent bootup of the supervisor engines. You should not insert any additional supervisor engines in the empty supervisor engine slot while an already seated supervisor engine is running POST. The POST sequence is completed when the "Exiting to ios..." message is displayed.