Performing Diagnostics
You can use diagnostics to test and verify the functionality of the hardware components of your system (chassis, supervisor engines, modules, and ASICs) while your Catalyst 4500 series switch is connected to a live network. Diagnostics consists of packet-switching tests that test hardware components and verify the data path and control signals.
Online diagnostics are categorized as bootup, on-demand, scheduled, or health-monitoring diagnostics. Bootup diagnostics run during bootup; on-demand diagnostics run from the CLI; scheduled diagnostics run at user-designated intervals or specified times when the switch is connected to a live network; and health-monitoring diagnostics run in the background.
Note Online diagnostics is not supported on the LAN Base image.
This chapter consists of these sections:
Note For complete syntax and usage information for the switch commands used in this chapter, first look at the Cisco Catalyst 4500 Series Switch Command Reference and related publications at this location:
http://www.cisco.com/en/US/products//hw/switches/ps4324/index.html
If the command is not found in the Catalyst 4500 Command Reference, you can find it in the larger Cisco IOS library. Refer to the Cisco IOS Command Reference and related publications at this location:
http://www.cisco.com/en/US/products/ps6350/index.html
Configuring Online Diagnostics
Note Online diagnostics is not supported on the LAN Base image.
These sections describe how to configure online diagnostics:
- Configuring On-Demand Online Diagnostics
- Scheduling Online Diagnostics
Configuring On-Demand Online Diagnostics
You can run on-demand online diagnostic tests from the CLI. You can set the execution action either to continue or to stop the test when a failure is detected, or you can use the failure count setting to stop the test after a specific number of failures occur. The iteration setting allows you to configure a test to run multiple times.
Note Online diagnostics is not supported on the LAN Base image.
To configure on-demand online diagnostics, perform this task:
Command:
Switch# diagnostic ondemand {iterations iteration_count} | {action-on-error {continue | stop} [error_count]}

Purpose:
Configures how many times to run on-demand diagnostic tests (iterations) and what action to take when errors are found.
This example shows how to set the on-demand testing iteration count:
Switch# diagnostic ondemand iterations 3
This example shows how to set the execution action when an error is detected:
Switch# diagnostic ondemand action-on-error continue 2
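After you configure these settings, you run the test itself with the diagnostic start command described later in this chapter. The following sequence is an illustration only; it reuses the module and test numbers from the later examples, which are not required values:
Switch# diagnostic ondemand iterations 3
Switch# diagnostic ondemand action-on-error continue 2
Switch# diagnostic start module 6 test 2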
Scheduling Online Diagnostics
You can schedule online diagnostics to run at a designated time of day or on a daily, weekly, or monthly basis. You can schedule tests to run only once or to repeat at an interval. Use the no form of this command to remove the scheduling.
To schedule online diagnostics, perform this task:
Command:
Switch(config)# diagnostic schedule module number test {test_id | test_id_range | all} [port {num | num_range | all}] {on mm dd yyyy hh:mm | daily hh:mm | weekly day_of_week hh:mm}

Purpose:
Schedules diagnostic tests to run on the specified module (and optionally on specific ports) at a specific date and time, daily, or weekly.
This example shows how to schedule diagnostic testing on a specific date and time for a specific port on module 6:
Switch(config)# diagnostic schedule module 6 test 2 port 3 on may 23 2009 23:32
This example shows how to schedule diagnostic testing to occur daily:
Switch(config)# diagnostic schedule module 6 test 2 port 3 daily 12:34
This example shows how to schedule diagnostic testing to occur weekly:
Switch(config)# diagnostic schedule module 6 test 2 port 3 weekly friday 09:23
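As noted at the beginning of this section, the no form of the command removes a schedule. For example, to remove the daily schedule configured above (shown as an illustration; substitute your own module, test, and port values):
Switch(config)# no diagnostic schedule module 6 test 2 port 3 daily 12:34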
Performing Diagnostics
After you configure online diagnostics, you can start or stop diagnostic tests or display the test results. You can also see which tests are configured and what diagnostic tests have already run.
These sections describe how to run online diagnostic tests after they have been configured:
- Starting and Stopping Online Diagnostic Tests
- Displaying Online Diagnostic Tests and Test Results
- Displaying Data Path Online Diagnostics Test Results
- Line Card Online Diagnostics
- Troubleshooting with Online Diagnostics
Note Before you enable any online diagnostic tests, enable console logging or terminal monitoring so that you can observe all warning messages.
Note Run disruptive tests only when you are connected through the console. When disruptive tests complete, a warning message on the console recommends that you reload the system to return to normal operation. Strictly follow this warning.
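For example, you can use the standard Cisco IOS logging commands to display warning messages on the console or on your current terminal session. These are generic IOS commands and are shown here only as a convenience:
Switch(config)# logging console
Switch# terminal monitor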
Starting and Stopping Online Diagnostic Tests
After you configure diagnostic tests, you can use the start and stop keywords to begin or end a test.
To start or stop an online diagnostic test, perform one of these tasks:
Command:
Switch# diagnostic start module number test {test_id | test_id_range | minimal | complete | basic | per-port | non-disruptive | all} [port {num | port#_range | all}]

Purpose:
Starts a diagnostic test on a port or range of ports on the specified module.

Command:
Switch# diagnostic stop module number

Purpose:
Stops a diagnostic test on the specified module.
This example shows how to start a diagnostic test on module 6:
Switch# diagnostic start module 6 test 2
Diagnostic[module 6]: Running test(s) 2 Run interface level cable diags
Diagnostic[module 6]: Running test(s) 2 may disrupt normal system operation
Do you want to continue? [no]: yes
*May 14 21:11:46.631: %DIAG-6-TEST_RUNNING: module 6: Running online-diag-tdr{ID=2}...
*May 14 21:11:46.631: %DIAG-6-TEST_OK: module 6: online-diag-tdr{ID=2} has completed successfully
This example shows how to stop a diagnostic test on module 6:
Switch# diagnostic stop module 6
Diagnostic[module 6]: Diagnostic is not active.
This message indicates that no diagnostic test is currently active on module 6.
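The diagnostic start syntax shown above also accepts test groups such as minimal, non-disruptive, and all. For example, to run only the nondisruptive tests on module 6 (an illustration based on that syntax; the available tests vary by module):
Switch# diagnostic start module 6 test non-disruptive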
Displaying Online Diagnostic Tests and Test Results
You can display the configured online diagnostic tests and check the results of the tests with the show diagnostic command.
To display the configured diagnostic tests, perform this task:
Command:
Switch# show diagnostic {bootup | cns | content [module num] | description [module num] | events [module num] [event-type event-type] | ondemand | result [module num] [detail] | schedule [module num] | simulation | status}

Purpose:
Displays the test results of online diagnostics and lists supported test suites.
This example shows how to display the online diagnostics configured on module 6:
Switch# show diagnostic content module 6
Diagnostics test suite attributes:
M/C/* - Minimal bootup level test / Complete bootup level test / NA
B/* - Basic ondemand test / NA
P/V/* - Per port test / Per device test / NA
D/N/* - Disruptive test / Non-disruptive test / NA
S/* - Only applicable to standby unit / NA
X/* - Not a health monitoring test / NA
F/* - Fixed monitoring interval test / NA
E/* - Always enabled monitoring test / NA
A/I - Monitoring is active / Monitoring is inactive
cable-tdr/* - Interface cable diags / NA
o/* - Ongoing test, always active / NA
ID   Test Name                          Attributes      Test Interval   Thre-
                                                        day hh:mm:ss.ms shold
==== ================================== ============    =============== =====
1) linecard-online-diag ------------> M**D****I** not configured n/a
2) online-diag-tdr -----------------> **PD****Icable- not configured n/a
3) stub-rx-errors ------------------> ***N****A** 000 00:01:00.00 n/a
4) supervisor-rx-errors ------------> ***N****A** 000 00:01:00.00 n/a
This example shows how to display the test description for a given test on a module:
Switch# show diagnostic description module 6 test 1
Linecard online-diagnostics run after the system boots up but
before it starts passing traffic. Each linecard port is placed in
loopback, and a few packets are injected into the switching fabric
from the cpu to the port. If the packets are successfully
received by the cpu, the port passes the test. Sometimes one port
or a group of ports sharing common components fail. The linecard
is then placed in partial faulty mode. If no ports can loop back
traffic, the board is placed in faulty state.
This example shows how to display the online diagnostic results for module 6:
Switch# show diagnostic result module 6
Current bootup diagnostic level: minimal
module 6: SerialNo : JAB0815059L
Overall Diagnostic Result for module 6 : PASS
Diagnostic level at card bootup: minimal
Test results: (. = Pass, F = Fail, U = Untested)
1) linecard-online-diag ------------>.
2) online-diag-tdr ---------------->.
Port 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
----------------------------------------------------------------------------
     U U . U U U U U U U U U U U U U U U U U U U U U
3) stub-rx-errors ------------------>.
4) supervisor-rx-errors ------------>.
This example shows how to display the online diagnostic results details for module 6:
Switch# show diagnostic result module 6 detail
Current bootup diagnostic level: minimal
module 6: SerialNo : JAB0815059L
Overall Diagnostic Result for module 6 : PASS
Diagnostic level at card bootup: minimal
Test results: (. = Pass, F = Fail, U = Untested)
___________________________________________________________________________
1) linecard-online-diag ------------>.
Error code ------------------> 0 (DIAG_SUCCESS)
Total run count -------------> 1
Last test testing type ------> n/a
Last test execution time ----> Jun 01 2009 11:19:36
First test failure time -----> n/a
Last test failure time ------> n/a
Last test pass time ---------> Jun 01 2009 11:19:36
Total failure count ---------> 0
Consecutive failure count ---> 0
Slot Ports Card Type Diag Status Diag Details
---- ----- -------------------------------------- ---------------- ------------
6 24 10/100/1000BaseT (RJ45)V, Cisco/IEEE Passed None
L = Loopback failure S = Stub failure
E = SEEPROM failure G = GBIC integrity check failure
Ports 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Ports 17 18 19 20 21 22 23 24
___________________________________________________________________________
2) online-diag-tdr ---------------->.
Port 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
----------------------------------------------------------------------------
     U U . U U U U U U U U U U U U U U U U U U U U U
Error code ------------------> 0 (DIAG_SUCCESS)
Total run count -------------> 1
Last test testing type ------> OnDemand
Last test execution time ----> Jun 03 2009 05:39:00
First test failure time -----> n/a
Last test failure time ------> n/a
Last test pass time ---------> Jun 03 2009 05:39:00
Total failure count ---------> 0
Consecutive failure count ---> 0
Interface Speed Local pair Cable length Remote channel Status
Gi6/3 1Gbps 1-2 N/A Unknown Terminated
3-6 N/A Unknown Terminated
4-5 N/A Unknown Terminated
7-8 N/A Unknown Terminated
___________________________________________________________________________
3) stub-rx-errors ------------------->.
Error code ------------------> 3 (DIAG_SUCCESS)
Total run count -------------> 4
Last test testing type ------> Health Monitoring
Last test execution time ----> Dec 20 2009 22:30:41
First test failure time -----> n/a
Last test failure time ------> n/a
Last test pass time ---------> Dec 20 2009 22:30:41
Total failure count ---------> 0
Consecutive failure count ---> 0
___________________________________________________________________________
4) supervisor-rx-errors ------------>.
Error code ------------------> 3 (DIAG_SUCCESS)
Total run count -------------> 4
Last test testing type ------> Health Monitoring
Last test execution time ----> Dec 20 2009 22:30:41
First test failure time -----> n/a
Last test failure time ------> n/a
Last test pass time ---------> Dec 20 2009 22:30:41
Total failure count ---------> 0
Consecutive failure count ---> 0
___________________________________________________________________________
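The show diagnostic command also accepts the schedule and status keywords listed in the syntax above. For example, you can verify the schedules configured earlier in this chapter and check whether any diagnostic test is currently running (illustrative commands; the output depends on your configuration):
Switch# show diagnostic schedule module 6
Switch# show diagnostic status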
Displaying Data Path Online Diagnostics Test Results
A data path online diagnostic test verifies that the data paths between the supervisor engine and the line cards (defined as a number of stub ASICs) are functioning correctly. There is a direct connection between each stub ASIC on a line card and the supervisor engine. Error counters on the supervisor engine (supervisor-rx-errors) and on each stub ASIC on a line card (stub-rx-errors) are monitored periodically. Error counters that continually increase indicate malfunctioning hardware in the data path and cause the test to fail. Data path online diagnostic tests are nondisruptive; the error counters are polled every minute.
Errors on the stub end of the data path are reported as errors in traffic egressing to the line card from the supervisor engine switching ASICs. Some initial errors might be revealed as links are brought up, but they should not increase. An increasing count indicates a poor connection between the supervisor engine and a line card. If only one line card is affected, the cause is likely an incorrectly seated or faulty line card. The error counts include idle frames, so detection can occur when traffic is not flowing.
Errors on the supervisor end of the data path are reported as errors in traffic ingressing to the supervisor engine from the line cards. The error counts should not increase, and the detection includes idle frames. If the error counts increase for more than one line card, the likely cause is a faulty supervisor engine or chassis. If only one stub or line card is affected, the likely cause is a faulty line card or a defective mux buffer (on a redundant chassis).
In addition to running periodically, data path online diagnostics can also be invoked on demand, as follows:
Switch# diagnostic start module 1 test stub-rx-errors
*Apr 1 09:25:14.211: %DIAG-6-TEST_RUNNING: module 1: Running stub-rx-errors{ID=3}...
*Apr 1 09:25:14.211: %DIAG-6-TEST_OK: module 1: stub-rx-errors{ID=3} has completed
Switch# diagnostic start module 1 test supervisor-rx-errors
*Apr 1 09:25:26.503: %DIAG-6-TEST_RUNNING: module 1: Running supervisor-rx-errors{ID=4}...
*Apr 1 09:25:26.503: %DIAG-6-TEST_OK: module 1: supervisor-rx-errors{ID=4} has completed successfully
Detailed information about the test results can be viewed as follows:
Switch# show diagnostic result module 1 test stub-rx-errors detail
Current bootup diagnostic level: minimal
Test results: (. = Pass, F = Fail, U = Untested)
___________________________________________________________________________
3) stub-rx-errors ------------------>.
Error code ------------------> 0 (DIAG_SUCCESS)
Total run count -------------> 7
Last test testing type ------> OnDemand
Last test execution time ----> Apr 01 2010 09:25:14
First test failure time -----> n/a
Last test failure time ------> n/a
Last test pass time ---------> Apr 01 2010 09:25:14
Total failure count ---------> 0
Consecutive failure count ---> 0
___________________________________________________________________________
Switch# show diagnostic result module 1 test supervisor-rx-errors detail
Current bootup diagnostic level: minimal
Test results: (. = Pass, F = Fail, U = Untested)
___________________________________________________________________________
4) supervisor-rx-errors ------------>.
Error code ------------------> 0 (DIAG_SUCCESS)
Total run count -------------> 4
Last test testing type ------> OnDemand
Last test execution time ----> Apr 01 2010 09:25:26
First test failure time -----> n/a
Last test failure time ------> n/a
Last test pass time ---------> Apr 01 2010 09:25:26
Total failure count ---------> 0
Consecutive failure count ---> 0
___________________________________________________________________________
Line Card Online Diagnostics
A line card online diagnostic test verifies that all ports on a line card are working correctly. The test can detect whether the path to the front panel port on the line card is broken, but it cannot indicate where along the path the problem occurred.
Note This test is run only for line cards that have stub chips.
Line card online diagnostics run only once, when the line card boots; this occurs when you insert a line card or power up the chassis.
Line card online diagnostics are performed by sending a packet from the CPU to every port on the line card. Because this packet is marked loopback, the CPU expects to see it return from the port. The packet first traverses the ASICs on the supervisor engine card, then travels over the chassis backplane and through the stub chip on the line card to the PHYs. The PHY sends it back down the same path.
Note The packet does not reach or exit the front panel port.
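Because this test runs only at bootup, you review its outcome afterward with the show diagnostic result command described earlier in this chapter, for example:
Switch# show diagnostic result module 6 detail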
Troubleshooting with Online Diagnostics
A line card is reported as faulty if any of the following conditions occurs:
- All ports fail
- All ports on a stub chip fail
- Only one port fails
In all of these situations, the output of the show module command displays the status of the line card as Faulty:
Power consumed by backplane : 40 Watts
Mod Ports Card Type Model Serial No.
---+-----+--------------------------------------+------------------+-----------
1 6 Sup II+10GE 10GE (X2), 1000BaseX (SFP) WS-X4013+10GE JAB091502G0
2 6 Sup II+10GE 10GE (X2), 1000BaseX (SFP) WS-X4013+10GE JAB091502FC
3 48 100BaseX (SFP) WS-X4248-FE-SFP JAB093305RP
4 48 10/100BaseTX (RJ45)V WS-X4148-RJ45V JAE070717E5
5 48 10/100BaseTX (RJ45)V WS-X4148-RJ45V JAE061303U3
6 48 10/100BaseTX (RJ45)V WS-X4148-RJ45V JAE061303WJ
7 24 10/100/1000BaseT (RJ45)V, Cisco/IEEE WS-X4524-GB-RJ45V JAB0815059Q
M MAC addresses Hw Fw Sw Status
--+--------------------------------+---+------------+----------------+---------
1 000b.5f27.8b80 to 000b.5f27.8b85 0.2 12.2(27r)SG( 12.2(37)SG Ok
2 000b.5f27.8b86 to 000b.5f27.8b8b 0.2 12.2(27r)SG( 12.2(37)SG Ok
3 0005.9a80.6810 to 0005.9a80.683f 0.4 Ok
4 000c.3016.aae0 to 000c.3016.ab0f 2.6 Ok
5 0008.a3a3.4e70 to 0008.a3a3.4e9f 1.6 Ok
6 0008.a3a3.3fa0 to 0008.a3a3.3fcf 1.6 Faulty
7 0030.850e.3e78 to 0030.850e.3e8f 1.0 Ok
Mod Redundancy role Operating mode Redundancy status
----+-------------------+-------------------+----------------------------------
1 Active Supervisor SSO Active
2 Standby Supervisor SSO Standby hot
To troubleshoot a faulty line card, follow these steps:
Step 1 Enter the command show diagnostic result module 3.
If a faulty line card was inserted in the chassis, it will fail the diagnostics and the output will be similar to the following:
Current bootup diagnostic level: minimal
module 3: SerialNo : JAB093305RP
Overall Diagnostic Result for module 3 : MAJOR ERROR
Diagnostic level at card bootup: minimal
Test results: (. = Pass, F = Fail, U = Untested)
1) linecard-online-diag ------------> F
Issue an RMA for the line card, contact TAC, and skip steps 2 and 3.
The output may display the following:
Overall diagnostic result: PASS
Test results: (. = Pass, F = Fail, U = Untested)
1) linecard-online-diag -------------------->.
The message indicates that the line card passed online diagnostics either when it was inserted into the chassis the last time or when the switch was powered up (as reported by the “.”). You need to obtain additional information to determine the cause.
Step 2 Insert a different supervisor engine card and reinsert the line card.
If the line card passes the test, it suggests that the supervisor engine card is defective.
Issue an RMA for the supervisor engine, contact TAC, and skip step 3.
Because online diagnostics are not run on the supervisor engine card, you cannot use the show diagnostic module 1 command to test whether the supervisor engine card is faulty.
Step 3 Reinsert the line card in a different chassis.
If the line card passes the test, the problem is associated with the chassis.
Issue an RMA for the chassis and contact TAC.
Power-On Self-Test Diagnostics
The following topics are discussed:
Overview of Power-On Self-Test Diagnostics
All Catalyst 4500 series switches have power-on self-test (POST) diagnostics that run whenever a supervisor engine boots. POST tests the basic hardware functionality of the supervisor switching engine, its associated packet memory, and other on-board hardware components. The results of the POST impact how the switch boots, because the health of the supervisor engine is critical to the operation of the switch. The switch might boot in a marginal or faulty state.
POST is currently supported on the following supervisor engines:
- WS-X4014
- WS-X4515
- WS-X4516
- WS-X4516-10GE
- WS-X4013+
- WS-X4013+TS
- WS-X4013+10GE
- WS-C4948G
- WS-C4948G-10GE
- ME-4924-10GE
- WS-X45-SUP6-E
- WS-X45-SUP6L-E
The POST results are indicated with a period (.) or Pass for a passed test, an F for a failed test, and a U for an untested component.
POST Result Example
For all the supervisor engines, POST performs CPU, traffic, system, system memory, and feature tests.
For CPU tests, POST verifies appropriate activity of the supervisor engine SEEPROM, temperature sensor, and Ethernet out-of-band channel (EOBC), when used.
The following example illustrates the output of a CPU subsystem test on all supervisor engines except the WS-X4013+TS:
seeprom:. temperature_sensor:. eobc:.
The following example illustrates the output of a CPU subsystem test on a WS-X4013+TS supervisor engine:
seeprom:. temperature_sensor:.
For traffic tests, POST sends packets from the CPU to the switch ports. These packets loop several times within the switch core and validate the switching, Layer 2, and Layer 3 functionality. To isolate hardware failures accurately, the loopback is done both inside and outside the switch ports.
The following example illustrates the output of a Layer 2 traffic test at the switch ports on the WS-X4516, WS-X4516-10GE, WS-X4013+10GE, and WS-C4948G-10GE supervisor engines:
Port Traffic: L2 Serdes Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
The following example illustrates the output of a Layer 2 traffic test at the switch ports on the WS-X4013+TS, WS-X4515, WS-X4013+, WS-X4014, and WS-C4948G supervisor engines:
Port Traffic: L2 Serdes Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:
POST also performs tests on the packet and system memory of the switch. These tests are numbered dynamically in ascending order, starting with 1, and represent different memories.
The following example illustrates the output from a system memory test:
Switch Subsystem Memory...
1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:. 12:.
13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:. 24:.
25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:. 36:.
37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:. 48:.
49:. 50:. 51:. 52:. 53:. 54:. 55:.
POST also tests the NetFlow services card (Supervisor Engine IV and Supervisor Engine V) and the NetFlow services feature (Supervisor Engine V-10GE). Failures from these tests are treated as marginal because they do not impact the functionality of the switch (except for the unavailability of the NetFlow features):
Netflow Services Feature...
se:. cf:. 52:. 53:. 54:. 55:. 56:. 57:. 58:. 59:. 60:. 61:.
Note Supervisor Engine VI-E retains most of the previous supervisor engines’ POST features including the CPU subsystem tests, Layer 3 and Layer 2 traffic tests, and memory tests. Redundant ports on redundant systems are not tested. All POST diagnostics are local to the supervisor engine running the tests.
The following example shows the output for a WS-X4516 supervisor engine:
Switch# show diagnostic result module 2 detail
Overall diagnostic result: PASS
Test results: (. = Pass, F = Fail, U = Untested)
___________________________________________________________________________
1) supervisor-bootup ----------------------->.
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 1
Last test execution time ------------> Jul 20 2005 14:15:52
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> Jul 20 2005 14:15:52
Total failure count -----------------> 0
Consecutive failure count -----------> 0
Power-On-Self-Test Results for ACTIVE Supervisor
Power-on-self-test for Module 2: WS-X4516
Port/Test Status: (. = Pass, F = Fail, U = Untested)
Reset Reason: PowerUp RemoteDebug
seeprom:. temperature_sensor:. eobc:.
Port Traffic: L2 Serdes Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Port Traffic: L2 Asic Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Port Traffic: L3 Asic Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Switch Subsystem Memory...
1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:. 12:.
13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:. 24:.
25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:. 36:.
37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:. 48:.
49:. 50:. 51:. 52:. 53:. 54:. 55:.
___________________________________________________________________________
2) packet-memory-bootup --------------------> U
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 0
Last test execution time ------------> n/a
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> n/a
Total failure count -----------------> 0
Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
Exhaustive packet memory tests did not run at bootup.
___________________________________________________________________________
3) packet-memory-ongoing -------------------> U
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 0
Last test execution time ------------> n/a
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> n/a
Total failure count -----------------> 0
Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
Packet memory errors: 0 0
Current alert level: green
Per 5 seconds in the last minute:
Per minute in the last hour:
Per hour in the last day:
Per day in the last 30 days:
Direct memory test failures per minute in the last hour:
Potential false positives: 0 0
Ignored because of rx errors: 0 0
Ignored because of cdm fifo overrun: 0 0
Ignored because of oir: 0 0
Ignored because isl frames received: 0 0
Ignored after writing hw stats: 0 0
Ignored on high gigaport: 0
Ongoing diag action mode: Normal
Last 1000 Memory Test Failures:
Last 1000 Packet Memory errors:
First 1000 Packet Memory errors:
___________________________________________________________________________
The following example shows the output for a WS-X45-SUP6-E supervisor engine:
Switch# show diagnostic result module 3 detail
module 3: SerialNo : XXXXXXXXXXX
Overall diagnostic result: PASS
Test results: (. = Pass, F = Fail, U = Untested)
___________________________________________________________________________
1) supervisor-bootup --------------->
Error code ------------------> 0 (DIAG_SUCCESS)
Total run count -------------> 1
Last test execution time ----> Oct 01 2007 17:37:04
First test failure time -----> n/a
Last test failure time ------> n/a
Last test pass time ---------> Oct 01 2007 17:37:04
Total failure count ---------> 0
Consecutive failure count ---> 0
Power-On-Self-Test Results for ACTIVE Supervisor
prod: WS-X45-SUP6-E part: XXXXXXXXX serial: XXXXXXXXXX
Power-on-self-test for Module 3: WS-X45-SUP6-E
Test Status: (. = Pass, F = Fail, U = Untested)
Switching Subsystem Memory...
Packet Memory Test Results: Pass
___________________________________________________________________________
2) linecard-online-diag ------------>
Error code ------------------> 0 (DIAG_SUCCESS)
Total run count -------------> 1
Last test execution time ----> Oct 01 2007 17:37:04
First test failure time -----> n/a
Last test failure time ------> n/a
Last test pass time ---------> Oct 01 2007 17:37:04
Total failure count ---------> 0
Consecutive failure count ---> 0
Slot Ports Card Type Diag Status Diag Details
---- ----- -------------------------------------- ---------------- ------------
3 6 Sup 6-E 10GE (X2), 1000BaseX (SFP) Skipped Packet memory
L = Loopback failure S = Stub failure
E = SEEPROM failure G = GBIC integrity check failure
___________________________________________________________________________
Power-On Self-Test Results for Supervisor Engine V-10GE
For the Supervisor Engine V-10GE (WS-X4516-10GE), POST tests extra redundancy features on the 10-Gigabit ports.
The following topics are discussed:
POST on the Active Supervisor Engine
The active supervisor engine tests the remote redundant 10-Gigabit ports on the standby supervisor engine if it is present when the active supervisor engine is booting. The status of the port is displayed as “Remote TenGigabit Port Status.” If no standby supervisor engine is present, the remote port status is always displayed as Untested. This status persists even after a new standby supervisor engine is inserted. The remaining tests are conducted using only the Gigabit ports’ configuration.
After the active supervisor engine has completed the bootup diagnostics, if the standby supervisor engine is then removed, the remote port status changes to Untested in the overall diagnostic results.
POST Results on an Active Supervisor Engine Example
Switch# show diagnostic result module 1 detail
Overall diagnostic result: PASS
Test results: (. = Pass, F = Fail, U = Untested)
___________________________________________________________________________
1) supervisor-bootup ----------------------->.
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 1
Last test execution time ------------> Jul 19 2005 13:28:16
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> Jul 19 2005 13:28:16
Total failure count -----------------> 0
Consecutive failure count -----------> 0
Power-On-Self-Test Results for ACTIVE Supervisor
Power-on-self-test for Module 1: WS-X4516-10GE
Port/Test Status: (. = Pass, F = Fail, U = Untested)
Reset Reason: Software/User
seeprom:. temperature_sensor:. eobc:.
Port Traffic: L3 Serdes Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Port Traffic: L2 Serdes Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Port Traffic: L2 Asic Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Switch Subsystem Memory...
1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:. 12:.
13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:. 24:.
25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:. 36:.
37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:. 48:.
Netflow Services Feature...
se:. cf:. 52:. 53:. 54:. 55:. 56:. 57:. 58:. 59:. 60:. 61:.
Remote TenGigabitPort status: Passed
___________________________________________________________________________
2) packet-memory-bootup --------------------> U
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 0
Last test execution time ------------> n/a
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> n/a
Total failure count -----------------> 0
Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
Exhaustive packet memory tests did not run at bootup.
___________________________________________________________________________
3) packet-memory-ongoing -------------------> U
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 0
Last test execution time ------------> n/a
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> n/a
Total failure count -----------------> 0
Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
Packet memory errors: 0 0
Current alert level: green
Per 5 seconds in the last minute:
Per minute in the last hour:
Per hour in the last day:
Per day in the last 30 days:
Direct memory test failures per minute in the last hour:
Potential false positives: 0 0
Ignored because of rx errors: 0 0
Ignored because of cdm fifo overrun: 0 0
Ignored because of oir: 0 0
Ignored because isl frames received: 0 0
Ignored after writing hw stats: 0 0
Ignored on high gigaport: 0
Ongoing diag action mode: Normal
Last 1000 Memory Test Failures:
Last 1000 Packet Memory errors:
First 1000 Packet Memory errors:
___________________________________________________________________________
POST on a Standby Supervisor Engine
Ports 62 and 63 of the supervisor engine always remain Untested (U). Because the standby supervisor engine never tests the remote 10-Gigabit port on the active supervisor engine, the remote 10-Gigabit port status on the standby supervisor engine is always Untested. The supervisor engine performs the remaining tests using the Gigabit ports’ configuration.
Note On a redundant chassis, concurrent POST is supported on supervisor engines that are already inserted. However, if a second supervisor engine is inserted while the first one is loading, you might boot the first supervisor engine in a faulty Cisco IOS state (POST will abort and some of the POST’s tests will be bypassed). This situation only happens during concurrent bootup of the supervisor engines. You should not insert any additional supervisor engines in the empty supervisor engine slot while an already seated supervisor engine is running POST. The POST sequence is completed when the “Exiting to ios...” message is displayed.
Display of the POST on a Standby Supervisor Engine Example
Switch# show diagnostic result module 2 detail
Overall diagnostic result: PASS
Test results: (. = Pass, F = Fail, U = Untested)
___________________________________________________________________________
1) supervisor-bootup ----------------------->.
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 1
Last test execution time ------------> Jul 19 2005 13:29:44
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> Jul 19 2005 13:29:44
Total failure count -----------------> 0
Consecutive failure count -----------> 0
Power-On-Self-Test Results for ACTIVE Supervisor
Power-on-self-test for Module 2: WS-X4516-10GE
Port/Test Status: (. = Pass, F = Fail, U = Untested)
Reset Reason: OtherSupervisor Software/User
seeprom:. temperature_sensor:. eobc:.
Port Traffic: L3 Serdes Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Port Traffic: L2 Serdes Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Port Traffic: L2 Asic Loopback...
0:. 1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:.
12:. 13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:.
24:. 25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:.
36:. 37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:.
Switch Subsystem Memory...
1:. 2:. 3:. 4:. 5:. 6:. 7:. 8:. 9:. 10:. 11:. 12:.
13:. 14:. 15:. 16:. 17:. 18:. 19:. 20:. 21:. 22:. 23:. 24:.
25:. 26:. 27:. 28:. 29:. 30:. 31:. 32:. 33:. 34:. 35:. 36:.
37:. 38:. 39:. 40:. 41:. 42:. 43:. 44:. 45:. 46:. 47:. 48:.
Netflow Services Feature...
se:. cf:. 52:. 53:. 54:. 55:. 56:. 57:. 58:. 59:. 60:. 61:.
Remote TenGigabitPort status: Untested
___________________________________________________________________________
2) packet-memory-bootup --------------------> U
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 0
Last test execution time ------------> n/a
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> n/a
Total failure count -----------------> 0
Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
Exhaustive packet memory tests did not run at bootup.
___________________________________________________________________________
3) packet-memory-ongoing -------------------> U
Error code --------------------------> 0 (DIAG_SUCCESS)
Total run count ---------------------> 0
Last test execution time ------------> n/a
First test failure time -------------> n/a
Last test failure time --------------> n/a
Last test pass time -----------------> n/a
Total failure count -----------------> 0
Consecutive failure count -----------> 0
packet buffers on free list: 64557 bad: 0 used for ongoing tests: 979
Packet memory errors: 0 0
Current alert level: green
Per 5 seconds in the last minute:
Per minute in the last hour:
Per hour in the last day:
Per day in the last 30 days:
Direct memory test failures per minute in the last hour:
Potential false positives: 0 0
Ignored because of rx errors: 0 0
Ignored because of cdm fifo overrun: 0 0
Ignored because of oir: 0 0
Ignored because isl frames received: 0 0
Ignored after writing hw stats: 0 0
Ignored on high gigaport: 0
Ongoing diag action mode: Normal
Last 1000 Memory Test Failures:
Last 1000 Packet Memory errors:
First 1000 Packet Memory errors:
___________________________________________________________________________
Note To ensure that the maximum number of ports are tested, make sure that both supervisor engines are present at power-up.
Troubleshooting the Test Failures
A failure of any of the POST tests reflects a problem with the hardware on the supervisor engine.
Cisco IOS boots the supervisor engine with limited functionality, allowing you to evaluate and display the diagnostic test results. To determine the failure cause, do one of the following:
- Evaluate whether the hardware failure is persistent by power cycling the supervisor engine to rerun the POST tests.
- Remove and reinsert the supervisor engine into the chassis to ensure that the seating is correct.
Contact the Cisco Systems customer support team for more information.
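If a physical power cycle is inconvenient, reloading the switch from the CLI also reruns the POST tests, because POST runs whenever the supervisor engine boots. The reload command shown here is a generic Cisco IOS command and is disruptive; use it only during a maintenance window:
Switch# reload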
Note On a redundant chassis, concurrent POST is supported on supervisor engines that are already inserted. However, if a second supervisor engine is inserted while the first one is loading, you might boot the first supervisor engine in a faulty Cisco IOS state (POST will abort, and some of the POST’s tests will be bypassed). This situation only happens during concurrent bootup of the supervisor engines. You should not insert any additional supervisor engines in the empty supervisor engine slot while an already seated supervisor engine is running POST. The POST sequence is completed when the “Exiting to ios...” message is displayed.