This document describes how to troubleshoot Diameter connectivity issues using tcpdump from the StarOS debug shell. Cases commonly arise requesting assistance to troubleshoot Diameter connections that do not come up or are down, even though no configuration or network changes are believed to have occurred. Diameter connections can fail to establish either at the initial TCP/IP negotiation level or at the Capabilities Exchange Request (CER)/Capabilities Exchange Answer (CEA) level.
While there is no single typical "Diameter peering" problem, such issues do tend to fall into a few recognizable categories.
Typically, TCP port 3868 (the default) is used on the Diameter server side, though other ports can be specified. If a port number appears at the end of a peer configuration line, that confirms a port other than the default 3868 is in use.
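As a hypothetical sketch (the hostnames and addresses here are placeholders, not from this chassis), a peer line without a trailing port value uses the default 3868, while one with a trailing port value does not:

 peer dra1.example.org realm example.org address 192.0.2.10             <= default port 3868
 peer dra2.example.org realm example.org address 192.0.2.11 port 3869   <= non-default port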
In this example, the peers for endpoint 3gpp-aaa-s6b reported by show diameter peers full all do not specify a port number on the peer lines, so the default port 3868 is used, while the peers for Gy use a mix of 3868, 3869, and 3870 across the different peers.
show diameter peers all reports all configured peers for all Diameter endpoints. Here, six peers are configured for both 3gpp-aaa-s6b (down) and Gy (working), along with the associated configuration lines; note that Gy uses some custom port numbers:
diameter endpoint 3gpp-aaa-s6b
 origin realm epc.mnc260.mcc310.3gppnetwork.org
 use-proxy
 origin host s6b.IEPCF201.epc.mnc260.mcc310.3gppnetwork.org address 10.168.86.144
 max-outstanding 64
 route-failure threshold 100
 route-failure deadtime 600
 route-failure recovery-threshold percent 50
 dscp af31
 peer mp2.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org realm epc.mnc260.mcc310.3gppnetwork.org address 10.160.113.136
 peer mp2.elgdra01.dra.epc.mnc260.mcc310.3gppnetwork.org realm epc.mnc260.mcc310.3gppnetwork.org address 10.160.114.136
 peer mp2.nvldra01.dra.epc.mnc260.mcc310.3gppnetwork.org realm epc.mnc260.mcc310.3gppnetwork.org address 10.160.115.136
 peer tsa06.draaro01.dra.epc.mnc260.mcc310.3gppnetwork.org realm epc.mnc260.mcc310.3gppnetwork.org address 10.162.6.73
 peer tsa06.drasyo01.dra.epc.mnc260.mcc310.3gppnetwork.org realm epc.mnc260.mcc310.3gppnetwork.org address 10.164.57.41
 peer tsa06.drawsc01.dra.epc.mnc260.mcc310.3gppnetwork.org realm epc.mnc260.mcc310.3gppnetwork.org address 10.177.70.201
 route-entry peer mp2.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 route-entry peer mp2.elgdra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 route-entry peer mp2.nvldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 route-entry peer tsa06.draaro01.dra.epc.mnc260.mcc310.3gppnetwork.org
 route-entry peer tsa06.drasyo01.dra.epc.mnc260.mcc310.3gppnetwork.org
 route-entry peer tsa06.drawsc01.dra.epc.mnc260.mcc310.3gppnetwork.org
 #exit

[local]IEPCF201# show diameter peers all
Friday December 11 20:27:43 UTC 2020
Diameter Peer details
======================
-------------------------------------------------------------------------------
 Context: billing                          Endpoint: 3gpp-aaa-s6b
-------------------------------------------------------------------------------
 Peer: mp2.daldra01.dra.epc.mnc260.mc      Addr:Port 10.160.113.136:3868
 Peer: mp2.elgdra01.dra.epc.mnc260.mc      Addr:Port 10.160.114.136:3868
 Peer: mp2.nvldra01.dra.epc.mnc260.mc      Addr:Port 10.160.115.136:3868
 Peer: tsa06.draaro01.dra.epc.mnc260.      Addr:Port 10.162.6.73:3868
 Peer: tsa06.drasyo01.dra.epc.mnc260.      Addr:Port 10.164.57.41:3868
 Peer: tsa06.drawsc01.dra.epc.mnc260.      Addr:Port 10.177.70.201:3868
-------------------------------------------------------------------------------

diameter endpoint credit-control
 origin realm starent.gy.com
 use-proxy
 origin host iepcf201.gy address 10.168.86.151
 destination-host-avp always
 route-failure threshold 100
 route-failure deadtime 600
 route-failure recovery-threshold percent 50
 peer ln24.daldra01.dra.epc3.mnc260.mcc310.3gppnetwork.org realm nsn-gy address 10.160.113.136 port 3869
 peer ln24.drawsc01.dra.epc3.mnc260.mcc310.3gppnetwork.org realm nsn-gy address 10.177.70.201 port 3870
 peer tsa05.drachr01.dra.epc3.mnc260.mcc310.3gppnetwork.org realm nsn-gy address 10.164.144.88
 peer tsa05.draphx01.dra.epc3.mnc260.mcc310.3gppnetwork.org realm nsn-gy address 10.198.93.88
 peer tsa05.drapol01.dra.epc3.mnc260.mcc310.3gppnetwork.org realm nsn-gy address 10.182.16.88
 peer tsa06.drachr01.dra.epc3.mnc260.mcc310.3gppnetwork.org realm nsn-gy address 10.164.144.89
 peer tsa06.draphx01.dra.epc3.mnc260.mcc310.3gppnetwork.org realm nsn-gy address 10.198.93.89
 peer tsa06.drapol01.dra.epc3.mnc260.mcc310.3gppnetwork.org realm nsn-gy address 10.182.16.89
 route-entry peer ln24.drawsc01.dra.epc3.mnc260.mcc310.3gppnetwork.org weight 20
 route-entry peer ln24.daldra01.dra.epc3.mnc260.mcc310.3gppnetwork.org
 route-entry peer tsa05.drapol01.dra.epc3.mnc260.mcc310.3gppnetwork.org
 route-entry peer tsa06.drapol01.dra.epc3.mnc260.mcc310.3gppnetwork.org
 route-entry peer tsa05.drachr01.dra.epc3.mnc260.mcc310.3gppnetwork.org weight 5
 route-entry peer tsa05.draphx01.dra.epc3.mnc260.mcc310.3gppnetwork.org weight 5
 route-entry peer tsa06.drachr01.dra.epc3.mnc260.mcc310.3gppnetwork.org weight 5
 route-entry peer tsa06.draphx01.dra.epc3.mnc260.mcc310.3gppnetwork.org weight 5
 #exit
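To quickly spot peers that use a non-default port, the same output can be filtered with grep, as is done elsewhere in this document; the port value below is just an example:

[local]IEPCF201# show diameter peers all | grep 3869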
Also of note: for most setups, the use-proxy configurable is specified so that the ASR side of the peering uses the diamproxy processes that run on all active cards. In this example the system is a vPC-DI, where those cards are referred to as Service Function (SF) cards.
[local]IEPCF201# show task resources facility diamproxy all
Friday December 11 20:34:37 UTC 2020
          task                 cputime          memory         files      sessions
 cpu  facility     inst      used    allc    used   alloc    used  allc  used  allc  S status
----------------------- ----------- ------------- --------- ------------- ------
 3/0 diamproxy       5     0.12%     90%  41.62M  250.0M      38  2500    --    --   - good
 5/0 diamproxy       2     0.11%     90%  41.63M  250.0M      51  2500    --    --   - good
 6/0 diamproxy       6     0.13%     90%  41.62M  250.0M      35  2500    --    --   - good
 7/0 diamproxy       3     0.12%     90%  41.64M  250.0M      34  2500    --    --   - good
 8/0 diamproxy       4     0.13%     90%  41.65M  250.0M      34  2500    --    --   - good
10/0 diamproxy       1     0.10%     90%  41.64M  250.0M      49  2500    --    --   - good
Total 6                    0.71%          249.8M             241                0
[local]IEPCF201#
Here, show diameter peers full all taken from show support details (SSD) captures the fact that all Diameter peers for the 3gpp-aaa-s6b endpoint are down. Note that this is a special debug version of the show diameter peers full command taken from the SSD, so it also displays all peer connections to the aaamgr processes (output not shown here); the final connection count is therefore far higher than for a normal run, whose summary with the lower connection count (144) is shown further below. The full output is attached to this article, so for brevity only the connections for one peer (but across all six diamproxies) are shown.
An example of an open, working connection for the Gy endpoint is also shown; there you can see an additional field named Local Address, which captures the ASR side of the running connection, whereas that field is absent for the down 3gpp-aaa-s6b peers. (The output shown later, after the customer fixed the issue, is for a 3gpp-aaa-s6b peer and does include the Local Address.)
******** show diameter peers full *******
Sunday December 13 15:19:00 UTC 2020
-------------------------------------------------------------------------------
 Context: billing                          Endpoint: 3gpp-aaa-s6b
-------------------------------------------------------------------------------
 Peer Hostname: mp2.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 Local Hostname: 0001-diamproxy.s6b.IEPCF201.epc.mnc260.mcc310.3gppnetwork.org
 Peer Realm: epc.mnc260.mcc310.3gppnetwork.org
 Local Realm: epc.mnc260.mcc310.3gppnetwork.org
 Peer Address: 10.160.113.136:3868
 State: IDLE [TCP]
 CPU: 10/0     Task: diamproxy-1
 Messages Out/Queued: 0/0
 Supported Vendor IDs: None
 Admin Status: Enable
 DPR Disconnect: N/A
 Peer Backoff Timer running:N/A

 Peer Hostname: mp2.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 Local Hostname: 0002-diamproxy.s6b.IEPCF201.epc.mnc260.mcc310.3gppnetwork.org
 Peer Realm: epc.mnc260.mcc310.3gppnetwork.org
 Local Realm: epc.mnc260.mcc310.3gppnetwork.org
 Peer Address: 10.160.113.136:3868
 State: IDLE [TCP]
 CPU: 5/0     Task: diamproxy-2
 Messages Out/Queued: 0/0
 Supported Vendor IDs: None
 Admin Status: Enable
 DPR Disconnect: N/A
 Peer Backoff Timer running:N/A

 Peer Hostname: mp2.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 Local Hostname: 0003-diamproxy.s6b.IEPCF201.epc.mnc260.mcc310.3gppnetwork.org
 Peer Realm: epc.mnc260.mcc310.3gppnetwork.org
 Local Realm: epc.mnc260.mcc310.3gppnetwork.org
 Peer Address: 10.160.113.136:3868
 State: IDLE [TCP]
 CPU: 7/0     Task: diamproxy-3
 Messages Out/Queued: 0/0
 Supported Vendor IDs: None
 Admin Status: Enable
 DPR Disconnect: N/A
 Peer Backoff Timer running:N/A

 Peer Hostname: mp2.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 Local Hostname: 0004-diamproxy.s6b.IEPCF201.epc.mnc260.mcc310.3gppnetwork.org
 Peer Realm: epc.mnc260.mcc310.3gppnetwork.org
 Local Realm: epc.mnc260.mcc310.3gppnetwork.org
 Peer Address: 10.160.113.136:3868
 State: IDLE [TCP]
 CPU: 8/0     Task: diamproxy-4
 Messages Out/Queued: 0/0
 Supported Vendor IDs: None
 Admin Status: Enable
 DPR Disconnect: N/A
 Peer Backoff Timer running:N/A

 Peer Hostname: mp2.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 Local Hostname: 0005-diamproxy.s6b.IEPCF201.epc.mnc260.mcc310.3gppnetwork.org
 Peer Realm: epc.mnc260.mcc310.3gppnetwork.org
 Local Realm: epc.mnc260.mcc310.3gppnetwork.org
 Peer Address: 10.160.113.136:3868
 State: IDLE [TCP]
 CPU: 3/0     Task: diamproxy-5
 Messages Out/Queued: 0/0
 Supported Vendor IDs: None
 Admin Status: Enable
 DPR Disconnect: N/A
 Peer Backoff Timer running:N/A

 Peer Hostname: mp2.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 Local Hostname: 0006-diamproxy.s6b.IEPCF201.epc.mnc260.mcc310.3gppnetwork.org
 Peer Realm: epc.mnc260.mcc310.3gppnetwork.org
 Local Realm: epc.mnc260.mcc310.3gppnetwork.org
 Peer Address: 10.160.113.136:3868
 State: IDLE [TCP]
 CPU: 6/0     Task: diamproxy-6
 Messages Out/Queued: 0/0
 Supported Vendor IDs: None
 Admin Status: Enable
 DPR Disconnect: N/A
 Peer Backoff Timer running:N/A
...
-------------------------------------------------------------------------------
 Context: billing                          Endpoint: credit-control
-------------------------------------------------------------------------------
...
 Peer Hostname: ln24.daldra01.dra.epc3.mnc260.mcc310.3gppnetwork.org
 Local Hostname: 0001-diamproxy.iepcf201.gy
 Peer Realm: nsn-gy
 Local Realm: starent.gy.com
 Peer Address: 10.160.113.136:3869
 Local Address: 10.168.86.151:55584
 State: OPEN [TCP]
 CPU: 10/0     Task: diamproxy-1
 Messages Out/Queued: 0/0
 Supported Vendor IDs: 10415
 Admin Status: Enable
 DPR Disconnect: N/A
 Peer Backoff Timer running:N/A

Peers Summary:
Peers in OPEN state: 1404
Peers in CLOSED state: 468
Peers in intermediate state: 0
Total peers matching specified criteria: 1872
For reference, here is the normal output of this command, which shows the connection counts without the aaamgrs:
Peers Summary:
Peers in OPEN state: 107
Peers in CLOSED state: 36
Peers in intermediate state: 1
Total peers matching specified criteria: 144
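When only the summary counts are of interest, they can be pulled directly with grep; a sketch based on the commands already used in this document:

[local]IEPCF201# show diameter peers full all | grep "Peers in"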
As discussed, this scenario shows all Diameter peers down for the s6b endpoint, and the issue is not specific to a particular diamproxy/card, which means a PCAP collected from any card should adequately represent the issue for troubleshooting. If the issue were seen only on specific diamproxies, it would be more important to capture the PCAP for those processes. This matters because the collection process requires specifying a particular card; it cannot be run across all cards with a single capture. Although in this case the issue is indeed visible on all cards, captures from two cards are shown below to help illustrate how to analyze the resulting data.
First, look at the card table and pick a couple of active cards (3 and 5) from which to run the captures, noting which card is the demux card so that it is not chosen.
[local]IEPCF201# show card table
Friday December 11 17:15:28 UTC 2020
Slot         Card Type                               Oper State     SPOF  Attach
-----------  --------------------------------------  -------------  ----  ------
 1: CFC      Control Function Virtual Card           Active         No
 2: CFC      Control Function Virtual Card           Standby        -
 3: FC       4-Port Service Function Virtual Card    Active         No    <=====
 4: FC       4-Port Service Function Virtual Card    Standby        -
 5: FC       4-Port Service Function Virtual Card    Active         No    <=====
 6: FC       4-Port Service Function Virtual Card    Active         No
 7: FC       4-Port Service Function Virtual Card    Active         No
 8: FC       4-Port Service Function Virtual Card    Active         No
 9: FC       4-Port Service Function Virtual Card    Active         No
10: FC       4-Port Service Function Virtual Card    Active         No
[local]IEPCF201#

[local]IEPCF201# show session recovery status verbose
Saturday December 12 21:43:11 UTC 2020
Session Recovery Status:
  Overall Status      : Ready For Recovery
  Last Status Update  : 4 seconds ago

             ----sessmgr---  ----aaamgr----   demux
 cpu  state  active standby  active standby  active  status
---- ------- ------ ------- ------ -------  ------  -------------------------
 3/0 Active      12       1     12       1       0  Good
 4/0 Standby      0      12      0      12       0  Good
 5/0 Active      12       1     12       1       0  Good
 6/0 Active      12       1     12       1       0  Good
 7/0 Active      12       1     12       1       0  Good
 8/0 Active      12       1     12       1       0  Good
 9/0 Active       0       0      0       0       8  Good (Demux)
10/0 Active      12       1     12       1       0  Good
[local]IEPCF201#
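To identify the demux card quickly, the session recovery output can be filtered, a sketch using the same grep filtering already used elsewhere in this document:

[local]IEPCF201# show session recovery status verbose | grep Demux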
In addition, the context ID of the context in which the Diameter peers are defined needs to be retrieved; in this case, the billing context has ID 2.
******** show context *******
Sunday December 13 15:14:24 UTC 2020
Context Name    ContextID  State      Description
--------------- ---------  ---------- -----------------------
local           1          Active
billing         2          Active     <==========
calea           3          Active
gi              4          Active
sgw             5          Active
Next, log in to the Linux debug shell of each card from which a PCAP is to be collected, each in its own CLI session (cards 3 and 5 in this example):
Note: Most operators do not have access to the debug shell unless they have been given the chassis/customer-specific password (depending on how it has been set up). Always use extreme care when logged in to the underlying operating system of a card (PSC or DPC for ASR 5000 or ASR 5500) or virtual machine (Service Function (SF) for vPC-DI).
[local]IEPCF201# cli test password <password>
Saturday December 12 21:43:54 UTC 2020
Warning: Test commands enables internal testing and debugging commands
USE OF THIS MODE MAY CAUSE SIGNIFICANT SERVICE INTERRUPTION
[local]IEPCF201#
[local]IEPCF201# debug shell card 3 cpu 0
Saturday December 12 21:44:02 UTC 2020
Last login: Fri Dec 11 19:26:34 +0000 2020 on pts/1 from card1-cpu0.
qvpc-di:card3-cpu0#
Now run a special Linux command, setvr (set virtual router), which is available only in the customized StarOS version of Linux, specifying the context ID retrieved earlier. Note that the prompt changes:
qvpc-di:card3-cpu0# setvr 2 bash
bash-2.05b#
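Optionally, before capturing, the current TCP state toward the peers can be checked from inside the virtual router; a hedged sketch, assuming standard Linux tooling such as netstat is present in this debug shell (not output from this chassis):

bash-2.05b# netstat -an | grep 3868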
At this point, tcpdump can be run with the parameters shown. Note that if the port number differs from 3868, as in the Gy example shown earlier, that port number should be used instead. Also, if there is a specific peer address for which packets are to be captured, the host IP address can be specified with host <host ip address>. Run the command for a few minutes, then stop the capture with Control-C. The number of packets captured is displayed.
bash-2.05b# tcpdump -i any -s 0 -w /tmp/diameter_SF3.pcap "port 3868"
tcpdump: listening on any
^C
1458 packets received by filter
0 packets dropped by kernel
bash-2.05b#
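For reference, a sketch of the same capture narrowed to a single peer and a non-default port; the address and port below are only examples taken from the Gy configuration shown earlier, so substitute the values from your own peer configuration:

bash-2.05b# tcpdump -i any -s 0 -w /tmp/diameter_SF3_peer.pcap "host 10.160.113.136 and port 3869"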
Next, exit the virtual router space with the exit command, then copy the file to the flash of the active management card: MIO 5 or 6 for ASR 5500, or card 1 or 2 for vPC-DI as in this case.
bash-2.05b# exit
exit
qvpc-di:card3-cpu0# scp /tmp/diameter_SF3.pcap card1:/flash/sftp/diameter_SF3.pcap
diameter_SF3.pcap                              100%  110KB 110.4KB/s   00:00
qvpc-di:card3-cpu0# exit
[local]IEPCF201#
At this point, the file can be retrieved with SFTP, using whatever access exists in the network to reach the /flash directory.
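A minimal sketch of that retrieval from a workstation, assuming SFTP access to the chassis management address and an account with access to /flash (the user name and IP address here are placeholders):

$ sftp admin@192.0.2.1
sftp> get /flash/sftp/diameter_SF3.pcap
sftp> get /flash/sftp/diameter_SF5.pcap
sftp> exit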
Here are the same commands just shown for SF 3, repeated for SF 5. Ideally, run both sessions at the same time so that the captures cover the same period for analysis (though this is probably not strictly necessary).
[local]IEPCF201# cli test password <password>
Saturday December 12 21:43:28 UTC 2020
Warning: Test commands enables internal testing and debugging commands
USE OF THIS MODE MAY CAUSE SIGNIFICANT SERVICE INTERRUPTION
[local]IEPCF201# debug shell card 5 cpu 0
Saturday December 12 21:44:13 UTC 2020
qvpc-di:card5-cpu0#
qvpc-di:card5-cpu0# setvr 2 bash
bash-2.05b# tcpdump -i any -s 0 -w /tmp/diameter_SF5.pcap "port 3868"
tcpdump: listening on any
^C
1488 packets received by filter
0 packets dropped by kernel
bash-2.05b# exit
exit
qvpc-di:card5-cpu0# scp /tmp/diameter_SF5.pcap card1:/flash/sftp/diameter_SF5.pcap
diameter_SF5.pcap                              100%  113KB 112.7KB/s   00:00
qvpc-di:card5-cpu0# exit
[local]IEPCF201#
The goal here is to determine where in the Diameter connection establishment the failure occurs. As noted earlier, it can be in the TCP/IP connection itself or in the CER/CEA step that follows. For TCP/IP, check whether a TCP SYN is being sent, whether a TCP SYN ACK is received, and then whether the ACK is sent from the ASR. Any number of filters can be applied to the packets to help with the analysis. In this case, the filter tcp.flags.syn == 1 shows that SYNs are sent for all six peers for this particular card. Then, in the unfiltered view, right-click a SYN packet and use the TCP stream feature in Wireshark, which aggregates all TCP packets that use the same TCP port numbers, by selecting Follow... TCP Stream, and check whether there is a corresponding exchange of TCP packets that establishes the connection.
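The same checks can be scripted with tshark if a command-line workflow is preferred; a sketch, assuming tshark is installed on the analysis workstation and using the capture file name from above:

# Outbound connection attempts (SYN without ACK) toward the peers:
$ tshark -r diameter_SF3.pcap -Y "tcp.flags.syn == 1 && tcp.flags.ack == 0"
# Follow one TCP stream (stream 0 here) to see whether the handshake ever completes:
$ tshark -r diameter_SF3.pcap -q -z follow,tcp,ascii,0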
In this case, note that there are no packets other than the SYNs, which confirms that the ASR is likely sending SYNs but receiving no response, and so the ASR would not appear to be the cause of the connection setup failure (though this is not guaranteed: the packets might not actually be transmitted, or the responses might be dropped, in which case external PCAPs would help narrow the problem further).
Also of note, the pattern repeats every 30 seconds, which matches the Diameter endpoint's default configuration of 30 seconds for connection retries; the ASR does not give up, and retries indefinitely until it succeeds. The PCAP from SF 5 shows exactly the same behavior.
context billing
 diameter endpoint 3gpp-aaa-s6b
  connection timeout 30
  connection retry-timeout 30
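To confirm the 30-second retry cadence directly from the capture, the relative timestamps of the SYNs can be listed; a sketch, again assuming tshark on the analysis workstation:

$ tshark -r diameter_SF3.pcap -Y "tcp.flags.syn == 1 && tcp.flags.ack == 0" \
    -T fields -e frame.time_relative -e ip.dst -e tcp.dstport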
Tying the pieces together, the Diameter proxy statistics show that the number of failed connections increases at a rate consistent with the number of SFs/diamproxies and the retry timeout. The math: 6 peers * 6 diamproxies = 36 attempts every 30 seconds, or 72 attempts per minute. This can be seen by running show diameter statistics proxy and observing Connection Timeouts increment from 60984 to 61056 (= 72) over one minute, per the CLI timestamps.
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:10 UTC 2020
  Connection Timeouts: 60984
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:12 UTC 2020
  Connection Timeouts: 60984
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:14 UTC 2020
  Connection Timeouts: 60984
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:17 UTC 2020
  Connection Timeouts: 60990
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:19 UTC 2020
  Connection Timeouts: 60990
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:21 UTC 2020
  Connection Timeouts: 60996
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:25 UTC 2020
  Connection Timeouts: 61002
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:27 UTC 2020
  Connection Timeouts: 61002
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:29 UTC 2020
  Connection Timeouts: 61008
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:32 UTC 2020
  Connection Timeouts: 61014
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:35 UTC 2020
  Connection Timeouts: 61014
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:37 UTC 2020
  Connection Timeouts: 61020
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:40 UTC 2020
  Connection Timeouts: 61020
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:43 UTC 2020
  Connection Timeouts: 61020
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:45 UTC 2020
  Connection Timeouts: 61026
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:47 UTC 2020
  Connection Timeouts: 61026
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:50 UTC 2020
  Connection Timeouts: 61038
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:56 UTC 2020
  Connection Timeouts: 61038
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:39:58 UTC 2020
  Connection Timeouts: 61044
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:01 UTC 2020
  Connection Timeouts: 61044
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:03 UTC 2020
  Connection Timeouts: 61050
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:05 UTC 2020
  Connection Timeouts: 61056
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:07 UTC 2020
  Connection Timeouts: 61056
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:09 UTC 2020
  Connection Timeouts: 61056
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:12 UTC 2020
  Connection Timeouts: 61056
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:14 UTC 2020
  Connection Timeouts: 61056
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:16 UTC 2020
  Connection Timeouts: 61062
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:18 UTC 2020
  Connection Timeouts: 61062
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:20 UTC 2020
  Connection Timeouts: 61068
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:22 UTC 2020
  Connection Timeouts: 61074
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:25 UTC 2020
  Connection Timeouts: 61074
[local]IEPCF201# show diameter statistics proxy | grep "Connection Timeouts"
Friday December 11 20:40:27 UTC 2020
  Connection Timeouts: 61074
[local]IEPCF201#
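A quick arithmetic check of the counters above, shown as a shell sketch purely to illustrate the math already described:

# Expected rate: 6 peers x 6 diamproxies, one attempt each per 30-second retry-timeout
$ echo $(( 6 * 6 * 60 / 30 ))      # 72 expected Connection Timeouts per minute
# Observed over the one-minute window 20:39:10 -> 20:40:09 in the output above
$ echo $(( 61056 - 60984 ))        # 72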
Also note that the number of CERs/CEAs (across all Diameter peers) is negligible, which demonstrates that the connection attempts never get as far as trying to exchange those packets, meaning this is a TCP/IP setup issue.
[local]IEPCF201# show diameter statistics proxy
Friday December 11 20:57:09 UTC 2020
...
Capabilities Exchange Requests and Answers statistics:
  Connection CER sent:             109
  Connection CER send errors:        0
  CERs received:                     0
  Connection CER create failures:    0
  CEAs received:                   108
  CEA AVPs unknown:                  0
  CEA Application ID mismatch:       0
  Read CEA Messages:               108
  Read CEA Messages Unexpected:      0
  Read CEA Missing:                  0
  Read CEA Negotiation Failure:      0
  Read CER Messages:                 0
  Read CER Messages Unexpected:      0
  Read CER Missing:                  0
  Tw Expire Waiting for CEA:         0
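If it needs to be confirmed from the PCAP itself that no CER/CEA is ever exchanged, the Diameter Capabilities-Exchange command code (257) can be used as a display filter; a sketch with tshark, where the quoted expression is also usable as a Wireshark display filter:

$ tshark -r diameter_SF3.pcap -Y "diameter.cmd.code == 257"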
Finally, note that after the customer resolved the issue, the number of peers in the CLOSED state returned to 0, and the Local Address field now appears in the show diameter peers full all output.
 Peer Hostname: mp1.daldra01.dra.epc.mnc260.mcc310.3gppnetwork.org
 Local Hostname: 0001-diamproxy.s6b.IEPCF201.epc.mnc260.mcc310.3gppnetwork.org
 Peer Realm: epc.mnc260.mcc310.3gppnetwork.org
 Local Realm: epc.mnc260.mcc310.3gppnetwork.org
 Peer Address: 10.160.113.133:3868
 Local Address: 10.168.86.144:32852
 State: OPEN [TCP]
 CPU: 10/0     Task: diamproxy-1
 Messages Out/Queued: 0/0
 Supported Vendor IDs: None
 Admin Status: Enable
 DPR Disconnect: N/A
 Peer Backoff Timer running:N/A

Peers Summary:
Peers in OPEN state: 144
Peers in CLOSED state: 0
Peers in intermediate state: 0
Total peers matching specified criteria: 144
[local]IEPCF101#