This document describes how to troubleshoot pods with Kubernetes and CEE Ops-Center commands.
1.1 List all namespaces:
Command:
kubectl get namespace
Example:
cisco@brusmi-master1:~$ kubectl get namespace
NAME STATUS AGE
cee-cee Active 6d
default Active 6d
kube-node-lease Active 6d
kube-public Active 6d
kube-system Active 6d
lfs Active 6d
nginx-ingress Active 6d
smf-data Active 6d
smi-certs Active 6d
smi-vips Active 6d
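If the cluster hosts many namespaces, the output can be filtered with standard shell tools; a minimal sketch (the grep pattern is only an example):
# Show only the application-related namespaces
kubectl get namespace | grep -E 'smf|cee'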
1.2 List all services in a specific namespace:
Command:
kubectl get svc -n <namespace>
Example:
cisco@brusmi-master1:~$ kubectl get svc -n smf-data
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
base-entitlement-smf ClusterIP 10.97.93.253 <none> 8000/TCP 6d
datastore-ep-session ClusterIP 10.101.15.88 <none> 8882/TCP 6h51m
datastore-notification-ep ClusterIP 10.110.182.26 <none> 8890/TCP 6h51m
datastore-tls-ep-session ClusterIP 10.110.115.33 <none> 8883/TCP 6h51m
documentation ClusterIP 10.110.85.239 <none> 8080/TCP 6d
etcd ClusterIP None <none> 2379/TCP,7070/TCP 6h51m
etcd-smf-data-etcd-cluster-0 ClusterIP 10.103.194.229 <none> 2380/TCP,2379/TCP 6h51m
grafana-dashboard-app-infra ClusterIP 10.98.161.155 <none> 9418/TCP 6h51m
grafana-dashboard-cdl ClusterIP 10.104.32.111 <none> 9418/TCP 6h51m
grafana-dashboard-smf ClusterIP 10.106.64.191 <none> 9418/TCP 6h51m
gtpc-ep ClusterIP 10.99.49.25 x.x.x.201 9003/TCP,8080/TCP 6h51m
helm-api-smf-data-ops-center ClusterIP 10.109.206.198 <none> 3000/TCP 6d
kafka ClusterIP None <none> 9092/TCP,7070/TCP 6h51m
li-ep ClusterIP 10.106.134.35 <none> 9003/TCP,8080/TCP 6h51m
local-ldap-proxy-smf-data-ops-center ClusterIP 10.99.160.226 <none> 636/TCP,369/TCP 6d
oam-pod ClusterIP 10.105.223.47 <none> 9008/TCP,7001/TCP,8879/TCP,10080/TCP 6h51m
ops-center-smf-data-ops-center ClusterIP 10.103.164.204 <none> 8008/TCP,8080/TCP,2024/TCP,2022/TCP,7681/TCP 6d
smart-agent-smf-data-ops-center ClusterIP 10.97.143.81 <none> 8888/TCP 6d
smf-n10-service ClusterIP 10.102.197.22 10.10.10.205 8090/TCP 6h51m
smf-n11-service ClusterIP 10.108.109.186 10.10.10.203 8090/TCP 6h51m
smf-n40-service ClusterIP 10.111.170.158 10.10.10.206 8090/TCP 6h51m
smf-n7-service ClusterIP 10.102.140.179 10.10.10.204 8090/TCP 6h51m
smf-nodemgr ClusterIP 10.102.68.172 <none> 9003/TCP,8884/TCP,9201/TCP,8080/TCP 6h51m
smf-protocol ClusterIP 10.111.219.156 <none> 9003/TCP,8080/TCP 6h51m
smf-rest-ep ClusterIP 10.109.189.99 <none> 9003/TCP,8080/TCP,9201/TCP 6h51m
smf-sbi-service ClusterIP 10.105.176.248 10.10.10.201 8090/TCP 6h51m
smf-service ClusterIP 10.100.143.237 <none> 9003/TCP,8080/TCP 6h51m
swift-smf-data-ops-center ClusterIP 10.98.196.46 <none> 9855/TCP,50055/TCP,56790/TCP 6d
zookeeper ClusterIP None <none> 2888/TCP,3888/TCP 6h51m
zookeeper-service ClusterIP 10.109.109.102 <none> 2181/TCP,7070/TCP 6h51m
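To extract a single field rather than the whole table, kubectl supports JSONPath output; a minimal sketch, using the smf-service entry from the previous example:
# Print only the cluster IP of the smf-service service
kubectl get svc smf-service -n smf-data -o jsonpath='{.spec.clusterIP}'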
1.3 List all pods in a specific namespace:
Command:
kubectl get pods -n <namespace>
Example:
cisco@brusmi-master1:~$ kubectl get pods -n smf-data
NAME READY STATUS RESTARTS AGE
api-smf-data-ops-center-57c8f6b4d7-wt66s 1/1 Running 0 6d
base-entitlement-smf-fcdb664d-fkgss 1/1 Running 0 6d
cache-pod-0 1/1 Running 0 6h53m
cache-pod-1 1/1 Running 0 6h53m
cdl-ep-session-c1-dbb5f7874-4gmfr 1/1 Running 0 6h53m
cdl-ep-session-c1-dbb5f7874-5zbqw 1/1 Running 0 6h53m
cdl-index-session-c1-m1-0 1/1 Running 0 6h53m
cdl-slot-session-c1-m1-0 1/1 Running 0 6h53m
documentation-5dc8d5d898-mv6kx 1/1 Running 0 6d
etcd-smf-data-etcd-cluster-0 1/1 Running 0 6h53m
grafana-dashboard-app-infra-5b8dd74bb6-xvlln 1/1 Running 0 6h53m
grafana-dashboard-cdl-5df868c45c-vbr4r 1/1 Running 0 6h53m
grafana-dashboard-smf-657755b7c8-fvbdt 1/1 Running 0 6h53m
gtpc-ep-n0-0 1/1 Running 0 6h53m
kafka-0 1/1 Running 0 6h53m
li-ep-n0-0 1/1 Running 0 6h53m
oam-pod-0 1/1 Running 0 6h53m
ops-center-smf-data-ops-center-7fbb97d9c9-tx7qd 5/5 Running 0 6d
smart-agent-smf-data-ops-center-6667dcdd65-2h7nr 0/1 Evicted 0 6d
smart-agent-smf-data-ops-center-6667dcdd65-6wfvq 1/1 Running 0 4d18h
smf-nodemgr-n0-0 1/1 Running 0 6h53m
smf-protocol-n0-0 1/1 Running 0 6h53m
smf-rest-ep-n0-0 1/1 Running 0 6h53m
smf-service-n0-0 1/1 Running 5 6h53m
smf-udp-proxy-0 1/1 Running 0 6h53m
swift-smf-data-ops-center-68bc75bbc7-4zdc7 1/1 Running 0 6d
zookeeper-0 1/1 Running 0 6h53m
zookeeper-1 1/1 Running 0 6h52m
zookeeper-2 1/1 Running 0 6h52m
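Two standard kubectl variations are often useful here; a minimal sketch:
# Wide output adds the pod IP and the worker node that hosts each pod
kubectl get pods -n smf-data -o wide
# Watch status transitions in real time (stop with Ctrl+C)
kubectl get pods -n smf-data --watch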
1.4 List the full details of a specific pod (labels, image, ports, volumes, events, and so on):
Command:
kubectl describe pods <pod_name> -n <namespace>
Example:
cisco@brusmi-master1:~$ kubectl describe pods smf-service-n0-0 -n smf-data
smf-service-n0-0 <<< POD name
smf-data <<< Namespace
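Since the describe output is long, it can help to pull only the recent events for one pod; a minimal sketch, reusing the pod name from the example:
# List only the events that reference this pod
kubectl get events -n smf-data --field-selector involvedObject.name=smf-service-n0-0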
2.1 Get the container names of a specific pod:
Command:
kubectl describe pods <pod_name> -n <namespace> | grep Containers -A1
Example:
cisco@brusmi-master1:~$ kubectl describe pods smf-service-n0-0 -n smf-data | grep Containers -A1
Containers:
smf-service:
--
ContainersReady True
PodScheduled True
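The same container list can be read with JSONPath instead of grep; a minimal sketch:
# Print the container names of the pod on a single line
kubectl get pod smf-service-n0-0 -n smf-data -o jsonpath='{.spec.containers[*].name}'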
2.2 Check the logs when a pod crash is observed on Kubernetes:
Command:
kubectl get pods -n <namespace> | grep -v Running
Example:
cisco@brusmi-master1:~$ kubectl get pods -n smf-data | grep -v Running
NAME READY STATUS RESTARTS AGE
smart-agent-smf-data-ops-center-6667dcdd65-2h7nr 0/1 Evicted 0 5d23h
smf-service-n0-0 0/1 CrashLoopBackOff 2 6h12m
Command:
kubectl logs <pod_name> -c <container_name> -n <namespace>
Example:
cisco@brusmi-master1:~$ kubectl logs smf-service-n0-0 -c smf-service -n smf-data
/opt/workspace
-rwxrwxrwx 1 root root 84180872 Mar 31 06:18 /opt/workspace/smf-service
Launching: /opt/workspace/tini /opt/workspace/smf-service
2020-06-09 20:26:16.341043 I | proto: duplicate proto type registered: internalmsg.SessionKey
2020-06-09 20:26:16.341098 I | proto: duplicate proto type registered: internalmsg.NInternalTxnMsg
2020-06-09 20:26:16.343170 I | smf-service [INFO] [main.go:18] [smfservice] ##########################MARCH DROP#########################
2020-06-09 20:26:16.343197 I | smf-service [INFO] [main.go:19] [smfservice] ###########################SMF###############################
2020-06-09 20:26:16.343210 I | smf-service [INFO] [main.go:20] [smfservice] SMF-SERVICE
2020-06-09 20:26:16.343221 I | smf-service [INFO] [main.go:21] [smfservice] ###########################SMF###############################
2020-06-09 20:26:16.343232 I | smf-service [INFO] [main.go:22] [smfservice] ###########################MARCH DROP########################
2020/06/09 20:26:16.343 smf-service [DEBUG] [Tracer.go:181] [unknown] Loaded initial tracing configuration TracerType: , TracerJaegerTransportType: , TracerEndpoint: , ServiceName: smf-service, TracerServiceName: , EnableTracePercent: 0, AppendMessages: fa
.
.
2020/06/09 20:44:28.157 smf-service [DEBUG] [RestRouter.go:24] [infra.rest_server.core] Rest message received
2020/06/09 20:44:28.158 smf-service [DEBUG] [RestRouter.go:43] [infra.rest_server.core] Set Ping as name for the RestMessage and Sla SlaInfo[enabled:false,timeout:0] from the Endpoint
2020/06/09 20:44:28.159 smf-service [INFO] [ApplicationEndpoint.go:333] [infra.application.core] Ping server response!
2020/06/09 20:44:30.468 smf-service [DEBUG] [MetricsServer_v1.go:305] [infra.application.core] Checkpointing gauge with name smf_session_counters
2020/06/09 20:44:31.158 smf-service [DEBUG] [RestRouter.go:24] [infra.rest_server.core] Rest message received
2020/06/09 20:44:31.158 smf-service [DEBUG] [RestRouter.go:43] [infra.rest_server.core] Set Ping as name for the RestMessage and Sla SlaInfo[enabled:false,timeout:0] from the Endpoint
2020/06/09 20:44:31.158 smf-service [INFO] [ApplicationEndpoint.go:333] [infra.application.core] Ping server response!
smf-service-n0-0 <<< POD name
smf-service <<< Container Name
smf-data <<< Namespace
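For a pod stuck in CrashLoopBackOff, the current container is often too short-lived to inspect, so the previous instance and a live tail are usually more informative; a minimal sketch with standard kubectl flags:
# Logs from the previous (crashed) container instance
kubectl logs smf-service-n0-0 -c smf-service -n smf-data --previous
# Follow new log lines as they arrive, starting from the last 100 (stop with Ctrl+C)
kubectl logs smf-service-n0-0 -c smf-service -n smf-data -f --tail=100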
2.3 Verify whether a core dump was generated:
Command:
ls -lrt /var/lib/systemd/coredump/
Example:
cisco@brusmi-master1:~$ ls -lrt /var/lib/systemd/coredump/
total 0
Note: Core files are generated under the /var/lib/systemd/coredump/ path on the respective virtual machine. Cores are also available on the TAC Dashboard.
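On systemd-based virtual machines, coredumpctl (where installed) summarizes the same directory with timestamps, PIDs, and the crashing binary; a hedged sketch, assuming the utility is present on the node:
# List the core dumps recorded by systemd-coredump
coredumpctl list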
3.1 Log in to the CEE Ops-Center from the k8s master:
cisco@brusmi-master1:~$ kubectl get namespace
NAME STATUS AGE
cee-cee Active 5d3h
default Active 5d3h
kube-node-lease Active 5d3h
kube-public Active 5d3h
kube-system Active 5d3h
lfs Active 5d3h
nginx-ingress Active 5d3h
smf-data Active 5d3h
smi-certs Active 5d3h
smi-vips Active 5d3h
cisco@brusmi-master1:~$ ssh -p 2024 admin@$(kubectl get svc -n cee-cee | grep ^ops-center | awk '{print $3}')
admin@10.102.44.219's password:
Welcome to the cee CLI on brusmi/cee
admin connected from 192.x.0.1 using ssh on ops-center-cee-cee-ops-center-79cf55b49b-6wrh9
[brusmi/cee] cee#
Note: In the previous example, the CEE namespace is "cee-cee". Replace this name if yours is different.
3.2 Generate a TAC package ID to reference the collected files:
Command:
tac-debug-pkg create from <Start_time> to <End_time>
Example:
[brusmi/cee] cee# tac-debug-pkg create from 2020-06-08_14:00:00 to 2020-06-08_15:00:00
response : Tue Jun 9 00:22:17 UTC 2020 tac-debug pkg ID : 1592948929
Additionally, other filters such as namespace or pod_name can be included, as shown here:
Command:
tac-debug-pkg create from <Start_time> to <End_time> logs-filter { namespace <namespace> pod_name <pod_name> }
Example:
[brusmi/cee] cee# tac-debug-pkg create from 2020-06-08_14:00:00 to 2020-06-08_15:00:00 logs-filter { namespace smf-data pod_name cache-pod-0 }
response : Tue Jun 9 00:28:49 UTC 2020 tac-debug pkg ID : 1591662529
Note: It is recommended to generate the TAC package ID for a limited time range (1 hour, or 2 hours at most).
3.3 Display the status of each gather service:
[brusmi/cee] cee# tac-debug-pkg status
response : Tue Jun 9 00:28:51 UTC 2020
Tac id: 1591662529
Gather core: completed!
Gather logs: in progress
Gather metrics: in progress
Gather stats: completed!
Gather config: completed!
[brusmi/cee] cee#
[brusmi/cee] cee# tac-debug-pkg status
response : Tue Jun 9 00:43:45 UTC 2020
No active tac debug session <<< If "No active tac debug session" is displayed, all the information has already been gathered.
Note: If there is no disk space available, delete the old debug files.
[brusmi/cee] cee# tac-debug-pkg create from 2020-06-08_09:00:00 to 2020-06-08_10:00:00 logs-filter { namespace smf-data }
response : Tue Jun 9 00:45:48 UTC 2020
Available disk space on node is less than 20 %. Please remove old debug files and retry.
[brusmi/cee] cee# tac-debug-pkg delete tac-id 1591662529
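Before retrying, the free space can be confirmed from the node shell; a minimal sketch (which filesystem holds the debug files depends on the deployment):
# Human-readable free-space summary for the mounted filesystems
df -h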
3.4 Create a TAC debug ID to collect metrics only:
[nyucs504-cnat/global] cee# tac-debug-pkg create from 2021-02-24_12:30:00 to 2021-02-24_14:30:00 cores false logs false cfg false stats false
response : Wed Feb 24 19:39:49 UTC 2021 tac-debug pkg ID : 1614195589
Currently, there are three different options to download the TAC debug from CEE:
4.1 SFTP from the master VIP (less recommended, since it takes longer).
4.1.1 Get the URL to download the logs collected under the TAC package ID:
Command:
kubectl get ingress -n <namespace> | grep show-tac
Example:
cisco@brusmi-master1:~$ kubectl get ingress -n cee-cee | grep show-tac
show-tac-manager-ingress show-tac-manager.cee-cee-smi-show-tac.192.168.208.10.xxx.x 80, 443 5d4h
4.1.2 Compress the files and fetch them from the show-tac-manager pod:
a. Get the ID of the show-tac pod.
Command:
kubectl get pods -n <namespace> | grep show-tac
Example:
cisco@brusmi-master1:~$ kubectl get pods -n cee-cee | grep show-tac
show-tac-manager-85985946f6-bflrc 2/2 Running 0 12d
b. Run an exec command into the show-tac pod and compress the TAC debug logs.
Command:
kubectl exec -it -n <namespace> <pod_name> bash
Example:
cisco@brusmi-master1:~$ kubectl exec -it -n cee-cee show-tac-manager-85985946f6-bflrc bash
Defaulting container name to show-tac-manager.
Use 'kubectl describe pod/show-tac-manager-85985946f6-bflrc -n cee-cee' to see all of the containers in this pod.
groups: cannot find name for group ID 101
groups: cannot find name for group ID 190
groups: cannot find name for group ID 303
I have no name!@show-tac-manager-85985946f6-bflrc:/show-tac-manager/bin$ cd /home/tac/
I have no name!@show-tac-manager-85985946f6-bflrc:/home/tac$ tar -zcvf tac-debug_1591662529.tar.gz 1591662529
1591662529/
1591662529/config/
1591662529/config/192.x.1.14_configuration.tar.gz.base64
1591662529/stats/
1591662529/stats/Stats_2020-06-08_14-00-00_2020-06-08_15-00-00.tar.gz
1591662529/manifest.json
1591662529/metrics/
1591662529/metrics/Metrics_2020-06-08_14-00-00_2020-06-08_15-00-00.tar.gz
1591662529/web/
1591662529/web/index.html
1591662529/logs/
1591662529/logs/brusmi-master1/
1591662529/logs/brusmi-master1/brusmi-master1_Logs_2020-06-08_14-00-00_2020-06-08_15-00-00.tar.gz
I have no name!@show-tac-manager-85985946f6-bflrc:/home/tac$ ls
1591662490 1591662529 1592265088 tac-debug_1591662529.tar.gz
4.1.3 Copy the file to the /tmp directory on the master VIP:
Command:
kubectl cp <namespace>/<show-tac_pod_name>:/home/tac/<file_name.tar.gz> /tmp/<file_name.tar.gz>
Example:
cisco@brusmi-master1:~$ kubectl cp cee-cee/show-tac-manager-85985946f6-bflrc:/home/tac/tac-debug_1591662529.tar.gz /tmp/tac-debug_1591662529.tar.gz
Defaulting container name to show-tac-manager.
tar: Removing leading `/' from member names
cisco@brusmi-master1:~$ cd /tmp
cisco@brusmi-master1:/tmp$ ls
cee.cfg
tac-debug_1591662529.tar.gz
tiller_service_acct.yaml
4.1.4 Transfer the file via SFTP from the master VIP.
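A hedged sketch of that transfer, run from your PC; the VIP address and the cisco username are assumptions taken from the earlier examples:
# Fetch the compressed TAC debug from the master VIP into the current directory
sftp cisco@<master_vip>:/tmp/tac-debug_1591662529.tar.gz .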
4.2 Download the TAC debug with the wget command (macOS/Ubuntu).
4.2.1 Get the show-tac link from the "kubectl get ingress" output:
cisco@brusmi-master1:~$ kubectl get ingress -n cee-cee | grep show-tac
show-tac-manager-ingress show-tac-manager.cee-cee-smi-show-tac.192.168.208.10.xxx.x 80, 443 5d4h
4.2.2 Issue the wget command from your PC terminal:
wget -r -np https://show-tac-manager.cee-cee-smi-show-tac.192.168.208.10.xxx.x/tac/<tac-id>/ --no-check-certificate --http-user=<NTID_username> --http-password=<NTID_password>
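If wget is unavailable, curl can fetch individual files from the same ingress (it does not recurse like wget -r); a hedged sketch, where <file_name> is a hypothetical placeholder for one file under the tac-id directory:
# -k skips certificate validation, -O keeps the remote file name
curl -k -u <NTID_username>:<NTID_password> -O https://show-tac-manager.cee-cee-smi-show-tac.192.168.208.10.xxx.x/tac/<tac-id>/<file_name>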
5.1 Log in to the smf-data Ops-Center from the k8s master:
cisco@brusmi-master1:~$ ssh -p 2024 admin@$(kubectl get svc -n smf-data | grep ^ops-center | awk '{print $3}')
admin@10.103.164.204's password:
Welcome to the smf CLI on brusmi/data
admin connected from 192.x.0.1 using ssh on ops-center-smf-data-ops-center-7fbb97d9c9-tx7qd
5.2 Confirm that "logging level application" debug is enabled:
[brusmi/data] smf# show running-config | i logging
logging level application debug
logging level transaction debug
logging level tracing debug
logging name infra.config.core level application debug
logging name infra.config.core level transaction debug
logging name infra.config.core level tracing debug
logging name infra.message_log.core level application debug
logging name infra.message_log.core level transaction debug
logging name infra.resource_monitor.core level application off
logging name infra.rest_server.core level application debug
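If any of these levels are missing, they can be set from the Ops-Center configuration mode; a hedged sketch, assuming the standard config/commit workflow of this CLI (the config-mode prompt format is an assumption):
[brusmi/data] smf# config
[brusmi/data] smf(config)# logging level application debug
[brusmi/data] smf(config)# commit
[brusmi/data] smf(config)# end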
5.3 Log in to the CEE Ops-Center from the k8s master:
cisco@brusmi-master1:~$ ssh -p 2024 admin@$(kubectl get svc -n cee-cee | grep ^ops-center | awk '{print $3}')
admin@10.102.44.219's password:
Welcome to the cee CLI on brusmi/cee
admin connected from 192.x.0.1 using ssh on ops-center-cee-cee-ops-center-79cf55b49b-6wrh9
[brusmi/cee] cee#
Note: In the previous example, the CEE namespace is "cee-cee". Replace this name if yours is different.
5.4 Tail the logs of all the SMF pods whose names start with "smf-" (smf-nodemgr, smf-protocol, smf-rest-ep, smf-service, smf-udp-proxy). Collect logs for a few seconds, then stop the capture with Ctrl+C:
[brusmi/cee] cee# cluster logs ^smf- -n smf-data
error: current-context must exist in order to minify
Will tail 5 logs...
smf-nodemgr-n0-0
smf-protocol-n0-0
smf-rest-ep-n0-0
smf-service-n0-0
smf-udp-proxy-0
[smf-service-n0-0] 2020/06/08 17:04:57.331 smf-service [DEBUG] [RestRouter.go:24] [infra.rest_server.core] Rest message received
[smf-service-n0-0] 2020/06/08 17:04:57.331 smf-service [DEBUG] [RestRouter.go:43] [infra.rest_server.core] Set Ping as name for the RestMessage and Sla SlaInfo[enabled:false,timeout:0] from the Endpoint
[smf-service-n0-0] 2020/06/08 17:04:57.331 smf-service [INFO] [ApplicationEndpoint.go:333] [infra.application.core] Ping server response!
[smf-service-n0-0] 2020/06/08 17:05:00.331 smf-service [DEBUG] [RestRouter.go:24] [infra.rest_server.core] Rest message received
[smf-service-n0-0] 2020/06/08 17:05:00.332 smf-service [DEBUG] [RestRouter.go:43] [infra.rest_server.core] Set Ping as name for the RestMessage and Sla SlaInfo[enabled:false,timeout:0] from the Endpoint
[smf-service-n0-0] 2020/06/08 17:05:00.332 smf-service [INFO] [ApplicationEndpoint.go:333] [infra.application.core] Ping server response!
[smf-service-n0-0] 2020/06/08 17:05:01.658 smf-service [DEBUG] [MetricsServer_v1.go:305] [infra.application.core] Checkpointing gauge with name smf_session_counters
[smf-service-n0-0] 2020/06/08 17:05:03.330 smf-service [DEBUG] [RestRouter.go:24] [infra.rest_server.core] Rest message received
[smf-service-n0-0] 2020/06/08 17:05:03.330 smf-service [DEBUG] [RestRouter.go:43] [infra.rest_server.core] Set Ping as name for the RestMessage and Sla SlaInfo[enabled:false,timeout:0] from the Endpoint
[smf-service-n0-0] 2020/06/08 17:05:03.330 smf-service [INFO] [ApplicationEndpoint.go:333] [infra.application.core] Ping server response!
[smf-service-n0-0] 2020/06/08 17:05:06.330 smf-service [DEBUG] [RestRouter.go:24] [infra.rest_server.core] Rest message received
[smf-service-n0-0] 2020/06/08 17:05:06.330 smf-service [DEBUG] [RestRouter.go:43] [infra.rest_server.core] Set Ping as name for the RestMessage and Sla SlaInfo[enabled:false,timeout:0] from the Endpoint
[smf-service-n0-0] 2020/06/08 17:05:06.330 smf-service [INFO] [ApplicationEndpoint.go:333] [infra.application.core] Ping server response!
[smf-protocol-n0-0] 2020/06/08 17:04:58.441 smf-protocol [DEBUG] [RestRouter.go:24] [infra.rest_server.core] Rest message received
[smf-service-n0-0] 2020/06/08 17:05:06.661 smf-service [DEBUG] [MetricsServer_v1.go:305] [infra.application.core] Checkpointing gauge with name smf_session_counters
[smf-protocol-n0-0] 2020/06/08 17:04:58.441 smf-protocol [DEBUG] [RestRouter.go:43] [infra.rest_server.core] Set Ping as name for the RestMessage and Sla SlaInfo[enabled:false,timeout:0] from the Endpoint
[smf-protocol-n0-0] 2020/06/08 17:04:58.441 smf-protocol [INFO] [ApplicationEndpoint.go:333] [infra.application.core] Ping server response!
[smf-nodemgr-n0-0] 2020/06/08 17:04:57.329 smf-nodemgr [DEBUG] [CacheClient.go:118] [infra.cache_client.core] Received UpdateRecord request with the param cacheId:"IPAM" filter:<pKeyVal:"IPAM/CacheContext/datastore/Local:smf-data:0" > action:<nonUniqueKeyVals:"IPAM/CacheContext/datastore" nonUniqueKeyVals:"IPAM" timeVal:1591635897
Note: You can be more specific if you need to collect logs from a particular pod, a container, or multiple pods:
### Specific pod ###
[brusmi/cee] cee# cluster logs smf-nodemgr-n0-0 -n smf-data
[brusmi/cee] cee# cluster logs smf-rest-ep-n0-0 -n smf-data
### Specific container ###
[brusmi/cee] cee# cluster logs smf-nodemgr -n smf-data
[brusmi/cee] cee# cluster logs smf-service -n smf-data
[brusmi/cee] cee# cluster logs zookeeper -n smf-data
[brusmi/cee] cee# cluster logs smf-rest-ep -n smf-data
### Multiple pods ###
[brusmi/cee] cee# cluster logs "(smf-service.|smf-rest.|smf-nodemgr.|smf-protocol.|gtpc-ep.|smf-udp-proxy.)" -n smf-data -e
6.1 Get the URL to access Grafana:
cisco@brusmi-master1:~$ kubectl get ingress -n cee-cee | grep grafana
grafana-ingress grafana.192.168.208.10.xxx.x 80, 443 6d18h
6.2 Open the web page over HTTPS as follows:
https://grafana.192.168.208.10.xxx.x
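To verify that the ingress responds before opening a browser, a quick check from any host that can resolve the name; a minimal sketch:
# -k skips certificate validation, -I requests only the response headers
curl -k -I https://grafana.192.168.208.10.xxx.x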
Revision | Publish Date | Comments
---|---|---
1.0 | 31-Mar-2023 | Initial Release