2021-06-26: calico-node-xxx pod keeps going into CrashLoopBackOff

[root@k8s-master ~]# kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS             RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-7f4f5bf95d-pvvt9   1/1     Running            26         16d    10.244.169.129   k8s-node2    <none>           <none>
calico-node-6qcv4                          1/1     Running            1          16d    192.168.32.171   k8s-master   <none>           <none>
calico-node-dncv8                          1/1     Running            1          16d    192.168.32.173   k8s-node2    <none>           <none>
calico-node-sgkn2                          0/1     CrashLoopBackOff   80         16d    192.168.32.172   k8s-node1    <none>           <none>
coredns-7f89b7bc75-2cf7q                   1/1     Running            0          16d    10.244.36.74     k8s-node1    <none>           <none>
coredns-7f89b7bc75-494l7                   1/1     Running            1          16d    10.244.169.130   k8s-node2    <none>           <none>
etcd-k8s-master                            1/1     Running            1          16d    192.168.32.171   k8s-master   <none>           <none>
kube-apiserver-k8s-master                  1/1     Running            2          16d    192.168.32.171   k8s-master   <none>           <none>
kube-controller-manager-k8s-master         1/1     Running            0          133m   192.168.32.171   k8s-master   <none>           <none>
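The pod that keeps restarting is calico-node-sgkn2 on k8s-node1. A minimal first pass at diagnosing it is sketched below; the pod name is taken from the output above, and calico-node is the container name Calico's DaemonSet typically uses, so adjust both to your own cluster.

[root@k8s-master ~]# kubectl describe pod calico-node-sgkn2 -n kube-system                      # the Events section shows failed probes and the backoff reason
[root@k8s-master ~]# kubectl logs calico-node-sgkn2 -n kube-system -c calico-node --previous    # logs from the last crashed container instance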
This post records a problem hit in a k8s cluster where a calico-node Pod repeatedly went into CrashLoopBackOff. Checking the pod's events, its logs, and the ports in use on the node revealed a conflict on 127.0.0.1:9099. Killing the offending process was tried, but it made little difference. After further checking the status of kubelet and containerd, the issue was finally resolved by deleting and recreating the calico-node container, after which the pod returned to normal operation.
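The exact commands used for those steps are not shown in the excerpt above, so the following is only a sketch of them, under the assumption that the port check and service checks run on the affected node (k8s-node1 here) and that deleting the pod is enough to make the DaemonSet recreate the calico-node container. Port 9099 on 127.0.0.1 is normally calico-node's Felix health endpoint, which is why another process holding it makes the liveness/readiness probes fail.

[root@k8s-node1 ~]# ss -tlnp | grep 9099                                          # find which process already holds 127.0.0.1:9099
[root@k8s-node1 ~]# systemctl status kubelet containerd                           # confirm kubelet and the container runtime are healthy
[root@k8s-master ~]# kubectl delete pod calico-node-sgkn2 -n kube-system          # the DaemonSet recreates the pod on k8s-node1
[root@k8s-master ~]# kubectl get pods -n kube-system -owide | grep calico-node    # verify the new pod reaches Running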
