kube-controller-manager and kube-scheduler keep restarting for no obvious reason, yet the cluster status still reports ok

This post records a kube-controller-manager health-check failure in a Kubernetes cluster. The output of `kubectl get po -n kube-system` and `kubectl describe pod kube-controller-manager -n kube-system` showed repeated liveness probe failures. The author edited the kube-controller-manager and kube-scheduler manifests to add the `--address=127.0.0.1` flag, then deleted and recreated the Pods, which resolved the issue. There are still warnings in the logs, but the system is running normally for now, and the author will keep watching to see whether the restarts come back.


Here is the output captured in the screenshots:

# kubectl  get po -n kube-system
NAME                                        READY   STATUS    RESTARTS       AGE
coredns-6d8c4cb4d-8xghq                     1/1     Running   0              38m
coredns-6d8c4cb4d-q65vq                     1/1     Running   0              38m
etcd-host-10-19-83-151                      1/1     Running   4              23h
kube-apiserver-master                       1/1     Running   1              23h
kube-controller-manager-master              1/1     Running   31 (25m ago)   23h
kube-flannel-ds-amd64-2pwps                 1/1     Running   0              61m
kube-flannel-ds-amd64-svfg6                 1/1     Running   0              61m
kube-flannel-ds-amd64-xmppt                 1/1     Running   1              61m
kube-proxy-d4bb2                            1/1     Running   0              23h
kube-proxy-k2skv                            1/1     Running   1              23h
kube-proxy-x9k76                            1/1     Running   1 (23h ago)    23h
kube-scheduler-master                       1/1     Running   32 (25m ago)   23h

Looking at the Pod details, the liveness probe has been failing repeatedly:

# kubectl describe po kube-controller-manager-master -n kube-system
Name:                 kube-controller-manager-master
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
......
Events:
  Type     Reason     Age                  From     Message
  ----     ------     ----                 ----     -------
  Normal   Pulled     84m (x6 over 5h36m)  kubelet  Container image "registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4" already present on machine
  Normal   Created    84m (x6 over 5h36m)  kubelet  Created container kube-controller-manager
  Normal   Started    84m (x6 over 5h36m)  kubelet  Started container kube-controller-manager
  Warning  Unhealthy  33m                  kubelet  Liveness probe failed: Get "https://127.0.0.1:10257/healthz": read tcp 127.0.0.1:59840->127.0.0.1:10257: read: connection reset by peer
  Normal   Pulled     33m (x4 over 44m)    kubelet  Container image "registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4" already present on machine
  Normal   Created    32m (x4 over 44m)    kubelet  Created container kube-controller-manager
  Normal   Started    32m (x4 over 44m)    kubelet  Started container kube-controller-manager
  Warning  BackOff    30m (x11 over 43m)   kubelet  Back-off restarting failed container
  Warning  Unhealthy  28m (x4 over 43m)    kubelet  Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
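
For reference, the exact probe the kubelet runs can be read straight out of the Pod spec. A quick spot check along these lines (plain kubectl jsonpath, nothing cluster-specific assumed) shows the httpGet endpoint, port and failure threshold behind the events above:

# kubectl get pod kube-controller-manager-master -n kube-system -o jsonpath='{.spec.containers[0].livenessProbe}'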

Checking the component statuses (cs) one by one turned up nothing wrong:

# kubectl get cs 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
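
Since kubectl get cs only reflects the instant it is run, it is also worth hitting the same healthz endpoints the kubelet probes, directly on the master: 10257 is the controller-manager's secure port and 10259 is the scheduler's, and a healthy component should answer ok. This is just a manual spot check:

# curl -k https://127.0.0.1:10257/healthz
# curl -k https://127.0.0.1:10259/healthz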

Edit the kube-controller-manager.yaml file:

containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    - --address=127.0.0.1                # add this line
    image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4
    imagePullPolicy: IfNotPresent

Edit the kube-scheduler.yaml file:

spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --address=127.0.0.1   # add this line
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.4
    imagePullPolicy: IfNotPresent
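
Side note: on a kubeadm cluster both of these files are static Pod manifests, normally under /etc/kubernetes/manifests/ on the master, and the kubelet watches that directory, so editing the file in place is usually enough for the Pod to be recreated without any kubectl delete/apply. A minimal sketch, assuming the default kubeadm layout:

# ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
# vi /etc/kubernetes/manifests/kube-controller-manager.yaml   # save, and the kubelet recreates the Pod on its own
# vi /etc/kubernetes/manifests/kube-scheduler.yaml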

After making the changes, restart the Pods:

# kubectl delete -f kube-controller-manager.yaml 
Error from server (NotFound): error when deleting "kube-controller-manager.yaml": pods "kube-controller-manager" not found

# kubectl delete -f kube-scheduler.yaml 
Error from server (NotFound): error when deleting "kube-scheduler.yaml": pods "kube-scheduler" not found

# kubectl apply -f kube-scheduler.yaml       
pod/kube-scheduler created

# kubectl apply -f kube-controller-manager.yaml 
pod/kube-controller-manager created

Checking the Pod logs: the newly added --address flag apparently has no effect (it is deprecated), yet the problem was indeed resolved. The error below appears because there is only one master and the manually applied Pod ended up scheduled on another node, where the referenced files do not exist; delete that Pod and pin it to run on the master host.

# kubectl  logs  kube-controller-manager -n kube-system   
Flag --address has been deprecated, This flag has no effect now and will be removed in v1.24.
I0310 09:01:13.528382       1 serving.go:348] Generated self-signed cert in-memory
unable to create request header authentication config: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory
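
The missing front-proxy-ca.crt matches the explanation above: the manually applied copy was scheduled onto a node that has no /etc/kubernetes/pki directory. If you keep the kubectl apply approach, one way to pin that copy to the master (which does have the files) is to set spec.nodeName in the manifest, which bypasses the scheduler entirely. A rough sketch, assuming the control-plane node is registered as master, as in the listing below:

spec:
  nodeName: master        # run only on the control-plane host that holds /etc/kubernetes/pki
  containers:
  - command:
    - kube-controller-manager
    ...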

Checking again: no restarts so far. I will keep observing to see whether they happen again.

# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS      AGE   IP             NODE     NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-8xghq          1/1     Running   0             73m   10.244.2.186   node2    <none>           <none>
coredns-6d8c4cb4d-q65vq          1/1     Running   0             73m   10.244.1.49    node1    <none>           <none>
etcd-master                      1/1     Running   4 (77m ago)   24h   10.19.83.151   master   <none>           <none>
kube-apiserver-master            1/1     Running   1 (77m ago)   24h   10.19.83.151   master   <none>           <none>
kube-controller-manager-master   1/1     Running   0             13m   10.19.83.151   master   <none>           <none>
kube-flannel-ds-amd64-2pwps      1/1     Running   0             96m   10.19.83.154   node2    <none>           <none>
kube-flannel-ds-amd64-svfg6      1/1     Running   0             96m   10.19.83.153   node1    <none>           <none>
kube-flannel-ds-amd64-xmppt      1/1     Running   1 (77m ago)   96m   10.19.83.151   master   <none>           <none>
kube-proxy-d4bb2                 1/1     Running   0             24h   10.19.83.154   node2    <none>           <none>
kube-proxy-k2skv                 1/1     Running   1             24h   10.19.83.151   master   <none>           <none>
kube-proxy-x9k76                 1/1     Running   1 (24h ago)   24h   10.19.83.153   node1    <none>           <none>
kube-scheduler                   1/1     Running   0             10m   10.19.83.153   node1    <none>           <none>
kube-scheduler-master            1/1     Running   0             11m   10.19.83.151   master   <none>           <none>
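
To keep watching for new restarts, the RESTARTS counters can simply be followed in place, either with kubectl's watch flag or an external watch loop:

# kubectl get pod -n kube-system -w
# watch -n 30 "kubectl get pod -n kube-system"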
