kubeadm upgrade Command Usage Guide
kubeadm upgrade is the tool for smoothly upgrading a Kubernetes cluster to a newer version. The key usage notes and examples for the command follow:
I. Core Subcommands
1. Plan the upgrade: plan
Use kubeadm upgrade plan to check which versions the cluster can be upgraded to and to verify that the current cluster is upgradable. The command also prints a table showing the version state of the component configurations.
kubeadm upgrade plan <target-version>
The example output shows the available upgrade paths and the preflight results.
Execution result
root@k8s-master01:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade/config] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: 1.32.2
[upgrade/versions] kubeadm version: v1.32.2
[upgrade/versions] Target version: v1.32.3
[upgrade/versions] Latest version in the v1.32 series: v1.32.3
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE           CURRENT   TARGET
kubelet     k8s-master02   v1.32.1   v1.32.3
kubelet     k8s-master01   v1.32.3   v1.32.3
kubelet     k8s-master03   v1.32.3   v1.32.3
Upgrade to the latest version in the v1.32 series:
COMPONENT                 NODE           CURRENT    TARGET
kube-apiserver            k8s-master01   v1.32.2    v1.32.3
kube-apiserver            k8s-master02   v1.32.2    v1.32.3
kube-apiserver            k8s-master03   v1.32.2    v1.32.3
kube-controller-manager   k8s-master01   v1.32.2    v1.32.3
kube-controller-manager   k8s-master02   v1.32.2    v1.32.3
kube-controller-manager   k8s-master03   v1.32.2    v1.32.3
kube-scheduler            k8s-master01   v1.32.2    v1.32.3
kube-scheduler            k8s-master02   v1.32.2    v1.32.3
kube-scheduler            k8s-master03   v1.32.2    v1.32.3
kube-proxy                               1.32.2     v1.32.3
CoreDNS                                  v1.11.3    v1.11.3
etcd                      k8s-master01   3.5.16-0   3.5.16-0
etcd                      k8s-master02   3.5.16-0   3.5.16-0
etcd                      k8s-master03   3.5.16-0   3.5.16-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.32.3
Note: Before you can perform this upgrade, you have to update kubeadm to v1.32.3.
2. Apply the upgrade: apply
Use kubeadm upgrade apply [version] to upgrade the Kubernetes cluster to the specified version, for example kubeadm upgrade apply v1.32.3. Add the --dry-run flag to simulate the operation first.
kubeadm upgrade apply <target-version> --dry-run # simulate the upgrade
kubeadm upgrade apply <target-version>           # perform the upgrade
This step updates the static Pod configurations of the control plane components (such as the API server and the controller manager).
Upgrade steps
1. Install the new version
apt install -y kubelet=1.32.3-1.1 kubeadm=1.32.3-1.1 kubectl=1.32.3-1.1
2. Run the upgrade
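On Debian/Ubuntu hosts these packages are typically held so that unattended upgrades cannot move them unexpectedly; a minimal sketch of the unhold/install/hold cycle around the version bump (the helper name bump_k8s_packages is made up for illustration, and the "echo" prefix only prints each command for review — remove it to actually run them on a node):

```shell
#!/usr/bin/env bash
# Sketch: print the package unhold/install/hold cycle for a given version.
# The "echo" prefix keeps this a dry run; drop it to execute for real.
bump_k8s_packages() {
  local version="$1" run="echo"
  $run apt-mark unhold kubelet kubeadm kubectl
  $run apt-get update
  $run apt-get install -y "kubelet=${version}" "kubeadm=${version}" "kubectl=${version}"
  $run apt-mark hold kubelet kubeadm kubectl
}
bump_k8s_packages 1.32.3-1.1
```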
root@k8s-master01:~# kubeadm upgrade apply v1.32.3
[upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[upgrade/preflight] Running preflight checks
[upgrade] Running cluster health checks
[upgrade/preflight] You have chosen to upgrade the cluster version to "v1.32.3"
[upgrade/versions] Cluster version: v1.32.2
[upgrade/versions] kubeadm version: v1.32.3
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/preflight] Pulling images required for setting up a Kubernetes cluster
[upgrade/preflight] This might take a minute or two, depending on the speed of your internet connection
[upgrade/preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[upgrade/control-plane] Upgrading your static Pod-hosted control plane to version "v1.32.3" (timeout: 5m0s)...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1054603147"
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Restarting the etcd static pod and backing up its manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-04-04-15-18-05/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-04-04-15-18-05/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-04-04-15-18-05/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2025-04-04-15-18-05/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/control-plane] The control plane instance for this node was successfully upgraded!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade/kubeconfig] The kubeconfig files for this node were successfully upgraded!
W0404 15:19:20.983422 2226794 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config4122064312 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config4122064312/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded!
[upgrade/bootstrap-token] Configuring bootstrap token and cluster-info RBAC rules
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[upgrade/addon] Skipping upgrade of addons because control plane instances [k8s-master02 k8s-master03] have not been upgraded
[upgrade/addon] Skipping upgrade of addons because control plane instances [k8s-master02 k8s-master03] have not been upgraded
[upgrade] SUCCESS! A control plane node of your cluster was upgraded to "v1.32.3".
[upgrade] Now please proceed with upgrading the rest of the nodes by following the right order.
3. Restart the service and verify
systemctl daemon-reload
systemctl restart kubelet
root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   22m   v1.32.3
k8s-master02   Ready    control-plane   40m   v1.32.2
k8s-master03   Ready    control-plane   40m   v1.32.3
3. Upgrade a node: node
Used for node-level upgrade operations, including upgrading a single worker or control plane node and updating the node's kubelet configuration.
kubeadm upgrade node # follow the prompts
4. Show pending changes: diff
diff shows the differences between the current static Pod manifests and their post-upgrade versions; use it together with apply --dry-run.
kubeadm upgrade diff <target-version>
II. Standard Upgrade Procedure
Upgrade the control plane (master nodes)
1. Run kubeadm upgrade plan to confirm the target version.
2. Manually update the kubeadm, kubelet, and kubectl packages on the node.
3. Run kubeadm upgrade apply to upgrade the control plane components.
Upgrade the worker nodes
Work through the nodes one at a time: first drain the Pods from the node (kubectl drain <node-name>), then run kubeadm upgrade node, and restore scheduling afterwards (kubectl uncordon <node-name>).
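The per-node drain/upgrade/uncordon cycle can be sketched as a small helper; the node name, package version, and helper name upgrade_worker are placeholders, and the "echo" prefix only prints each command for review (remove it to execute on a real node):

```shell
#!/usr/bin/env bash
# Sketch: print the drain/upgrade/uncordon sequence for one worker node.
# The "echo" prefix keeps this a dry run; drop it to execute for real.
upgrade_worker() {
  local node="$1" version="$2" run="echo"
  $run kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  $run apt install -y "kubelet=${version}" "kubeadm=${version}"
  $run kubeadm upgrade node
  $run systemctl daemon-reload
  $run systemctl restart kubelet
  $run kubectl uncordon "$node"
}
upgrade_worker k8s-node01 1.32.3-1.1
```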
III. Notes
Version compatibility: follow the upgrade paths supported by Kubernetes (for example 1.31 → 1.32); you cannot skip minor versions in a single upgrade.
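The one-minor-step rule can be checked mechanically before an upgrade; a minimal sketch, assuming both versions share the same major version and have the form vMAJOR.MINOR.PATCH (the helper name skew_ok is made up for illustration):

```shell
#!/usr/bin/env bash
# Sketch: succeed only when the target is the same or the next minor version.
skew_ok() {
  local cur="${1#v}" tgt="${2#v}"
  local cur_minor tgt_minor delta
  cur_minor="$(printf '%s' "$cur" | cut -d. -f2)"
  tgt_minor="$(printf '%s' "$tgt" | cut -d. -f2)"
  delta=$((tgt_minor - cur_minor))
  [ "$delta" -ge 0 ] && [ "$delta" -le 1 ]
}
skew_ok v1.31.4 v1.32.3 && echo "ok: one minor step"
skew_ok v1.30.0 v1.32.3 || echo "blocked: skips a minor version"
```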
Backups: back up the /etc/kubernetes directory and critical data (such as the etcd data) before the operation.
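The backup can be scripted; a minimal sketch, assuming the default kubeadm paths (the helper name backup_k8s and the destination directory are made up for illustration; the etcd snapshot part is left commented out because it requires etcdctl and access to an etcd member):

```shell
#!/usr/bin/env bash
# Sketch: archive a Kubernetes config directory into a dated backup directory.
backup_k8s() {
  local src="${1:-/etc/kubernetes}"
  local dest="${2:-/root/k8s-backup-$(date +%F)}"
  mkdir -p "$dest"
  # Archive the config directory (certificates, manifests, kubeconfigs).
  tar -czf "$dest/kubernetes-etc.tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
  # For the etcd data, take a snapshot as well (requires etcdctl):
  # ETCDCTL_API=3 etcdctl snapshot save "$dest/etcd-snapshot.db" \
  #   --endpoints=https://127.0.0.1:2379 \
  #   --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  #   --cert=/etc/kubernetes/pki/etcd/server.crt \
  #   --key=/etc/kubernetes/pki/etcd/server.key
  echo "$dest"
}
```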
Component consistency: make sure the kubeadm, kubelet, and kubectl versions are consistent across all nodes.